# Differentially Private Computation of Basic Reproduction Numbers in Networked Epidemic Models

Bo Chen, Baike She, Calvin Hawkins, Alex Benvenuti, Brandon Fallin, Philip E. Paré, Matthew Hale

Published 2023-09-29 · http://arxiv.org/abs/2309.17284v1
###### Abstract
The basic reproduction number of a networked epidemic model, denoted \(R_{0}\), can be computed from a network's topology to quantify epidemic spread. However, disclosure of \(R_{0}\) risks revealing sensitive information about the underlying network, such as an individual's relationships within a social network. Therefore, we propose a framework to compute and release \(R_{0}\) in a differentially private way. First, we provide a new result that shows how \(R_{0}\) can be used to bound the level of penetration of an epidemic within a single community, which motivates the need for privacy and may also be of independent interest. We next develop a privacy mechanism to formally safeguard the edge weights in the underlying network when computing \(R_{0}\). Then we formalize tradeoffs between the level of privacy and the accuracy of values of the privatized \(R_{0}\). To show the utility of the private \(R_{0}\) in practice, we use it to bound this level of penetration under privacy, and concentration bounds on these analyses show they remain accurate with privacy implemented. We apply our results to real travel data gathered during the spread of COVID-19, and we show that, under real-world conditions, we can compute \(R_{0}\) in a differentially private way while incurring errors as low as \(7.6\%\) on average.
## I Introduction
Compartmental epidemic models have been used to model the spread of epidemics, assess pandemic severity, predict spreading trends, and facilitate policy-making [1]. This progress has been in part propelled by advancements in network science [2, 3, 4, 5]. Due to their complexity, it can be difficult to communicate the intricate details and conclusions of these models [6], though the basic reproduction number of a spreading process has emerged as one concise way to convey information about the spread of epidemics [7, 8].
The basic reproduction number of a spreading process, denoted \(R_{0}\), is the average number of individuals that an infected person will infect in a fully susceptible population [7]. Intuitively, higher \(R_{0}\) values indicate greater transmissibility. For example, the basic reproduction numbers for diseases like measles, SARS-CoV-1, and the Ebola virus are approximately 14.7, 3.1, and 1.9, respectively [8].
Researchers have defined basic reproduction numbers for networked epidemic models [2], which capture not only the transmissibility of the epidemic process but also the effect of the graph structure. For example, in a networked susceptible-infected-susceptible (SIS) model, basic reproduction numbers less than or equal to \(1\) ensure that the size of the infected population eventually converges to zero [2]. Thus, \(R_{0}\) can be used to forecast the future behavior of an epidemic and communicate with the public in a concise way.
Unfortunately, it is well-known that sharing even scalar-valued graph properties like \(R_{0}\) can pose privacy threats [9, 10, 11, 12, 13]. In particular, one can initiate a _reconstruction attack_, in which an attacker combines released graph properties (here, \(R_{0}\)) with other information to reconstruct the underlying graph information, such as the weights in a weighted graph, which can be sensitive. For example, consider a residential community of a small number of households, whose interactions with other communities contribute to the modeling of graph weights. Then one may be able to infer the travel habits of a person by reconstructing these graph weights; see [9, 10, 11, 12, 13] for additional discussion of privacy threats for graphs. In addition, this type of privacy risk extends to large regions as well [14]. Thus, despite the importance of \(R_{0}\), it is undesirable to publish \(R_{0}\) without any protections.
In this work, we provide these protections by using differential privacy [15, 16] to protect graph weights when computing \(R_{0}\). Our implementation uses an input perturbation approach, which first adds noise directly to the matrix of graph weights, then computes \(R_{0}\) from this private matrix. Differential privacy provides strong, formal privacy protections for sensitive data, and it is desirable here because differentially private data may be freely post-processed without harming its guarantees [17]. In particular, after privatizing the matrix of weights, we can compute \(R_{0}\) and use it for epidemic forecasting without harming privacy.
To ensure that private values of \(R_{0}\) enable useful analyses, we use the bounded Gaussian mechanism [18], which only generates private outputs within specified ranges. We follow this approach because \(R_{0}\) and graph weights are nonnegative, which ensures that their private forms are as well. Moreover, as a motivating example, we present a new way to use \(R_{0}\) to bound the level of penetration of an epidemic into a community, which may also be of independent interest. Specifically, we bound the size of the uninfected population in a community at equilibrium, and this bound is a function of only \(R_{0}\).
Our specific contributions in this work are:
1. A result to use values of \(R_{0}\) to analyze the spread of an epidemic in terms of the eventually remaining susceptible population.
2. A mechanism for differential privacy that protects the underlying graph weights when publishing the basic reproduction number \(R_{0}\).
3. Privacy-accuracy tradeoffs that quantify both (i) the expected deviation from the true value of \(R_{0}\) and (ii) the accuracy of predictions of the remaining susceptible population as functions of the strength of privacy.
We use travel data from Minnesota during the COVID-19 pandemic to show that a real-world deployment of this privacy framework leads to errors as low as \(7.6\%\) on average.
**Relation to prior work:** There exist numerous differential privacy implementations for graph properties, including counts of sub-graphs and triangles [9, 10], degree distributions [11, 12], and algebraic connectivity [13, 19]. In many of these prior works, differential privacy has been applied with edge and node adjacency [20, 21, 22] to obfuscate the absence and/or presence of a pre-specified number of edges or nodes. In contrast, we consider graphs with node and edge sets that are publicly known. We do so because networked epidemic models often use vertices to represent communities and/or cities and use edges to represent connections such as highways or flights, all of which are publicly known. We instead use weight adjacency [15] and protect the weights in a weighted graph.
Differential privacy has been used to protect the eigenvalues of certain types of matrices [13, 19, 23]. We differ by privatizing matrices of weights in weighted graphs, which those works do not consider. Work in [24] adds noise drawn from a matrix-variate Gaussian distribution to a matrix for privacy protection. However, such noise is unbounded and our work instead adds bounded noise to ensure that privatized weights and values of \(R_{0}\) remain non-negative.
## II Background and Problem Formulation
In this section, we introduce notation, background on epidemic models and privacy, and problem statements.
### _Notation_
We use \(\mathbb{R}\) to denote the real numbers, \(\mathbb{R}_{\geq 0}\) to denote the non-negative reals, and \(\mathbb{R}_{>0}\) to denote the positive reals. For a random variable \(X\), \(\mathbb{E}[X]\) denotes its expectation and \(\text{Var}[X]\) denotes its variance. Let \(\mathbf{1}_{T}(\cdot)\) denote the indicator function of set \(T\). We use \([n]\) to denote \(\{1,2,\ldots,n\}\). For a real square matrix \(M\), we use \(\rho(M)\) to denote its spectral radius. For any two matrices \(A,B\in\mathbb{R}^{n\times n}\), we write \(A\geq B\) if \(a_{ij}\geq b_{ij}\), \(A>B\) if \(a_{ij}\geq b_{ij}\) and \(A\neq B\), and \(A\gg B\) if \(a_{ij}>b_{ij}\), for all \(i,j\in[n]\). These comparison notations between matrices apply to vectors as well. For a vector \(v\in\mathbb{R}^{n}\), we write \(\text{diag}(v)\) to denote the diagonal matrix whose \(i^{th}\) diagonal entry is \(v_{i}\) for each \(i\in[n]\). We use \(||\cdot||_{F}\) to denote the Frobenius norm of a matrix.
Let \([a,b]^{n}\) be the Cartesian product of \(n\) copies of the same interval \([a,b]\). For graphs, let \(G=(V,E,W)\) denote an undirected, connected, and weighted graph with node set \(V\), edge set \(E\), and weight matrix \(W\), where \(w_{ij}\geq 0\) denotes the \(i^{th},j^{th}\) entry of the weight matrix \(W\). Let \(|\cdot|\) denote the cardinality of a set. For a given weight matrix \(W\), we use \(n_{w}=|\{w_{ij}>0:i,j\in[n]\}|\) to denote the number of positive entries in \(W\). We use \(\mathcal{G}_{n}\) to denote a set of all possible undirected, connected, weighted graphs \(G\) on \(n\) nodes. We also use the special functions
\[\varphi(x) =\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}x^{2}\right), \tag{1}\] \[\Phi(x) =\frac{1}{2}\left(1+\frac{2}{\sqrt{\pi}}\int_{0}^{\frac{x}{\sqrt{2}}}\exp(-t^{2})dt\right), \tag{2}\]
which are the probability density function and the cumulative distribution function of the standard normal distribution, respectively.
### _Networked Epidemic Models_
We consider networked susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models. Let \(\bar{G}=(V,E,B)\in\mathcal{G}_{n}\) denote a connected and undirected spreading network that models an epidemic spreading process over \(n\) connected communities. Let \(V\) and \(E\) denote the communities and the transmission channels between these communities, respectively. We use \(s(t),x(t),r(t)\in[0,1]^{n}\) to represent the susceptible, infected, and recovered state vectors, respectively. That is, for all \(i\in[n]\), the value of \(s_{i}(t)\in[0,1]\) is the portion of the population of community \(i\) that is susceptible at time \(t\); the values of \(x_{i}(t)\) and \(r_{i}(t)\) are the sizes of the infected and recovered portions of community \(i\), respectively. We use \(B\in\mathbb{R}_{\geq 0}^{n\times n}\), with \(b_{ij}\in[0,1]\) for all \(i,j\in[n]\), to denote the transmission matrix and \(\Gamma=\text{diag}(\gamma_{1},\gamma_{2},\ldots,\gamma_{n})\), with \(\gamma_{i}>0\) for all \(i\in[n]\), to denote the recovery matrix. Thus, the value of \(b_{ij}\) captures the transmission process from the community \(j\) to community \(i\), while \(\gamma_{i}\) captures the recovery rate of community \(i\). The networked SIS and SIR models are
\[\begin{cases}\dot{s}(t)&=-\text{diag}(s(t))Bx(t)+\Gamma x(t),\\ \dot{x}(t)&=\text{diag}(s(t))Bx(t)-\Gamma x(t),\end{cases} \tag{3}\]
and
\[\begin{cases}\dot{s}(t)&=-\text{diag}(s(t))Bx(t),\\ \dot{x}(t)&=\text{diag}(s(t))Bx(t)-\Gamma x(t),\\ \dot{r}(t)&=\Gamma x(t),\end{cases} \tag{4}\]
respectively. For all \(i\in[n]\), \(s_{i}(t)+x_{i}(t)+r_{i}(t)=1\)[2].
For networked SIS and SIR spreading models, researchers have defined _the next generation matrix_\(W=\Gamma^{-1}B\) to characterize the global behavior of networked SIS and SIR models in (3) and (4) [2, 3, 4]. One can then compute the basic reproduction number from \(W\) via \(R_{0}=\rho(W)\).
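As a concrete illustration (with made-up numbers, not data from the paper), computing \(R_{0}=\rho(\Gamma^{-1}B)\) takes only a few lines of standard numerical code:

```python
import numpy as np

# Illustrative 3-community transmission matrix B (entries in [0, 1])
B = np.array([[0.20, 0.05, 0.01],
              [0.05, 0.30, 0.08],
              [0.01, 0.08, 0.25]])
gamma = np.array([1/3, 1/3, 1/3])       # recovery rates gamma_i > 0

W = np.diag(1 / gamma) @ B              # next generation matrix W = Gamma^{-1} B
R0 = max(abs(np.linalg.eigvals(W)))     # basic reproduction number R0 = rho(W)
print(f"R0 = {R0:.3f}")
```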
**Remark 1**.: _Developments in [25, 4] suggest that the basic reproduction number in compartmental models is linked to the remaining susceptible population at the disease-free equilibrium, which represents the level of penetration in a community. This level of penetration quantifies the virus' impact, namely how many individuals will become infected._
To safeguard the weights in \(W\), it is essential to provide privacy for \(W\) when publishing \(\rho(W)\). Since \(R_{0}\) is defined in terms of \(W\) rather than \(B\), we will privatize \(W\) directly. To reflect our focus, we define a weighted graph for a spreading network as \(G=(V,E,W)\), with \(W=\Gamma^{-1}B\), and we focus on this class of graphs going forward.
### _Differential Privacy_
Differential privacy is enforced by a randomized map, called a privacy _mechanism_, which must ensure that nearby inputs to the mechanism produce outputs that are statistically approximately indistinguishable from each other. In this paper, we adopt weight adjacency [15], which formalizes the notion of "nearby" for weighted graphs.
**Definition 1**.: [15] Fix an undirected weighted graph \(G=(V,E,W)\in\mathcal{G}_{n}\). Then another undirected weighted graph \(G^{\prime}=(V,E,W^{\prime})\) is _weight adjacent_ to \(G\), denoted \(G\sim G^{\prime}\), if \(||W-W^{\prime}||_{F}=\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}(w_{ij}-w_{ij}^{\prime} )^{2}}\leq k\), where \(k>0\) is a user-specified parameter. \(\Diamond\)
Definition 1 states that two graphs are weight adjacent if they have the same edge and node sets, and the distance between their weight matrices is bounded by \(k\) in the Frobenius norm. We next introduce the definition of differential privacy in the form in which we will use it in this paper.
**Definition 2** (Differential Privacy [17]).: Let \(\epsilon>0\) be given and fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\). Then a mechanism \(\mathcal{M}:\Omega\times\mathbb{R}_{\geq 0}^{n\times n}\rightarrow\mathbb{R}_{ \geq 0}^{n\times n}\) is \(\epsilon\)-differentially private if, for all weight adjacent graphs \(G=(V,E,W)\) and \(G^{\prime}=(V,E,W^{\prime})\) in \(\mathcal{G}_{n}\), it satisfies \(\mathbb{P}\big{[}\mathcal{M}(W)\in S\big{]}\leq e^{\epsilon}\cdot\mathbb{P} \big{[}\mathcal{M}\left(W^{\prime}\right)\in S\big{]}\) for all sets \(S\) in the Borel \(\sigma\)-algebra over \(\mathbb{R}_{\geq 0}^{n\times n}\). \(\Diamond\)
The privacy parameter \(\epsilon\) controls the strength of privacy and a smaller \(\epsilon\) implies stronger privacy. Differential privacy even with large \(\epsilon\), e.g., \(\epsilon>10\), provides much stronger empirical privacy than no differential privacy [26, 27, 28, 29]. In this paper, we provide differential privacy through input perturbation. For a weighted graph \(G=(V,E,W)\), input perturbation first privatizes \(W\) itself by randomizing it, then computes \(R_{0}\) from the private \(W\). Due to differential privacy's immunity to post-processing, the resulting \(R_{0}\) is also differentially private.
### _Setup for Private Analysis_
In this subsection, we formalize the information that the sensitive graph \(G\) discloses to epidemic analysts and the information it should conceal.
We assume epidemic analysts have access to a graph's vertex set \(V\) and edge set \(E\). However, we do not share the transmission matrix \(B\), the recovery matrix \(\Gamma\), or the next generation matrix \(W\) with them since these are sensitive. In addition, it is well-known that publishing even scalar-valued graph properties can pose substantial privacy threats [9, 10, 11, 12, 13]. As a result, the value of \(R_{0}\) is not shared with epidemic analysts either. Instead, they will _only_ receive a differentially private version of \(R_{0}\), denoted by \(\tilde{R}_{0}\).
Lastly, we assume that each entry \(w_{ij}\) lies in an interval \((\underline{w}_{ij},\bar{w}_{ij}]\), where \(\underline{w}_{ij}\) and \(\bar{w}_{ij}\) are known lower and upper bounds and will be shared with analysts. It should be noted that while sharing these bounds conveys some information about the underlying graph, it is not highly sensitive information. Other publicly available data sources or databases, such as the number of highways connecting communities or community population statistics, can be used to infer information of this kind. In practice, one can therefore group values of \(w_{ij}\) into certain ranges without harming privacy, which is possible precisely because approximate ranges of these values can be inferred from publicly available data.
### _Problem Statement_
We next state the problems that we will solve.
**Problem 1**.: _Build an upper bound on the level of penetration of a community (in the sense of Remark 1) within a spreading network by using its basic reproduction number \(R_{0}\)._
**Problem 2**.: _Develop a differential privacy mechanism to provide differential privacy in the sense of Definition 2 for the next generation matrix \(W\) when computing \(R_{0}\)._
**Problem 3**.: _Given a reproduction number \(R_{0}\), for private values \(\tilde{R}_{0}\) generated by the proposed mechanism, develop bounds on the expected accuracy loss \(\mathbb{E}[|\tilde{R}_{0}-R_{0}|]\) of the developed mechanism as a function of privacy level._
**Problem 4**.: _Analytically evaluate the utility of the private reproduction number \(\tilde{R}_{0}\) in modeling the level of penetration of networked spreading processes._
### _Probability Background_
**Definition 3**.: [30] The _truncated Gaussian_ random variable, written as \(\text{TrunG}(\mu,\sigma,l,u)\), that lies within the interval \((l,u]\), where \(-\infty<l<u<+\infty\), and centers on \(\mu\in(l,u]\) is defined by the probability density function \(p_{TG}\) with
\[p_{TG}(x)=\begin{cases}\frac{1}{\sigma}\frac{\varphi\left(\frac{x-\mu}{\sigma} \right)}{\Phi\left(\frac{u-\mu}{\sigma}\right)-\Phi\left(\frac{l-\mu}{\sigma} \right)}&\text{if }x\in(l,u]\\ 0&\text{otherwise,}\end{cases}\]
and \(\sigma>0\), where \(\varphi(\cdot)\) is from (1) and \(\Phi(\cdot)\) is from (2). \(\Diamond\)
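For readers implementing this, samples from \(\text{TrunG}(\mu,\sigma,l,u)\) are available off the shelf; e.g., SciPy's `truncnorm` parameterizes the support in standardized units, so the bounds must be shifted and scaled by \(\mu\) and \(\sigma\). A minimal sketch with illustrative parameter values:

```python
from scipy.stats import truncnorm

mu, sigma, l, u = 0.25, 0.05, 0.2, 0.3      # illustrative values
a, b = (l - mu) / sigma, (u - mu) / sigma   # standardized bounds used by SciPy
samples = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=5)
print(samples)                              # every sample lies within (l, u]
```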
## III Penetration Analysis with \(R_{0}\)
In this section, we illustrate the value of \(R_{0}\) in epidemic analysis by demonstrating one type of information that can be obtained from \(R_{0}\). As previously mentioned in the problem formulation, it is possible to use \(R_{0}\) to infer the remaining susceptible population within a community, referred to as the _level of penetration_ of an epidemic. This information enables us to determine the total number of individuals within a given community who will be infected by a virus over time.
In particular, we will quantify the relationship between \(R_{0}\) and the proportion of the susceptibles within community \(i\) at a disease-free equilibrium, denoted \(s_{i}^{*}\), for all \(i\in[n]\).\({}^{1}\) To do so, we first rewrite the dynamics of the networked SIS and SIR models in (3) and (4) each with two separate components: (i) nonlinear dynamics [25, Eq.(2)] to model the susceptible states \(s(t)\), which are
Footnote 1: Note that a simulation of [25] studies the susceptible proportion within a community \(i\), \(i\in[n]\), at the disease-free equilibrium through a different definition of the reproduction number of a networked spreading process, i.e., \(R_{0}=\rho(B\Gamma^{-1})\). In addition, [25] applies its results to networked epidemic spreading dynamics without proving that these models satisfy the conditions required by those results.
\[\dot{s}(t) =f(s(t),x(t)), \tag{5}\] \[u(t) =I\text{diag}\{s(t)\}Bx(t);\]
(ii) linear dynamics [25, Eq.(3)] with external input to model the infected states \(x(t)\), which are
\[\dot{x}(t) =-\Gamma x(t)+Iu(t), \tag{6}\] \[y(t) =Ix(t).\]
where \(I\) is the identity matrix. We use the coupled dynamics in (5)-(6) to capture the networked SIS models, where \(f(s(t),x(t))=-I\text{diag}(s(t))Bx(t)+\Gamma x(t)\). Similarly, when \(f(s(t),x(t))=-I\text{diag}(s(t))Bx(t)\), we use (5)-(6) to represent SIR models, where \(r(t)=1-s(t)-x(t)\) is omitted.
**Assumption 1**.: _The graph \(G=(V,E,W)\in\mathcal{G}_{n}\) has a symmetric weight matrix \(W\), i.e., \(W=W^{T}\)._
We enforce Assumption 1 for simplicity in this work, and we defer analysis of the non-symmetric case to a future publication. We then have the following result to bound the level of penetration of an epidemic.
**Theorem 1**.: _Let \(G\in\mathcal{G}_{n}\) be given, and suppose that a spreading process is modeled either by an SIS or SIR model. Then there exists a community \(i\in[n]\) such that the susceptible proportion \(s_{i}^{*}\) at the disease-free equilibrium is upper bounded via \(s_{i}^{*}\leq\frac{1}{R_{0}}\)._
_Proof:_ See the Appendix A. \(\blacksquare\)
If the nodes in network \(G\) are individuals, then Theorem 1 can directly reveal an individual's probability of being infected. If the nodes are not individuals, then, as discussed in the Introduction, the value of \(R_{0}\) can reveal sensitive information within \(G\). Therefore, privacy protections are required that can simultaneously safeguard this information and enable the use of Theorem 1 to analyze an epidemic.
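As an informal numerical sanity check of Theorem 1 (not part of the paper's proof), one can integrate the networked SIR dynamics (4) and compare the smallest equilibrium susceptible proportion against \(1/R_{0}\); all parameters below are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3
B = np.array([[0.9, 0.3, 0.1],          # illustrative symmetric transmission matrix
              [0.3, 0.8, 0.2],
              [0.1, 0.2, 0.7]])
gamma = np.full(n, 1/3)
R0 = max(abs(np.linalg.eigvals(np.diag(1 / gamma) @ B)))

def sir(t, z):                          # networked SIR dynamics (4)
    s, x = z[:n], z[n:]
    ds = -s * (B @ x)
    dx = s * (B @ x) - gamma * x
    return np.concatenate([ds, dx])

z0 = np.concatenate([np.full(n, 0.99), np.full(n, 0.01)])
sol = solve_ivp(sir, [0, 500], z0, rtol=1e-8, atol=1e-10)
s_star = sol.y[:n, -1]                  # susceptible proportions near equilibrium
print(f"min_i s_i* = {s_star.min():.3f}  vs  1/R0 = {1/R0:.3f}")
```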
## IV Privacy mechanism for \(R_{0}\)
In this section we develop an input perturbation mechanism to provide differential privacy. Specifically, we utilize the bounded Gaussian mechanism [18] to privatize the next generation matrix \(W\).
### _Input Perturbation Mechanism_
We start by defining the sensitivity, which quantifies the maximum possible difference between two weighted graphs that are adjacent in the sense of Definition 1.
**Definition 4** (\(L_{2}\)-sensitivity).: Let \(G=(V,E,W)\in\mathcal{G}_{n}\) and \(G^{\prime}=(V,E,W^{\prime})\in\mathcal{G}_{n}\) be adjacent in the sense of Definition 1. Then the \(L_{2}\)-sensitivity of the weights, denoted \(\Delta_{2}w\), is defined as \(\Delta_{2}w=\max_{G\sim G^{\prime}}\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}(w_{ij}- w_{ij}^{\prime})^{2}}\), where \(n=|V|\) is the number of nodes. \(\diamondsuit\)
From Definition 1, \(\Delta_{2}w\leq k\). We use this upper bound to calibrate the variance of noise for privacy protection.
**Mechanism 1** (Bounded Gaussian mechanism).: _Fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\). Let \(G=(V,E,W)\in\mathcal{G}_{n}\). Then for \(D=(\underline{w}_{ij},\bar{w}_{ij})\), the bounded Gaussian mechanism \(M_{BG}:D^{n\times n}\times\Omega\to D^{n\times n}\) generates independent private weights \(\tilde{w}_{ij}\sim\text{TrunG}(w_{ij},\sigma,\underline{w}_{ij},\bar{w}_{ij})\) for all positive entries \(w_{ij}\) on and above the main diagonal of \(W\) (see Section II-D for discussion of \(\underline{w}_{ij}\) and \(\bar{w}_{ij}\)). The entries below the main diagonal mirror the values above it to ensure symmetry. This mechanism satisfies \(\epsilon\)-differential privacy if_
\[\sigma^{2}\geq\frac{k\left(\frac{k}{2}+\sqrt{\sum_{i=1}^{n}\sum_{j=i}^{n}( \bar{w}_{ij}-\underline{w}_{ij})^{2}\cdot\mathbf{1}_{\mathbb{R}_{>0}}(w_{ij}) }\right)}{\epsilon-\log(\Delta C(\sigma,c))}, \tag{7}\]
_where \(\Delta C(\sigma,c)=\frac{\Phi\left(\frac{\bar{w}_{ij}-\underline{w}_{ij}-c_{ij}}{\sigma}\right)-\Phi\left(\frac{-c_{ij}}{\sigma}\right)}{\Phi\left(\frac{\bar{w}_{ij}-\underline{w}_{ij}}{\sigma}\right)-\Phi(0)}\) and \(c\in\mathbb{R}_{\geq 0}^{n\times n}\) is an upper triangular matrix with \(c_{ij}>0\) if and only if \(w_{ij}>0\) for all \(i,j\in[n]\). Matrix \(c\) can be found by solving the optimization problem in [18, (3.3)]. \(\diamondsuit\)_
**Remark 2**.: _The minimal value of \(\sigma\) that satisfies (7) can be found using [18, Algorithm 2]. Meanwhile, (7) implies that a larger \(\epsilon\) gives weaker privacy and leads to a smaller \(\sigma\)._
**Remark 3**.: _The bounded Gaussian mechanism does not add noise to any weight \(w_{ij}=0\). Such a weight indicates that there is no edge between nodes \(i\) and \(j\), and thus the bounded Gaussian mechanism does not alter the presence or absence of an edge in a graph._
Given \(G=(V,E,W)\), suppose the bounded Gaussian mechanism generates an \(\epsilon\)-differentially private weight matrix \(\tilde{W}=M_{BG}(W)\). We can then compute a private reproduction number \(\tilde{R}_{0}\) from the private graph \(\tilde{G}=(V,E,\tilde{W})\) via \(\tilde{R}_{0}=\rho(\tilde{W})\). The private reproduction number \(\tilde{R}_{0}\) provides \(W\) with the same level of privacy protection, \(\epsilon\), since differential privacy is immune to post-processing [17] and the computation of \(R_{0}\) simply post-processes the private matrix \(\tilde{W}\). The accuracy of \(\tilde{R}_{0}\) is quantified next.
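Before turning to that analysis, the sketch below outlines the input-perturbation pipeline just described. It treats the noise scale `sigma` as given; in practice \(\sigma\) must be calibrated to \(\epsilon\) via (7), e.g., with [18, Algorithm 2], which is not reproduced here:

```python
import numpy as np
from scipy.stats import truncnorm

def privatize_R0(W, w_lo, w_hi, sigma, rng=None):
    """Input perturbation: truncated Gaussian noise on each positive weight on/above
    the diagonal, mirrored for symmetry. sigma is assumed pre-calibrated to epsilon."""
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    W_t = np.zeros_like(W, dtype=float)
    for i in range(n):
        for j in range(i, n):
            if W[i, j] > 0:             # zero weights (no edge) stay untouched
                a = (w_lo[i, j] - W[i, j]) / sigma
                b = (w_hi[i, j] - W[i, j]) / sigma
                W_t[i, j] = truncnorm.rvs(a, b, loc=W[i, j], scale=sigma,
                                          random_state=rng)
                W_t[j, i] = W_t[i, j]   # mirror below the main diagonal
    return W_t, max(abs(np.linalg.eigvals(W_t)))   # (W_tilde, R0_tilde)
```

Since \(\tilde{R}_{0}\) is computed from \(\tilde{W}\) alone, the spectral-radius step is pure post-processing and preserves the \(\epsilon\)-differential privacy guarantee.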
**Theorem 2**.: _Consider a graph \(G=(V,E,W)\) and denote its basic reproduction number by \(R_{0}=\rho(W)\). Suppose Mechanism 1 is applied to \(G\), and for all \(i,j\in[n]\) define the constants \(\alpha_{ij}=\frac{\underline{w}_{ij}-w_{ij}}{\sigma}\) and \(\beta_{ij}=\frac{\bar{w}_{ij}-w_{ij}}{\sigma}\). Also let \(\tilde{G}=(V,E,\tilde{W})\) denote the privatized form of \(G\) and denote its basic reproduction number by \(\tilde{R}_{0}=\rho(\tilde{W})\). Then the error induced in \(R_{0}\) by privacy obeys the bounds_
\[\mathbb{E}\big{[}|\tilde{R}_{0}-R_{0}|\big{]}\leq\sigma\sqrt{n_{w }-\xi_{e}}\leq\sigma\sqrt{n_{w}} \tag{8}\] \[\text{Var}\big{[}|\tilde{R}_{0}-R_{0}|\big{]}\leq\sigma^{2}\cdot(n _{w}-\xi_{e})\leq\sigma^{2}n_{w}, \tag{9}\]
_where \(n_{w}\) denotes the number of non-zero entries in the weight matrix \(W\) and_
\[\xi_{e}=2\sum_{i=1}^{n}\sum_{j=i+1}^{n}\frac{\beta_{ij}\varphi( \beta_{ij})-\alpha_{ij}\varphi(\alpha_{ij})}{\Phi(\beta_{ij})-\Phi(\alpha_{ij})} \cdot\mathbf{1}_{\mathbb{R}_{>0}}(w_{ij})\] \[+\sum_{i=1}^{n}\frac{\beta_{ii}\varphi(\beta_{ii})-\alpha_{ii} \varphi(\alpha_{ii})}{\Phi(\beta_{ii})-\Phi(\alpha_{ii})}\cdot\mathbf{1}_{ \mathbb{R}_{>0}}(w_{ii}).\]
_Proof:_ See the Appendix B. \(\blacksquare\)
Recall that in Remark 2, a larger \(\epsilon\) indicates a smaller \(\sigma\), resulting in both \(\mathbb{E}[|\tilde{R}_{0}-R_{0}|]\) and \(\text{Var}[|\tilde{R}_{0}-R_{0}|]\) being closer to \(0\), which is intuitive. In addition to such qualitative analysis, one can use Theorem 2 to predict error on a graph-by-graph basis. For example, consider a complete graph \(G=(V,E,W)\) with \(|V|=15\) nodes, \(|E|=225\) edges (including self loops), and \(w_{ij}=0.25\) for all \(i,j\in[15]\). If we set \(\bar{w}_{ij}=0.3\) and \(\underline{w}_{ij}=0.2\) for \(i,j\in[15]\), and set \(\epsilon=5\) and \(k=0.01\), then we have \(\mathbb{E}[|\tilde{R}_{0}-R_{0}|]\leq 0.43\) and \(\text{Var}[|\tilde{R}_{0}-R_{0}|]\leq 0.19\), where \(R_{0}=3.75\). In this example, the absolute difference \(|\tilde{R}_{0}-R_{0}|\) is a random variable whose mean and variance are smaller than \(0.43\) and \(0.19\), respectively. Hence, if we use \(\tilde{R}_{0}\) instead of \(R_{0}\) to conduct epidemic analysis, e.g., to estimate the average number of infected individuals generated by a single infected case, the deviation that results from using \(\tilde{R}_{0}\) is not likely to be large. In general, the bounds in (8) and (9) describe the distribution of the error \(|\tilde{R}_{0}-R_{0}|\) in the worst case, which helps analysts to predict the error that results from providing a given level of privacy protection \(\epsilon\).
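Bounds like these can also be checked empirically by repeated privatization; below is a self-contained sketch for the 15-node example, with `sigma` as an illustrative stand-in for the calibrated value from (7):

```python
import numpy as np
from scipy.stats import truncnorm

n, w, lo, hi = 15, 0.25, 0.2, 0.3       # complete graph with w_ij = 0.25
sigma = 0.03                            # illustrative; the calibrated value comes from (7)
R0 = n * w                              # spectral radius of the all-w matrix: 3.75
a, b = (lo - w) / sigma, (hi - w) / sigma

errs = []
for _ in range(100):
    Wt = np.zeros((n, n))
    iu = np.triu_indices(n)             # entries on and above the main diagonal
    Wt[iu] = truncnorm.rvs(a, b, loc=w, scale=sigma, size=len(iu[0]))
    Wt = Wt + np.triu(Wt, 1).T          # mirror for symmetry
    errs.append(abs(max(abs(np.linalg.eigvals(Wt))) - R0))
print(f"mean |R0_tilde - R0| = {np.mean(errs):.3f}, variance = {np.var(errs):.4f}")
```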
An appealing feature of differential privacy is that its protections are tunable, and here the parameters \(\epsilon\), \(k\), \(\bar{w}_{ij}\), and \(\underline{w}_{ij}\) can be tuned to balance privacy and accuracy.
### _Use of \(\tilde{R}_{0}\) for Epidemic Analysis_
Theorem 1 shows that \(R_{0}\) can be used to bound the level of penetration in an epidemic spreading network, though, given the sensitive information that can be revealed by \(R_{0}\), it should be privatized before being shared. An epidemic analyst may thus only have access to the private \(R_{0}\), and the question then naturally arises as to how accurate Theorem 1 is when using a private value of \(R_{0}\). We answer this next.
**Theorem 3**.: _Fix a sensitive graph \(G=(V,E,W)\in\mathcal{G}_{n}\) and a privacy parameter \(\epsilon\). Consider also a private graph \(\tilde{G}=(V,E,\tilde{W})\) whose weight matrix \(\tilde{W}=M_{BG}(W)\) is generated by Mechanism 1. For the true reproduction number \(R_{0}=\rho(W)\), the private reproduction number \(\tilde{R}_{0}=\rho(\tilde{W})\), and any \(t\in(0,R_{0}-\xi_{p})\), we have_
\[\mathbb{P}\left[\left|\frac{1}{\tilde{R}_{0}}-\frac{1}{R_{0}}\right|<\max\{u _{1},u_{2}\}\right]\geq 1-4\exp(-v^{2}),\]
_where_
\[u_{1} =\frac{1}{R_{0}}-\frac{1}{R_{0}+t+\xi_{p}},\ u_{2}=\frac{1}{R_{0 }-t-\xi_{p}}-\frac{1}{R_{0}},\] \[v^{2} =\frac{t^{2}}{2\sigma^{2}}-4.4n\] \[\xi_{p} =\sigma\cdot\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\frac{\varphi \left(\alpha_{ij}\right)-\varphi\left(\beta_{ij}\right)}{\Phi\left(\beta_{ij} \right)-\Phi\left(\alpha_{ij}\right)}\right)^{2}\cdot\mathbf{1}_{\mathbb{R}_{ >0}}(w_{ij})},\]
_where the parameter \(\sigma\) is from Mechanism 1._
_Proof:_ See the Appendix C.
Recall that Theorem 1 states that \(\frac{1}{R_{0}}\) bounds the level of penetration. By using Theorem 3, we can characterize the distribution of the difference between the true upper bound on the level of penetration, \(\frac{1}{R_{0}}\), and the private upper bound on the level of penetration, \(\frac{1}{\tilde{R}_{0}}\). Hence, the result in Theorem 3 demonstrates the accuracy of Mechanism 1 when privatizing graph weights.
For example, consider a complete graph \(G=(V,E,W)\) with \(|V|=15\) nodes, \(|E|=225\) edges (including self loops), and \(w_{ij}=0.25\) for each \(i,j\in[15]\). Then, if we set \(\bar{w}_{ij}=0.3\) and \(\underline{w}_{ij}=0.2\) for \(i,j\in[15]\), and set privacy parameters \(\epsilon=5\) and \(k=0.01\), we have \(\mathbb{P}\left[\left|\frac{1}{\tilde{R}_{0}}-\frac{1}{R_{0}}\right|<0.054\right]\geq 0.92\), which indicates that the deviation of using the private upper bound is smaller than \(0.054\) with high probability (\(0.92\)), and thus \(\tilde{R}_{0}\) can be used without substantially harming accuracy.
## V Simulations
In this section, we present simulation results for generating \(\tilde{R}_{0}\) using Mechanism 1. We use a graph \(G=(V,E,W)\) to model networked data that estimates the number of trips between Minnesota counties [31] (shown in Figure 1) via geolocalization using smartphones [32]. The data provides an estimate of the total number of trips made by individuals between counties in Minnesota from March 2020 to December 2020. We choose a weekly time scale in an effort to average out periodic behaviors and use this average to estimate the daily flow of individuals between counties. The work in [32] constructs the asymmetric transmission matrix \(B^{\prime}\) by leveraging the daily flow between two counties, i.e., by setting \(b^{\prime}_{ij}\) as the daily traffic flow from county \(i\) to \(j\), for all \(i,j\in[87]\). In order to satisfy Assumption 1, we set the matrix \(B\) with \(b_{ij}=b_{ji}=\frac{b^{\prime}_{ij}+b^{\prime}_{ji}}{2}\) and \(b_{ii}=\frac{|\sum_{j}b^{\prime}_{ij}-\sum_{j}b^{\prime}_{ji}|}{2}\) for all \(i,j\in[87]\), which results in \(b_{ij}\in[1.172\times 10^{-6},0.621]\) for all \(i,j\in[87]\). The recovery rate for all \(i\in[87]\) is \(\gamma_{i}=\frac{1}{3}\). Thus, the next generation matrix of \(G\), namely \(W=\Gamma^{-1}B\), is symmetric, with \(|V|=87\) representing Minnesota's \(87\) counties and \(|E|=3565\) edges that represent travel connections between pairs of counties. The network's basic reproduction number is \(R_{0}=\rho(W)=3.54\).
Through this formulation of \(B\) and \(W\), the weights in \(W\) are proportional to the volume of travel between counties, and larger values of an entry \(w_{ij}\) indicate a higher volume of travel between counties \(i\) and \(j\). We classify the weights into three categories: low, medium, and high travel flows, corresponding to the weight ranges \((0,0.01]\), \((0.01,0.1]\), and \((0.1,3]\), respectively. We set the adjacency parameter to \(k=0.001\). We choose this \(k\) because over half of the entries in the weight matrix \(W\) are much smaller than \(k\), so this choice certainly fulfills our objective of protecting individuals. In fact, in more than half of the entries of \(W\), it simultaneously protects _all_ individuals whose travel is encoded in that entry. In our simulations, we generated \(100\) private graphs for each \(\epsilon\in[5,20]\) using Mechanism 1 on \(G\).
Fig. 1: Infection map of Minnesota [31].
We compute and plot the absolute differences \(|\tilde{R}_{0}-R_{0}|\) and \(\left|\frac{1}{\tilde{R}_{0}}-\frac{1}{R_{0}}\right|\) for each \(\epsilon\in[5,20]\), which are shown in Figures 2 and 3, respectively. Recall from Remark 2 that higher values of the privacy parameter \(\epsilon\) imply weaker privacy, and the simulation results confirm that weaker privacy guarantees result in smaller errors. For all values of \(\epsilon\in[5,20]\), the empirical average of \(|\tilde{R}_{0}-R_{0}|\) is small (between \(0.27\) and \(0.45\), incurring errors from \(7.6\%\) to \(12.7\%\) on average) compared to the true value \(R_{0}=3.54\). Similarly, the empirical average of \(\left|\frac{1}{\tilde{R}_{0}}-\frac{1}{R_{0}}\right|\) ranges from \(0.019\) to \(0.031\), incurring errors from \(7.0\%\) to \(11.2\%\). Additionally, the values of both \(|\tilde{R}_{0}-R_{0}|\) and \(\left|\frac{1}{\tilde{R}_{0}}-\frac{1}{R_{0}}\right|\) are concentrated around their empirical averages. These simulation results demonstrate that \(\tilde{R}_{0}\) maintains enough accuracy under privacy to enable useful analyses alongside protecting information.
## VI Conclusions
This paper presents an input perturbation mechanism that provides differential privacy to graph weights when computing the basic reproduction number of an epidemic spreading network. The proposed mechanism uses bounded noise, and the corresponding privacy-accuracy tradeoffs are quantified. We also develop a concentration bound to evaluate privacy-accuracy tradeoffs in terms of the remaining susceptible population within a community when the proposed mechanism is applied to a networked SIS or SIR model. Future work includes applying the proposed privacy mechanism to the control of epidemic spreading.
---

# Optimizing Initial State of Detector Sensors in Quantum Sensor Networks

Caitao Zhan, Himanshu Gupta, Mark Hillery

Published 2023-06-30 · http://arxiv.org/abs/2306.17401v6
###### Abstract.
In this paper, we consider a network of quantum sensors, where each sensor is a qubit detector that "fires," i.e., its state changes when an event occurs close by. The change in state due to the firing of a detector is given by a unitary operator which is the same for all sensors in the network. Such a network of detectors can be used to localize an event, using a protocol to determine the firing sensor which is presumably the one closest to the event. The determination of the firing sensor can be posed as a _Quantum State Discrimination_ problem which incurs a probability of error depending on the initial state and the measurement operator used.
In this paper, we address the problem of determining the optimal initial global state of a network of detectors that incur a minimum probability of error in determining the firing sensor. For this problem, we derive necessary and sufficient conditions for the existence of an initial state that allows for perfect discrimination, i.e., zero probability of error. Using insights from this result, we derive a conjectured optimal solution for the initial state, provide a pathway to prove the conjecture, and validate the conjecture empirically using multiple search heuristics that seem to perform near-optimally.
Quantum Sensor Network, Quantum State Discrimination, Initial State Optimization
## 1. Introduction
Quantum sensors, being strongly sensitive to external disturbances, are able to measure various physical phenomena with extreme sensitivity. These quantum sensors interact with the environment and have the environment phenomenon or parameters encoded in their state, which can then be measured. Estimation of a single continuous parameter by quantum sensors can be further enhanced by using a group of entangled sensors, improving the standard deviation of measurement by a factor of \(1/\sqrt{N}\) for \(N\) sensors (Hirsch and Kwiecinski, 2018). In general, a network of sensors can facilitate spatially distributed sensing; e.g., a fixed transmitter's signal observed from different locations facilitates localization via triangulation. Thus, as in the case of classical wireless sensor networks, it is natural to deploy a network of quantum sensors to detect/measure a spatial phenomenon, and there has been recent interest in developing protocols for such quantum sensor networks (QSNs) (Kang et al., 2018; Ghahramani et al., 2018; Zhan et al., 2018; Zhan et al., 2018).
**Initial State Optimization.** Quantum sensing protocols typically involve four steps (Kang et al., 2018): _initialization_ of the quantum sensor to a desired initial state, transformation of the sensor's state over a _sensing_ period, _processing_, and a _measurement_. Quantum sensor networks would have similar protocols. In general, the initial state of the QSN can have a strong bearing on the overall performance (i.e., accuracy) of the sensing protocol. E.g., in certain settings, an entangled initial state is known to offer better estimation than a non-entangled state (Hirsch and Kwiecinski, 2018; Zhan et al., 2018). If entanglement helps, then different entangled states may yield different estimation accuracy. Thus, in general, determining the initial state that offers optimal estimation accuracy is important to designing an optimal sensing protocol. The focus of our work is to address this problem of determining an optimal initial state. Since an optimal initial state depends on the sensing and measurement protocol specifics, we consider a specific and concrete setting in this paper involving detectors. To the best of our knowledge, ours is the only work (including our recent work (Zhan et al., 2018)) to address the problem of determining provably optimal initial states in quantum sensor networks.
**QSNs with Detector Sensors.** We consider a network of quantum "detector" sensors. Here, a detector sensor is a qubit sensor whose state changes to a unique final state when an event happens. More formally, a sensor with initial state \(\ket{\psi}\) gets transformed to \(U\ket{\psi}\) when an event happens, where \(U\) is a particular unitary matrix that signifies the impact of an event on the sensor. Such detector sensors can be very useful in _detecting_ an event, rather than _measuring_ a continuous parameter representing an environment phenomenon. More generally, we consider a network of quantum detector sensors wherein, when an event happens, exactly one of the sensors fires--i.e., changes its state as above. In general, a network of such detector sensors can be deployed to _localize_ an event -- by determining the firing sensor and hence the location closest to the event. Our paper addresses the problem of optimizing the initial global state of such QSNs to minimize the probability of error in determining the firing sensor.
**Contributions.** In the above context, we make the following contributions. We formulate the problem of initial state optimization in detector quantum sensor networks. We derive necessary and sufficient conditions for the existence of an initial state that can detect the firing sensor with perfect accuracy, i.e., with zero probability of error. Using the insights from this result, we derive a conjectured optimal solution for the problem, and provide a pathway to proving the conjecture. We also develop multiple search-based heuristics for the problem, and empirically validate the conjectured solution through extensive simulations. Finally, we extend our results to the unambiguous discrimination measurement scheme.
## 2. ISO Problem and Related Work
**Setting.** See Fig. 1. Consider \(n\) quantum sensors deployed in a geographical area, forming a quantum sensor network. Each sensor stores a qubit whose state may potentially change due to an event in the environment. Let \(\ket{\psi}\) denote the initial (possibly, entangled) state of the \(n\) sensors. Let \(U\) be a unitary operator that represents the impact of an event over a qubit in a sensor. Let the two eigenvectors of \(U\) be \(\{u_{+},u_{-}\}\), and without loss of generality, let the corresponding eigenvalues be \(\{e^{+i\theta},e^{-i\theta}\}\) where \(\theta\in(0,180)\) degrees; thus, \(U\ket{u_{\pm}}=e^{\pm i\theta}\ket{u_{\pm}}\). Let \(\ket{\phi_{i}}=(I^{\otimes i}\otimes U\otimes I^{\otimes(n-i-1)})\ket{\psi}\), where \(U\) appears in the \(i^{th}\) position and \(i\in\{0,\cdots,n-1\}\), represent the state of the system after the event affects the \(i^{th}\) sensor. _We assume that events in our setting affect exactly one sensor with uniform probability._ We refer to the \(n\) possible resulting states \(\{\ket{\phi_{i}}\}\) as the **final states**; these final states have an equal prior probability of occurrence on an event.
Figure 1. ISO Problem. Given \(n\) deployed quantum sensors, an event changes the state of one of the sensors (\(i^{th}\) sensor in the figure) by a unitary operator \(U\). Quantum state discrimination with the optimal measurement is used to determine the firing sensor. The ISO problem is to determine the initial state (possibly, entangled) that minimizes the probability of error in discriminating the potential final states. The dashed lines connecting the sensors signify a potentially entangled global state.
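To ground the notation, the sketch below constructs the final states \(\{\left|\phi_{i}\right\rangle\}\) for a small example. It writes \(U\) directly in its eigenbasis, ordered as \((u_{-},u_{+})\), which is an illustrative choice since any qubit unitary with eigenvalues \(e^{\pm i\theta}\) is equivalent up to a change of basis; the initial state is likewise arbitrary:

```python
import numpy as np
from functools import reduce

n, theta = 3, np.radians(90)            # three sensors, illustrative theta
U = np.diag([np.exp(-1j * theta), np.exp(1j * theta)])  # U|u+-> = e^{+-i theta}|u+->
I2 = np.eye(2)

psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # arbitrary illustrative |psi>

def final_state(i):
    """|phi_i> = (I x ... x U in slot i x ... x I) |psi>."""
    return reduce(np.kron, [U if j == i else I2 for j in range(n)]) @ psi

phis = [final_state(i) for i in range(n)]
print(abs(np.vdot(phis[0], phis[1])))   # overlap |<phi_0|phi_1>| for this psi
```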
**Objective Function \(P(\left|\psi\right\rangle,U)\).** When an event results in the system going to a particular final state \(\left|\phi_{i}\right\rangle\), we want to determine the sensor (i.e., the index \(i\)) that is impacted by the event by performing a global measurement of the system. For a given setting (i.e., \(\left|\psi\right\rangle\) and \(U\)), let \(M(\left|\psi\right\rangle,U)\) be the optimal _positive operator-valued measure_ (POVM) measurement for discriminating the final states \(\{\left|\phi_{i}\right\rangle\}\), i.e., the measurement that incurs the minimum probability of error in discriminating the final states \(\{\left|\phi_{i}\right\rangle\}\) and thus determining the index \(i\) given an unknown final state. Let \(P(\left|\psi\right\rangle,U)\) be the (minimum) probability of error incurred by \(M(\left|\psi\right\rangle,U)\) in discriminating the final states \(\{\left|\phi_{i}\right\rangle\}\).
**ISO Problem Formulation.** Given a number of sensors \(n\) and a unitary operator \(U\), the ISO problem is to determine the initial state \(\left|\psi\right\rangle\) that minimizes \(P(\left|\psi\right\rangle,U)\). In other words, we wish to determine the initial state \(\left|\psi\right\rangle\) that yields the lowest probability of error in discriminating the final states when an optimal POVM is used for discrimination.
_For clarity of presentation, we consider only the minimum error measurement scheme until the last section (§8), where we extend our results to the unambiguous discrimination measurement scheme._
**Paper Organization.** The rest of the paper is organized as follows. We end this section with a discussion on related work. In the following section (§3), we establish a necessary and sufficient condition for _three_ final states to be orthogonal--and hence, the existence of an initial state such that \(P(\left|\psi\right\rangle,U)=0\). We generalize the result to an _arbitrary_ number of sensors in §4, and give an optimal solution for the ISO problem when the orthogonality condition is satisfied. In §5, we use the insights from §4 to derive a conjectured optimal solution for an arbitrary \(U\) and number of sensors; in the section, we also provide a pathway to proving the conjecture. In the following sections, we develop search-based heuristics for the problem (§6) and use these heuristics to empirically validate our conjectured solution through simulations (§7). Finally, we extend our results to the unambiguous discrimination measurement scheme in §8.
### Related Work
**Continuous Parameter Estimation using Quantum Sensors.** In prior works [(12; 27)], protocols have been studied for estimation of a single parameter using multiple sensors [(15)], multiple parameters [(23; 25)], a single linear function over parameters [(12; 25; 28)], and multiple linear functions [(1; 27)]. These and many other works [(12; 14; 25)] have also investigated whether the entanglement of sensors offers any advantage in improving the estimation accuracy. Some of the above works have optimized the measurement protocols (e.g, [(12; 26)]) in the addressed settings, but none of the above works have addressed the problem of initial state optimization. To the best of our knowledge, all prior works have modeled the sensed parameters in the continuous domain, e.g., these parameters could be the strength of a magnetic field at different locations. In contrast, in some sense, our work focuses on the estimation of discrete-valued parameters.
**Optimal State Discrimination.** There has been a lot of work on quantum state discrimination [(2; 3; 5; 6)] - wherein the goal is to determine the optimal measurement protocol to minimize the probability of error in discriminating a set of given states. A closed-form expression is known only for two states and very specialized cases for a larger number of states. However, numerical techniques exist (e.g., SDP based [(11)]). Our work differs in the following ways: (i) The set of final states which we want to discriminate is very specialized. (ii) Our goal is to optimize the initial state - that minimizes the probability of error using an optimal POVM (in some sense, we implicitly assume that an optimal POVM for a given set of final states can be derived).
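For concreteness, the standard SDP for minimum-error discrimination (maximize \(\sum_{i}p_{i}\operatorname{Tr}(\Pi_{i}\rho_{i})\) subject to \(\Pi_{i}\succeq 0\) and \(\sum_{i}\Pi_{i}=I\)) takes only a few lines with `cvxpy`; this is a generic sketch of the technique cited above, not the specific formulation of [11]:

```python
import numpy as np
import cvxpy as cp

def min_error_prob(states, priors):
    """Minimum probability of error for discriminating the density matrices
    `states` with prior probabilities `priors`, via the discrimination SDP."""
    d = states[0].shape[0]
    povm = [cp.Variable((d, d), hermitian=True) for _ in states]
    constraints = [P >> 0 for P in povm] + [sum(povm) == np.eye(d)]
    p_success = sum(p * cp.real(cp.trace(P @ rho))
                    for p, P, rho in zip(priors, povm, states))
    cp.Problem(cp.Maximize(p_success), constraints).solve()
    return 1 - p_success.value

# Two non-orthogonal pure states with equal priors
v0, v1 = np.array([1, 0]), np.array([1, 1]) / np.sqrt(2)
rhos = [np.outer(v, v.conj()) for v in (v0, v1)]
print(min_error_prob(rhos, [0.5, 0.5]))     # ~0.146, the Helstrom bound
```

Applied to the final states \(\{\left|\phi_{i}\right\rangle\}\) (as density matrices with priors \(1/n\)), this returns \(P(\left|\psi\right\rangle,U)\) for a candidate initial state.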
**Initial State Optimization.** Recent works have used variational circuits to seek an optimized probe state for a set of sensors, in the context of classical supervised learning [(30)] and (continuous) parameter estimation [(20)] under noise. In contrast, we seek provably optimal initial state solutions. To the best of our knowledge, the only other work that has investigated the initial state optimization problem is our recent preliminary work [(16)] where we address the same
problem as in this paper. In (Hilber, 2017), we give an optimal solution for the case of \(n=2\) sensors, and, for the general case of \(n\) sensors, we derive closed-form expressions for the probability of error of a heuristic solution for a restricted class of initial states, and investigate the benefit of entanglement in the initial state.
## 3. Orthogonality of final states for three sensors
Note that an optimal solution for two sensors (i.e., \(n=2\)) is known and is based on geometric ideas (see (Hilber, 2017) and §5); however, the solution for two sensors doesn't generalize to higher \(n\). For \(n\geq 3\), instead of directly determining the optimal solution, we first focus on determining the conditions (on \(U\)) under which the optimal initial state yields orthogonal final states. We start with the specific case of \(n=3\), as this gives us sufficient insight to generalize the results to an arbitrary number of sensors. Determining the conditions for orthogonality also helps us in conjecturing the optimal initial state for general settings.
The basic idea for deriving the condition on \(U\) that yields orthogonal final states (i.e., the below theorem) is to represent the final states on an orthonormal basis based on \(U\)'s eigenvalues and eigenvectors; this allows us to derive expressions for the pairwise inner products of the final states, and equating these products to zero yields the desired conditions. We now state the main theorem and proof for three sensors.
Theorem 3.1.: _Consider the \(\mathsf{ISO}\) problem, with the unitary operator \(U\), initial state \(\left|\psi\right\rangle\), and final states \(\{\left|\phi_{i}\right\rangle\}\) as defined therein. Recall that the eigenvalues of \(U\) are \(\{e^{+i\theta},e^{-i\theta}\}\). When the number of sensors \(n\) is three, we claim the following._
_For any \(\theta\in\left[60,120\right]\) degrees, there exists a \(\left|\psi\right\rangle\) such that \(\left|\phi_{0}\right\rangle,\left|\phi_{1}\right\rangle,\left|\phi_{2}\right\rangle\) are mutually orthogonal. Also, the converse is true, i.e., for \(\theta\in\left(0,60\right)\cup\left(120,180\right)\), there is no initial state that makes \(\left|\phi_{0}\right\rangle,\left|\phi_{1}\right\rangle,\left|\phi_{2}\right\rangle\) mutually orthogonal._
_Proof:_ Let us first start analyzing the inner product of \(\left|\phi_{0}\right\rangle\) and \(\left|\phi_{1}\right\rangle\). Let \(z_{0}=\left\langle\phi_{0}|\phi_{1}\right\rangle\). We see that:
\[z_{0} =\left\langle\psi\right|\left(U^{\dagger}\otimes I\otimes I)(I \otimes U\otimes I)\left|\psi\right\rangle\] \[=\left\langle\psi\right|\left(U^{\dagger}\otimes U\otimes I \right)\left|\psi\right\rangle\]
Since \(U\) is unitary, its eigenvectors \(u_{-}\) and \(u_{+}\) are orthogonal. It is easy to confirm that the following eight eigenvectors of the middle part (\(U^{\dagger}\otimes U\otimes I\)) form an _orthonormal basis_: \(\{\left|u_{-}u_{-}u_{-}\right\rangle,\left|u_{-}u_{-}u_{+}\right\rangle,\left|u_{-}u_{+}u_{-}\right\rangle,\left|u_{-}u_{+}u_{+}\right\rangle,\left|u_{+}u_{-}u_{-}\right\rangle,\left|u_{+}u_{-}u_{+}\right\rangle,\left|u_{+}u_{+}u_{-}\right\rangle,\left|u_{+}u_{+}u_{+}\right\rangle\}\). We denote these eight eigenvectors as \(\{\left|j\right\rangle\mid j=0,\cdots,7\}\), with the \(\left|j\right\rangle\) eigenvector "mimicking" the number \(j\)'s binary representation when \(u_{-}\) and \(u_{+}\) are looked upon as \(0\) and \(1\) respectively (so, \(\left|3\right\rangle\) is \(\left|u_{-}u_{+}u_{+}\right\rangle\)).
We can write the initial state \(\left|\psi\right\rangle\) in the \(\{\left|j\right\rangle\}\) basis as
\[\left|\psi\right\rangle=\sum_{j}\psi_{j}\left|j\right\rangle.\]
Thus, we get
\[z_{0} =\left\langle\psi\right|\left(U^{\dagger}\otimes U\otimes I \right)\sum_{j}\psi_{j}\left|j\right\rangle\] \[=\sum_{j}\left|\psi_{j}\right|^{2}e_{j}\]
where \(\{e_{0},e_{1},\ldots,e_{7}\}\) are the eigenvalues corresponding to the eight eigenvectors \(\{\left|j\right\rangle\}\). For \(z_{0}\), the corresponding eigenvalues are \(1,1,e^{+2i\theta},e^{+2i\theta},e^{-2i\theta},e^{-2i\theta},1,1\). Thus, putting in the specific eigenvalues in the above equation, we get
\[z_{0}=(|\psi_{2}|^{2}+|\psi_{3}|^{2})e^{+2i\theta}+(|\psi_{4}|^{2}+|\psi_{5}|^{ 2})e^{-2i\theta}+(|\psi_{0}|^{2}+|\psi_{1}|^{2}+|\psi_{6}|^{2}+|\psi_{7}|^{2}) \tag{1}\]
Similarly, for \(z_{1}=\langle\phi_{1}|\phi_{2}\rangle=\langle\psi|\left(I\otimes U^{\dagger}\otimes U\right)|\psi\rangle\), we get the below. Note that the _corresponding_ eigenvalues for \(z_{1}\) change to: \(1,e^{+2i\theta},e^{-2i\theta},1,1,e^{+2i\theta},e^{-2i\theta},1\); see Observation 1 in §4. Thus, we get:
\[z_{1}=(|\psi_{1}|^{2}+|\psi_{5}|^{2})e^{+2i\theta}+(|\psi_{2}|^{2}+|\psi_{6}|^ {2})e^{-2i\theta}+(|\psi_{0}|^{2}+|\psi_{3}|^{2}+|\psi_{4}|^{2}+|\psi_{7}|^{2}) \tag{2}\]
Similarly, for \(z_{2}=\langle\phi_{0}|\phi_{2}\rangle=\langle\psi|\left(U^{\dagger}\otimes I \otimes U\right)|\psi\rangle\), we get:
\[z_{2}=(|\psi_{1}|^{2}+|\psi_{3}|^{2})e^{+2i\theta}+(|\psi_{4}|^{2}+|\psi_{6}|^ {2})e^{-2i\theta}+(|\psi_{0}|^{2}+|\psi_{2}|^{2}+|\psi_{5}|^{2}+|\psi_{7}|^{2}) \tag{3}\]
Now, for \(\left|\phi_{0}\right\rangle,\left|\phi_{1}\right\rangle,\left|\phi_{2}\right\rangle\) to be mutually orthogonal, we need \(z_{0}=z_{1}=z_{2}=0\). This yields the following seven Equations 4-10.
Imaginary Equations. For the imaginary parts of \(z_{0},z_{1},z_{2}\) to be zero, we need the following to be true. We refer to these equations as the **Imaginary** equations.
\[|\psi_{2}|^{2}+|\psi_{3}|^{2}=|\psi_{4}|^{2}+|\psi_{5}|^{2} \tag{4}\] \[|\psi_{1}|^{2}+|\psi_{5}|^{2}=|\psi_{2}|^{2}+|\psi_{6}|^{2}\] (5) \[|\psi_{1}|^{2}+|\psi_{3}|^{2}=|\psi_{4}|^{2}+|\psi_{6}|^{2} \tag{6}\]
Real Equations. For the real parts of \(z_{0},z_{1},z_{2}\) to be zero, we need the following to be true. We refer to these equations as the **Real** equations.
\[-(|\psi_{2}|^{2}+|\psi_{3}|^{2}+|\psi_{4}|^{2}+|\psi_{5}|^{2}) \cos(2\theta) =|\psi_{0}|^{2}+|\psi_{1}|^{2}+|\psi_{6}|^{2}+|\psi_{7}|^{2} \tag{7}\] \[-(|\psi_{1}|^{2}+|\psi_{5}|^{2}+|\psi_{2}|^{2}+|\psi_{6}|^{2}) \cos(2\theta) =|\psi_{0}|^{2}+|\psi_{3}|^{2}+|\psi_{4}|^{2}+|\psi_{7}|^{2}\] (8) \[-(|\psi_{1}|^{2}+|\psi_{3}|^{2}+|\psi_{4}|^{2}+|\psi_{6}|^{2}) \cos(2\theta) =|\psi_{0}|^{2}+|\psi_{2}|^{2}+|\psi_{5}|^{2}+|\psi_{7}|^{2} \tag{9}\]
Above, the terms with \(cos(2\theta)\) are on the left-hand-side (LHS) and the remaining terms on the right-hand-side (RHS). Finally, as \(\psi_{j}\) are coefficients of \(|\psi\rangle\), we also have
\[\sum_{j}|\psi_{j}|^{2}=1 \tag{10}\]
**Existence of \(|\psi\rangle\) when \(\theta\in[60,120]\).** Let us assume \(|\psi_{0}|^{2}=|\psi_{7}|^{2}=y\) and \(|\psi_{i}|^{2}=x\) for \(1\leq i\leq 6\). These satisfy Equations 4-6, and the Equations 7-9 yield:
\[-4x\cos(2\theta) =2x+2y\] \[-(2\cos(2\theta)+1)x =y\]
The above has a valid solution (i.e., \(x,y\geq 0\), and \(2y+6x=1\) from Eqn. 10) when \(\cos(2\theta)\leq-\frac{1}{2}\) i.e., when \(\theta\in[60,120]\).
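This construction is easy to validate numerically: build \(\left|\psi\right\rangle\) with these coefficient magnitudes in the \(\{\left|j\right\rangle\}\) eigenbasis (where \(U\) is diagonal, ordered as \(u_{-}\mapsto 0\), \(u_{+}\mapsto 1\)) and check that all pairwise inner products of the final states vanish. A sketch:

```python
import numpy as np
from functools import reduce

theta = np.radians(75)                  # any theta in [60, 120] degrees works
c2 = np.cos(2 * theta)
x = 1 / (4 - 4 * c2)                    # from 2y + 6x = 1 with y = -(2*c2 + 1)*x
y = -(2 * c2 + 1) * x

# |psi_0|^2 = |psi_7|^2 = y and |psi_j|^2 = x for j = 1..6, in the {|j>} basis
psi = np.sqrt(np.array([y, x, x, x, x, x, x, y], dtype=complex))

U = np.diag([np.exp(-1j * theta), np.exp(1j * theta)])  # diagonal in (u-, u+) order
I2 = np.eye(2)
phis = [reduce(np.kron, [U if j == i else I2 for j in range(3)]) @ psi
        for i in range(3)]
for a in range(3):
    for b in range(a + 1, 3):
        print(a, b, abs(np.vdot(phis[a], phis[b])))     # all ~1e-16
```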
**No existence of \(|\psi\rangle\) when \(\theta\in(0,60)\cup(120,180)\).** Let \(a=|\psi_{0}|^{2}+|\psi_{7}|^{2}\). Then, by using Equation 4 in Equation 7 and so on, we get the following:
\[-2(|\psi_{4}|^{2}+|\psi_{5}|^{2})\cos(2\theta) =a+|\psi_{1}|^{2}+|\psi_{6}|^{2}\] \[-2(|\psi_{2}|^{2}+|\psi_{6}|^{2})\cos(2\theta) =a+|\psi_{3}|^{2}+|\psi_{4}|^{2}\] \[-2(|\psi_{1}|^{2}+|\psi_{3}|^{2})\cos(2\theta) =a+|\psi_{2}|^{2}+|\psi_{5}|^{2}\]
Adding up the above equations and rearranging, we get:
\[(-2\cos(2\theta)-1)\sum_{j=1}^{6}|\psi_{j}|^{2}=3a\]
Thus, we need \((-2\cos(2\theta)-1)\geq 0\), as \(a\geq 0\), i.e., we need \(\cos(2\theta)\leq-\frac{1}{2}\). Thus, for \(\theta\in(0,60)\cup(120,180)\), there is no solution to the above equations. _Note that we have not used any symmetry argument here._
## 4. Orthogonality of Final States for \(n\) Sensors
In this section, we generalize the result in the previous section to an arbitrary number \(n\geq 3\) of sensors.1
Footnote 1: For two sensors, the single equation corresponding to the Equations 7–9 can be made equal to zero on both sides with \(\theta=45\) degrees and zeroing all coefficients on the RHS (which is possible due to lack of other equations).
Theorem 2. _Consider the \(\mathsf{ISO}\) problem, with the unitary operator \(U\), initial state \(|\psi\rangle\), and final states \(\{|\phi_{i}\rangle\}\) as defined therein. Recall that the eigenvalues of \(U\) are \(\{e^{+i\theta},e^{-i\theta}\}\). Let \(n\geq 3\) be the number of sensors. We claim the following._
_For any \(\theta\in[T,180-T]\) degrees, there exists a \(|\psi\rangle\) such that the set of \(n\) states \(\{|\phi_{i}\rangle\}\) are mutually orthogonal, where \(T\) is given by:_
\[T=\frac{1}{2}\arccos\left(-\frac{\lceil\frac{n}{2}\rceil-1}{\lceil\frac{n}{2} \rceil}\right)\]
_Note that \(T\in(45,90)\) degrees. In particular, the values of \(T\) for increasing \(n\) are: 60 (\(n=3,4\)), 65.9 (\(n=5,6\)), 69.3 (\(n=7,8\)), 71.6 (\(n=9,10\))._
_We claim that the converse of the above is also true, i.e., for \(\theta\in(0,T)\cup(180-T,180)\), there is no initial state \(|\psi\rangle\) that makes \(\{|\phi_{i}\rangle\}\) mutually orthogonal._
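The threshold \(T\) is easy to tabulate; the following small sketch (ours) reproduces the values listed in the theorem:

```
import numpy as np

def T(n):
    # Threshold angle of Theorem 2, in degrees, for n >= 3 sensors
    m = np.ceil(n/2)
    return np.degrees(np.arccos(-(m - 1)/m)) / 2

for n in range(3, 11):
    print(n, round(T(n), 1))
# 3 60.0, 4 60.0, 5 65.9, 6 65.9, 7 69.3, 8 69.3, 9 71.6, 10 71.6
```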
Before we prove the theorem, we define the partitioning of coefficients and state an observation.
**Partitioning the Coefficient-Squares \(\{|\psi_{j}|^{2}\}\) into "Symmetric" Sets.** Note that just renumbering the sensors does not change the optimization problem. Based on this intuition, we can group the eigenvectors \(|j\rangle\) (and the corresponding coefficients \(\psi_{j}\)) into equivalence classes. Let \(n\) be the number of sensors. Since only the coefficient-squares \(\{|\psi_{j}|^{2}\}\) appear in the expressions for the pairwise inner-products of the final states, we partition the coefficient-squares rather than the coefficients \(\{\psi_{j}\}\) themselves--as only the coefficient-squares are relevant to our proposed solution and discussion. We partition the set of \(2^{n}\) coefficient-squares into \(n+1\) symmetric sets \(\{S_{k}\}\) as follows:
\[S_{k}=\{|\psi_{j}|^{2}\ |\ |j\rangle\ \text{ contains exactly $k$ factors of }u_{+}\}\ \ \forall\ 0\leq k\leq n\]
For each \(0\leq k\leq n\), let \(R_{k}\) be the number of coefficient-squares from \(S_{k}\) in the RHS of a Real equation, and \(L_{k}\) be the number of coefficient-squares from \(S_{k}\) in the LHS of a Real equation. (Note that, by Observation 1 below, for any \(k\), the number of coefficient-squares of \(S_{k}\) that are in the RHS (LHS) is the same for all Real equations.) For the case of \(n=3\), we have \(S_{0}=\{|\psi_{0}|^{2}\}\), \(S_{1}=\{|\psi_{1}|^{2},|\psi_{2}|^{2},|\psi_{4}|^{2}\}\), \(S_{2}=\{|\psi_{3}|^{2},|\psi_{5}|^{2},|\psi_{6}|^{2}\}\), and \(S_{3}=\{|\psi_{7}|^{2}\}\). Also, we have \(R_{0}=R_{1}=R_{2}=R_{3}=1\), while \(L_{0}=L_{3}=0\) and \(L_{1}=L_{2}=2\). We will use the above terms to prove the theorem.
**Observation 1**.: _For a_ Real _equation \(E\) corresponding to the inner-product of final states \(\phi_{i}\) and \(\phi_{j}\) (for \(0\leq i,j\leq n-1\)), a coefficient-square \(|\psi_{r}|^{2}\) appears in the RHS of the equation \(E\) iff the bit-representation of the number \(r\) has either both 0's or both 1's at the \(i^{th}\) and \(j^{th}\) most-significant bits._
**Lemma 1**.: _For \(n\geq 3\),_
\[min_{1\leq k\leq(n-1)}\frac{R_{k}}{L_{k}}=\frac{\lceil\frac{n}{2}\rceil-1}{ \lceil\frac{n}{2}\rceil}.\]
_Thus, for the given \(T\) in Theorem 2, \(L_{k}\cos(2T)+R_{k}=0\) for some \(k\), and \(R_{k}+\cos(2T)L_{k}\geq 0\) for all \(k\)._
_Proof:_ For \(n\geq 3\) and \(0\leq k\leq n\), from Observation 1 we get that:
\[R_{k}=\binom{n-2}{k-2}+\binom{n-2}{k}\]
\[L_{k}=2\binom{n-2}{k-1}.\]
Above, we assume \(\binom{x}{y}=0\) if \(y>x\) or \(y<0\). Now, a simple analysis shows that:
\[\left(min_{2\leq k\leq(n-2)}\frac{\binom{n-2}{k-2}+\binom{n-2}{k}}{2\binom{n- 2}{k-1}}\right)=\frac{\lceil\frac{n}{2}\rceil-1}{\lceil\frac{n}{2}\rceil}\]
Since, for \(n\geq 3\), \(R_{1}=R_{n-1}=n-2\) and \(L_{1}=L_{n-1}=2\), we get the lemma.
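These counts and the minimum ratio can be checked mechanically; a small sketch (ours), with the boundary convention \(\binom{x}{y}=0\) for \(y>x\) or \(y<0\) made explicit:

```
from math import comb, ceil

def R(n, k):
    # Coefficient-squares from S_k on the RHS of a Real equation
    return (comb(n-2, k-2) if k >= 2 else 0) + (comb(n-2, k) if k <= n-2 else 0)

def L(n, k):
    # ... and on the LHS
    return 2*comb(n-2, k-1) if 1 <= k <= n-1 else 0

for n in (3, 5, 8):
    m = min(R(n, k)/L(n, k) for k in range(1, n))
    print(n, m, (ceil(n/2) - 1)/ceil(n/2))   # the last two values agree
```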
**Observation 2**.: _Let \(\sum_{i}x_{i}=c\), for a set of variables \(x_{i}\geq 0\) and a constant \(c>0\). The equation \(\sum_{i}c_{i}x_{i}=0\), where \(c_{i}\) are some constants, has a solution if and only if (i) at least one of the constants is positive and one of the constants is negative, or (ii) one of the constants is zero._
### Proof of Theorem 2
_Proof:_ If \(\theta\in[T,180-T]\). Let all the coefficient-squares in each \(S_{k}\) be equal to a common value \(x_{k}\), for each \(k\). Then, each Imaginary equation becomes:
\[\sum_{k=0}^{n}(L_{k}/2)x_{k}=\sum_{k=0}^{n}(L_{k}/2)x_{k} \tag{11}\]
Each Real equation becomes:
\[-\cos(2\theta)\sum_{k=0}^{n}L_{k}x_{k}=\sum_{k=0}^{n}R_{k}x_{k} \tag{12}\]
\[\sum_{k=0}^{n}(R_{k}+\cos(2\theta)L_{k})x_{k}=0 \tag{13}\]
By Observation 2, the above equation (and thus, all Real equations) can be made true by appropriate choices of \(x_{k}\) since
1. \(R_{k}+\cos(2\theta)L_{k}\) is positive for \(k=0\) as \(L_{0}=0\) and \(R_{0}=1\).
2. \(R_{k}+\cos(2\theta)L_{k}\) is negative or zero for some \(k\) by Lemma 1 when \(\theta\in[T,180-T]\).
If \(\theta\in(0,T)\). Adding all the Real equations gives the following. Below, \(f(j)=k\) such that \(|\psi_{j}|^{2}\in S_{k}\).
\[-\cos(2\theta)\sum_{j=0}^{2^{n}-1}\binom{n}{2}\frac{L_{f(j)}}{|S_{f(j)}|}|\psi_{j}|^{2}=\sum_{j=0}^{2^{n}-1}\binom{n}{2}\frac{R_{f(j)}}{|S_{f(j)}|}|\psi_{j}|^{2}\]
The above gives:
\[\sum_{j=0}^{2^{n}-1}\frac{1}{|S_{f(j)}|}(R_{f(j)}+\cos(2\theta)L_{f(j)})|\psi_{j}|^{2}=0\]
The above equation doesn't have a solution, as \((R_{k}+\cos(2\theta)L_{k})>0\) for all \(k\) when \(\theta<T\), by Lemma 1. The case \(\theta\in(180-T,180)\) follows by the same argument, since \(\cos(2\theta)=\cos(2(180-\theta))\) and \((180-\theta)\in(0,T)\).
**Optimal Initial State under Theorem 2's Condition.** Based on the above theorem, we can derive the optimal initial state under the condition of Theorem 2; the optimal initial state yields mutually-orthogonal final states.
Corollary 1.: _Consider the ISO problem, with the unitary operator \(U\), initial state \(|\psi\rangle\), and final states \(\{|\phi_{i}\rangle\}\) as defined therein. Recall that the eigenvalues of \(U\) are \(\{e^{+i\theta},e^{-i\theta}\}\). Let \(n\geq 3\) be the number of sensors. When \(\theta\in[T,180-T]\) degrees, where \(T\) is defined in Theorem 2, an optimal initial state \(|\psi\rangle\) that yields \(n\) mutually orthogonal final states \(\{|\phi_{i}\rangle\}\) is given as follows.2_
Footnote 2: We note that there are many optimal solutions.
_Let \(S_{l}\) be the partition that minimizes the ratio \(R_{l}/L_{l}\). It follows from Lemma 1's proof (we omit the details) that \(l=\lfloor\frac{n}{2}\rfloor\), and \(R_{l},L_{l}\), and \(S_{l}\) are given by:_
\[L_{l} =|S_{l}|\times\frac{\lceil\frac{n}{2}\rceil}{2\lceil\frac{n}{2} \rceil-1}\] \[R_{l} =|S_{l}|\times\frac{\lceil\frac{n}{2}\rceil-1}{2\lceil\frac{n}{2 }\rceil-1}\] \[|S_{l}| =\binom{n}{\lfloor\frac{n}{2}\rfloor}\]
_Then, the coefficients of an optimal initial state \(|\psi\rangle\), when \(\theta\in[T,180-T]\) degrees with \(T\) defined in Theorem 2, are such that their coefficient-squares are as follows:_
\[|\psi_{j}|^{2} =\frac{1}{|S_{l}|-\cos(2\theta)L_{l}-R_{l}} \forall\ |\psi_{j}|^{2}\in S_{l}\] \[|\psi_{j}|^{2} =\frac{-\cos(2\theta)L_{l}-R_{l}}{|S_{l}|-\cos(2\theta)L_{l}-R_{l}} \forall\ |\psi_{j}|^{2}\in S_{0}\] \[|\psi_{j}|^{2} =0 \forall\ |\psi_{j}|^{2}\notin S_{l}\cup S_{0}\]
Proof.: The proof of the above corollary follows easily from the fact that each coefficient-square of the solution is non-negative when \(\theta\in[T,180-T]\) (by Lemma 1), and that the coefficient-squares of the solution satisfy Eqn. 13 (and Eqn. 11 trivially) as well as the constraint in Eqn. 10.
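To illustrate the corollary, the sketch below (ours) builds the stated optimal state for \(n=4\) and verifies numerically that all pairwise inner-products of the final states vanish. We identify \(S_{k}\) with the basis states having \(k\) one-bits; since \(R_{k}=R_{n-k}\) and \(L_{k}=L_{n-k}\), the choice of which bit value denotes \(u_{+}\) is immaterial here.

```
import numpy as np
from math import ceil

n, theta = 4, np.deg2rad(70)            # any theta in [T, 180-T]; T = 60 for n = 4
c, m = np.cos(2*theta), ceil(n/2)
S_l = [j for j in range(2**n) if bin(j).count('1') == n//2]
L_l = len(S_l) * m/(2*m - 1)
R_l = len(S_l) * (m - 1)/(2*m - 1)

amp2 = np.zeros(2**n)
denom = len(S_l) - c*L_l - R_l
amp2[S_l] = 1/denom                     # coefficient-squares on S_l
amp2[0] = (-c*L_l - R_l)/denom          # coefficient-square on S_0
psi = np.sqrt(amp2)

U = np.diag([np.exp(1j*theta), np.exp(-1j*theta)])
finals = []
for i in range(n):
    op = np.array([[1.0 + 0j]])
    for q in range(n):
        op = np.kron(op, U if q == i else np.eye(2))
    finals.append(op @ psi)
print(max(abs(np.vdot(finals[i], finals[j]))
          for i in range(n) for j in range(i+1, n)))   # ~0
```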
## 5. Conjectured Optimal ISO Solution
**Provably Optimal Solution for Two Sensors.** The above joint-optimization problem for the case of \(2\) sensors can be solved optimally as follows. First, we note that the minimum probability of error in discriminating two states for a given initial state \(|\psi\rangle\) is given by:
\[P_{e}=\frac{1}{2}\left(1-\sqrt{1-|\langle\psi|(U\otimes U^{-1})|\psi\rangle|^{2}} \right). \tag{14}\]
Now, when the eigenvalues of \(U\) are \(\{e^{+i\theta},e^{-i\theta}\}\), as in our ISO problem, then the initial state \(|\psi\rangle\) that minimizes the above probability of error for \(0\leq\theta\leq\pi/4\) and \(3\pi/4\leq\theta\leq\pi\) can be shown to be the following entangled state:
\[|\psi\rangle=\frac{1}{\sqrt{2}}(|u_{+}\rangle|u_{-}\rangle+|u_{-}\rangle|u_{+} \rangle). \tag{15}\]
For \(\pi/4\leq\theta\leq 3\pi/4\), there exists an initial state that yields orthogonal final states. The above follows from the techniques developed to distinguish between two unitary operators [10]; we refer the reader to our recent work [16] for more details. Unfortunately, the above technique doesn't generalize to \(n\) greater than \(2\), because for larger \(n\) there is no corresponding closed-form expression for the minimum probability of error.
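As a quick check of Eqns. 14 and 15 (our own sketch): for the entangled state of Eqn. 15, \(\langle\psi|(U\otimes U^{-1})|\psi\rangle=\cos 2\theta\), so Eqn. 14 reduces to \(P_{e}=\frac{1}{2}(1-|\sin 2\theta|)\), which vanishes exactly at \(\theta=45\) degrees.

```
import numpy as np

theta = np.deg2rad(30)                     # a value in (0, 45) degrees
U = np.diag([np.exp(1j*theta), np.exp(-1j*theta)])
psi = np.zeros(4, dtype=complex)
psi[1] = psi[2] = 1/np.sqrt(2)             # (|u+ u-> + |u- u+>)/sqrt(2), Eqn. 15

overlap = psi.conj() @ np.kron(U, np.linalg.inv(U)) @ psi
P_e = 0.5*(1 - np.sqrt(1 - abs(overlap)**2))
print(abs(overlap), np.cos(2*theta))       # both ~= cos(2 theta)
print(P_e, 0.5*(1 - abs(np.sin(2*theta)))) # both ~= P_e of Eqn. 14
```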
**Conjectured Optimal Solution For \(n\) Sensors.** The main basis for our conjectured optimal solution is that an optimal initial state must satisfy the _symmetry of coefficients_ property, which is defined as follows: an initial state satisfies the _symmetry of coefficients_ property if, for each \(k\), all coefficient-squares in \(S_{k}\) have the same value. The _intuition_ behind why an optimal initial state must satisfy the _symmetry of coefficients_ property comes from the following facts:
1. The optimal initial-state solution under the condition of Theorem 2 (see Corollary 1) satisfies the symmetry of coefficients property.
2. Since sensors are homogeneous, "renumbering" the sensors doesn't change the optimization problem instance fundamentally. Thus, if \(|\psi\rangle\) is an optimal initial state, then all initial states obtained by permuting the orthonormal basis \(\{|j\rangle\}\) corresponding to a renumbering of sensors,3 must also be optimal initial states.4 Now, observe that an initial state that satisfies the symmetry of coefficients property remains unchanged under any permutation of the orthonormal basis \(\{|j\rangle\}\) corresponding to a renumbering of sensors. Footnote 3: Note that renumbering the sensors is tantamount to permuting the bits in the bit-representation of the integers \(j\) indexing the orthonormal basis \(\{|j\rangle\}\). See Theorem 3’s proof for more details. Footnote 4: Note that this fact doesn’t imply that the optimal solution must satisfy the symmetry of coefficients property.
3. Similarly, due to the homogeneity of sensors, an optimal initial state must yield "symmetric" final states--i.e., final state vectors that have the same pairwise angle between them. Now, from Observation 1, we observe that an initial state that satisfies symmetry of coefficients yields final states whose pairwise inner-product values are all the same.
Finally, it seems intuitive that this common (see #3 above) inner-product value of every pair of final states should be minimized to minimize the probability of error in discriminating the final states. Minimization of the common inner-product value, within the constraints of the problem, yields the below optimal solution conjecture.
**Conjecture 1**: _Consider the_ ISO _problem, with the unitary operator \(U\), initial state \(|\psi\rangle\), and final states \(\{|\phi_{i}\rangle\}\) as defined therein. Recall that the eigenvalues of \(U\) are \(\{e^{+i\theta},e^{-i\theta}\}\). Let \(n\geq 3\) be the number of sensors. For a given \(\theta\in(0,T]\cup[180-T,180)\) degrees, where \(T\) is from Theorem 2, the optimal initial state \(|\psi\rangle\) for the_ ISO _problem is as follows._
_Let \(S_{l}\) be the partition that minimizes \((R_{l}+\cos(2\theta)L_{l})/(R_{l}+L_{l})\), where \(R_{l}\) and \(L_{l}\) are as defined in the previous section. The coefficients of the optimal solution are such that their coefficient-squares are given by:_
\[|\psi_{j}|^{2} =1/|S_{l}| \forall\ |\psi_{j}|^{2}\in S_{l}\] \[|\psi_{j}|^{2} =0 \forall\ |\psi_{j}|^{2}\notin S_{l}\]
We note the following: (i) The above conjectured optimal solution is provably optimal for \(n=2\), with \(T=45\) degrees; see Eqn. 15 above and [26]. (ii) The above conjectured optimal solution yields orthogonal final states for \(\theta=T\). In particular, it can be easily shown that the above conjectured optimal solution is the same as the solution derived in Corollary 1 for \(\theta=T\). (iii) The proposed state in the above conjecture is a Dicke state and can be prepared deterministically by linear-depth quantum circuits with no ancilla qubits [4]. We now show that the above conjecture can be proved with the help of the following simpler conjecture.
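For concreteness, a small sketch (ours) of the conjectured state: it selects the class \(S_{l}\) by the stated minimization and places a uniform superposition on it; identifying basis states by their number of one-bits, the result is a Dicke state.

```
import numpy as np
from math import comb

def conjectured_state(n, theta_deg):
    c = np.cos(np.deg2rad(2*theta_deg))
    def R(k):
        return (comb(n-2, k-2) if k >= 2 else 0) + (comb(n-2, k) if k <= n-2 else 0)
    def L(k):
        return 2*comb(n-2, k-1) if 1 <= k <= n-1 else 0
    # The class minimizing (R_l + cos(2 theta) L_l) / (R_l + L_l)
    l = min(range(n+1), key=lambda k: (R(k) + c*L(k))/(R(k) + L(k)))
    psi = np.zeros(2**n)
    idx = [j for j in range(2**n) if bin(j).count('1') == l]
    psi[idx] = 1/np.sqrt(len(idx))          # uniform superposition over S_l
    return psi
```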
**Proving Symmetry of Coefficients.** Based on the intuition behind the above Conjecture 1, one way to prove it would be to prove symmetry of coefficients--i.e., an existence of an optimal solution wherein the coefficient-squares in any \(S_{k}\) are equal. Proving symmetry of coefficients directly seems very challenging, but we believe that the below conjecture (which implies symmetry of coefficients, as shown in Theorem 3) is likely more tractable. Also, _the below Conjecture has been verified to hold true in our empirical study (see SS7).
**Conjecture 2**.: _For a given \(U\), consider two initial states \(|\psi\rangle=\sum\limits_{j}\psi_{j}\,|j\rangle\) and \(|\psi^{\prime}\rangle=\sum\limits_{j}\psi^{\prime}_{j}\,|j\rangle\) such that (i) they have unequal coefficient-squares, i.e., for some \(j\), \(|\psi_{j}|^{2}\neq|\psi^{\prime}_{j}|^{2}\), and (ii) they have the same objective function value, i.e., \(P(|\psi\rangle\,,U)=P(|\psi^{\prime}\rangle\,,U)\). We claim that the "average" state given by_
\[\left|\psi_{\text{avg}}\right\rangle=\sum\limits_{j}\sqrt{\frac{|\psi_{j}|^{2 }+|\psi^{\prime}_{j}|^{2}}{2}}\,|j\rangle\]
_has a lower objective function value, i.e., \(P(|\psi_{\text{avg}}\rangle\,,U)<P(|\psi^{\prime}\rangle\,,U)\)._
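In code, the averaging operation is a one-liner (our sketch); the result is automatically normalized, and discarding the phases is harmless here because the pairwise inner-products of the final states, and hence the Gram matrix entering \(P()\), depend only on the coefficient-squares.

```
import numpy as np

def average_state(psi_a, psi_b):
    # Component-wise root-mean of the coefficient-squares (Conjecture 2)
    return np.sqrt((np.abs(psi_a)**2 + np.abs(psi_b)**2) / 2)
```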
We now show that the above conjecture is sufficient to prove the optimal solution Conjecture 1.
**Theorem 3**.: _Conjecture 2 implies Conjecture 1._
_Proof:_ We start by showing that Conjecture 2 implies symmetry of coefficients, and then, minimize the common pairwise inner-product values of the final states.
Conjecture 2 implies Symmetry of Coefficients. First, note that for a given initial state \(|\psi\rangle\), we can generate \((n!-1)\) other "equivalent" initial states (not necessarily all different) by just renumbering the sensors (or, in other words, permuting the basis eigenvectors). Each of these initial states should yield the same objective value \(P()\) as that of \(|\psi\rangle\), as it can be shown that they would yield essentially the same set of final states. As an example, the following two initial states are essentially equivalent (i.e., should yield the same objective value \(P()\)); here, the sensors numbered 1 and 2 have been interchanged.
\[\psi_{0}\,|0\rangle+\psi_{1}\,|1\rangle+\psi_{2}\,|2\rangle+\psi_{3}\,|3 \rangle+\psi_{4}\,|4\rangle+\psi_{5}\,|5\rangle+\psi_{6}\,|6\rangle+\psi_{7}\, |7\rangle\]
\[\psi_{0}\,|0\rangle+\psi_{2}\,|1\rangle+\psi_{1}\,|2\rangle+\psi_{3}\,|3 \rangle+\psi_{4}\,|4\rangle+\psi_{6}\,|5\rangle+\psi_{5}\,|6\rangle+\psi_{7}\, |7\rangle\]
More formally, for a given initial state \(|\psi\rangle=\sum_{j}\psi_{j}\,|j\rangle\), a permutation (renumbering of sensors) \(\pi:\{0,1,\ldots,n-1\}\mapsto\{0,1,\ldots,n-1\}\) yields an equivalent initial state given by \(|\psi^{\prime}\rangle=\sum_{j}\psi_{\Pi(j)}\,|j\rangle\) where \(\Pi:\{0,1,\ldots,2^{n}-1\}\mapsto\{0,1,\ldots,2^{n}-1\}\) is such that \(\Pi(j)=i\) where the bits in the bit-representation of \(j\) are permuted using \(\pi\) to yield \(i\). It can be shown that the sets of final states yielded by \(|\psi\rangle\) and \(|\psi^{\prime}\rangle\) are essentially the same (modulo the permutation of basis dimensions), and hence, they will yield the same probability of error and thus the same objective value \(P()\).
Now, consider an optimal initial state \(|\psi\rangle=\sum_{j}\psi_{j}\,|j\rangle\) that doesn't have symmetry of coefficients--i.e., there is a pair of coefficient-squares \(|\psi_{i}|^{2}\) and \(|\psi_{j}|^{2}\) such that they are in the same set \(S_{k}\) but are not equal. The numbers \(i\) and \(j\) have the same number of \(1\)'s and \(0\)'s in their binary representation, as \(|\psi_{i}|^{2}\) and \(|\psi_{j}|^{2}\) belong to the same set \(S_{k}\). Let \(\Pi\) be a permutation function (representing a renumbering of the \(n\) sensors) such that \(\Pi(i)=j\). Consider the initial state \(|\psi^{\prime}\rangle=\sum_{j}\psi_{\Pi(j)}\ |j\rangle\), which has the same probability of error as \(|\psi\rangle\). Now, applying Conjecture 2 on \(|\psi\rangle\) and \(|\psi^{\prime}\rangle\) yields a new initial state with a lower objective value \(P()\), which contradicts the optimality of \(|\psi\rangle\). Thus, all optimal initial-states must satisfy the symmetry of coefficients.
Maximizing the Pairwise Angle. Now, an optimal initial-state with symmetry of coefficients will yield final states that have the same pairwise inner-product values (this follows from Theorem 2's proof). Also, we see that each pairwise inner-product value is (see Eqns. 11 and 13 from §4):
\[\sum_{k=0}^{n}(R_{k}+\cos(2\theta)L_{k})x_{k} \tag{16}\]
with the constraint that
\[\sum_{k=0}^{n}(R_{k}+L_{k})x_{k}=1.\]
_When \(\theta\in(0,T]\)._ By Lemma 1, note that \((R_{k}+\cos(2\theta)L_{k})x_{k}\geq 0\) for all \(k\), for \(\theta\in(0,T]\). We show in Lemma 2 below that, for states with equal and positive pairwise inner-products, the probability of error in discriminating them using an optimal measurement increases with an increase in the common inner-product value. Thus, the optimal initial state must minimize the above inner-product value expression in Eqn. 16. Now, from Observation 3 below, the inner-product value above is minimized when the coefficient-squares in the \(S_{l}\) that minimizes \((R_{l}+\cos(2\theta)L_{l})/|S_{l}|\) are non-zero, while the coefficient-squares in all other \(S_{k}\)'s, \(k\neq l\), are zero. This proves the theorem.
_When \(\theta\in[180-T,180)\)._ Note that \(\cos(2\theta)=\cos(2(180-\theta))\), and since \((180-\theta)\in(0,T]\) for \(\theta\in[180-T,180)\), we can use the same argument as above for this case as well.
**Observation 3**: _Let \(\sum_{i}a_{i}x_{i}=1\), for a set of non-negative variables \(x_{i}\) and positive constants \(a_{i}\). The expression \(\sum_{i}c_{i}x_{i}\), where the constants \(c_{i}\) are all positive, has a minimum value of \(\min_{i}c_{i}/a_{i}\), which is achieved by setting \(x_{i}=1/a_{i}\) for the \(i\) that attains \(\min_{i}c_{i}/a_{i}\) and \(x_{j}=0\) for all \(j\neq i\). \(\Box\)_
**Minimizing Probability of Error in Discriminating "Symmetric" Final States.** We now show, using prior results, that if the pairwise inner-products (and hence, angles) of the resulting final states \(|\phi_{i}\rangle\) are equal, then the probability of error in discriminating them is minimized when the pairwise inner-product values are minimized.
**Lemma 2**: _Consider \(n\) states to be discriminated \(\phi_{0},\phi_{1},\ldots,\phi_{n-1}\) such that \(\left\langle\phi_{i}\big{|}\phi_{j}\right\rangle=x\), for all \(0\leq i,j\leq n-1\) and \(i\neq j\). The probability of error in discriminating \(\phi_{0},\phi_{1},\ldots,\phi_{n-1}\) using an optimal measurement increases with an increase in \(x\) when \(x\geq 0\)._
_Proof:_ The optimal/minimum probability of error using the optimal POVM for a set of states with equal pairwise inner products can be computed to be [13]:
\[P_{e}=1-\frac{1}{n}\left(\sqrt{1-\frac{(n-1)(1-x)}{n}}+(n-1)\sqrt{\frac{1-x}{ n}}\right)^{2}\]
Let the inner term be \(y\), such that \(P_{e}=1-(y^{2}/n)\). The derivative of \(y\) with respect to \(x\) is given by:
\[\frac{n-1}{2\sqrt{n}}\left(\frac{1}{\sqrt{nx+1-x}}-\frac{1}{\sqrt{1-x}}\right).\]
The above is negative for \(x\geq 0\). Thus, for a given number of sensors \(n\) and \(x\geq 0\), the probability of error \(P_{e}\) increases with an increase in \(x\).
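A quick numerical confirmation of this monotonicity (our sketch):

```
import numpy as np

def P_e(n, x):
    # Minimum error probability for n symmetric pure states with
    # pairwise inner-product x (the formula from [13] quoted above)
    y = np.sqrt(1 - (n-1)*(1-x)/n) + (n-1)*np.sqrt((1-x)/n)
    return 1 - y**2/n

xs = np.linspace(0, 0.99, 100)
print(np.all(np.diff(P_e(4, xs)) > 0))     # True: P_e increases with x
```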
**Summary.** In summary, we propose Conjecture 1 for the optimal solution of the ISO problem, based on the symmetry of coefficients. We also propose Conjecture 2, which seems simpler to prove and provably implies Conjecture 1. We further provide empirical validation of these conjectures using several search heuristics in the following sections.
## 6. Search heuristics
```
Input: The initial state \(\mathbf{x}\); the index \(i\) of the element to perturb; the step size
Output: A neighbor \(\mathbf{x}^{\prime}\) of \(\mathbf{x}\)
1  \(\mathbf{x}^{\prime}\leftarrow\mathbf{x}\)
2  \(direction\leftarrow\) generate a random unit 2D-vector
3  \(direction^{\prime}\leftarrow\) convert \(direction\) to a complex number
4  \(\mathbf{x}^{\prime}[i]\leftarrow\mathbf{x}^{\prime}[i]+direction^{\prime}\times stepSize\)
5  \(\mathbf{x}^{\prime}\leftarrow Normalize(\mathbf{x}^{\prime})\)   // so that \(\mathbf{x}^{\dagger}\mathbf{x}=1\)
6  return \(\mathbf{x}^{\prime}\)
```
**Algorithm 1** FindNeighbor(\(\mathbf{x},i,stepSize\))
In this section, we design three search heuristics to determine an efficient ISO solution, viz., hill-climbing algorithm, simulated annealing, and genetic algorithm. In the next section, we will evaluate these heuristics and observe that they likely deliver near-optimal solutions. We start with a numerical (SDP) formulation of determining an optimal measurement, and thus, develop a method to estimate the objective value \(P(\left|\psi\right\rangle,U)\) of a given initial state \(\left|\psi\right\rangle\).
**Semi-Definite Program (SDP) for State Discrimination.** We now formulate a semi-definite program (SDP) to compute the optimal measurement for a given initial state; this formulation allows us to determine the optimal measurement using numerical methods, and thus, facilitates the development of the search heuristics for the ISO problem as described below. Given a set of (final) quantum states, determining the optimal measurement that discriminates them with minimum probability of error is a convex optimization problem, and in particular, can be formulated as a semi-definite program [11]. Let the final states be \(\left\{\left|\phi_{i}\right\rangle\right\}\) with prior probabilities \(p_{i}\), where \(\sum_{i}p_{i}=1\). Let \(\left\{\Pi_{i}\right\}\) be the POVM operator with \(\Pi_{i}\) being the element associated with the state \(\left|\phi_{i}\right\rangle\), and let \(Tr()\) be the trace operator. The SDP program to determine the optimal measurement operator can be formulated as below, where the objective is to minimize the probability of error.
\[\min_{\Pi_{i}\in\mathcal{B}}1-\sum_{i=0}^{n-1}p_{i}Tr(\Pi_{i}\left|\phi_{i} \right\rangle\left\langle\phi_{i}\right|) \tag{17}\]
subject to the constraints:
\[\Pi_{i}\succeq 0,\quad 0\leq i\leq n-1 \tag{18}\]
\[\sum_{i=0}^{n-1}\Pi_{i}=I \tag{19}\]
Above, Eqn. 18 ensures that every measurement operator is positive semidefinite, while Eqn. 19 ensures that the set of measurement operators is a valid POVM, i.e., the summation of all measurement operators is the identity matrix. Eqn. 17 minimizes the probability of error expression for a given POVM measurement and set of quantum states.
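These SDPs can be solved numerically off the shelf (footnote 6 in §7 mentions CVXPY and the Splitting Conic Solver); a minimal CVXPY sketch of Eqns. 17–19 (our own reconstruction, not the authors' code) could look as follows:

```
import numpy as np
import cvxpy as cp

def min_error_probability(final_states, priors):
    # SDP of Eqns. 17-19: optimal POVM for discriminating pure states
    d = final_states[0].shape[0]
    n = len(final_states)
    Pis = [cp.Variable((d, d), hermitian=True) for _ in range(n)]
    constraints = [P >> 0 for P in Pis]            # Eqn. 18
    constraints.append(sum(Pis) == np.eye(d))      # Eqn. 19
    rhos = [np.outer(s, s.conj()) for s in final_states]
    success = sum(p * cp.real(cp.trace(P @ r))
                  for p, P, r in zip(priors, Pis, rhos))
    prob = cp.Problem(cp.Minimize(1 - success), constraints)   # Eqn. 17
    prob.solve()
    return prob.value
```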
```
Input: Unitary operator \(U\); number of sensors \(n\)
Output: Initial state \(\mathbf{x}\)
1   \(\mathbf{x}\leftarrow\) a random state vector of length \(2^{n}\)
2   \(bestObjective\gets P(\mathbf{x},U)\)
3   \(stepSize\gets 0.1\)
4   \(stepDecreaseRate\gets 0.96\)
5   while termination condition not satisfied do
6       for \(i=1\) to \(2^{n}\) do
7           \(neighbors\leftarrow\) call FindNeighbor(\(\mathbf{x},i,stepSize\)) four times
8           \(bestStep\gets 0\)
9           for \(j=1\) to \(4\) do
10              \(objective\gets P(neighbors[j],U)\)
11              if \(objective<bestObjective\) then
12                  \(bestObjective\gets objective\); \(bestStep\gets j\)
13          if \(bestStep\neq 0\) then \(\mathbf{x}\leftarrow neighbors[bestStep]\)
14      \(stepSize\gets stepSize\times stepDecreaseRate\)
15  return \(\mathbf{x}\)
```
**Algorithm 2** HillClimbing(\(U\), \(n\))
**The Objective Value of an Initial State.** To design the search-based heuristics, we need a method to estimate an objective value for a given initial quantum state that evaluates its quality. In our context, for a given initial state \(\ket{\psi}\), the ISO problem's objective function \(P(\ket{\psi},U)\) can serve as the objective function in a search-based heuristic. \(P(\ket{\psi},U)\) can be directly estimated using Eqn. 17 above.
\[P(\ket{\psi},U)=1-\sum_{i=0}^{n-1}p_{i}Tr(\Pi_{i}\ket{\phi_{i}}\bra{\phi_{i}}) \tag{20}\]
where \(\ket{\phi_{i}}=(I^{\otimes i}\otimes U\otimes I^{\otimes(n-i-1)})\ket{\psi}\) are the final states, and the optimal measurement \(\{\Pi_{i}\}\) can be computed numerically using the SDP formulation given above.
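Putting the pieces together, a sketch (ours) of estimating \(P(\ket{\psi},U)\) per Eqn. 20 with uniform priors, assuming the hypothetical `min_error_probability` helper from the SDP sketch above:

```
import numpy as np

def P(psi, U, n):
    finals = []
    for i in range(n):
        op = np.array([[1.0 + 0j]])
        for q in range(n):
            op = np.kron(op, U if q == i else np.eye(2))
        finals.append(op @ psi)            # |phi_i> = (I...U...I)|psi>
    return min_error_probability(finals, [1.0/n]*n)
```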
Based on the above method to estimate the objective function \(P()\), we can develop search heuristics for the ISO problem; at a high-level, each heuristic searches for a near-optimal initial state by starting with a random initial state \(\mathbf{x}\) and iteratively improving (not necessarily in every single iteration) by moving to \(\mathbf{x}\)'s better neighbor based on the objective value \(P()\) of \(\mathbf{x}\).
**Hill-Climbing (HC) Search Heuristic.** The Hill-Climbing (HC) heuristic starts by randomly picking an initial quantum state for the \(n\) sensors, i.e., a length-\(2^{n}\) vector \(\mathbf{x}\) of complex numbers with \(\mathbf{x}^{\dagger}\mathbf{x}=1\). During each iteration, we look into one element of the state vector \(\mathbf{x}\) at a time; for each element, we look into four random "neighbors" of the initial state (as described below) and pick the neighbor with the lowest objective value \(P()\). We repeat the process until reaching the termination criterion, i.e., the improvement (if any) of moving to the best neighbor is smaller than a threshold (\(10^{-6}\)). We also set a minimum number of 100 iterations.
To find a neighbor of a quantum state, we update one element of the state vector \(\mathbf{x}\) at a time--by adding to it a random unit vector multiplied by a step size which decreases with each iteration (a post-normalization step is done to maintain \(\mathbf{x}^{\dagger}\mathbf{x}=1\)). For each element, we look into four random neighbors instead of one, to increase the chance of discovering better neighbors. See Algo. 1 for the neighbor-finding procedure and Algo. 2 for the overall Hill Climbing heuristic.
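In Python, the neighbor-finding step of Algo. 1 might look like the following sketch (ours):

```
import numpy as np

def find_neighbor(x, i, step_size, rng=np.random.default_rng()):
    # Perturb the i-th amplitude by a random phase step, then renormalize
    x2 = x.astype(complex).copy()
    phi = rng.uniform(0, 2*np.pi)
    x2[i] += step_size * np.exp(1j*phi)    # random unit 2D-vector as a complex number
    return x2 / np.linalg.norm(x2)         # so that x^dagger x = 1
```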
**Simulated Annealing (SA) Heuristic.** The above Hill-Climbing heuristic can get stuck in a local optimum. Thus, we also develop a more sophisticated Simulated Annealing (SA) [19] metaheuristic, which has a mechanism to jump out of a local minimum. By convention, SA applies the concept of energy \(E\); in our context, the energy is the equivalent of the objective function value \(P()\). In essence, SA allows itself to transition to solutions with worse objective values
```
Input: Unitary operator \(U\); number of sensors \(n\)
Output: Initial state \(\mathbf{x}\)
1   \(\mathbf{x}\leftarrow\) a random state vector of length \(2^{n}\)
2   \(stepSize\gets 0.1\)
3   \(T\leftarrow\) standard deviation of the objective values of some neighbors of \(\mathbf{x}\)
4   \(stepDecreaseRate\gets 0.96\)
5   \(coolingRate\gets 0.96\)
6   \(stdRatio\gets 1\)
7   while termination condition not satisfied do
8       for \(i=1\) to \(2^{n}\) do
9           for \(j=1\) to \(4\) do
10              \(\mathbf{x}^{\prime}\leftarrow\) FindNeighbor(\(\mathbf{x},i,stepSize\))
11              \(\Delta E\gets P(\mathbf{x}^{\prime},U)-P(\mathbf{x},U)\)
12              if \(\Delta E<0\) then \(\mathbf{x}\leftarrow\mathbf{x}^{\prime}\)
13              else \(\mathbf{x}\leftarrow\mathbf{x}^{\prime}\) with probability \(e^{-\Delta E/T}\)
14      \(stepSize\gets stepSize\times stepDecreaseRate\)
15      \(std\leftarrow\) standard deviation of the objective values of \(\mathbf{x}\)'s recent neighbors
16      \(stdRatio\gets stdRatio\times coolingRate\)
17      \(T\leftarrow\min(T\times coolingRate,\ std\times stdRatio)\)
18  return \(\mathbf{x}\)
```
**Algorithm 3** SimulatedAnnealing(\(U\), \(n\))
with a small (but non-zero) probability. In SA, the transition probability to a new neighbor state depends upon the improvement \(\Delta E\) in the objective function and is given by:
\[P(\Delta E)=\min(1,e^{-\Delta E/T}), \tag{21}\]
where \(T\) is the temperature. We note that when the new state's objective value is lower, then \(\Delta E\) is negative, \(P(\Delta E)\) is 1, and the new state is readily transitioned to. As in [22], we set the initial temperature to the standard deviation of the objective values of several neighbors of the initial state. As the SA algorithm iterates, the temperature \(T\) gradually decreases. In our context, the following schedule works well and leads to fast convergence, compared to other standard schedules used in other contexts [21].
\[T_{n}=min\{(1-\epsilon)T_{n-1},(1-\epsilon)^{n}\sigma_{n-1}\}, \tag{22}\]
where \(\sigma_{n-1}\) is the standard deviation of the objective values of the latest ten neighbors explored at the \((n-1)^{th}\) iteration. SA uses the same neighbor-finding method (Algo. 1) as the previous Hill-Climbing heuristic, with a similar termination condition, except that we allow a few more iterations for improvement before terminating. The pseudo-code of SA is shown in Algo. 3.
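A compact sketch (ours) of the two ingredients just described, writing \(\epsilon=1-coolingRate\):

```
import numpy as np

rng = np.random.default_rng()

def accept(delta_E, T):
    # Eqn. 21: always accept improvements; accept worse moves w.p. e^(-dE/T)
    return delta_E < 0 or rng.random() < np.exp(-delta_E / T)

def next_T(T_prev, sigma_prev, n, eps=0.04):
    # Eqn. 22; sigma_prev is the std. dev. of the latest ten neighbors'
    # objective values at iteration n-1
    return min((1 - eps)*T_prev, (1 - eps)**n * sigma_prev)
```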
**Genetic Algorithm (GA) Heuristic.** The Genetic Algorithm (GA) is another popular metaheuristic algorithm for solving optimization problems. Inspired by natural evolution and survival of the fittest [17], GA works by considering a "population" of candidate solutions and creating the next generation iteratively, until the best solution in a new generation does not improve on the best solution in the previous generation by at least a threshold. In our context, the initial population of candidate solutions is a set of random initial states, and candidate solutions are evaluated by a fitness function, which is conceptually the same as our objective function \(P()\) (Eqn. 20), except that the fitness function is the higher the better while \(P()\) is the lower the better. So, \(1-P()\) serves as the fitness function for GA. The pseudo-code for GA is shown in Algo. 4. To create a new generation, we pick a pair of candidate states as
parents through the rank selection (Kang and Zhang, 2018) and then generate a pair of children states by using the two-point crossover method (Kang and Zhang, 2018). Finally, we mutate the children in a way similar to finding neighbors in Algo. 1.
## 7. Validating the conjectures empirically
In this section, we evaluate our search heuristics for a varying \(U\) operator (i.e., varying values of \(\theta\)) and for \(n=2\) to \(5\), and observe that they likely deliver near-optimal initial state solutions to the ISO problem. Based on this observation, we show that our optimal solution Conjecture 1 is very likely true, as well as Conjecture 2. Our search heuristics implementation and the experiments' raw data are open-source at (Zhou and Zhang, 2018).
**Evaluation Setting.** Recall that, without loss of generality, we assume the eigenvalues of \(U\) to be \(\{e^{+i\theta},e^{-i\theta}\}\) with \(U\left|u_{\pm}\right\rangle=e^{\pm i\theta}\left|u_{\pm}\right\rangle\), where \(u_{\pm}\) are the two eigenstates of \(U\). In our evaluations, we vary \(\theta\) in the range \((0,180)\) degrees, and assume the prior probabilities of the final states to be uniform. We consider four values of \(n\), the number of sensors, viz., 2, 3, 4, and 5. Running simulations for much larger values of \(n\) is infeasible due to the prohibitive
Figure 3. The objective value \(P()\), probability of error, of the candidate solution over iterations of the three search heuristics for a special value of \(\theta=46\) degrees and \(n=4\) sensors.
Figure 2. Performance of the three search heuristics for varying \(U\)’s parameter \(\theta\), for different number of sensors in the network. Genetic Algorithm (GA) is not shown explicitly, for clarity, but it also performs almost exactly the same as Hill-Climbing and Simulated Annealing (SA) which are plotted above.
computational resources needed. E.g., the estimated computation time to run any of the search heuristics for \(n=10\) will take \(10\)s of years, based on our preliminary estimates.6
Footnote 6: In our context, the Hill-Climbing heuristic goes through about \(100\) iterations and in each iteration, it needs to solve \(4\cdot 2^{n}\) instances of the SDP formulation (Eqns. 17–19), where \(n\) is the number of sensors. We use the CVXPY [9] package (which in turn uses the Splitting Conic Solver [24]) to solve our SDP formulations, and observe that it takes more than an hour to solve a single SDP instance for \(n=10\); this suggests an estimate of \(10\)s of years of computation time for \(n=10\).
**Performance of Search Heuristics.** Fig. 2 shows the performance of the search heuristics under varying \(\theta\) and four values of \(n=2,3,4,5\), in terms of the ISO objective function \(P(\left|\psi\right\rangle,U)\) for the initial state solution \(\left|\psi\right\rangle\). We make the following two observations:
1. All three heuristics perform almost exactly the same.
2. The heuristics deliver an initial state solution with \(P(\left|\psi\right\rangle,U)=0\) for the same range of \(\theta\) given in Theorem 2.
We also observe that the heuristics perform the same for \(\theta\) and \(\pi-\theta\), i.e., symmetric along the \(\theta=\pi/2\) line. Thus, in the remaining plots, we only plot results for \(\theta\in(0,\pi/2]\). Fig. 3 shows the convergence rates of the three heuristics for a specific value of \(\theta=46\) degrees and \(n=4\) sensors. We observe that HC converges the fastest, followed by SA and GA. After \(100\) iterations, the HC and SA end at a probability of error of \(5.85\%\), while GA ends at \(5.86\%\).
**Empirical Validation of Conjecture 2.** Recall that Conjecture 2 states that the "average" of two ISO solutions with equal objective values has a lower objective value. To empirically validate Conjecture 2, we generate a random state \(\left|\psi\right\rangle\), and then generate \(n!-1\) additional states of the same objective value \(P()\) by renumbering the sensors as discussed in Theorem 3's proof. Then, we take many pairs of these states, average them, and compute the objective value. Fig. 4 plots the objective value of the original state \(\left|\psi\right\rangle\), and the range of the objective values of the averaged states. We observe that the objective values of the averaged states are invariably less than that of \(\left|\psi\right\rangle\).
**Empirical Validation of the Optimal Solution Conjecture 1.** We now evaluate the performance of the initial state solution obtained by Conjecture 1 and compare it with the solution delivered by one of the search heuristics, Hill-Climbing (HC). Here, we consider \(\theta\in(0,T]\cup[180-T,180)\) degrees, where \(T\) is as defined in Theorem 2. In Fig. 5, we observe that the HC heuristic and Conjecture 1 solutions have identical performance, suggesting that Conjecture 1's solution is likely optimal, based on our earlier observation that the search heuristics likely deliver optimal solutions.
Figure 4: Empirical validation of Conjecture 2. For four different values of \(\theta\) and three different values of \(n\), we show that the objective value (probability of error) of the original initial state (the red circle) remains higher than the objective values of the many "averaged" states (range shown by the blue bar).
**Symmetry "Index" vs. Objective Value (Probability of Error).** In this final experiment, we investigate the impact of the symmetry of coefficients on the objective value of an initial state. Here, we only do experiments for \(n=3\) number of sensors. To this end, we define a notion of _symmetry index_ which quantifies the symmetry of coefficients in a given initial state. In particular, we define the _symmetry index_ for an initial state \(|\psi\rangle=\sum\limits_{j}\psi_{j}\,|j\rangle\) as:
\[\sum_{k=0}^{n}\ \sum_{|\psi_{i}|^{2},\,|\psi_{j}|^{2}\in S_{k}}(|\psi_{i}|^{2}-|\psi_{j}|^{2})^{2} \tag{23}\]
where \(S_{k}\) is the \(k\)th symmetric set as defined in §4. The symmetry index being zero implies that within each symmetric set, all the coefficient-squares are equal. Fig. 6 shows that the search heuristics essentially generate solutions with lower and lower symmetry index, and finally converge to a solution with zero symmetry index. This is true for all three search heuristics (Fig. 6(a)) and for varying \(\theta\) (Fig. 6(b)). Given that Fig. 3 has already shown that the objective
Figure 5. The Conjecture 1's solution performs almost exactly as the Hill-Climbing heuristic when \(\theta\in(0,T]\cup[180-T,180)\) degrees, where \(T\) is from Theorem 2. For \(n=2\), Conjecture 1's solution matches the provably optimal solution from [26], with \(T\) being 45 degrees.
Figure 6. Symmetry-index of the candidate solutions over iterations.
value decreases as the search iterations proceed, we can conclude that the objective value and the symmetry index decrease simultaneously over the iterations. Furthermore, in Fig. 7(a), we show the correlation between the symmetry index and the objective value through a scatter plot--with the objective value generally decreasing with the decrease in symmetry index. Fig. 7(b) zooms in to the later iterations of the heuristics, wherein the symmetry index is very low (less than 0.08), to show a clearer view of the correlation.
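A direct transcription of Eqn. 23 (our sketch), realizing \(S_{k}\) as the basis indices with \(k\) one-bits:

```
from itertools import combinations
import numpy as np

def symmetry_index(psi, n):
    amp2 = np.abs(psi)**2
    total = 0.0
    for k in range(n + 1):
        S_k = [j for j in range(2**n) if bin(j).count('1') == k]
        total += sum((amp2[i] - amp2[j])**2 for i, j in combinations(S_k, 2))
    return total
```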
## 8. Unambiguous discrimination measurement
Till now, we have only considered the minimum error measurement scheme, wherein the measurement operator always outputs a state, though sometimes incorrectly, and thus incurs a certain probability of error. We now consider the alternative scheme of unambiguous measurement [3], where there are no errors, but the measurement can fail, i.e., give an inconclusive outcome. The unambiguous measurement scheme thus may incur a probability of failure. Fortunately, our results for the minimum error measurement scheme also hold for the unambiguous discrimination measurement scheme and objective, as observed below.
1. The sufficient and necessary condition for orthogonality derived in Theorem 2 is a property of the states and the operator \(U\), and is independent of the measurement scheme. Thus, Theorem 2 holds for the unambiguous discrimination scheme.
2. The intuition behind Conjecture 1 is based on the homogeneity of sensors and symmetry of the problem setting (e.g., symmetric eigenvalues of \(U\), uniform probability of final states, etc.). Thus, we believe the optimal initial state solution for an unambiguous discrimination scheme is the same as in the case of minimum error scheme. Thus, Conjecture 1 should hold.
3. Conjecture 2 is independent of the measurement scheme.
4. We prove the version of Lemma 2 corresponding to the unambiguous measurement below. Thus, Theorem 3 also holds for unambiguous measurement.
5. The optimization problem of determining the optimal measurement \(\{\Pi_{i}\}\) for the unambiguous discrimination scheme can also be formulated as an SDP [2], and thus can be computed numerically. Thus, the search heuristics from
Figure 7. The correlation between the objective value (probability of error) and the symmetry index.
§6 will also work for unambiguous measurement, with the corresponding SDP for the unambiguous discrimination scheme.
**Lemma 3**.: _Consider \(n\) states to be discriminated \(\phi_{0},\phi_{1},\ldots,\phi_{n-1}\) such that \(\left\langle\phi_{i}\middle|\phi_{j}\right\rangle=x\), for all \(0\leq i,j\leq n-1\) and \(i\neq j\). The probability of_ **failure** _in discriminating \(\phi_{0},\phi_{1},\ldots,\phi_{n-1}\) using an optimal measurement (for unambiguous discrimination) increases with an increase in \(x\) when \(x\geq 0\)._
Proof.: The optimal/minimum probability of failure using the optimal POVM for a set of states with equal pairwise inner products is equal to \(x\) when \(x\geq 0\) (Hanan et al., 2018). Thus, the lemma trivially holds.
## 9. Future Directions
Beyond proving the stated conjectures, there are many generalizations of the addressed ISO problem of great interest, in terms of: (i) more general final states (e.g., two sensors may change at a time, allowing for multiple impact matrices \(U_{1},U_{2}\), etc.); (ii) restricting the measurement operators allowed (e.g., allowing only projective measurements), to incorporate practical considerations in the implementation of measurement operators. We are also interested in proving related results of interest, e.g., the ISO initial-state solution being the same for minimum error and unambiguous discrimination.
|
2309.16824 | The fork and its role in unification of closure algebras | We consider the two-pronged fork frame $F$ and the variety $\mathbf{Eq}(B_F)$
generated by its dual closure algebra $B_F$. We describe the finite projective
algebras in $\mathbf{Eq}(B_F)$ and give a purely semantic proof that
unification in $\mathbf{Eq}(B_F)$ is finitary and not unitary. | Ivo Düntsch, Wojciech Dzik | 2023-09-28T20:06:30Z | http://arxiv.org/abs/2309.16824v4 | # The fork and its role in unification
###### Abstract
We consider the two-pronged fork frame \(F\) and the variety \(\mathbf{Eq}(B_{F})\) generated by its dual closure algebra \(B_{F}\). We describe the finite projective algebras in \(\mathbf{Eq}(B_{F})\) and give a purely semantic proof that unification in \(\mathbf{Eq}(B_{F})\) is finitary and not unitary.
## 1 Introduction
Unification of (first order) terms was introduced by J. A. Robinson (1965) as the basic operation of the resolution principle used in automated theorem provers. Nowadays unification plays an essential role in many applications of logic to Computer Science, especially in Automated Deduction. Unification theory for equational theories and logics is an important topic, for example, for automated deduction and rewriting systems. It has been a well-researched concept for some time; for details we refer the reader to the chapter by Baader and Snyder [2] in the Handbook of Automated Reasoning, and for an early account of the role of unification in Computer Science we point the reader to the overview by Stan Burris [7]. Algebraic unification via projective algebras was introduced and investigated
by Ghilardi [16]. He showed that for any equational theory the symbolic and the algebraic unification types coincide via a suitable translation. He concludes that, broadly speaking, unification in varieties depends only on finitely presented projective algebras. The syntactic approach to unification often requires long and complicated calculations, while the algebraic approach via projective algebras often offers much simpler solutions. It is particularly useful for determining the unification type of a given theory or logic. Another important aspect is the effectiveness of this determination.
In this paper, the algebraic approach via projective algebras is applied to determine the unification type of varieties of closure algebras. In particular, we investigate in detail the equational theory of the complex algebra \(B_{F}\) of the 2-pronged fork \(F\), shown in Figures 1 and 2. Elements closed in \(B_{F}\) are shown as bullets \(\bullet\).
The 2-pronged fork, or simply, fork, and its logic \(L(F)\) have received some interest in the literature: Aiello et al. [1] have provided an axiomatization of \(\mathbf{Eq}(B_{F})\), Dzik [10] proved that it does not have unitary unification, and recently Dzik et al. [11] showed that the logic has finitary unification. The unification results were proved by \(n\)-Kripke models and syntactic means, and in this paper we provide purely semantic proofs using the duality between finite closure algebras and finite quasiorders; along the way we provide a characterization of the finite projective algebras in \(\mathbf{Eq}(B_{F})\). We also give a concrete example of an algebra in \(\mathbf{Eq}(B_{F})\) which has finitary and not unitary unification. As the algebraic approach may not be familiar to all readers, we shall briefly outline its concepts tailored to our situation in Section 3.
## 2 Notation and first definitions
The identity function on a set is denoted by \(id\), and the identity relation by \(1^{\prime}\). A _frame_ is a structure \(\langle W,R\rangle\) where \(W\) is a non-empty set and \(R\) a binary relation on \(W\); we denote the converse of \(R\) by \(R^{\smile}\). If \(x\in W\) we set \(R(x):=\{y\in W:x\ R\ y\}\). A non-empty subset \(U\) of \(W\) is _connected_ (in the sense of graph theory) if for all \(x,y\in U\) there is an \(R\cup R^{\smile}\) path from \(x\) to \(y\). A maximally connected subset of \(W\) is called a _connected component_, or just _component_. \(R\) is _rooted_ if there is some \(x\in W\) such that there is an \(R\)-path from \(x\) to any element of \(W\).
A _generated subframe_ of \(\langle W,R\rangle\) is a structure \(\langle V,S\rangle\) such that \(\langle V,S\rangle\) is a first order substructure of \(\langle W,R\rangle\) and satisfies
\[(\forall u,v)[u\in V\text{ and }u\ R\ v\ \Rightarrow v\in V]. \tag{2.1}\]
If \(\langle V,S\rangle\) is (isomorphic to) a generated substructure of \(\langle W,R\rangle\) we write \(\langle V,S\rangle\stackrel{{ g}}{{\hookrightarrow}}\langle W,R\rangle\).
A _bounded morphism_ is a mapping \(p:W\to V\) which preserves \(R\) and satisfies the _back condition_
\[p(x)\ S\ z\Rightarrow(\exists y\in W)[x\ R\ y\text{ and }p(y)=z].\]
If \(p\) is onto we write \(\langle W,R\rangle\stackrel{{ b}}{{\twoheadrightarrow}}\langle V,S\rangle\).
Suppose that \(\mathfrak{F}\) is a class of frames. We call \(\langle V,S\rangle\in\mathfrak{F}\)_injective with respect to_\(\mathfrak{F}\), if for every injective bounded morphism \(q:V\hookrightarrow W\) there is some surjective bounded morphism \(p:W\twoheadrightarrow V\) such that \(p\circ q\) is the identity on \(V\).
If \(R\) is a quasiorder, i.e. reflexive and transitive, then \(\theta_{R}:=\{\langle x,y\rangle:xRy\text{ and }yRx\}\cup 1^{\prime}\) is an equivalence relation whose classes can be partially ordered by \(x/\theta\leq_{R}y/\theta\) if and only if \(x\ R\ y\); the classes of \(\theta_{R}\) are called _clusters_.
Suppose that \(\leq\) is a finite quasiorder on \(W\). For \(x\in W\) we let \(\downarrow x:=\{y\in W:y\leq x\}\), \(\uparrow x:=\{y\in W:x\leq y\}\), and \(\uparrow^{<}x:=\{y\in W:x\leq y\text{ and }y\not\leq x\}\). If \(x\leq y\), then \(y\)_is a cover of_\(x\) if \(y\) is minimal in \(\uparrow^{<}x\). \(\emptyset\neq M\subseteq W\) is called an _antichain_, if all elements of \(M\) are pairwise incomparable. \(M\) is called _dense_ (or _complete_), if each element of \(W\) is above some element of \(M\). A \(\mu\)_-set_ is a dense antichain of \(W\)[16]. We say that \(\langle W,\leq\rangle\) is of _unification type_
_nullary_, if no \(\mu\)-set exists,
_unitary_, if \(W\) has a \(\mu\) set, and every \(\mu\)-set has cardinality \(1\),
_finitary_, if each \(\mu\)-set is finite,
_infinitary_, if it has an infinite \(\mu\)-set.
The _height_\(h(W)\) of \(W\)1 is the length of a longest chain of clusters, and the _width_\(w(W)\) of \(W\) is the cardinality of the largest antichain; observe that the elements of an antichain come from different clusters. The _local width_\(lw(W)\) of \(W\) is \(\max\{w(\uparrow x):x\in W\}\), and the _covering width_\(cw(W)\) is the largest number of covers any \(x\in W\) has. Clearly, \(cw(W)\leq lw(W)\leq w(W)\). If \(R\) is a rooted partial order, then \(lw(W)=w(W)\), and, if \(h(W)=2\), then \(c(w)=lw(W)\). However, in general none of the inequalities can be reversed.
Footnote 1: Also called _depth_ or _rank_[6, p. 46].
Throughout, \(\langle B,+,\cdot,-,0,1\rangle\) is a non-trivial Boolean algebra, usually referred to only by its universe. If \(B\) is a subalgebra of a Boolean algebra \(A\) we denote this by \(B\leq A\). If \(A\) is a homomorphic image of \(B\) we write \(B\twoheadrightarrow A\).
A _closure operator_ on \(B\) is a mapping \(f:B\to B\) which satisfies \(f(0)=0\) and for all \(a,b\in B\)
\[f(a+b)=f(a)+f(b),\] \[a\leq f(a),\] \[f(f(a))\leq f(a).\]
The _dual mapping_ of \(f\) is defined by \(f^{\partial}(a):=-f(-a)\) and denoted by \(f^{\partial}\); such an operator is called an _interior operator_. A _closure algebra_ is a structure \(\langle B,f\rangle\), where \(B\) is a Boolean algebra and \(f\) is a closure operator; the pair \(\langle B,f^{\partial}\rangle\) is called an _interior algebra_. The variety of closure algebras is denoted by \(\mathbf{V_{CI}}\). A closure algebra \(\langle B,f\rangle\) is called _discrete_, if \(f\) is the identity. The two element closure algebra is denoted by \(\mathbf{2}\).
Throughout we suppose that \(\mathbf{V}\) is a non-trivial variety of closure algebras. \(\mathbf{V}\) is _locally finite_, if every finitely generated \(B\in\mathbf{V}\) is finite. An algebra \(A\in\mathbf{V}\) is called _projective in \(\mathbf{V}\)_ if and only if it is a retract of a free algebra.2 It is well known that \(B\) is projective in \(\mathbf{V}\) if and only if for every \(A\in\mathbf{V}\) and every surjective homomorphism \(p:A\twoheadrightarrow B\) there is some injective homomorphism \(q:B\hookrightarrow A\) such that \(p\circ q\) is the identity on \(B\), in other words, if \(B\) is a quotient of \(A\), then it is a retract of \(A\).
Footnote 2: Since epimorphisms in \(\mathbf{V_{CI}}\) are onto [19] the notions of weak projectivity of [3] and projectivity coincide.
\[B\stackrel{{ q}}{{\hookrightarrow}}A\stackrel{{ p}}{{\twoheadrightarrow}}B,\qquad p\circ q=id(B).\]
Note that \(\mathbf{2}\) is projective in \(\mathbf{V}\). If \(\mathbf{V}\) is locally finite the projectivity of a finite algebra depends only on finite algebras:
**Lemma 2.1**.: _Suppose that \(\mathbf{V}\) is locally finite and that \(B\in\mathbf{V}\) is finite. Then \(B\) is projective in \(\mathbf{V}\) if and only if for every finite \(A\in\mathbf{V}\), every epimorphism \(A\twoheadrightarrow B\) is a retraction._
Proof.: "\(\Rightarrow\)": This is clear.
"\(\Leftarrow\)": Suppose that \(p:A\twoheadrightarrow B\) is an epimorphism. For every \(b\in B\setminus\{0,1\}\) choose some \(a_{b}\in p^{-1}(b)\), and let \(A^{\prime}\) be the subalgebra of \(A\) generated by \(\{a_{b}:b\in B\setminus\{0,1\}\}\). Since \(\mathbf{V}\) is locally finite, \(A^{\prime}\) is finite. Clearly, \(p^{\prime}\coloneqq p\upharpoonright A^{\prime}\) is an epimorphism \(A^{\prime}\twoheadrightarrow B\). By the hypothesis, there is some \(q\hookrightarrow A^{\prime}\subseteq A\) such that \(p^{\prime}\circ q=id(B)\), and it follows from the choice of \(A^{\prime}\) that \(p\circ q=id(B)\).
Let \(\langle B,f\rangle\) be a nontrivial closure algebra. Its _canonical frame_ is the structure \(\mathsf{Cf}(B):=\langle W_{B},R_{f}\rangle\) where \(W_{B}\) is the set of ultrafilters of \(B\), and \(R_{f}\) is the binary relation on \(W_{B}\) defined by
\[F\ R_{f}\ G\text{ if and only if }f[G]\subseteq F.\]
Conversely, if \(\mathfrak{B}:=\langle W,\,R\rangle\) is a frame, its _complex algebra_ is the structure \(\mathsf{Cm}(\mathfrak{B}):=\langle 2^{W},\langle R\rangle\rangle\), where \(\langle R\rangle:2^{W}\to 2^{W}\) is the mapping defined by
\[\langle R\rangle(X):=\{x\in W:R(x)\cap X\neq\emptyset\}.\]
We denote \(\mathsf{Cm}\,\mathsf{Cf}(B)\) by \(\mathsf{Em}(B)\) and call it the _canonical embedding algebra_ or _canonical extension_ of \(B\). It is well known that the mapping \(h:B\rightarrow\mathsf{Em}(B)\) is an embedding, and that \(B\cong\mathsf{Em}(B)\) if and only if \(B\) is finite [18]. Furthermore, \(R_{f}\) is a quasiorder, and, if \(R\) is a quasiorder, then \(\langle R\rangle\) is a closure operator [18]. The following facts are decisive, see e.g. [4, Theorem 5.47]:
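Since the fork will be our running example, here is a tiny self-contained check (our sketch; taking the root of \(F\) below its two prongs is our reading of Figure 1) that \(\langle R\rangle\) is a closure operator on the powerset of a quasiordered set:

```
from itertools import chain, combinations

W = ['r', 'a', 'b']                                  # fork: root r with covers a, b
R = {(x, x) for x in W} | {('r', 'a'), ('r', 'b')}   # the partial order of F

def f(X):                                            # <R>(X) = {x : R(x) meets X}
    return frozenset(x for x in W if any((x, y) in R for y in X))

subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(W, k) for k in range(4))]

assert f(frozenset()) == frozenset()                           # f(0) = 0
assert all(f(A | B) == f(A) | f(B) for A in subsets for B in subsets)
assert all(A <= f(A) and f(f(A)) <= f(A) for A in subsets)     # increasing, idempotent
print("closure axioms hold on Cm(F)")
```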
**Lemma 2.2**.: _Suppose that \(A,B\) are modal algebras and \(\mathfrak{F},\mathfrak{G}\) are frames. Then,_
1. _If_ \(A\hookrightarrow B\)_, then_ \(\mathsf{Cf}(B)\stackrel{{ b}}{{\rightarrow}}\mathsf{Cf}(A)\)_._
2. _If_ \(A\twoheadrightarrow B\)_, then_ \(\mathsf{Cf}(B)\stackrel{{ g}}{{\hookrightarrow}}\mathsf{Cf}(A)\)_._
3. _If_ \(\mathfrak{F}\stackrel{{ g}}{{\hookrightarrow}}\mathfrak{G}\)_, then_ \(\mathsf{Cm}(\mathfrak{G})\twoheadrightarrow\mathsf{Cm}(\mathfrak{F})\)_._
4. _If_ \(\mathfrak{F}\stackrel{{ b}}{{\twoheadrightarrow}}\mathfrak{G}\)_, then_ \(\mathsf{Cm}(\mathfrak{G})\hookrightarrow\mathsf{Cm}(\mathfrak{F})\)_._
If all structures considered are finite, then \(\mathsf{Em}(B)\cong B\) and \(\mathsf{Cf}\,\mathsf{Cm}(\mathfrak{F})\cong\mathfrak{F}\), and the implications above are, in fact, equivalences. Thus, in this situation we may work either with frames or algebras. We shall use this duality throughout without further reference.
## 3 Algebraic unification
Suppose that \(\mathbf{V}\) is a locally finite variety of modal algebras. We shall briefly describe the concept of algebraic unification as presented by Ghilardi [16] as applicable to locally finite varieties. Let \(A\in\mathbf{V}\) be finite. A _unifier_ of \(A\) is a pair \(\langle u,B\rangle\) where \(B\) is finite and projective in \(\mathbf{V}\), and \(u:A\to B\) is a homomorphism.3 Observe that projectivity, and therefore, unifiability, depends on the chosen variety. The collection of unifiers of \(A\) is denoted by \(U_{A}\).
Footnote 3: Strictly speaking we should define a unifier for \(A\in\mathbf{V}\) as a triple \(\langle A,u,B\rangle\); we omit \(A\) because we consider only unifiers of a fixed \(A\).
Given two unifiers \(\langle u,B\rangle\) and \(\langle v,C\rangle\) of \(A\), we say that \(\langle u,B\rangle\)_is more general than_\(\langle v,C\rangle\),4 written as \(\langle u,B\rangle\succcurlyeq\langle v,C\rangle\), if there is a homomorphism \(h:B\to C\) such that the diagram in Figure 3 commutes, i.e., that \(v=h\circ u\); clearly, \(\langle U_{A},\succcurlyeq\rangle\) is a quasiorder.
Footnote 4: The quasiorder on \(U_{A}\) is not uniformly defined in the literature. We chose \(\succcurlyeq\) to be consistent with [16] and the \(\mu\)-sets of Section 2
If \(A\) itself is projective, then \(\langle id,A\rangle\) is more general than any of its unifiers. If \(\langle u,B\rangle\preccurlyeq\langle v,C\rangle\) and \(\langle u,B\rangle\succcurlyeq\langle v,C\rangle\) we write \(\langle u,B\rangle\approx\langle v,C\rangle\). The relation \(\approx\) is an equivalence relation on \(U_{A}\), and \(U_{A}/{\approx}\) can be partially ordered as described in Section 2. We say that the _unification type of \(A\)_ is the unification type of the partially ordered set \(U_{A}/{\approx}\). Unification of \(A\) is called _filtering_[17], if \(U_{A}\) is up-directed: for every two incomparable unifiers \(\langle u_{1},B_{1}\rangle\) and \(\langle u_{2},B_{2}\rangle\) of \(A\) there is a third unifier of \(A\) more general than both of them. Note that if the unification of \(A\) is filtering, then the unification type of \(A\) is unitary or nullary.
By the homomorphism theorem a unifier \(\langle u,B\rangle\) of \(A\) is determined by the closed ideal \(u^{-1}(0)\) with associated congruence \(\theta=\ker(u)\), its canonical epimorphism \(p_{\theta}:A\twoheadrightarrow A/\theta\), and an embedding \(e\) into \(B\):
\[u=e\circ p_{\theta}:\qquad A\stackrel{{ p_{\theta}}}{{\twoheadrightarrow}}A/\theta\stackrel{{ e}}{{\hookrightarrow}}B \tag{3.1}\]
Figure 3: Quasiordering algebraic unifiers
In this sense, we can think of a unifier of \(A\) as a triple \(\langle\theta,e,B\rangle\) where \(\theta\) is a congruence on \(A\), \(B\) is a finite algebra projective in \(\mathbf{V}\), and \(e:A\hookrightarrow B\) is an embedding. Note that \(A/\theta\) need not be projective, but only needs to be embeddable into a projective algebra, equivalently, into a free algebra. If \(A/\theta\) is itself projective, then \(\langle p_{\theta},A/\theta\rangle\) is a unifier of \(A\), more general than \(\langle u,B\rangle\).
We denote by \(\mathbb{C}(A)\) the set of congruences of \(A\) for which \(A/\theta\) is isomorphic to a subalgebra of some projective algebra, also called _admissible congruences_. These congruences are decisive when investigating unifiers.
Using the decomposition of \(u\) and \(v\) as outlined above, we arrive at the situation in Figure 4.
Below we collect some simple properties of algebraic unifiers which we shall use later on.
**Lemma 3.1**.:
1. _Suppose that_ \(\langle u,B\rangle,\langle v,C\rangle\) _are unifiers of_ \(A\)_. If_ \(\langle u,B\rangle\succcurlyeq\langle v,C\rangle\)_, then_ \(\ker(u)\subseteq\ker(v)\)_. Consequently,_ 1. _If_ \(\ker(u)\) _and_ \(\ker(v)\) _are incomparable with respect to_ \(\subseteq\)_, then_ \(\langle u,B\rangle\) _and_ \(\langle v,C\rangle\) _are incomparable with respect to_ \(\succcurlyeq\)_._ 2. \(\langle u,B\rangle\approx\langle v,C\rangle\) _implies_ \(\ker(u)=\ker(v)\)_._
2. _If_ \(\langle u,B\rangle\in U_{A}\)_,_ \(C\) _is projective in_ \(\mathbf{V}\)_, and_ \(B\) _is a retract of_ \(C\)_, then,_ \(\langle u,B\rangle\approx\langle u,C\rangle\)_._
Proof.: 1. Suppose that \(h:B\to C\) with \(h\circ u=v\). If \(u(x)=0\), then \(h(u(x))=v(x)=0\).
2. Suppose w.l.o.g. that \(B\leq C\) with \(i:B\hookrightarrow C\) the identity embedding, and that \(p:C\twoheadrightarrow B\) is a retraction.
Figure 4: Quasiordering algebraic unifiers using quotients
Since \(i(u(x))=u(x)\) we have \(\langle u,B\rangle\succcurlyeq\langle u,C\rangle\). For the converse, let \(x\in A\); then, \(u(x)\in B\leq C\), and \(p(i(u(x)))=u(x)\), since \(p\upharpoonright B\) is the identity.
**Corollary 3.2**.:
1. _Suppose that_ \(\langle u,B\rangle\) _is a unifier of_ \(A\)_. Then, there is a finite free algebra_ \(F\) _such that_ \(\langle u,B\rangle\approx\langle u,F\rangle\)_._
2. _If_ \(\langle u,\mathbf{2}\rangle\) _is a unifier of_ \(A\) _and_ \(B\) _is finite and projective in_ \(\mathbf{V}\)_, then_ \(\langle u,\mathbf{2}\rangle\approx\langle u,B\rangle\)_._
Proof.: 1. Since \(B\) is projective, it is a retract of a free algebra \(F\). Since \(u[A]\) is finite, we may assume \(F\) to be finite as well.
2. By the hypothesis, \(\langle u,\mathbf{2}\rangle\) is a unifier of \(A\). Since \(B\) is projective, \(\mathbf{2}\) is a retract of \(B\), and the claim follows from Lemma 3.1(2).
Even \(u[A]\cong v[A]\) does not imply that \(\langle u,B\rangle\succcurlyeq\langle v,B\rangle\):
**Example 3.1**.: Let \(A\in\mathbf{V}\) and \(F\), \(G\) be different closed prime ideals of \(A\), and \(p_{F}:A\twoheadrightarrow A/F\) and \(p_{G}:A\twoheadrightarrow A/G\) be the canonical epimorphisms; then, \(A/F=A/G=\mathbf{2}\). If \(a\in F\setminus G\), then \(0=p_{F}(a)\neq p_{G}(a)=1\) which shows that there is no homomorphism \(h:\mathbf{2}\rightarrow\mathbf{2}\) such that \(h\circ p_{F}=p_{G}\).
**Lemma 3.3**.: _[_8_, p. 50]_ _If \(f:A\twoheadrightarrow B\) is surjective, and \(g:A\to C\) is a homomorphism such that \(\ker(f)\subseteq\ker(g)\), there is some \(h:B\to C\) such that \(g=h\circ f\)._
**Lemma 3.4**.: _Suppose that \(\langle u,B\rangle\) is a unifier of \(A\) and \(h:B\to C\) is an isomorphism. Then, \(\langle u,B\rangle\approx\langle h\circ u,C\rangle\)._
Proof.: The homomorphisms \(h:B\to C\) and \(h^{-1}:C\to B\) witness \(\langle u,B\rangle\succcurlyeq\langle h\circ u,C\rangle\) and \(\langle h\circ u,C\rangle\succcurlyeq\langle u,B\rangle\), respectively, since \(h^{-1}\circ(h\circ u)=u\).
## 4 The fork
The _2-pronged fork_, or simply _fork_, is the frame \(F\) shown in Figure 1. Its complex algebra is denoted by \(B_{F}\), and the variety it generates by \(\mathbf{Eq}(B_{F})\); since \(\mathbf{Eq}(B_{F})\) is generated by a finite algebra, it is locally finite. A _fork algebra_ is a nontrivial finite algebra in \(\mathbf{Eq}(B_{F})\).5 A _fork frame_ has the form \(\mathsf{Cf}(B)\) for a fork algebra \(B\). Aiello et al. [1, Theorem 5.7] have shown that \(\mathbf{Eq}(B_{F})\) is the variety of closure algebras characterized by the axioms
Footnote 5: These should not be confused with the fork algebras of e.g. [14].
\[f^{\partial}(f(x\cdot f(-x))+x)\leq x\qquad\mathbf{Grz}, \tag{4.1}\]
\[-x\cdot f(x)\leq f(f^{\partial}(x))\qquad\mathbf{BD_{2}}, \tag{4.2}\]
\[-(x\cdot y\cdot f(x\cdot-y)\cdot f(-x\cdot y)\cdot f(-x\cdot-y))=1\qquad\mathbf{BW_{2}}. \tag{4.3}\]
The axiom \(\mathbf{Grz}\) implies that a (finite) fork frame is a partial order [6], which we will denote by \(\leq\), possibly indexed. The first order frame conditions corresponding to (4.2), respectively, to (4.3) are6
Footnote 6: Computed by SQEMA [15].
\[\forall y(x\leq y\Rightarrow(x=y\text{ or }\exists z_{1}(x\leq z_{1}\text{ and }\forall z_{2}(z_{1}\leq z_{2}\Rightarrow y=z_{2})))) \tag{4.4}\]
and
\[\forall y_{1}\forall y_{2}((x\leq y_{1}\text{ and }x\leq y_{2}) \Rightarrow(x=y_{1}\text{ or }x=y_{2}\text{ or }y_{1}=y_{2}\text{ or }\\ \forall z_{1}(x\leq z_{1}\Rightarrow(x=z_{1}\text{ or }y_{1}=z_{1}\text{ or }y_{2}=z_{1})))). \tag{4.5}\]
Condition (4.2) says that the height of a fork frame is at most two, and (4.3) says that every \(x\) is related to at most two other elements. Together they imply that a rooted fork frame is a single point, a two-element chain, or the fork of Figure 1.
We denote by \(L^{1}_{W}\) the points on the lower level and by \(L^{2}_{W}\) the points on the upper level of a fork frame \(W\). The points in \(L^{1}_{W}\) correspond to the closed atoms of \(\mathsf{Cm}(W)\), whereas \(L^{2}_{W}\) corresponds to the set of its non-closed atoms.
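For instance, assuming that Figure 1 labels the root of the fork \(u\) and its two upper points \(v\) and \(w\), the closure operation of \(B_{F}\) acts on the atoms as follows:

\[\begin{array}{l|ccc}x&u&v&w\\ f_{F}(x)&u&u+v&u+w\end{array}\]

Here \(L^{1}_{F}=\{u\}\) and \(L^{2}_{F}=\{v,w\}\), and \(\downarrow f_{F}(v)\cap\downarrow f_{F}(w)\) contains \(u\neq 0\), in line with the projectivity of \(B_{F}\) established in Corollary 5.4 below.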
## 5 Finite projective fork algebras
Our first result gives a necessary condition for a finite \(B\in\mathbf{Eq}(B_{F})\) to be projective. We prove a more general result:
**Theorem 5.1**.: _Let \(F_{n,m}\) be the class of finite partial orders of bounded height \(n\geq 2\) and covering width \(m\geq 2\), and \(\mathbf{V}_{n,m}\) be its associated variety. Suppose that \(B\in\mathbf{V}_{n,m}\) is finite and projective. Then, \(B\) is directly indecomposable, and \(\downarrow f(a_{1})\cap\ldots\cap\downarrow f(a_{k})\neq\{0\}\) for all non-closed atoms \(a_{1},\ldots,a_{k}\), when \(k\leq m\)._
Proof.: Suppose that \(\langle V,\leq_{V}\rangle\) is the canonical frame of \(B\); we shall prove the dual statement. Assume that there are \(v_{1},\ldots,v_{k}\in L_{V}^{2}\) such that \(\downarrow v_{1}\cap\ldots\cap\downarrow v_{k}=\emptyset\). Choose some \(x\notin V\), and set \(W:=V\cup\{x\}\), \(\leq_{W}:=\leq_{V}\cup\{\langle x,v_{i}\rangle:1\leq i\leq k\}\cup\{\langle x,x\rangle\}\); then, \(\uparrow_{W}V\subseteq V\), and therefore \(V\) is a generated substructure of \(W\). Since the height of \(V\) is at least two, adding \(x\) as above does not increase the height, hence, \(\mathsf{Cm}(W)\in\mathbf{V}_{n,m}\). Suppose that \(p:W\twoheadrightarrow V\) is a bounded retraction, and \(p(x)=y\). Since \(p\) preserves order, \(y\leq v_{i}=p(v_{i})\) for \(1\leq i\leq k\). This contradicts the hypothesis.
Assume that \(V_{1},V_{2}\) are different connected components of \(V\), and let \(y_{i}\in V_{i}\) be maximal. Choose some \(x\notin V\), and set \(W:=V\cup\{x\}\), \(\leq_{W}:=\leq_{V}\cup\{\langle x,x\rangle,\langle x,y_{1}\rangle,\langle x,y_ {2}\rangle\}\). Since \(n,m\geq 2\), height and cover width of \(V\) are not increased, and we can proceed as above to arrive at a contradiction.
It follows that in \(\mathbf{V}_{n,m}\) a direct product of two or more nontrivial projective algebras is not projective. Therefore, unification in \(\mathbf{V}_{n,m}\) is not filtering in the sense of Ghilardi and Sacchetti [17, Theorem 3.2]. Since \(\mathbf{Eq}(B_{F})=\mathbf{V}_{2,2}\), we obtain
**Corollary 5.2**.: _If \(\langle B,f\rangle\in\mathbf{Eq}(B_{F})\) is finite and projective, then \(B\) is directly indecomposable and \(\downarrow f(x)\cap\downarrow f(y)\neq\{0\}\) for all non-closed atoms \(x,y\in B\)._
We now show that the conditions of Corollary 5.2 are sufficient for projectivity.
**Theorem 5.3**.: _Suppose that \(\langle B,f\rangle\in\mathbf{Eq}(B_{F})\) is finite and directly indecomposable, and that \(\downarrow f(x)\cap\downarrow f(y)\neq\{0\}\) for all non-closed atoms \(x,y\in B\). Then, \(B\) is projective in \(\mathbf{Eq}(B_{F})\)._
Proof.: Suppose w.l.o.g. that \(B\neq\mathbf{2}\). We will use duality, and set \(\langle V,\leq\rangle:=\mathsf{Cf}(B)\); furthermore, we suppose that \(V\) is a generated substructure of a fork frame \(W\). Since \(V\) is connected, it is contained in a component of \(W\), and by mapping all points of \(W\) outside this
component to a maximal point of \(V\), we may suppose w.l.o.g. that \(W\) itself is connected; in particular, for all \(x\in L^{1}_{W}\) there is some \(y\in L^{2}_{W}\) such that \(x\leq y\).
We will construct a bounded epimorphism \(p:W\twoheadrightarrow V\) by cases. It suffices to show that \(p\) preserves \(\leq\) and satisfies the back condition on \(\uparrow x\) for \(x\in L^{1}_{W}\). Let \(p\) be the identity on \(V\). We divide \(L^{1}_{W}\setminus V\) into three disjoint (possibly empty) sets:
\[W_{1}:=\{x\in L^{1}_{W}\setminus V:\uparrow^{<}x\subseteq V\}, \tag{5.1}\]
\[W_{2}:=\{x\in L^{1}_{W}\setminus V:\uparrow^{<}x\cap V\neq\emptyset\text{ and }\uparrow^{<}x\cap(W\setminus V)\neq\emptyset\}, \tag{5.2}\]
\[W_{3}:=\{x\in L^{1}_{W}\setminus V:\uparrow^{<}x\cap V=\emptyset\}. \tag{5.3}\]
1. \(x\in W_{1}\): Here, we consider two cases:
   1. \(\uparrow^{<}x=\{v\}\): Set \(p(x):=v\). Then, \(p[\uparrow x]=\{v\}\) and clearly, \(p\upharpoonright\uparrow x\) is a bounded morphism.
   2. \(\uparrow^{<}x=\{v,w\}\), \(v\neq w\): Choose some \(u\in\downarrow v\cap\downarrow w\cap V\), and set \(p(x):=u\); such \(u\) exists by the hypothesis. Then, \(p\upharpoonright\uparrow x\) is a bounded morphism.
2. \(x\in W_{2}\): Set \(Y:=(L^{2}_{W}\setminus V)\cap\uparrow W_{2}\) and, for \(y\in Y\), \(X_{y}:=\{x\in W_{2}:x\leq y\}\).
If \(x\in X_{y}\), then \(x\not\in V\), since \(x\leq y\not\in V\) and \(V\) is a generated substructure of \(W\). Our next aim is to show that \(\{X_{y}:y\in Y\}\) is a partition of \(W_{2}\). Assume that \(y,y^{\prime}\in Y,y\neq y^{\prime}\), and \(x\in X_{y}\cap X_{y^{\prime}}\). Then, \(x\leq y,x\leq y^{\prime}\) and therefore, \(\uparrow^{<}x=\{y,y^{\prime}\}\subseteq L_{W}^{2}\setminus V\) by (4.3), respectively, (4.5). This contradicts \(x\in W_{2}\). If \(x\in W_{2}\) there is some \(y\in Y\) such that \(x\leq y\), thus, \(x\in X_{y}\). Hence, \(p\upharpoonright X_{y}\) and \(p\upharpoonright X_{y^{\prime}}\) may be defined independently if \(y\neq y^{\prime}\). Let \(y\in Y\) and enumerate \(X_{y}=\{x_{1},\ldots,x_{k}\}\); then, \(x_{i}\leq y\) and \(x_{i}\leq v_{i}\) for exactly one \(v_{i}\in L_{V}^{2}\); note that the \(v_{i}\) are not necessarily different. Suppose w.l.o.g. that \(v_{i}=v_{1}\) for \(1\leq i\leq m\leq k\), and set \(p(y):=v_{1}\) as well as \(p(x_{i}):=v_{1}\) for \(1\leq i\leq m\). Then, \(p[\bigcup\{\uparrow x_{i}:1\leq i\leq m\}]=\{v_{1}\}\), and clearly, \(p\upharpoonright\bigcup\{\uparrow x_{i}:1\leq i\leq m\}\) is a bounded morphism. For \(i=m+1,\ldots,k\) choose some \(u_{i}\in L_{V}^{1}\) such that \(u_{i}\in\downarrow v_{1}\cap\downarrow v_{i}\) and define \(p(x_{i}):=u_{i}\). Then, \(p\upharpoonright\uparrow x_{i}\) is a bounded morphism. This way we have defined \(p\upharpoonright\bigcup\{\uparrow x:x\in W_{2}\}\).
3. \(x\in W_{3}\): Thus far, we have defined \(p\) on \(V\) and on \(\uparrow(W_{1}\cup W_{2})\). If \(x\in\downarrow y\cap\downarrow y^{\prime}\) for some distinct \(y,y^{\prime}\in L_{W}^{2}\setminus V\), then \(p\) may already have been defined on \(\uparrow^{<}x\) in the previous step.
Finally, let \(\uparrow^{<}x=\{y,z\}\), \(y\neq z\). If both \(p(y)\) and \(p(z)\) have been defined, choose \(u\in\downarrow p(y)\cap\downarrow p(z)\cap V\) and set \(p(x):=u\). Then, \(p\upharpoonright\uparrow x\) is a bounded morphism.
If only one of \(p(y),p(z)\) has been defined, say, \(p(y)\), choose \(v\in L^{2}_{V}\), set \(p(z):=v\), choose \(u\in\downarrow p(y)\cap\downarrow v\) and set \(p(x):=u\).
As in the previous case, \(p\upharpoonright\uparrow x\) is a bounded morphism. If neither \(p(y)\) nor \(p(z)\) has been defined, choose \(u\in L^{1}_{V},v\in L^{2}_{V}\) such that \(u\leq v\), and set \(p(y):=p(z):=v\) and \(p(x):=u\). Clearly, \(p\upharpoonright\uparrow x\) is a bounded morphism.
This completes the proof.
**Corollary 5.4**.: _If \(A\) is a finite directly indecomposable fork algebra with at most two non-closed atoms, then \(A\) is projective._
Proof.: If \(A\) has only one non-closed atom, then \(A\) is projective by Theorem 5.3. Thus, suppose that \(v,w\) are the non-closed atoms of \(A\). Let \(\langle V,\leq\rangle\) be the canonical frame of \(A\); then, \(L^{2}_{V}=\{v,w\}\). Since \(A\) is directly indecomposable, \(V\) is connected, hence, there is a path from \(v\) to \(w\). The restriction of \(\leq\) to each level is the identity, so any such path must pass through some \(u\in L^{1}_{V}\) with \(u\leq v\) and \(u\leq w\). Thus, the conditions of Theorem 5.3 are met, hence, \(A\) is projective.
The variety of all closure algebras has no non-trivial injectives [5, Theorem 7.12]. The situation is different for \(\mathbf{Eq}(B_{F})\):
**Theorem 5.5**.: \(B_{F}\) _is injective in \(\mathbf{Eq}(B_{F})\)._
Proof.: We will prove the dual statement and use the fork as in Figure 1. Let \(W\) be a fork frame, \(p:W\twoheadrightarrow F\) be a bounded epimorphism, and suppose that \(p(x)=u\). Since \(p\) is bounded, there are \(y,z\in W\) such that \(x\leq y,x\leq z\) and \(p(y)=v,p(z)=w\). Then, the assignment \(u\mapsto x,v\mapsto y\) and \(w\mapsto z\) shows that \(p\) is a retraction. It remains to show that \(\{x,y,z\}\) is a generated substructure of \(W\), i.e. that \(\uparrow x=\{x,y,z\}\). This follows immediately from the height and width conditions (4.4) and (4.5).
## 6 The unification type of the variety generated by the fork
It was shown by Dzik et al. [12, Corollary 4.8] that \(\mathbf{Eq}(B_{F})\) has finitary but not unitary unification. In this section we will give a purely semantic proof for this result. Throughout this section \(\langle A,f\rangle\in\mathbf{Eq}(B_{F})\) is finite. If \(\langle u,B\rangle\) is a unifier of \(A\) we may suppose that \(u=e\circ p\) for some admissible congruence \(\theta\) on \(A\) and \(p:A\twoheadrightarrow A/\theta\) such that \(p\left[A\right]\cong u\left[A\right]\leq B\), and \(e:p\left[A\right]\hookrightarrow B\) is an embedding.
Consider the fork frame \(W\) pictured in Figure 5.
The complex algebra of \(W\) is denoted by \(\langle B_{W},f_{W}\rangle\). It has 5 atoms, and we identify these with the points of \(W\). The action of \(f_{W}\) on the atoms is given by
\[\begin{array}{l|ccccc}x&u&u^{\prime}&t&v&w\\ f_{W}(x)&u&u^{\prime}&u+t&u+u^{\prime}+v&u^{\prime}+w\end{array}\]
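Since distinct atoms of a Boolean algebra meet to \(0\), the table yields, for example,

\[f_{W}(t)\cdot f_{W}(w)=(u+t)\cdot(u^{\prime}+w)=0,\qquad f_{W}(t)\cdot f_{W}(v)=(u+t)\cdot(u+u^{\prime}+v)=u;\]

these computations are used repeatedly below. In particular, the first one shows that the non-closed atoms \(t\) and \(w\) violate the condition of Corollary 5.2, so \(B_{W}\) is not projective.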
The first result in this section establishes a connection between \(B_{W}\) and projective fork algebras.
**Theorem 6.1**.: _If \(A\in\mathbf{Eq}(B_{F})\) is finite and directly indecomposable, then \(A\) is projective in \(\mathbf{Eq}(B_{F})\) if and only if \(B_{W}\) is not a subalgebra of \(A\)._
Proof.: "\(\xrightarrow{}\)": We use duality and the notation of \(W\) as above. Suppose that \(V\) is an injective fork frame and assume that \(p:V\twoheadrightarrow W\) is a bounded epimorphism. Let \(r,s\in V\) such that
Figure 5: The \(W\) frame
\(p(x)=u\) and \(p(y)=u^{\prime}\). Since \(p\) is bounded there are \(r,s\in L^{2}_{V}\) such that \(x\leq r\), \(y\leq s\) and \(p(r)=t\), \(p(s)=w\). Since \(V\) is injective, there is some \(z\in L^{1}_{V}\) such that \(z\leq r\) and \(z\leq s\). Since \(p\) preserves order we have \(p(z)\leq p(r)=t\) and \(p(z)\leq p(s)=w\). However, since \(\downarrow t\cap\downarrow w=\emptyset\), such \(z\) does not exist.
"\(\Leftarrow\)": Suppose that \(B_{W}\leq A\); we need to find two non-closed atoms \(a,a^{\prime}\) of \(A\) with \(f(a)\cdot f(a^{\prime})=0\). With some abuse of language we let the atoms of \(B_{W}\) be the points in Figure 5; then, \(u=f(u)\leq f(t)\cdot f(v),u^{\prime}=f(u^{\prime})\leq f(w)\cdot f(v)\) and \(f(v)\cdot f(w)=0\). Let \(a_{t}\) and \(a_{w}\) be non-closed atoms of \(A\) with \(a_{t}\leq t\), \(a_{w}\leq w\). These exist, since each atom of \(B_{W}\) is a sum of atoms of \(A\), and \(t\), \(w\) are non-closed. Now, \(a\neq f(a)\leq f(t),a^{\prime}\neq f(a^{\prime})\leq f(w)\) and therefore \(f(a)\cdot f(a^{\prime})=0\). It follows that \(A\) is not projective by Theorem 5.1.
**Theorem 6.2**.: _The unification type of \(\mathbf{Eq}(B_{F})\) is finitary and not unitary._
Proof.: Suppose that \(A\in\mathbf{Eq}(B_{F})\) is finite, and \(\theta\in\mathbb{C}(A)\); then, there is some projective \(B\in\mathbf{Eq}(B_{F})\) such that \(A/\theta\leq B\). Since \(\mathbf{Eq}(B_{F})\) is locally finite, we may suppose that \(B\) is finite as well. Furthermore, \(A/\theta\) is directly indecomposable, since it is a subalgebra of the projective algebra \(B\) which is directly indecomposable.
Our first aim is to show that \(A/\theta\) itself is projective. Assume otherwise; then \(B_{W}\) is a subalgebra of \(A/\theta\) by Theorem 6.1. Since \(A/\theta\) is a subalgebra of \(B\), the algebra \(B_{W}\) is also a subalgebra of \(B\), contradicting that \(B\) is projective. This shows that \(\langle p_{\theta},A/\theta\rangle\) is a unifier of \(A\) for an admissible congruence \(\theta\).
Suppose that \(\langle u,B\rangle\) is a unifier of \(A\). Then, there is some admissible \(\theta\) such that \(p_{\theta}[A]\cong u[A]\), and some embedding \(e:p_{\theta}[A]\hookrightarrow B\) such that \(u=e\circ p_{\theta}\). Hence, \(\langle p_{\theta},p_{\theta}[A]\rangle\succcurlyeq\langle u,B\rangle\).
Thus, each unifier is below a unifier of the form \(\langle p_{\theta},p_{\theta}[A]\rangle\) for some \(\theta\in\mathbb{C}(A)\). As \(\mathbb{C}(A)\) is finite, there can be no infinite \(\mu\)-set.

Let \(M\) be a maximal antichain in \(\mathbb{C}(A)\). Then, \(\{\langle p_{\theta},p_{\theta}[A]\rangle:\theta\in M\}\) is a \(\mu\)-set: Let \(\langle u,B\rangle\in U_{A}\) and \(u=e\circ p_{\theta}\); then, \(\theta\in\mathbb{C}(A)\) and \(\langle p_{\theta},p_{\theta}[A]\rangle\succcurlyeq\langle u,B\rangle\). The maximality of \(M\) now implies that \(\langle p_{\psi},p_{\psi}[A]\rangle\succcurlyeq\langle p_{\theta},p_{\theta}[A]\rangle\) for some \(\psi\in M\).

Thus far we have shown that the unification type of \(\mathbf{Eq}(B_{F})\) is neither infinitary nor nullary. It remains to show that unification in \(\mathbf{Eq}(B_{F})\) is not unitary. This follows from the more general Theorem 6.5.
We continue by exhibiting the unifier classes of \(B_{W}\), thus presenting a concrete example of an algebra in \(\mathbf{Eq}(B_{F})\) without unitary unification.
**Example 6.1**.: If \(a\in B_{W}\) is closed, we let \(p_{a}:B_{W}\twoheadrightarrow B_{W}/\!\!\downarrow\!a\) be the canonical epimorphism. An inspection7 of the quotients of \(B_{W}\) shows that of the 11 proper non-identity congruences of \(B_{W}\) only 5 lead to directly indecomposable algebras. Since projective algebras in \(\mathbf{Eq}(B_{F})\) are directly indecomposable and direct indecomposability is a universal property, these are the only ones which may appear as subalgebras of a projective algebra. Three of these are isomorphic to \(\mathbf{2}\), namely, \(B_{W}/\!\!\downarrow\!-t,\,B_{W}/\!\!\downarrow\!-v\) and \(B_{W}/\!\!\downarrow\!-w\), leading to the incomparable unifiers \(\langle p_{-t},B_{W}/\!\!\downarrow\!-t\rangle\), \(\langle p_{-v},B_{W}/\!\!\downarrow\!-v\rangle\), \(\langle p_{-w},B_{W}/\!\!\downarrow\!-w\rangle\). Since prime ideals are incomparable, it follows that these unifiers are incomparable with respect to \(\succcurlyeq\) by Lemma 3.1. The other two directly indecomposable quotients are \(B_{W}^{1}:=B_{W}/\!\!\downarrow\!f(t)\) and \(B_{W}^{2}:=B_{W}/\!\!\downarrow\!f(w)\), each of which is isomorphic to the projective algebra \(B_{F}\). Let \(p_{i}:B_{W}\twoheadrightarrow B_{W}^{i}\) be the respective quotient mappings; then, \(\langle p_{1},B_{W}^{1}\rangle\) and \(\langle p_{2},B_{W}^{2}\rangle\) are unifiers of \(B_{W}\). In what follows we will show the following:
Footnote 7: Obtained from [13]. The source is available from the first author.
1. The unifiers \(\langle p_{1},B_{W}^{1}\rangle\) and \(\langle p_{2},B_{W}^{2}\rangle\) are incomparable with respect to \(\succcurlyeq\).
2. Every unifier of \(B_{W}\) with at least four elements is equivalent to \(\langle p_{1},B_{W}^{1}\rangle\) or \(\langle p_{2},B_{W}^{2}\rangle\).
1. follows immediately from Lemma 3.1 since \(\downarrow\!f(t)\cap\downarrow\!f(w)=\{0\}\).
For 2., suppose that \(\langle u,A\rangle\in U_{B_{W}}\) has at least four elements. Since \(A\) is projective, \(B_{W}\) is not isomorphic to a subalgebra of \(A\) by Theorem 6.1, and therefore \(\ker(u)=\downarrow\!f(t)\) or \(\ker(u)=\downarrow\!f(w)\), apart from those quotients isomorphic to \(\mathbf{2}\). Suppose w.l.o.g. that \(\ker(u)=\downarrow\!f(t)\). Since \(\ker(p_{1})=\ker(u)\), there is an isomorphism \(q:u[B_{W}]\to B_{W}^{1}\) such that \(q\circ u=p_{1}\). Furthermore, \(u[B_{W}]\cong B_{F}\), so there is a retraction \(g:A\twoheadrightarrow u[B_{W}]\) by Theorem 5.5.
Set \(h:=q\circ g\) and let \(x\in B_{W}\). Then, noting that \(g\upharpoonright u[B_{W}]\) is the identity, we obtain
\[(h\circ u)(x)=(q\circ g\circ u)(x)=(q\circ u)(x)=p_{1}(x),\]
which shows that \(\langle u,A\rangle\succcurlyeq\langle p_{1},B_{W}^{1}\rangle\). Next, we show that \(\langle p_{1},B_{W}^{1}\rangle\succcurlyeq\langle u,A\rangle\). Let \(q\) be as above.
Then, \(q^{-1}\circ p_{1}:B_{W}\to A\), and \((q^{-1}\circ p_{1})(x)=(q^{-1}\circ q\circ u)(x)=u(x)\) shows that \(\langle p_{1},B_{W}^{1}\rangle\succcurlyeq\langle u,A\rangle\). Finally, \(\langle u,A\rangle\not\succcurlyeq\langle p_{2},B_{W}^{2}\rangle\), since \(\ker(p_{1})\) and \(\ker(p_{2})\) are incomparable. Thus, both our claims are proved.
Next, we extend Theorem 6.2 to show that no variety \(\mathbf{V}\) of closure algebras containing \(B_{F}\) has unitary unification and prove a converse relative to varieties with finitary unification. First, we recall the concept of a splitting pair of a lattice, see e.g. [5]. A pair \(\langle\mathbf{V}_{1},\mathbf{V}_{2}\rangle\) of subvarieties of \(\mathbf{V}_{\mathbf{C}}\) is called _splitting_, if
1. \(\mathbf{V}_{1}\nleq\mathbf{V}_{2}\),
2. If \(\mathbf{V}^{\prime}\leq\mathbf{V}_{\mathbf{C}}\), then \(\mathbf{V}_{1}\leq\mathbf{V}^{\prime}\) or \(\mathbf{V}^{\prime}\leq\mathbf{V}_{2}\).
It is well known that \(\mathbf{V}_{1}\) is generated by a finite subdirectly irreducible algebra, say, \(B\), and that \(\mathbf{V}_{2}\) is the largest subvariety of \(\mathbf{V}_{\mathbf{C}}\) not containing \(B\), called the _splitting companion of \(B\)_; for details see e.g. [5]. It has been known for some time [20, Example, p. 158] that the splitting companion of \(B_{F}\) is the variety \(\mathbf{V}_{G}\) of closure algebras \(\langle B,f\rangle\) which satisfy the Geach condition
\[\mathbf{G}\qquad f(f^{\partial}(x))\leq f^{\partial}(f(x)).\]
These algebras are the algebraic models of the logic **S4.2**.
We say that \(\mathbf{V}\) satisfies the _weak disjunction property_ (WDP) [10] if for all terms \(\tau_{1},\tau_{2}\) in the language of \(\mathbf{V}\),
\[\mathbf{WDP}\ \ \mathbf{V}\models f^{\partial}(\tau_{1})+f^{\partial}(\tau_{2}) \approx 1\Rightarrow\mathbf{Eq}(\mathbf{2})\models\tau_{1}\approx 1\ \text{or}\ \ \mathbf{Eq}(\mathbf{2})\models\tau_{2}\approx 1.\]
It turns out that the WDP is intimately related to \(\mathbf{Eq}(B_{F})\):
**Theorem 6.3**.: \(\mathbf{V}\) _satisfies the WDP if and only if \(\mathbf{Eq}(B_{F})\leq\mathbf{V}\)._
Proof.: "\(\Rightarrow\)": Suppose that \(\mathbf{V}\leq\mathbf{V}_{\mathbf{C}}\) satisfies the _WDP_, and assume that \(\mathbf{Eq}(B_{F})\nleq\mathbf{V}\). Then \(\mathbf{V}\leq\mathbf{V}_{G}\), and thus, \(A\models f^{\partial}(f(-x)))+f^{\partial}(f(x))\approx 1\) for all \(A\in\mathbf{V}\). Choose some \(A\in\mathbf{V},A\neq\mathbf{2}\), and some \(x\in A,x\notin\{0,1\}\). Since \(\mathbf{V}\) satisfies the _WDP_, \(A^{B}\models x\approx 1\) or \(A^{B}\models-x\approx 1\), where \(A^{B}\) is the Boolean reduct of \(A\). This contradicts the assumption \(x\notin\{0,1\}\).
"\(\Leftarrow\)": This is [10, Lemma 10].
Unification is related to the WDP by the following result:
**Lemma 6.4**.: _[_10_, Lemma 9]_ _If \(\mathbf{V}\) satisfies the WDP, it does not have unitary unification._
**Theorem 6.5**.:
1. _If_ \(B_{F}\in\mathbf{V}\)_, then_ \(\mathbf{V}\) _does not have unitary unification, equivalently, if_ \(\mathbf{V}\) _has unitary unification, then_ \(B_{F}\notin\mathbf{V}\)_._
2. _If_ \(\mathbf{V}\) _has finitary unification and not unitary unification, then_ \(B_{F}\in\mathbf{V}\)_._
Proof.: 1. This follows from Theorem 6.3 and Lemma 6.4.
2. Suppose that \(\mathbf{V}\) has finitary unification and not unitary unification. Then, there is some \(\langle A,f\rangle\) whose set of unifiers is not directed, that is, unification for \(A\) is not filtering. From [17, Theorem 8.4] we obtain \(\mathbf{V}\nleq\mathbf{V}_{G}\), and thus, \(\mathbf{Eq}(B_{F})\leq\mathbf{V}\) by the splitting result.
The location of the unification types relative to the splitting \(\langle\mathbf{Eq}(B_{F}),\mathbf{V}_{G}\rangle\) follows from Theorem 6.5.
Therefore, \(\mathbf{Eq}(B_{F})\) is the smallest variety of closure algebras with finitary but not unitary unification. In a sense, the fork algebra \(B_{F}\) plays the role of a "test algebra" for unitary
(finitary, not unitary) unification in varieties of closure algebras: If \(\mathbf{V}\) has unitary (finitary, not unitary) unification, then it does not (does) contain \(B_{F}\). The restriction to varieties with finitary unification is essential: Dzik et al. [11, 12] presented infinitely many varieties of locally finite Heyting algebras with unification type zero. It can be shown that there are infinitely many locally finite varieties of Grzegorczyk algebras which have unification type zero and do not contain \(B_{F}\) (and infinitely many that contain \(B_{F}\)). This is the topic of ongoing work [9].
|
2309.09904 | Learning Generative Models for Lumped Rainfall-Runoff Modeling | This study presents a novel generative modeling approach to rainfall-runoff
modeling, focusing on the synthesis of realistic daily catchment runoff time
series in response to catchment-averaged climate forcing. Unlike traditional
process-based lumped hydrologic models that depend on predefined sets of
variables describing catchment physical properties, our approach uses a small
number of latent variables to characterize runoff generation processes. These
latent variables encapsulate the intrinsic properties of a catchment and can be
inferred from catchment climate forcing and discharge data. By sampling from
the latent variable space, the model generates runoff time series that closely
resemble real-world observations. In this study, we trained the generative
models using neural networks on data from over 3,000 global catchments and
achieved prediction accuracies comparable to current deep learning models and
various conventional lumped models, both within the catchments from the
training set and from other regions worldwide. This suggests that the runoff
generation process of catchments can be effectively captured by a
low-dimensional latent representation. Yet, challenges such as equifinality and
optimal determination of latent variables remain. Future research should focus
on refining parameter estimation methods and exploring the physical meaning of
these latent dimensions to improve model applicability and robustness. This
generative approach offers a promising alternative for hydrological modeling
that requires minimal assumptions about the physical processes of the
catchment. | Yang Yang, Ting Fong May Chui | 2023-09-18T16:07:41Z | http://arxiv.org/abs/2309.09904v3 | # Learning to Generate Lumped Hydrological Models
###### Abstract
A lumped hydrological model structure can be considered a generative model because, given a set of parameter values, it can generate a hydrological modeling function that accurately predicts the behavior of a catchment under external forcing. It is implicitly assumed that a small number of variables (i.e., the model parameters) can sufficiently characterize variations in the behavioral characteristics of different catchments. This study adopts this assumption and uses a deep learning method to learn a generative model of hydrological modeling functions directly from the forcing and runoff data of multiple catchments. The learned generative model uses a small number of latent variables to characterize a catchment's behavior, so that assigning values to these latent variables produces a hydrological modeling function that resembles a real-world catchment. The learned generative model can be used similarly to a lumped model structure, i.e., the optimal hydrological modeling function of a catchment can be derived by estimating optimal parameter values (or latent variables) with a generic calibration algorithm. In this study, a generative model was learned from data from over 3,000 catchments worldwide. The model was then used to derive optimal modeling functions for over 700 different catchments. The resulting modeling functions generally showed a quality comparable to or better than that of 36 types of lumped model structures. Overall, this study demonstrates that the hydrological behavior of a catchment can be effectively described using a small number of latent variables, and that well-fitting hydrological model functions can be reconstructed from these variables.
\({}^{1}\)Department of Civil Engineering, The University of Hong Kong, Hong Kong SAR, China
## 1 Introduction
Deep learning methods have been widely used in rainfall-runoff modeling, where they are commonly used to learn functions that map climate forcing time series to catchment runoff hydrographs from observational data (Nearing et al., 2021; Shen and Lawson, 2021). A hydrological model of a catchment can be derived by fitting parameters of a neural network to minimize errors in runoff prediction. In many of the current studies, climate forcing is averaged over the catchment (S. Anderson and Radic, 2022). This is similar to conventional lumped hydrological models, where the average responses of a catchment to catchment-averaged climate forcings are modeled using empirical or physically based equations (Beven, 2011; M. P. Anderson et al., 2015; Yu, 2015; Coron et al., 2017).
In conventional lumped hydrological modeling, a unique model representation can be defined by a combination of a model structure and a set of parameter values (Spieler et al., 2020; Knoben et al., 2020). While the terms "model representation", "model structure", and "parameters" have been used without definition in many previous studies, this study provides simple definitions of these terms to avoid ambiguity and confusion in presentation and discussion. In this study, a unique model representation is referred to as a "model instance," which is a numerical function that can produce some outputs for given inputs. A model instance is essentially a hydrological modeling function. A model structure is referred to as a "model class", which is a function that can produce a model instance for a given set of settings, called "parameters". These definitions follow
computer science conventions (Stefik and Bobrow, 1985; Wickham, 2019; Yang and Chui, 2023). Each catchment and model instance is said to have a specific hydrological function, which can be thought of as the behavior or actions of the catchment/instance in response to external drivers, such as the storage and release of water (Black, 1997; Wagener et al., 2007, 2008).
From these definitions, it is easy to compare the differences between the current deep learning methods and the conventional lumped hydrological modeling methods: the deep learning methods typically aim to learn model instances directly from the data, and the conventional lumped modeling methods focus on developing different model classes that are capable of generating good model instances for arbitrary catchments (Weiler and Beven, 2015). The difference between the goals of the two modeling approaches is similar to the difference between "discriminative modeling" and "generative modeling" approaches in machine learning. Discriminative modeling aims to learn good predictive models, and generative modeling aims to learn the underlying generative process that defines a joint distribution over random variables or simply how the observational data is generated (Kingma and Welling, 2019; Tomczak, 2022; Foster, 2022). A conventional lumped model class can be considered generative from this perspective, because it can generate model instances by sampling parameters from a certain distribution, and the sampled instances are expected to resemble the runoff generation functions of real-world catchments.
A generative model can also be used for discriminative modeling purposes (Kingma and Welling, 2019). For a given catchment and lumped model class, a discriminative model can be created by searching through the parameter space and selecting the parameter set that is most likely to produce the observed runoffs under climate forcing. This discriminative model is considered an optimal model instance for the catchment, and the process of finding the optimal model instance is known as model calibration in hydrological modeling (Beven, 2011; Efstratiadis and Koutsoyiannis, 2010; Pechlivanidis et al., 2011). Model calibration can be performed effectively in part because each point in the parameter space corresponds to a meaningful model instance (that closely resembles a real-world catchment), and the dimension of the parameter space is typically relatively low. As indicated by the 46 common model structures examined in Knoben, Freer, Fowler, et al. (2019), the number of parameters used in a lumped model class is typically less than 20.
Are the neural networks used in current hydrological studies generative? Yes, for a given network structure, different parameter values (i.e., weights and biases) can correspond to different model instances. That is, the same network structure can represent different catchments using different sets of parameter values. However, they have rarely been used in generative tasks. This is because the total number of parameters used in a network can easily exceed hundreds or thousands (Botterill and McMillan, 2023; Shen and Lawson, 2021; Kratzert, Klotz, Shalev, et al., 2019; Wong et al., 2020; Solgi et al., 2021), which is much larger than the number of parameters in a lumped model. The high dimensionality of the parameter space makes it difficult to sample meaningful model instances. In addition, the parameters usually do not have an explicit physical meaning. Thus, it is not clear how to define a subspace of the high-dimensional parameter space, from which one can more easily sample meaningful model instances.
Attempts have been made to facilitate the derivation of model instances for multiple catchments, which can also be useful for generating meaningful model instances. This is usually done in two ways: using other information to infer the parameter values that represent a catchment, and reducing the number of parameters needed to generate a model instance (i.e., defining a smaller and meaningful parameter space).
For example, in the regional modeling methods (Kratzert, Klotz, Herrnegger, et al., 2019; Xu and Liang, 2021), a general rainfall-runoff model is learned for different catchments, and a unique model instance is created by feeding the general model with a set
of numerical values. Here, a general model and its required numerical values can be considered as a model class and parameters, respectively. The parameter values are usually catchment attributes that describe the physical properties of the catchment or their derivatives (Kratzert, Klotz, Shalev, et al., 2019; Kratzert, Klotz, Herrnegger, et al., 2019; Feng et al., 2020). Therefore, ideally, we can feed the general model with randomly sampled catchment attributes to create a new model instance.
However, the regional modeling methods have several shortcomings that may limit their ability to generate meaningful model instances. First, by using catchment attributes as parameters, it is assumed that a regional model can learn a useful mapping between the selected catchment attributes and the hydrological behavioral characteristics of catchments. However, this assumption may not be valid in some cases. This is because a regional hydrological model may simply use the catchment-specific attribute values to distinguish between the catchments, and then assign different runoff prediction modes to each unique catchment, regardless of whether there exists a true relationship between the attributes and the behavioral characteristics of the catchments. Li et al. (2022) showed that, in a regional modeling setting, the random values assigned to each catchment can be a good substitute for catchment attributes for creating model instances that represent real-world catchments. Thus, a network may not be able to learn a meaningful mapping between catchment attributes and behavioral characteristics across catchments, and the instances created by varying the attribute values may be unrealistic (i.e., not representing any real-world catchments).
Second, in regional modeling, it is generally assumed that the selected catchment attributes can sufficiently well explain the different hydrological behaviors of real-world catchments. Since the catchment attributes are used as parameters to create model instances, they are typically measurable or estimable. However, this points to a critical difference between regional modeling methods and conventional lumped model classes, in that in a lumped model class, the parameters used to create different model instances are often catchment-averaged properties that cannot be directly measured, such as the maximum soil moisture storage capacity of a catchment (Beven, 2011).
Since it may not be feasible to identify (let alone measure or estimate) all of the variables that determine the hydrological behavioral characteristics of a catchment, it is desirable to use some other latent variables to describe the variation in behavioral characteristics of different catchments that cannot be explained by the attributes alone. The term "latent" is used here to describe variables that cannot be directly observed and do not have a clear physical meaning. The latent variables can be thought of as the underlying intrinsic characteristics of a catchment. They determine or explain the differences in behavioral characteristics between different catchments. Note that the parameters used in a lumped model class usually have an explicit physical meaning and therefore cannot be considered latent variables. The latent variables may also be referred to as implicit variables or latent factors (Kingma & Welling, 2019; Makhzani et al., 2015; Yang & Chui, 2023).
There are also other methods that can facilitate the generation of model instances from neural networks. For example, Botterill and McMillan (2023) used the rainfall-runoff time series of a catchment to infer the parameters needed to generate a model instance, and Ma et al. (2021) used a weight freezing technique from transfer learning (Weiss et al., 2016) to reduce the number of parameters needed to generate a model instance. However, it is also currently unclear how to incorporate the latent variables into these modeling frameworks to account for the residual variation (that cannot be explained by catchment attributes and other measurable variables).
Therefore, this study aims to propose a data-driven method for learning generative models of hydrological model instances from the climate forcing and runoff data of multiple catchments. Given a set of (low-dimensional) latent variable values, a learned
generative model will be able to generate a meaningful hydrological model instance that resembles a real-world catchment. The methods for estimating the optimal latent variable values (for generating well-fitting model instances) for an arbitrary catchment are also discussed in this study, and the quality of the resulting model instance is compared to model instances obtained using conventional lumped model classes.
## 2 Methods and materials
In this study, it is assumed that the variations between the hydrological behavioral characteristics of different catchments can be sufficiently well explained by some unknown variables, referred to as latent variables, and that there exists a general hydrological model (i.e., a model class) that is conditioned on the latent variable values to make hydrological predictions. That is, a model instance is created by providing the general model with a set of latent variable values. A generative model is thus defined by the general model and a method for sampling parameter values. The approach to learning the generative model generally follows the auto-decoder method proposed in Park et al. (2019), with the exception that the stochastic elements are removed in this study to obtain simpler catchment model instances. In an auto-decoder network, the latent variable values of each entity are not inferred from other variable values, but optimized from random values to minimize some objective functions. This section describes the setting of the machine learning problem, the methods used to learn the generative model, the application and verification of the generative model, and the data used in the numerical experiments.
### 2.1 A generative model of hydrological model instances
Let \(C\) denote a set of \(N\) catchments that are indexed by \(i\), \(C=\{c_{i}|\ i=1,2,\cdots,N\}\). We assume that each catchment \(c_{i}\) is associated with a function \(f_{i}\) that predicts the runoff time series \(\mathbf{y}\) for a given climate forcing time series \(\mathbf{x}\):
\[\mathbf{y}=f_{i}\left(\mathbf{x}\right). \tag{1}\]
Note that \(\mathbf{x}\) and \(\mathbf{y}\) are multi-dimensional numerical vectors representing a multi-time step multivariate climate forcing time series and a multi-time step runoff time series, respectively.
We also assume that the runoff generation processes of \(c_{i}\) can be sufficiently well characterized by a \(d\)-dimensional latent vector \(\mathbf{z}_{i}\,\in\,\mathbb{R}^{d}\), such that there exists a general hydrological model \(g_{\theta}\) that can make good runoff predictions for \(c_{i}\) by conditioning its outputs on \(\mathbf{z}_{i}\), i.e., \(g_{\theta}\left(\mathbf{z}_{i},\mathbf{x}\right)\) is a good approximator of \(f_{i}\left(\mathbf{x}\right)\):
\[g_{\theta}\left(\mathbf{z}_{i},\mathbf{x}\right)\approx f_{i}\left(\mathbf{x }\right), \tag{2}\]
where \(\theta\) denotes the parameters to be learned from data, and \(\mathbf{z}_{i}\) follows a certain probability density function \(p\left(\mathbf{z}\right)\). For example, \(g_{\theta}\) can be represented by a neural network, and in this case \(\theta\) denotes the trainable parameters of the network. Note that the \(\mathbf{z}_{i}\) values of different catchments are to be learned automatically from the catchments' climate forcing and runoff data rather than predefined.
Essentially, \(g_{\theta}\) and \(p\left(\mathbf{z}\right)\) jointly define a generative model of hydrological model instances: by sampling \(M\) values of \(\mathbf{z}\) from \(p\left(\mathbf{z}\right)\), we can obtain \(M\) hydrological model instances \(g_{\theta}\left(\mathbf{z}_{1},\cdot\right)\) through \(g_{\theta}\left(\mathbf{z}_{M},\cdot\right)\). For a given climate forcing time series \(\mathbf{x}\), these model instances produce the predicted runoff time series \(\tilde{\mathbf{y}}_{1}\) through \(\tilde{\mathbf{y}}_{M}\). The processes for generating hydrological model instances and runoff predictions are illustrated in Figure 1.
### 2.2 Learning a generative model
Assume that for a catchment \(c_{i}\), we have \(S_{i}\), a set of \(K_{i}\) pairs of climate forcing and runoff time series \(\left(\mathbf{x}_{j},\mathbf{y}_{j}\right)\):
\[S_{i}=\left\{\left(\mathbf{x}_{j},\mathbf{y}_{j}\right)|\,\,j=1,2,\cdots,K_{i} \right\}, \tag{3}\]
where \(j\) is the index.
The machine learning task here is then to learn a general hydrological model \(g_{\theta}\) and the latent vector \(\mathbf{z}_{i}\) value of the catchment from \(S_{i}\), such that the predicted runoff \(g_{\theta}\left(\mathbf{z}_{i},\mathbf{x}_{j}\right)\) can closely match the observed runoff \(\mathbf{y}_{j}\). Here, \(g_{\theta}\) is a neural network parameterized by \(\theta\), and \(\mathbf{z}_{i}\) is a \(d\)-dimensional numerical vector. For a given neural network structure and a \(d\) value, the optimal values of \(\theta\) and \(\mathbf{z}_{i}\) can be learned by solving the following optimization problem:
\[\min_{\mathbf{z}_{i},\theta}\sum_{\left(\mathbf{x}_{j},\mathbf{y}_{j}\right) \in S_{i}}L\left(g_{\theta}\left(\mathbf{z}_{i},\mathbf{x}_{j}\right),\mathbf{ y}_{j}\right), \tag{4}\]
where \(L\) is a loss function and is set to the root mean square error (RMSE) function in this study. The commonly used methods for training neural networks can generally be used to solve this problem, such as the backpropagation and stochastic gradient descent algorithms (Goodfellow et al., 2016). The methods for determining the optimal network structure and \(d\) are described in Section 2.4.
For the \(N\) catchments in set \(C\), assuming that each catchment is associated with a climate forcing and runoff data set \(S_{i}\), the "overall" optimal \(\theta\) and \(\mathbf{z}_{i}\) values should then minimize the prediction errors for all the considered catchments, which can be derived by solving the following optimization problem:
\[\min_{\mathbf{z}_{i},\theta}\sum_{i=1}^{N}\left(\sum_{\left(\mathbf{x}_{j}, \mathbf{y}_{j}\right)\in S_{i}}L\left(g_{\theta}\left(\mathbf{z}_{i},\mathbf{ x}_{j}\right),\mathbf{y}_{j}\right)\right). \tag{5}\]
This problem can also be solved using common neural network training algorithms. As shown in Figure 2, the \(\theta\) and \(\mathbf{z}_{i}\) values are iteratively updated to minimize the runoff prediction errors until certain criteria are met. \(p\left(\mathbf{z}\right)\) is then estimated from the learned \(\mathbf{z}_{i}\) values.
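The joint optimization of \(\theta\) and the \(\mathbf{z}_{i}\) values can be written compactly with automatic differentiation. The following PyTorch sketch assumes a differentiable network `g_theta`, known values of \(N\) and \(d\), and a hypothetical data `loader` that yields batches of forcing, runoff, and catchment indices; it illustrates equation 5 and is not the exact training code used in this study.

```python
import torch

# Z holds one trainable latent vector per catchment (N catchments, dimension d).
Z = torch.zeros(N, d, requires_grad=True)
opt = torch.optim.Adam(list(g_theta.parameters()) + [Z], lr=1e-3)

for x, y, i in loader:                # forcing, observed runoff, catchment index
    y_hat = g_theta(Z[i], x)          # condition the network on z_i
    loss = torch.sqrt(torch.mean((y_hat - y) ** 2))  # RMSE, the loss L in Eq. 5
    opt.zero_grad()
    loss.backward()                   # gradients flow to both theta and Z
    opt.step()
```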
Figure 1: Illustration of the processes of generating hydrological model instances and runoff predictions of a generative model.

In this study, we assume that \(p\left(\mathbf{z}\right)\) is an arbitrary probability density function that can be estimated directly from samples of \(\mathbf{z}\), meaning that there are no constraints on the forms of \(p\left(\mathbf{z}\right)\). This is a simpler assumption compared to those used in some popular latent variable-based generative models, such as Variational Autoencoders (Kingma & Welling, 2019) and DeepSDF (Park et al., 2019), which typically restrict \(p\left(\mathbf{z}\right)\) to simple distribution functions (such as normal distributions) to allow for easy sampling of \(\mathbf{z}\) from latent space. In this study, a simpler assumption is used to clearly present the generative modeling approach. Also, no regularization terms are used in equations 4 and 5 to penalize high values of \(\mathbf{z}_{i}\) (Aggarwal, 2016). These simple treatments are shown to be sufficient for certain runoff prediction tasks in later sections of this paper.
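Because no particular family is imposed on \(p\left(\mathbf{z}\right)\), one simple option is a kernel density estimate over the learned latent vectors. A sketch with SciPy, assuming `Z` is an \(N\times d\) NumPy array holding the learned \(\mathbf{z}_{i}\) values and `M` is the number of new instances to generate:

```python
import numpy as np
from scipy.stats import gaussian_kde

# gaussian_kde expects the data with shape (d, N): one column per catchment.
p_z = gaussian_kde(Z.T)

# Draw M latent vectors from the estimated p(z); each column of new_z
# defines a new model instance g_theta(z, .).
new_z = p_z.resample(size=M)          # shape (d, M)
```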
### 2.3 Application and verification of a generative model
In general, once the \(\theta\) value is learned, \(g_{\theta}\) can be used much like a conventional lumped model class for discriminative modeling tasks. That is, \(g_{\theta}\) requires only a few numerical values (i.e., the \(\mathbf{z}\) value) to generate a model instance, and these values can be optimized using the climate forcing and runoff data of a catchment. The resulting model instance can be thought of as a "calibrated model" that can be reused to predict runoffs under different climate forcing conditions.
However, it is important to specify and test the assumptions involved in the generative modeling approach. There are two types of assumptions involved, one explicit and one implicit. The explicit assumption is that the runoff generation processes of a catchment can be sufficiently well characterized by a fixed numerical vector, where "fixed" means that it is invariant under different climate forcing conditions. The implicit assumption is that for a "new" catchment not used for learning the generative model, \(c_{new}\notin C\), there exists at least one vector \(\mathbf{z}_{new}\) in latent space that can characterize its runoff generation processes sufficiently well such that \(g_{\theta}\left(\mathbf{z}_{new},\cdot\right)\) approximates the true runoff prediction function \(f_{new}\).
Figure 2: Illustration of the process of learning a generative model.

The explicit assumption is also used in many of the current hydrological modeling studies, where the calibrated model instances are used to predict runoffs under different climate forcing conditions. Therefore, testing methods used in these studies can be used here to test the explicit assumption. For example, the split-sample testing method can be used, which evaluates the usefulness of a calibrated model instance when applied to a period not used for model calibration (i.e., a test period).
The implicit assumption can be verified if we can learn a general hydrological model \(g_{\theta}\) that is capable of accurately predicting the runoffs of an arbitrary catchment \(c_{new}\) (that is not used in learning \(g_{\theta}\)), given an optimal \(\mathbf{z}_{new}\) estimated from \(c_{new}\)'s data. For a catchment \(c_{new}\), the optimal value of \(\mathbf{z}_{new}\) associated with it can be derived by solving the following optimization problem:
\[\min_{\mathbf{z}_{new}}\sum_{(\mathbf{x}_{j},\mathbf{y}_{j})\in S_{new}}L\left( g_{\theta}\left(\mathbf{z}_{new},\mathbf{x}_{j}\right),\mathbf{y}_{j}\right), \tag{6}\]
where \(S_{new}\) is the available forcing and runoff data set of \(c_{new}\), and \(L\) is a loss function. This optimization problem is very similar to the one defined in Equation 4. The only difference is that \(\theta\) is a fixed value here instead of a decision variable. Since the dimension of \(\mathbf{z}_{new}\) (i.e., \(d\)) is low, this optimization problem can be solved using many general optimization algorithms that are commonly used in hydrology.
The process of finding the optimal \(\mathbf{z}_{new}\) value is illustrated in Figure 3, where \(n\) candidate \(\mathbf{z}\) values are iteratively updated according to the prediction errors using a general optimization algorithm, and the optimization process is terminated when certain criteria are met. The overall process is again very similar to the process of learning a generative model (Figure 2), except that here \(g_{\theta}\) is fixed and only the data from the catchment \(c_{new}\) (instead of a collection of catchments) is used.
### 2.4 Architecture of a generative model and training strategy
The proposed generative model consists of two parts, \(g_{\theta}\) and \(p\left(\mathbf{z}\right)\). \(g_{\theta}\) can be represented by a neural network that takes two inputs, a latent vector \(\mathbf{z}\) and a multi-step and multi-dimensional climate forcing time series \(\mathbf{x}\). \(p\left(\mathbf{z}\right)\) is an arbitrary density function that is to be estimated from samples of \(\mathbf{z}_{i}\) values of different catchments (see Figure 2).
Figure 3: Illustration of the process of finding the optimal \(\mathbf{z}_{new}\) value for a catchment \(c_{new}\).
The \(g_{\theta}\) architecture used in this study is shown in Figure 4, where an LSTM network (Gers et al., 2000) parameterized by \(\theta\) is used to transform \(\mathbf{x}\) and \(\mathbf{z}\) into \(\mathbf{\hat{y}}\). The LSTM network processes the input in a sequential manner. At each step, \(\mathbf{z}\) is concatenated with the climate forcing of that time step, which is then fed into the network. This simple concatenation operation is used to ensure that the inputs at different time steps are of the same dimension. For an introduction to LSTM in rainfall-runoff modeling, see Kratzert et al. (2018). It is also possible to use other neural network architectures that are well suited for time series modeling, such as EA-LSTM (Kratzert, Klotz, Shalev, et al., 2019), Transformers (Vaswani et al., 2017), and Temporal Convolutional Networks (TCN; Bai et al. (2018)).
The latent vectors \(\mathbf{z}_{1},\cdots,\mathbf{z}_{N}\) of a set of \(N\) catchments can be effectively stored in an embedding matrix of size \(d{\times}N\), where \(d\) is latent dimension, as shown in Figure 4. To make runoff prediction for a given catchment \(c_{i}\), its associated \(\mathbf{z}_{i}\) is retrieved first from the embedding matrix using the index \(i\) and then fed to the LSTM network. The embedding matrix can be considered as a simple look-up table storing \(d{\times}N\) numeric values. During training, the gradient of the loss function with respect to a value can be computed. This allows the embedding matrix to be learned together with \(\theta\) using backpropagation algorithms.
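A minimal PyTorch sketch of this architecture is given below; the layer sizes and names are illustrative rather than the tuned values of Table S1. The `nn.Embedding` layer plays the role of the \(d{\times}N\) matrix (stored as \(N{\times}d\)), so its entries receive gradients exactly like the LSTM weights.

```python
import torch
import torch.nn as nn

class GenerativeRunoffModel(nn.Module):
    """g_theta: an LSTM whose inputs are the forcing concatenated with z."""

    def __init__(self, n_catchments, d_latent=8, n_forcing=3, n_hidden=64):
        super().__init__()
        # Embedding weight = the matrix of latent vectors z_i, one row per
        # catchment; it is trained jointly with the LSTM parameters.
        self.z = nn.Embedding(n_catchments, d_latent)
        self.lstm = nn.LSTM(n_forcing + d_latent, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)   # hidden state -> daily runoff

    def forward(self, idx, x):
        # idx: (batch,) catchment indices; x: (batch, T, n_forcing).
        z = self.z(idx)                                  # (batch, d)
        z = z.unsqueeze(1).expand(-1, x.size(1), -1)     # repeat z at each step
        h, _ = self.lstm(torch.cat([x, z], dim=-1))      # (batch, T, n_hidden)
        return self.head(h).squeeze(-1)                  # (batch, T)
```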
There are several hyperparameters used in the model architecture shown in Figure 4, such as \(d\) (i.e., the dimension of the latent vector), the dimension of the hidden state, and the number of recurrent layers of the LSTM network. A number of hyperparameters are also used in the model training process, including batch size, number of epochs, learning rate, and type of optimizer. These hyperparameters should be set prior to training and can affect the quality of the learned model. The candidate \(d\) values were 2, 4, 8, and 16, and the ranges of the other hyperparameters are listed in Table S1.
Figure 4: Architecture of the general hydrological model \(g_{\theta}\) used in this study.

In this study, a Bayesian optimization algorithm implemented in the "Optuna" Python library (Akiba et al., 2019) was used to derive the optimal hyperparameters; the procedure consists of the following steps. First, a training, validation, and test data set is created by extracting and combining the climate forcing and runoff time series of each catchment from their corresponding data sets \(S_{1},\cdots,S_{N}\). The Bayesian optimization algorithm is then applied to find the optimal values of the hyperparameters that minimize the runoff prediction error on the validation data set when learning \(\theta\) and the embedding matrix (i.e., the \(\mathbf{z}_{i}\) values of \(N\) catchments) on the training data set. The optimal hyperparameters are then applied to learn \(\theta\) and the embedding matrix from the data set combining the training and validation sets, and the resulting generative model is then tested on the test data set to estimate the generalization error (Cawley and Talbot, 2010). Finally, all available data is used to train a final generative model using the optimal hyperparameters.
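A sketch of this search with Optuna is shown below. The search ranges and the `train_and_validate` helper (which would train \(g_{\theta}\) and the embedding matrix on the training set with the given settings and return the validation RMSE) are illustrative stand-ins for the settings listed in Table S1.

```python
import optuna

def objective(trial):
    params = {
        "d": trial.suggest_categorical("d", [2, 4, 8, 16]),
        "n_hidden": trial.suggest_int("n_hidden", 32, 256),
        "lr": trial.suggest_float("lr", 1e-4, 1e-2, log=True),
        "batch_size": trial.suggest_categorical("batch_size", [64, 128, 256]),
    }
    # Hypothetical helper: trains on the training set and returns the
    # RMSE on the validation set for these hyperparameters.
    return train_and_validate(params)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```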
### 2.5 Genetic algorithms for optimizing latent vector values of new catchments
As discussed in Section 2.3, the optimal latent vector value of a new catchment \(\mathbf{z}_{new}\) can be derived using a generic model calibration algorithm. This study used a simple genetic algorithm (GA) implemented in the "PyGAD" Python library (Gad, 2021). The optimal \(\mathbf{z}_{new}\) of a catchment minimizes the runoff prediction error on the calibration data set. In this study, we did not further split the calibration data set for optimizing the hyperparameters used in the GA, such as the number of generations, in order to obtain a baseline performance value (instead of a higher benchmark) and to demonstrate the effectiveness of the generative modeling approach. The hyperparameter values used are shown in Table S1. An introduction to GA is omitted here and can be found in Luke (2013).
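A sketch of this calibration step with PyGAD follows; `predict_runoff` and the calibration arrays `forcing_cal` and `runoff_cal` are hypothetical wrappers around the trained \(g_{\theta}\) and the catchment's data, and the three-argument fitness signature corresponds to PyGAD 3.x. Because PyGAD maximizes fitness, the negative RMSE is returned.

```python
import numpy as np
import pygad

def fitness_func(ga_instance, solution, solution_idx):
    # solution is a candidate latent vector z_new of length d.
    sim = predict_runoff(solution, forcing_cal)   # hypothetical wrapper
    rmse = np.sqrt(np.mean((sim - runoff_cal) ** 2))
    return -rmse                                  # GA maximizes, so negate

ga = pygad.GA(
    num_generations=200,
    num_parents_mating=10,
    sol_per_pop=50,
    num_genes=d,                                  # latent-space dimension
    fitness_func=fitness_func,
    init_range_low=-3.0,
    init_range_high=3.0,
)
ga.run()
z_new, best_fitness, _ = ga.best_solution()
```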
### 2.6 Climate forcing and runoff data sets used in numerical experiments
Three data sets covering catchments from different parts of the world were used in this study. They are CAMELS (Catchment Attributes and Meteorology for Large-sample Studies; Newman et al. (2014, 2015); Addor et al. (2017)), CAMELS-CH (Catchment Attributes and MEteorology for large-sample Studies - Switzerland; Hoge et al. (2023)), and Caravan (Kratzert et al., 2023). CAMELS, CAMELS-CH, and Caravan include catchments mainly from the US, Switzerland and neighboring countries, and around the world, respectively. A number of catchments were selected from each data set based on their location and data availability, as shown in Table 1. Catchment mean daily precipitation (P), temperature (T), and potential evapotranspiration (PET) were used as the climate forcing variables in this study because of their common use in many conventional lumped models (Knoben et al., 2019), and daily runoff (Q) was used as the prediction target. These four variables were available in the Caravan and CAMELS-CH data sets and were extracted and derived from the CAMELS data set processed by Knoben et al. (2020).
As shown in Table 1, each data set is divided into a number of smaller subsets according to their role in the numerical experiments. For the Caravan data set, three subsets were created, and they were used to train and test generative models using the methods described in Section 2.4. Both the CAMELS and CAMELS-CH data sets were split into two subsets: a calibration and a test set. The catchments in the two data sets were considered as "new" catchments, for which the optimal \(\mathbf{z}_{new}\) values were derived and tested with the calibration and test sets using the methods described in Section 2.5.
### Benchmark data on lumped hydrological models' performance
Knoben et al. (2020) calibrated 36 lumped hydrological model classes for 559 catchments from the CAMELS data set. These model classes include the Xinanjiang Model, TOPMODEL, HBV-96, VIC, and GR4J. They reported the runoff prediction accuracies of the calibrated model instances, as measured by the KGE (Kling-Gupta efficiency) criterion (Gupta et al., 2009). More details of the model performance data set can be found in Knoben et al. (2020). These values were used in this study as benchmarks for evaluating the usefulness of the generative models in runoff prediction tasks.
## 3 Numerical experiments and results
In the numerical experiments, we trained generative models (i.e., combinations of \(g_{\theta}\) and \(p\left(\mathbf{z}\right)\)) using the Caravan data set. Then, the generative models were tested on the CAMELS and CAMELS-CH data sets in terms of their ability to perform discriminative tasks (i.e., modeling runoffs for "new" catchments). Finally, we evaluated the effect of the latent vector \(\mathbf{z}\) values on the predicted hydrographs.
### Using generative models in regional hydrological modeling
#### 3.1.1 Experiments performed
In the first experiment, we used the methods described in Section 2.4 to train a general hydrological model \(g_{\theta}\) using the training and validation sets of Caravan. The final model was trained on the combination of the two sets, and the optimal hyperparameters used during training were obtained by minimizing the prediction errors on the validation set when training on the training set. Each of the 3,007 selected Caravan catchments \(c_{i}\) was assigned a latent vector \(\mathbf{z}_{i}\). Thus, a model instance \(g_{\theta}\left(\mathbf{z}_{i},\cdot\right)\) was derived for each \(c_{i}\). The model instances \(g_{\theta}\left(\mathbf{z}_{i},\cdot\right)\) were tested on the test set using the KGE criterion. The purpose of this experiment is to evaluate the usefulness of the generative modeling approach in a regional modeling setting, where the model instances were jointly learned from data for multiple catchments.
The input climate forcing time series \(\mathbf{x}\) to each model instance \(g_{\theta}\left(\mathbf{z}_{i},\cdot\right)\) is a two-year (i.e., 730 days) P, T, and PET time series, and the prediction target \(\mathbf{y}\) associated with a sample of \(\mathbf{x}\) is the second year (i.e., from day 366 to day 730) Q time series. Thus,
| Data set | Catchments selected | Splits of the data set | Selection criteria |
| --- | --- | --- | --- |
| Caravan | 3007 | Training: 1981-01-02 to 2000-12-31; Validation: 2001-01-01 to 2010-12-31; Test: 2011-01-01 to 2020-12-30 | At least 2 years of runoff data in each period; no intersection with CAMELS |
| CAMELS (processed by Knoben et al. (2020)) | 559 | Calibration: 1989-01-01 to 1998-12-31; Test: 1999-01-01 to 2009-12-31 | The same catchments used in Knoben et al. (2020) |
| CAMELS-CH (Caravan extension version) | 157 | Calibration: 1981-01-01 to 2010-12-31; Test: 2011-01-01 to 2020-12-31 | Runoff data are available in each period; no intersection with Caravan catchments |

Table 1: The data sets used in this study, with the methods of catchment selection and data set splitting presented.
the assumption used here is that the runoff depth of a catchment observed on a particular day is predictable by the one-year climate conditions that preceded that day.
When training \(g_{\theta}\) (i.e., updating the values of \(\theta\)), the \(\mathbf{x}\) and \(\mathbf{y}\) pairs of each \(c_{i}\) were randomly sampled from the specified periods of the different sets (as listed in Table 1). During validation and test (i.e., when \(\theta\) is fixed), the \(\mathbf{x}\) and \(\mathbf{y}\) samples were taken regularly from the specified period with an interval of one year, which ensures that the runoff time series can be predicted for the entire validation or testing period.
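The following numpy sketch illustrates this windowing and sampling scheme for a single catchment; the synthetic `forcing` and `runoff` arrays stand in for one catchment's record.

```python
# Sketch of assembling (x, y) samples: x is a two-year forcing window and y is
# the runoff of the second year of that window (days 366 to 730).
import numpy as np

rng = np.random.default_rng(0)
T_DAYS = 3650                                  # ten years of synthetic record
forcing = rng.normal(size=(T_DAYS, 3))         # daily P, T, PET
runoff = rng.random(T_DAYS)                    # daily Q

WINDOW, TARGET = 730, 365

def make_sample(start):
    x = forcing[start : start + WINDOW]
    y = runoff[start + WINDOW - TARGET : start + WINDOW]
    return x, y

# Training: windows drawn at random start days.
train_starts = rng.integers(0, T_DAYS - WINDOW, size=64)
x, y = make_sample(int(train_starts[0]))
# Validation/test: regular one-year stride, so the predicted runoff segments
# tile the entire evaluation period.
eval_starts = np.arange(0, T_DAYS - WINDOW + 1, TARGET)
```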
#### 3.1.2 Results
Using Bayesian optimization, we found that the optimal value of \(d\) (i.e., the latent vector dimension) was 8. For each of the 3,007 Caravan catchments \(c_{i}\), an 8-dimensional latent vector \(\mathbf{z}_{i}\) was learned, corresponding to a concrete model instance. The non-exceedance probability of the KGE scores of the model instances computed for the training and test periods is shown in Figure 5. The two non-exceedance probability curves show that the model instances generally had higher prediction accuracies during the training period than during the test period. Note that the training period here includes the periods specified by both the training and validation sets of Caravan.
The median KGE scores of the training and test periods are 0.691 and 0.655, respectively. This result indicates that the prediction accuracy was generally acceptable for more than 50% of the catchments, although performance gaps between the training and test periods can be observed.
The Caravan data set contains the forcing and runoff data of catchments of different countries or regions (Kratzert et al., 2023). The runoff data of Caravan were aggregated from several existing open data sets. Figure 6 thus compares the training and test
Figure 5: The non-exceedance probability of the training and test KGE scores obtained for each Caravan catchment. The median of the training and test KGE scores are also shown. For clarity, the results for KGE values less than -1 are not shown and can be found in Figure 6. The KGE score of one model instance during the training period was not available because the recorded runoffs were constantly 0.
KGE scores of model instances of catchments from different countries or regions. The training and test KGE scores are positively correlated for all countries or regions considered. For the catchments of the Great Britain and Central Europe regions, both the training and the test KGE scores are generally higher than 0. However, in the other regions, as indicated by the few large negative KGE scores, some model instances have poor performance.
#### 3.1.3 Discussions
The test KGE results shown in Figures 5 and 6 indicate that the generative model learned from climate forcing and runoff data alone was generally useful in a regional modeling setting, at least for some catchments. Unlike most current regional modeling studies, no catchment attributes were used. The proposed generative modeling approach does not assume any correlation between catchment attributes and hydrological functions. It simply aims to learn a set of model instances (i.e., numerical functions) that can accurately reproduce the observed hydrographs without constraining the value of the catchment latent vectors. Future studies can thus explore how to incorporate catchment attributes and other physically meaningful information into the generative modeling framework to ensure that the learned model instances have certain desirable properties.
The optimal number of latent dimensions \(d\) was found to be 8, implying that the differences between the hydrological behavioral characteristics of different catchments can be well described using a small number of numerical values. This result is consistent with the findings in Yang and Chui (2023) that the degree of fit between a catchment
Figure 6: Comparison of training and test KGE scores of model instances of catchments from different countries or regions. The label of each subplot indicates the name of the original data set from which the Caravan data set extracts the data. The subplot label also indicates the main country or region covered by each original data set. The KGE score of one model instance during the training period was not available because the recorded runoffs were constantly 0.
and a model instance can be sufficiently well explained by a small number of factors. However, the exact information encoded in each dimension is unknown. In contrast, the physical meaning of the parameters that are used in the conventional lumped model classes is known. In Section 3.3, the effect of latent vector values on the predicted hydrographs is examined.
Similar to the traditional lumped model classes, the generative model is also not immune to the overfitting problem, as indicated by the performance gap between the training and test KGE scores shown in Figure 5. The overfitting could be caused by the catchments having different runoff generation patterns during the training and test periods, and by the model instances having learned "noise" in the forcing and runoff time series in addition to the true patterns. The problem associated with shifting runoff generation patterns may be mitigated by treating the catchment runoff function \(f_{i}\) and the catchment latent vector \(\mathbf{z}_{i}\) as functions of time, rather than assuming that they are fixed as in Equation 2. The problem of learning noise from data may be mitigated by the use of different machine learning architectures, regularization methods, and more training data. Since our goal is to provide a simple introduction to the generative modeling approach, these options were not explored in this study. In addition, the factors that contribute to the differences in the predictive performance across catchments were not investigated in this study.
### Using generative models as conventional lumped model classes
#### 3.2.1 Experiments performed
In the second experiment, we evaluated the usefulness of a generative model trained on data from a set of catchments in modeling "new" catchments outside the catchment set. The generative model used in the experiment was trained on the entire Caravan data set using the hyperparameters obtained in Section 3.1. The optimal latent vector \(\mathbf{z}_{new}\) values of the CAMELS and CAMELS-CH catchments were derived using the optimization method described in Section 2.5 with the calibration period data. The range of each dimension of \(\mathbf{z}_{new}\) was determined based on the minimum and maximum values of that dimension among the \(\mathbf{z}_{i}\) values learned for the catchments in the training data set. The resulting model instances, i.e., \(g_{\theta}\left(\mathbf{z}_{new},\cdot\right)\) associated with different catchments, were then tested on the test period data. The predictive performance of the CAMELS catchment model instances was then compared with that obtained for the conventional lumped model classes in Knoben et al. (2020).
#### 3.2.2 Results
The predictive performance of the model instances derived by optimizing the latent vector values is shown in Figure 7. The median test KGE scores were 0.737 and 0.808 for the CAMELS and CAMELS-CH catchments, respectively. The median KGE score of the CAMELS catchments was comparable to previous deep learning studies that learned model instances directly from the data, e.g., Botterill and McMillan (2023) and Li et al. (2022). Figure 7(a) also compares the performances of the model instances created using the generative model and 36 conventional lumped model classes. With the exception of the bottom 10% of model instances, the generative model-based instances generally performed better than those derived from any of the lumped model classes.
The performance of the generative model-based instances of the CAMELS catchments was compared to their counterparts derived using the conventional lumped model classes, and their ranking is shown in Figure 8. The generative model-based instances ranked in the top 1, top 3, and top 10 for 44.9%, 56.0%, and 72.8% of the catchments, respectively. However, these models can have very low rankings in some catchments.
Figure 8: Histogram of the ranking of the generative model-based instances of CAMELS catchments when they are ranked together with the corresponding model instances derived from 36 conventional lumped models. The test KGE scores were used in the ranking. The lumped model instances were calibrated in Knoben et al. (2020).
Figure 7: The non-exceedance probability of the test KGE score of the model instances created using 36 conventional lumped model classes and the generative model obtained for (a) CAMELS catchments and (b) CAMELS-CH catchments. For clarity, the results for KGE values less than -1 are not shown in (a). The lumped model instances were calibrated in Knoben et al. (2020).
The generative model-based instance of each CAMELS catchment was compared to the best of the 36 corresponding lumped model instances in Figure 9. A weak to moderate positive correlation (Pearson correlation coefficient \(r\ =\ 0.496\)) can be observed between the KGE scores of the two sets of model instances. In some catchments, the generative model-based instances performed significantly worse than the best lumped model instances, as indicated by the negative KGE values. The KGE values of the best lumped model instances were rarely negative.
The spatial distribution pattern of the test KGE score of the generative model-based instances is shown in Figure 10a. The model instances of the catchments in the Pacific Mountain Systems and Appalachian Highlands regions (i.e., the west and east coasts of the US) generally have good predictive performance. Knoben, Freer, and Woods (2019) showed that KGE \(=\ 1-\sqrt{2}\ \approx-0.414\) when mean streamflow is used as a predictor, and catchments with KGE \(\leq-0.414\) were mostly found in the Interior Plains, Rocky Mountain System, and Intermontane Plateaus (i.e., central and western US). The spatial distribution pattern of the difference between the generative model-based instances and the best lumped model instances, in terms of the test KGE value, is shown in Figure 10b. The result shows that these two sets of model instances generally had comparable performance in the Pacific Mountain Systems and Appalachian Highlands regions, and in the other regions the generative model-based instances could perform significantly worse.
#### 3.2.3 Discussions
The experimental results show that it is possible to learn a generative model of hydrological model instances directly from data such that, by sampling from its parameter space (i.e., latent vector space), model instances that resemble real-world catchments can be produced. Unlike conventional lumped model classes, the generative models are
Figure 9: The test KGE scores of the generative model-based instances compared to the best among the corresponding model instances created using 36 conventional lumped model classes. For clarity, the results for KGE values less than -1 are not shown. The lumped model instances were calibrated in Knoben et al. (2020).
Figure 10: Spatial distribution of (a) the test KGE scores of the generative model-based instances and (b) the difference in KGE score between the generative model-based instances and the best of the corresponding lumped model instances created using 36 conventional lumped model classes. The lumped model instances were calibrated in Knoben et al. (2020).
automatically learned from data without the use of specific hydrological knowledge. Thus, this study proposes a purely data-driven approach to easily generate meaningful hydrological model instances. Note that in contrast to many current studies, no catchment attributes are required to generate a model instance.
The model instances generated by the generative model were found to have comparable performance to those learned directly from the data using deep learning methods (i.e., the discriminative modeling approach), but with much less effort. To create a model instance for a catchment, the generative modeling approach requires only the estimation of a few parameters using a generic model calibration algorithm, while the discriminative modeling approach requires the learning of many more parameters (e.g., tens of thousands). Given a limited amount of data, reducing the number of trainable parameters in a numerical model may alleviate the overfitting problem. However, a formal verification of this assumption is needed.
In comparison to the 36 conventional lumped model classes, the generative model showed comparable or better ability to model the runoff generation process in the majority of the CAMELS catchments. Therefore, the generative models can be a useful addition to a modeler's lumped modeling toolbox. The better performance of the machine learning models in some catchments indicates that our hydrological knowledge (used to develop lumped model classes) was insufficient, which is similar to the conclusions of other machine learning research in hydrological modeling. However, it should be noted that the generative model can have poor performance in some catchments where the conventional lumped models perform well. The poor performance may be explained by the training data set not containing catchments with similar runoff generation characteristics, or by the limited ability of the generative model to extrapolate to different runoff generation functions. Future studies can therefore use a larger or more diverse data set to train a generative model and then test whether this model has a broader applicability.
The comparison between the best lumped model and the generative model also shows that a collection of conventional lumped models can provide competitive results compared to the models derived using modern data-driven approaches. Moreover, the lower bound of their performance is much higher than that of the data-driven models. Thus, we argue that lumped models are still very useful for hydrological prediction tasks. The effectiveness of the lumped and data-driven models in hydrological prediction under different conditions should be rigorously tested using different methods, including metamorphic testing (Yang & Chui, 2021) and interpretable machine learning methods (Molnar, 2022).
### Effect of latent vector value on predicted runoff hydrograph
#### 3.3.1 Experiments performed
The purpose of this experiment is to investigate how the value of each dimension of the latent vector affects the predicted hydrograph. We chose a random catchment from the CAMELS data set, the Fish River catchment near Fort Kent, Maine (USGS 01013500). The model instance associated with the catchment, developed in Section 3.2, was derived by optimizing the 8-dimensional latent vector to minimize prediction errors during the calibration period. We selected a period with the highest predicted runoff peak and then varied the value of each dimension of the latent vector by -50%, -10%, +10%, and +50% to investigate how the predicted hydrograph changed.
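A minimal sketch of this perturbation loop, assuming a trained model handle `g_theta(z, forcing)` and the catchment's optimal latent vector `z_opt` (both hypothetical names):

```python
# Perturb one latent dimension at a time and record the resulting hydrograph.
import numpy as np

factors = [-0.5, -0.1, 0.0, 0.1, 0.5]        # -50%, -10%, no change, +10%, +50%
hydrographs = {}
for dim in range(len(z_opt)):                # each of the 8 latent dimensions
    for f in factors:
        z = np.array(z_opt, dtype=float)
        z[dim] *= 1.0 + f
        hydrographs[(dim, f)] = g_theta(z, forcing)
```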
#### 3.3.2 Results
The hydrographs corresponding to different latent vector values are shown in Figure 11. The predicted hydrograph was found to be sensitive to the values of a few selected dimensions, such as the 3rd, 4th, and 7th dimensions. Changing the values of different dimensions can have different effects on the predicted hydrograph. For example, the predicted peak runoff increased when increasing the value of the 7th dimension; changing the value of the 3rd dimension changed the predicted recession limbs and low flows. In general, the amount of change in the predicted hydrograph was found to be positively correlated with the change in the value of a particular dimension, such that a +50% change in latent vector value caused a greater change in the predicted hydrograph than a +10% change.
#### 3.3.3 Discussions
The experimental result shows that latent vector values can encode information that reflects certain hydrological behavioral characteristics of a catchment, such as how quickly runoff declines after peak runoff during a flood event. However, it can be difficult to determine the exact information encoded by each dimension. This is because a dimension can encode different information when the input forcing varies and when the latent vectors are at different locations in the latent space. The latent vector values may be more easily interpreted if certain constraints can be imposed on the latent vectors when training the model. For example, forcing each dimension to be independent of the others allows for a more straightforward analysis of each individual dimension. The sensitivity analysis methods currently used in hydrology (X. Song et al., 2015) may also be useful in studying how the latent vector value affects the predicted hydrograph.
Unlike conventional lumped model classes, the "meaning" of each parameter (i.e., each dimension of the latent vector) is automatically learned from the data, rather than
Figure 11: The predicted hydrographs of the catchment of Fish River near Fort Kent, Maine (USGS 01013500) using different latent vectors. The results obtained by changing the optimal latent vector value on a particular dimension are shown in each subplot. The hydrographs marked "no change" correspond to the results of the optimal latent vector.
being predefined. It is interesting to evaluate whether the optimal latent vector of a catchment can be predicted by its physical properties. If this is the case, then a reasonable model instance can be generated for an ungauged catchment using the generative model and the physical properties of the catchment. The optimal latent vectors of different catchments may also be useful for catchment classification and catchment similarity analysis research. Further research on these topics is recommended.
## 4 Conclusions
This study presents a generative modeling approach to rainfall-runoff modeling. The generative models, which are automatically learned from the climate forcing and runoff data of a set of catchments, are capable of producing hydrological model instances (i.e., functions that predict runoffs based on climate forcing) that closely resemble real-world catchments. A model instance can be easily produced by providing the generative model with a numerical vector sampled from a relatively low-dimensional (i.e., 8-dimensional) space. For a given catchment, a well-fitting model instance capable of making accurate runoff predictions can be derived by searching through the parameter space using a generic model calibration algorithm. That is, a generative model can be used similarly to a conventional lumped hydrological model structure, in that an optimal model instance for a catchment can be found through model calibration.
The generative models developed in this study were found to be capable of producing well-fitting model instances for catchments worldwide. For an arbitrary catchment not used to train the generative model, optimal model instances derived using a generic model calibration algorithm were found to be able to provide prediction accuracies comparable to or better than the conventional lumped model classes. These results suggest that it is possible to derive useful hydrological model structures from data without reliance on human knowledge of catchment-scale physical processes.
This study also shows that the hydrological behavior of catchments worldwide can be sufficiently well characterized by a small number of factors, i.e., a latent vector. The optimal values of these factors for a catchment can be estimated from the climate forcing and runoff time series alone, without applying any specific hydrological knowledge or referring to any catchment attributes (that describe the physical properties of the catchments). The low-dimensional representation of the catchment hydrological function can be useful for catchment similarity analysis and catchment clustering. However, the exact meaning of these factors is currently unknown, and future studies aimed at interpreting the learned factors are recommended.
###### Acknowledgements.
The LaTeX template used to create this file is from the AGU Geophysical Research Letters AGUTeX Article ([https://www.overleaf.com/latex/templates/agu-geophysical-research-letters-agutex-article/rnyzczmyvkbj](https://www.overleaf.com/latex/templates/agu-geophysical-research-letters-agutex-article/rnyzczmyvkbj)). We thank Wouter Knoben for sharing the processed CAMELS forcing data used in Knoben et al. (2020) and for commenting on the paper. The source code used in this research can be found at [https://github.com/stsfk/deep_lumped](https://github.com/stsfk/deep_lumped).
|
2309.11193 | RHALE: Robust and Heterogeneity-aware Accumulated Local Effects | Accumulated Local Effects (ALE) is a widely-used explainability method for
isolating the average effect of a feature on the output, because it handles
cases with correlated features well. However, it has two limitations. First, it
does not quantify the deviation of instance-level (local) effects from the
average (global) effect, known as heterogeneity. Second, for estimating the
average effect, it partitions the feature domain into user-defined, fixed-sized
bins, where different bin sizes may lead to inconsistent ALE estimations. To
address these limitations, we propose Robust and Heterogeneity-aware ALE
(RHALE). RHALE quantifies the heterogeneity by considering the standard
deviation of the local effects and automatically determines an optimal
variable-size bin-splitting. In this paper, we prove that to achieve an
unbiased approximation of the standard deviation of local effects within each
bin, bin splitting must follow a set of sufficient conditions. Based on these
conditions, we propose an algorithm that automatically determines the optimal
partitioning, balancing the estimation bias and variance. Through evaluations
on synthetic and real datasets, we demonstrate the superiority of RHALE
compared to other methods, including the advantages of automatic bin splitting,
especially in cases with correlated features. | Vasilis Gkolemis, Theodore Dalamagas, Eirini Ntoutsi, Christos Diou | 2023-09-20T10:27:41Z | http://arxiv.org/abs/2309.11193v1 | # RHALE: Robust and Heterogeneity-aware Accumulated Local Effects
###### Abstract
Accumulated Local Effects (ALE) is a widely-used explainability method for isolating the average effect of a feature on the output, because it handles cases with correlated features well. However, it has two limitations. First, it does not quantify the deviation of instance-level (local) effects from the average (global) effect, known as heterogeneity. Second, for estimating the average effect, it partitions the feature domain into user-defined, fixed-sized bins, where different bin sizes may lead to inconsistent ALE estimations. To address these limitations, we propose Robust and Heterogeneity-aware ALE (RHALE). RHALE quantifies the heterogeneity by considering the standard deviation of the local effects and automatically determines an optimal variable-size bin-splitting. In this paper, we prove that to achieve an unbiased approximation of the standard deviation of local effects within each bin, bin splitting must follow a set of sufficient conditions. Based on these conditions, we propose an algorithm that automatically determines the optimal partitioning, balancing the estimation bias and variance. Through evaluations on synthetic and real datasets, we demonstrate the superiority of RHALE compared to other methods, including the advantages of automatic bin splitting, especially in cases with correlated features.
## 1 Introduction
Recently, Machine Learning (ML) has been adopted across multiple areas of human activity, including mission-critical domains such as healthcare and finance. In such high-stakes areas, it is important to accompany predictions with meaningful explanations [24, 5]. For this reason, there is an increased interest in Explainable AI (XAI) [23, 14]. XAI literature distinguishes between local and global methods [18]. Local methods provide instance-level explanations [4], i.e., explain the prediction for a specific input, whereas global methods explain the entire model behavior [13]. Most of the time, global methods aggregate the instance-level explanations into a single interpretable outcome, usually a number or a plot.
Feature Effect (FE) [11] is a class of global explainability methods that quantify the average (across all instances) partial relationship between one feature and the output. The most popular FE methods are _Partial Dependence Plots_ (PDP) [6] and _Accumulated Local Effects_ (ALE) [1]. PDPs have been criticized [2, 17, 21] for providing misleading explanations in problems with highly correlated features, making ALE the only reliable solution in such cases. Nevertheless, ALE has two significant limitations. First, the way ALE formulates the FE (the _ALE definition_ of Eq. (2)) does not take into account the heterogeneity of instance-level effects, a quantity that is crucial for a complete interpretation of the average effect. Second, the way ALE estimates the FE from the instances of the training set (the _ALE approximation_ of Eq. (3)) relies on a user-defined binning process that often results in poor estimations. Therefore, this paper presents RHALE (Robust and Heterogeneity-aware ALE), an FE method built on top of ALE that overcomes these issues. To better understand the advantages of RHALE over ALE, consider the following example, which was first introduced in [9]:
\[\begin{split}& Y=0.2X_{1}-5X_{2}+10X_{2}1_{X_{3}>0}+\mathcal{E}\\ &\mathcal{E}\sim\mathcal{N}(0,1),\quad X_{1},X_{2},X_{3}\overset{i.i.d.}{\sim}\mathcal{U}(-1,1)\end{split} \tag{1}\]
where we draw \(N=100\) samples, i.e., \(\mathcal{D}=\{(x^{i},y^{i})\}_{i=1}^{N}\). Given the knowledge of Eq. (1), the FE of \(X_{3}\) is zero because the term \(10X_{2}1_{X_{3}>0}\), where \(X_{3}\) appears, is part of the effect of \(X_{2}\). In contrast, \(X_{2}\) relates to \(Y\) in two opposite ways, as \(-5X_{2}\) when \(X_{3}<0\) and as \(5X_{2}\) otherwise. Therefore, the zero average effect of \(X_{2}\), obtained after aggregating the two opposite effects, should not erroneously imply that \(X_{2}\) does not affect \(Y\). However, as demonstrated in Figure 1(a) (for \(X_{2}\)) and Figure 2(a) (for \(X_{3}\)), the _ALE definition_ erroneously indicates that both variables are not associated with the output. This phenomenon, known as aggregation bias [16, 12], is a common issue of global XAI methods.
RHALE addresses this issue by quantifying the heterogeneity based on the standard deviation of the underlying instance-level effects. As shown in Figure 1(c), although the average \(X_{2}\) effect is zero, the presence of two opposing groups of instance-level effects, namely, \(5X_{2}\) and \(-5X_{2}\), is revealed by both (a) the shaded area in the top subfigure (the limits of the shaded area are the lines \(5X_{2}\) and \(-5X_{2}\)) and (b) the violin plots in the bottom subfigure (the distribution of the instance-level effects has most of its mass at \(-5\) and \(5\)). In contrast, in Figure 2(b), the zero heterogeneity states that \(X_{3}\) is indeed not related to the output.
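To make this concrete, a short numpy check of the instance-level effects of \(X_{2}\) in Eq. (1), whose partial derivative is \(-5+10\cdot 1_{X_{3}>0}\), reproduces both the zero average effect and the heterogeneity of about \(5\):

```python
# Instance-level effects of X_2 in Eq. (1): dY/dX_2 = -5 + 10 * 1[X_3 > 0].
import numpy as np

rng = np.random.default_rng(0)
x3 = rng.uniform(-1, 1, 100)
local_effects = -5 + 10 * (x3 > 0)

print(local_effects.mean())  # ~0: the average hides the two opposing groups
print(local_effects.std())   # ~5: the heterogeneity that RHALE reports
```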
The second limitation is that _ALE approximation_ requires an initial step where the feature axis is partitioned into \(K\) non-overlapping fixed-size bins. Afterwards, an average effect (bin-effect) is computed inside each bin, and the ALE plot is the aggregation of the bin effects. Since there is no clear indication of an appropriate value for \(K\), users often rely on heuristics, such as ensuring that each bin contains on average at least \(\tau\) instances. In the example above, for \(\tau=5\), \(K=20\), which, as we show in Figure 1(b), results in significant approximation errors. Specifically, the bin-effects and bin-std values deviate significantly from their ground-truth values, which are 0 and 5, respectively. To overcome this limitation, RHALE _automatically determines the optimal sequence of variable-size bins._ For example, in Figure 1(c), RHALE automatically finds that it is optimal to create a single bin, which leads to a good approximation of both the average effect and the heterogeneity. The optimal bin splitting depends on the underlying instance-level effects (Section 3.2). Essentially, a wide bin reduces the variance of the estimation by increasing the number of samples inside the bin (better Monte Carlo approximation), but it may also introduce bias in the estimation. Using this insight, we formulate an optimization problem and propose an algorithm that minimizes the bias while ensuring that each bin has a sufficient number of samples. The main contributions of this paper are:
* A new feature effect method, RHALE, that addresses aggregation bias by providing information on the heterogeneity of instance-level effects.
* A formulation of the selection of variable-sized bins to balance RHALE bias and variance as an optimization problem.
* An algorithm that efficiently solves the optimal bin partitioning problem.
* A thorough experimental evaluation of RHALE on synthetic and real datasets, demonstrating its superiority over other feature effect methods, both in terms of accuracy and efficiency.
The code for reproducing all experiments is provided at [https://github.com/givasile/RHALE](https://github.com/givasile/RHALE).
## 2 Background and related work
Let \(\mathcal{X}\in\mathbb{R}^{d}\) be the \(d\)-dimensional feature space, \(\mathcal{Y}\) the target space and \(f(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) the black-box function. We use index \(s\in\{1,\ldots,d\}\) for the feature of interest and \(c=\{1,\ldots,d\}-s\) for the rest. For convenience, to denote the input vector, we use \((x_{s},\mathbf{x}_{c})\) instead of \((x_{1},\cdots,x_{s},\cdots,x_{d})\) and, for random variables, \((X_{s},X_{c})\) instead of \((X_{1},\cdots,X_{s},\cdots,X_{d})\). The training set \(\mathcal{D}=\{(\mathbf{x}^{i},y^{i})\}_{i=1}^{N}\) is sampled i.i.d. from the distribution \(\mathbb{P}_{X,Y}\). Finally, \(f^{<\texttt{method}>}(x_{s})\) denotes how \(<\texttt{method}>\) defines the feature effect and \(\hat{f}^{<\texttt{method}>}(x_{s})\) how it estimates it from the training set.
### Feature Effect Methods
The most popular feature effect methods are: _Partial Dependence Plots_ (PDP) and _Accumulated Local Effects_ (ALE). PDP defines the FE as an expectation over the marginal distribution \(X_{c}\), i.e., \(f^{\texttt{PDP}}(x_{s})=\mathbb{E}_{X_{c}}[f(x_{s},X_{c})]\). A variation of PDP, known as Marginal Plots (MP), computes the expectation over the conditional distribution \(X_{c}|x_{s}\), i.e., \(f^{\texttt{MP}}(x_{s})=\mathbb{E}_{X_{c}|x_{s}}[f(x_{s},X_{c})]\). Both methods suffer from misestimations when features are correlated. PDP integrates over unrealistic instances and MP computes aggregated effects, i.e., attributes the combined effect of sets of features to a single feature [1]. ALE tackles these limitations using a three-step computation: (a) the local effect at \((z,X_{c})\), \(f^{s}(z,X_{c})=\frac{\partial f}{\partial x_{s}}(z,X_{c})\), is computed with the derivatives \(\frac{\partial f}{\partial x_{s}}\) to isolate the effect of \(x_{s}\), (b) the expected effect at \(z\), \(\mu(z)=\mathbb{E}_{X_{c}|z}\left[f^{s}(z,X_{c})\right]\), is taken over \(X_{c}|z\), and (c) the accumulation, \(\int\mu(z)dz\), retrieves the main effect. The ALE definition is:
\[f^{\texttt{ALE}}(x_{s})=\int_{x_{s,\min}}^{x_{s}}\underbrace{\mathbb{E}_{X_{c}|X_{s}=z}\left[f^{s}(z,X_{c})\right]}_{\mu(z)}\partial z \tag{2}\]
where \(x_{s,\min}\) is the minimum value of the \(s\)-th feature. In real ML problems, \(p(\mathbf{X})\) is unknown, so [1] proposed estimating ALE from the training set with:
\[\hat{f}^{\texttt{ALE}}(x_{s})=\sum_{k=1}^{k_{x}}\frac{1}{|\mathcal{S}_{k}|}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{S}_{k}}\left[f(z_{k},\mathbf{x}_{c}^{(i)})-f(z_{k-1},\mathbf{x}_{c}^{(i)})\right] \tag{3}\]
where \(k_{x}\) is the index of the bin such that \(z_{k_{x}-1}\leq x_{s}<z_{k_{x}}\) and \(\mathcal{S}_{k}\) is the set of the instances of the \(k\)-th bin, i.e., \(\mathcal{S}_{k}=\{\mathbf{x}^{(i)}:z_{k-1}\leq x_{s}^{(i)}<z_{k}\}\).
Figure 1: Feature effect for \(x_{2}\) on the simple example of Eq. (1); (a) ALE incorrectly suggests that \(X_{2}\) does not relate to \(Y\), (b) ALE with heterogeneity using \(K=20\) fixed-size bins leads to significant approximation errors, (c) RHALE accurately estimates both the main effect and the heterogeneity, indicating that the zero average effect comes from opposite groups of instance-level effects.
Figure 2: Feature effect for \(x_{3}\) on the simple example of Eq. (1). ALE plot suggests that \(X_{3}\) does not relate to \(Y\). However, as seen in Figure 0(a), this interpretation can be misleading. Only after noticing the zero heterogeneity (STD and bin-std are zero) of RHALE plot in (b), we can confirm this claim.
In Eq. (3) the axis of the \(s\)-th feature is split into \(K\) equally-sized bins and the average effect in each bin (bin effect) is estimated using synthetic instances, where \(x_{s}^{(i)}\) is set to the right \((z_{k})\) and left \((z_{k-1})\) limits of the bin. Recently, [8] proposed the Differential ALE (DALE) that computes the local effects of differentiable models without modifying the training instances:
\[\widehat{f}^{\text{DALE}}(x_{s})=\Delta x\sum_{k=1}^{k_{x}}\frac{1}{|\mathcal{ S}_{k}|}\sum_{i:\mathbf{x}^{(i)}\in\mathcal{S}_{k}}f^{s}(\mathbf{x}^{i}) \tag{4}\]
Their method allows formulating large bins without creating out-of-distribution instances and changing the bin size without the need to recalculate the instance-level effects. However, both Eq. (3) and Eq. (4) are limited to equal-width partitioning, which has drawbacks. As discussed in the Introduction, selecting between narrow and wide bins is challenging and, even more, there are cases (Figure 4) where neither narrow nor wide bins are appropriate. In these cases, it is necessary to use variable bin sizes (Figure 3(b)).
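For illustration, a minimal numpy sketch of the fixed-size estimator of Eq. (4) is given below. It uses finite differences as a stand-in for the automatic differentiation that DALE assumes, and it is a simplified reading of the equation, not the authors' reference implementation.

```python
# Fixed-size DALE (Eq. 4): average the instance-level effects per bin, then
# accumulate the bin effects along the axis of the s-th feature.
import numpy as np

def dale_fixed(f, X, s, K):
    h = 1e-5
    Xp = X.copy()
    Xp[:, s] += h
    grads = (f(Xp) - f(X)) / h               # f^s(x^i), a stand-in for autodiff
    xs = X[:, s]
    edges = np.linspace(xs.min(), xs.max(), K + 1)
    dz = edges[1] - edges[0]
    idx = np.clip(np.digitize(xs, edges) - 1, 0, K - 1)
    bin_effect = np.array([grads[idx == k].mean() if np.any(idx == k) else 0.0
                           for k in range(K)])
    # Uncentered DALE curve evaluated at the bin limits.
    return edges, np.concatenate([[0.0], np.cumsum(bin_effect * dz)])
```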
### Heterogeneity Of Local Effects
Relying only on the average effect may provide a misleading interpretation of the model. Thus, there is an increasing interest in quantifying the degree of divergence between local effects and the average effect, which is commonly referred to as heterogeneity of local effects. To measure heterogeneity, PDP has a local equivalent called Individual Conditional Expectation (ICE) [9]. ICE, along with its variations like c-ICE and d-ICE [9], creates one curve per instance, \(f_{i}^{\text{ICE}}(x_{s})=f(x_{s},\mathbf{x}_{c}^{(i)})\), on top of the average PDP plot, as seen in Figure 3(a). In this way, the user assesses the heterogeneity by visually inspecting the similarity between ICE curves. However, as demonstrated in Section 4.1, ICE plots have the same limitations as PDPs in cases of correlated features. Based on the variance of the ICE plots, [19] proposed a method to quantify the standard error around the PDP plot. Some other methods [12, 3, 20] attempt to address the PDP-ICE failure in case of correlated features by clustering ICE plots based on their similarity. The focus of these works, however, is on regional effects, i.e., subsets of the input space with homogeneous effects, rather than global effects. Approaches such as the H-Statistic [7], Greenwell's interaction index [10], and SHAP interaction values [15] provide a metric that quantifies the level of interactions between feature pairs but do not provide insight into how interactions influence different parts of the feature effect plot. To the best of our knowledge, no existing method quantifies heterogeneous effects for ALE.
## 3 RHALE
RHALE visualizes the feature effect with a plot as illustrated in Figure 3(b). The plot includes (a) \(f_{\mu}^{\text{RHALE}}(x_{s})\), the robust estimation of ALE that shows the average effect (_RHALE estimation_), (b) \(\mathtt{STD}(x_{s})\), the standard deviation of the ALE effect that shows the heterogeneity of the instance-level effects (_STD_), (c) \(\hat{\mu}_{k}\forall k\), the bin effects that show the average change on the output \(y\) given a small change in \(x_{s}\) (_bin effect_) and (d) \(\hat{\sigma}_{k}\forall k\) the bin standard deviations that quantify the heterogeneity inside each bin (_bin std_). In each bin, a violin plot on top of the bin effect shows the exact distribution of the local effects. The variable-size partitioning presented in Section 3.2 leads to an accurate estimation of these quantities.
To explain these four interpretable quantities and to highlight the advantages of RHALE compared to PDP-ICE, we will use a running example. We define a generative distribution \(p(\mathbf{x})=p(x_{1})p(x_{2})p(x_{3}|x_{1})\) where \(x_{3}\) is highly correlated with \(x_{1}\), while \(x_{2}\) is independent of both. Specifically, \(x_{1}\) lies in \([-0.5,0.5]\) with most samples inside the first half, i.e., \(p(x_{1})=\frac{5}{6}\mathcal{U}(x_{1};-0.5,0)+\frac{1}{6}\mathcal{U}(x_{1};0,0.5)\), \(x_{3}\) is almost equal to \(x_{1}\), i.e., \(p(x_{3}|x_{1})=\mathcal{N}(x_{3};x_{1},\sigma_{3}=0.01)\), and \(p(x_{2})=\mathcal{N}(x_{2};0,\sigma_{2}=2)\). In the experiments, we use 60 samples drawn i.i.d. from \(p(\mathbf{x})\). The predictive function is:
\[f(\mathbf{x})=\sin(2\pi x_{1})(1_{x_{1}<0}-21_{x_{3}<0})+x_{1}x_{2}+x_{2} \tag{5}\]
The simplicity of the toy example helps us isolate the effect of \(x_{1}\), which is \(f(x_{1})\approx-\sin(2\pi x_{1})1_{x_{1}<0}\). This is because \(x_{3}\approx x_{1}\), so \(1_{x_{1}<0}-2\cdot 1_{x_{3}<0}\approx-1_{x_{1}<0}\), and the effect of \(x_{1}x_{2}\) is \(x_{1}\mathbb{E}_{x_{2}}[x_{2}]=0\). Furthermore, the only term that introduces heterogeneity is \(x_{1}x_{2}\), due to \(x_{2}\sim\mathcal{N}(0,2)\) varying among instances. Detailed derivations are provided in Appendix B.1.
In Figure 3(b) we show that a user can interpret these effects from the RHALE plot. Specifically, from the top subplot, a user can interpret that (a) the average effect of \(x_{1}\) is \(\widehat{f}_{\mu}^{\text{RHALE}}(x_{1})\approx-\sin(2\pi x_{1})1_{x_{1}<0}\), and is produced after aggregating (b) instance-level effects that vary in the region \(-\sin(2\pi x_{1})1_{x_{1}<0}\pm 2x_{1}\). From the bottom subfigure, the user can interpret the FE at the bin level. For example, for \(x_{1}\in[-0.5,-0.4]\), (c) the average change in the output is about \(\frac{\partial f}{\partial x_{1}}\approx 6\) units of \(y\) and is produced after aggregating instance-level changes that vary in \(6\pm 2\). Furthermore, the violin plots show the exact distribution of the instance-level changes.
For estimating the above quantities from the 60 available samples, the optimized partitioning divides the sinusoidal region into six bins (dense enough) and merges the constant region into a single bin (robust estimation), balancing estimation variance and bias (Section 3.2). In contrast, Figure 4 shows that _all_ fixed-size splits result in poor estimations; when using a few wide bins (\(K=5\)) the estimation is biased, as will be explained in Section 3.1, and when using dense bins (\(K=50\)) the estimation has high variance.
A natural question that arises is whether we could come to the same interpretation using the PDP-ICE plot. In Figure 3(a) we observe that PDP with c-ICE, i.e., ICE curves centered to start from zero, leads to a completely misleading interpretation. For example, in \(x_{1}\in[0,0.5]\), PDP shows a negative sinusoidal average effect and c-ICE two heterogeneous effects; a negative sinusoidal one when \(x_{3}^{(i)}<0\) (for about \(\frac{5}{6}\) of the instances), and a linear one when \(x_{3}^{(i)}\geq 0\) (for the remaining \(\frac{1}{6}\)). This is because PDP-based methods ignore the correlation between \(x_{1}\) and \(x_{3}\).
Figure 3: Feature effect for \(x_{1}\) on the example of Eq. 5. Due to feature correlations, only RHALE provides a robust estimation of the main effect and the heterogeneity.
### Definition
We define the heterogeneity at \(x_{s}=z\) as the standard deviation \(\sigma(z)\) of the instance-level effects, where:
\[\sigma^{2}(z)=\mathbb{E}_{X_{c}|X_{s}=z}\left[\left(f^{s}(z,X_{c})-\mu(z)\right)^{2}\right] \tag{6}\]
The variability is introduced by the implicit feature interactions. If the black-box function does not have any interaction term, i.e., it can be written as \(f(\mathbf{x})=f_{s}(x_{s})+f_{c}(\mathbf{x}_{c})\) then the variability is zero. For the interval-based formulation, we define the bin effect \(\mu(z_{1},z_{2})\) and the bin standard deviation \(\sigma(z_{1},z_{2})\) as:
\[\mu(z_{1},z_{2})=\mathbb{E}_{z\sim\mathcal{U}(z_{1},z_{2})}[\mu(z)]=\frac{ \int_{z_{1}}^{z_{2}}\mu(z)\partial z}{z_{2}-z_{1}} \tag{7}\]
\[\sigma^{2}(z_{1},z_{2})=\mathbb{E}_{z\sim\mathcal{U}(z_{1},z_{2})}[\sigma^{2} (z)]=\frac{\int_{z_{1}}^{z_{2}}\sigma^{2}(z)\partial z}{z_{2}-z_{1}} \tag{8}\]
The bin effect and the bin standard deviation quantify the average effect and the heterogeneity inside a bin, i.e., for a population \(\mathbf{x}^{(i)}=(z^{(i)},\mathbf{x}_{c}^{(i)})\), where \(z^{(i)}\) is uniformly drawn from \(\mathcal{U}(z_{1},z_{2})\) and \(\mathbf{x}_{c}\) from \(X_{c}|z^{(i)}\). Denoting as \(\mathcal{Z}\) the sequence of \(K+1\) points that partition the axis of the \(s\)-th feature into \(K\) variable-size intervals, i.e., \(\mathcal{Z}=\{z_{0},\ldots,z_{K}\}\), the interval-based formulation of RHALE is:
\[\widehat{f}_{\mathcal{Z}}^{\text{\tiny{RHALE}}}(x_{s})=\sum_{k=1}^{k_{x}}\mu (z_{k-1},z_{k})(z_{k}-z_{k-1}) \tag{9}\]
where \(k_{x}\) is the index of the bin such that \(z_{k_{x}-1}\leq x_{s}<z_{k_{x}}\). Eq. (9) is no more than a piece-wise linear approximation of Eq. (2). The approximation of the bin effect and of the bin standard deviation is made from the set \(\mathcal{S}_{k}\) of instances with the \(s\)-th feature in the \(k\)-th bin, i.e., \(\mathcal{S}_{k}=\{\mathbf{x}^{i}:z_{k-1}\leq x_{s}^{(i)}<z_{k}\}\). The bin effect is estimated with:
\[\hat{\mu}(z_{k-1},z_{k})=\frac{1}{|\mathcal{S}_{k}|}\sum_{i:\mathbf{x}^{i} \in\mathcal{S}_{k}}\left[f^{s}(\mathbf{x}^{i})\right] \tag{10}\]
which is an unbiased estimator of Eq. (7) (Appendix A.1). The estimator of the bin deviation Eq. (8) is:
\[\hat{\sigma}^{2}(z_{k-1},z_{k})=\frac{1}{|\mathcal{S}_{k}|-1}\sum_{i:\mathbf{x}^{i}\in\mathcal{S}_{k}}\left(f^{s}(\mathbf{x}^{i})-\hat{\mu}(z_{k-1},z_{k})\right)^{2} \tag{11}\]
In Appendix A.2, we show that \(\hat{\sigma}^{2}(z_{1},z_{2})\) is an unbiased estimator of \(\sigma_{\star}^{2}(z_{1},z_{2})=\frac{\int_{z_{1}}^{z_{2}}\mathbb{E}_{X_{c}|X_{s}=z}\left[\left(f^{s}(z,X_{c})-\mu(z_{1},z_{2})\right)^{2}\right]\partial z}{z_{2}-z_{1}}\) and in Theorem 1 we prove that, in the general case, \(\sigma_{\star}^{2}(z_{1},z_{2})\geq\sigma^{2}(z_{1},z_{2})\). Therefore, without a principled bin-splitting strategy, \(\hat{\sigma}^{2}(z_{1},z_{2})\) leads to an overestimation of the actual bin standard deviation \(\sigma^{2}(z_{1},z_{2})\).
**Theorem 1**.: _If we define (a) the residual \(\rho(z)\) as the difference between the expected effect at \(z\) and the bin effect, i.e., \(\rho(z)=\mu(z)-\mu(z_{1},z_{2})\), and (b) \(\mathcal{E}^{2}(z_{1},z_{2})\) as the mean squared residual of the bin, i.e., \(\mathcal{E}^{2}(z_{1},z_{2})=\frac{\int_{z_{1}}^{z_{2}}\rho^{2}(z)\partial z}{z_{2}-z_{1}}\), then it holds_
\[\sigma_{\star}^{2}(z_{1},z_{2})=\sigma^{2}(z_{1},z_{2})+\mathcal{E}^{2}(z_{1},z_{2}) \tag{12}\]
Proof.: The proof is in Appendix A.3.
We refer to \(\mathcal{E}^{2}(z_{1},z_{2})\) as the bin error. Based on Theorem 1, the estimation is unbiased only when \(\mathcal{E}^{2}(z_{1},z_{2})=0\).
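A small numerical check makes Theorem 1 tangible. Assume a toy local-effect model \(f^{s}(z,X_{c})=z+\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,0.3^{2})\) on a single wide bin \([0,1]\): the true heterogeneity is \(\sigma^{2}=0.09\), while the residual term contributes \(\mathcal{E}^{2}=\text{Var}(z)=1/12\), so the naive estimator converges to their sum.

```python
# Numerical check of Eq. (12): the sample variance of local effects on a wide
# bin equals sigma^2 (true heterogeneity) plus E^2 (mean squared residual).
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0, 1, 100_000)
local_effects = z + rng.normal(0.0, 0.3, z.size)   # mu(z) = z varies in the bin

print(local_effects.var(ddof=1))   # ~0.173, biased upward
print(0.3**2 + 1 / 12)             # sigma^2 + E^2, as Eq. (12) predicts
```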
### Automatic Bin-Splitting
RHALE approximation is affected by (a) the number of instances (estimation variance) and (b) the error term \(\mathcal{E}(z_{1},z_{2})\) (estimation bias), in each bin. On the one hand, we favor wide bins so that the estimation of \(\hat{\mu}(z_{1},z_{2}),\hat{\sigma}(z_{1},z_{2})\) comes from a sufficient population of samples (low estimation variance). On the other hand, we want to minimize the accumulated bin error, i.e., \(\mathcal{E}_{\mathcal{Z}}^{2}=\sum_{k=1}^{K}\mathcal{E}^{2}(z_{k-1},z_{k}) \Delta z_{k}\), where \(\mathcal{Z}=\{z_{0},\cdots,z_{K}\}\) and \(\Delta z_{k}=z_{k}-z_{k-1}\) (low estimation bias). We search for a partitioning that balances this trade-off.
**Corollary 2**.: _If a bin-splitting \(\mathcal{Z}\) minimizes the accumulated error \(\mathcal{E}_{\mathcal{Z}}^{2}\), then it also minimizes \(\sum_{k=1}^{K}\sigma_{\star}^{2}(z_{k-1},z_{k})\Delta z_{k}\)._
Proof.: The proof is based on the observation that \(\sum_{k=1}^{K}\sigma^{2}(z_{k-1},z_{k})\Delta z_{k}=\sigma^{2}(z_{0},z_{K})(z_{K}-z_{0})\), which is independent of the bin-splitting. A detailed proof is provided in Appendix A.4.
Corollary 2 shows that minimizing \(\mathcal{E}_{\mathcal{Z}}^{2}\) is equivalent to minimizing \(\sum_{k=1}^{K}\sigma_{\star}^{2}(z_{k-1},z_{k})\Delta z_{k}\), which can be directly estimated from \(\sum_{k=1}^{K}\hat{\sigma}^{2}(z_{k-1},z_{k})\Delta z_{k}\). Based on that, we set up the following optimization problem:
\[\min_{\mathcal{Z}=\{z_{0},\ldots,z_{K}\}} \mathcal{L}=\sum_{k=1}^{K}\tau_{k}\hat{\sigma}^{2}(z_{k-1},z_{k}) \Delta z_{k} \tag{13}\] \[\text{where} \Delta z_{k}=z_{k}-z_{k-1}\] \[\tau_{k}=1-\alpha\frac{|S_{k}|}{N}\] s.t. \[|\mathcal{S}_{k}|\geq N_{\text{\tiny{PPB}}}\] \[z_{0}=x_{s,min}\] \[z_{K}=x_{s,max}\]
The objective \(\mathcal{L}\) searches for a partitioning \(\mathcal{Z}^{*}=\{z_{0}^{*},\ldots,z_{K}^{*}\}\) with a low accumulated error \(\mathcal{E}_{\mathcal{Z}}^{2}\) and, when many partitionings have similar accumulated errors, the coefficient \(\tau_{k}\) favors the one with wider bins (on average, more points per bin). The constraint of at least \(N_{\text{\tiny{PPB}}}\) points per bin sets the lower limit for a _robust_ estimation. The user can choose to what extent they favor the creation of wide bins through the parameter \(\alpha\) that controls the discount \(\tau_{k}\) and the parameter \(N_{\text{\tiny{PPB}}}\) that sets the minimum population per bin. A typical choice for \(\alpha\) is \(0.2\), which means a discount in the range \([0\%,20\%]\), and for \(N_{\text{\tiny{PPB}}}\) is \(\frac{N}{20}\), which means at least \(\frac{N}{20}\) points in each bin, where \(N\) is the dataset size.
For solving the optimization problem of Eq. (13), we discretize the solution space. First, we set a threshold \(K_{\max}\) on the maximum number of bins which, in turn, defines the minimum bin width, i.e.,
Figure 4: Estimation of the ALE effect, the standard error of ALE, the bin effect and the bin standard deviation using fixed-sized bins, \(K=5\) (left) and \(K=50\) (right).
\(\Delta x_{\min}=\frac{x_{s,\max}-x_{s,\min}}{K_{\max}}\). Based on that, we restrict the bin limits to the multiples of the minimum width, i.e., \(z_{k}=x_{s,\min}+k\cdot\Delta x_{\min}\), where \(k\in\{0,\cdots,K_{\max}\}\). In this discretized solution space, we find the global optimum using Dynamic Programming. To define the solution, we use two indexes; index \(i\in\{0,\ldots,K_{\max}\}\) denotes the limit of the \(i\)-th bin \((z_{i})\) and the index \(j\in\{0,\ldots,K_{\max}\}\) denotes the \(j\)-th multiple of the minimum step, i.e., \(x_{j}=x_{s,\min}+j\cdot\Delta x_{\min}\). The recursive cost function \(\mathcal{T}(i,j)\) computes the cost of setting \(z_{i}\) to \(x_{j}\):
\[\mathcal{T}(i,j)=\min_{l\in\{0,\ldots,K_{max}\}}\left[\mathcal{T}(i-1,l)+ \mathcal{B}(l,j)\right] \tag{14}\]
The term \(\mathcal{B}(l,j)\) is the cost of creating a bin with limits \([x_{l},x_{j})\). In our case, following Eq. (13), we set it to \(\tau_{k}\hat{\sigma}^{2}(x_{l},x_{j})(x_{j}-x_{l})\) if the bin is valid, i.e., \(|\mathcal{S}_{k}|\geq N_{\text{PPB}}\), and to \(\infty\) otherwise. The optimal partitioning \(\mathcal{Z}^{*}\) is given by solving \(\mathcal{L}=\mathcal{T}(K_{\max},K_{\max})\) and keeping track of the sequence of steps. Therefore, the main RHALE effect is estimated as in Eq. (15) and its standard deviation as in Eq. (16):
\[\hat{f}^{\text{RHALE}}_{\mathcal{Z}^{*}}(x_{s})=\sum_{k=1}^{k_{x}}\hat{\mu}(z_{k-1},z_{k})(z_{k}-z_{k-1}) \tag{15}\]
\[\text{STD}(x_{s})=\sqrt{\sum_{k=1}^{k_{x}}(z_{k}-z_{k-1})^{2}\hat{\sigma}^{2}( z_{k-1},z_{k})} \tag{16}\]
The bin effects \(\hat{\mu}_{k}\) are estimated using Eq. 10 and the heterogeneity by the standard deviation \(\hat{\sigma}_{k}\) in each bin using Eq. 11.
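A minimal numpy sketch of the accumulation in Eq. (15)-(16) for a given sequence of bin limits (e.g., \(\mathcal{Z}^{*}\)); it assumes every bin contains at least two points and is a simplified reading of the equations, not the authors' reference implementation.

```python
# RHALE curve and STD (Eq. 15-16) from instance-level effects and bin limits.
import numpy as np

def rhale_curve(xs, grads, edges):
    mu, var = [], []
    for z1, z2 in zip(edges[:-1], edges[1:]):
        in_bin = (xs >= z1) & (xs < z2) if z2 < edges[-1] \
            else (xs >= z1) & (xs <= z2)
        mu.append(grads[in_bin].mean())         # bin effect, Eq. (10)
        var.append(grads[in_bin].var(ddof=1))   # bin variance, Eq. (11)
    dz = np.diff(edges)
    f_hat = np.concatenate([[0.0], np.cumsum(np.array(mu) * dz)])               # Eq. (15)
    std = np.sqrt(np.concatenate([[0.0], np.cumsum(np.array(var) * dz ** 2)]))  # Eq. (16)
    return f_hat, std   # values of the (uncentered) curve at the bin limits
```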
**Computational Complexity.** The computational complexity of the DP solution is \(\mathcal{O}(K_{\max}^{3})\) because we use the DALE formula of Eq. (4). This allows us to precompute the instance-level effects once in the beginning; then, the bin-splitting algorithm simply reallocates them to different partitionings without reevaluating \(f\) for each partitioning. As a result, for up to roughly \(K_{\max}=100\) bins, our algorithm runs in a couple of seconds, regardless of the dataset size or the cost of evaluating \(f\). On the other hand, a PDP-ICE plot needs to evaluate \(f\) on \(t\) positions along the \(x_{s}\) axis for all \(N\) dataset points, making it a much slower alternative. Additional details and experimental results on the computational aspect can be found in Appendix A.5. Finally, it is worth noting that \(K_{\max}\) only sets an upper limit and the optimal sequence \(\mathcal{Z}^{*}\) can range from \(1\) to \(K_{\max}\) bins.
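To illustrate Eq. (13)-(14), the sketch below solves the discretized problem as a shortest path over the candidate bin limits, which is equivalent to the recursion \(\mathcal{T}(i,j)\) when the number of bins is left free. It is a simplified sketch, not the authors' reference implementation (available in their repository); its output can be passed to `rhale_curve` above.

```python
# Optimal variable-size bin splitting (Eq. 13-14) by dynamic programming.
import numpy as np

def optimal_bins(xs, grads, K_max=40, N_ppb=10, alpha=0.2):
    grid = np.linspace(xs.min(), xs.max(), K_max + 1)   # candidate limits x_j
    N = len(xs)

    def bin_cost(l, j):                                 # cost B(l, j) of [x_l, x_j)
        in_bin = (xs >= grid[l]) & (xs < grid[j]) if j < K_max \
            else (xs >= grid[l]) & (xs <= grid[j])
        n = in_bin.sum()
        if n < max(N_ppb, 2):
            return np.inf                               # violates |S_k| >= N_PPB
        tau = 1.0 - alpha * n / N                       # discount favoring wide bins
        return tau * grads[in_bin].var(ddof=1) * (grid[j] - grid[l])

    cost = np.full(K_max + 1, np.inf)
    cost[0] = 0.0
    prev = np.zeros(K_max + 1, dtype=int)
    for j in range(1, K_max + 1):                       # shortest path over the grid
        for l in range(j):
            c = cost[l] + bin_cost(l, j)
            if c < cost[j]:
                cost[j], prev[j] = c, l
    limits = [K_max]                                    # backtrack Z*
    while limits[-1] > 0:
        limits.append(prev[limits[-1]])
    return grid[limits[::-1]]
```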
## 4 Simulation examples
To formally evaluate RHALE, we rely on simulation examples, as the evaluation requires knowledge of the ground truth generating distribution \(X\) and the black-box function \(f\). In contrast, in Section 5, we showcase the applicability of RHALE on a real-world dataset, but it is impossible to conduct a formal evaluation in this setting. The evaluation of RHALE on simulation examples is two-fold. First, in Section 4.1 we conduct a formal comparison between RHALE and PDP-ICE to verify that RHALE performs well in cases with correlated features, which PDP-ICE struggles with. Second, in Section 4.2, we compare RHALE's automated bin-splitting approach against the traditional fixed-size approximation. We demonstrate that RHALE's bin-splitting technique produces more accurate estimations across various scenarios.
### RHALE vs PDP-ICE
We consider a data generating distribution \(p(\mathbf{x})=p(x_{3})p(x_{2}|x_{1})p(x_{1})\), \(x_{1}\sim\mathcal{U}(0,1)\), \(x_{2}=x_{1}+\epsilon\) and \(x_{3}\sim\mathcal{N}\left(0,\sigma_{3}^{2}=\frac{1}{4}\right)\). Here \(\epsilon\sim\mathcal{N}(0,0.01)\) is a small additive noise. The predictive function is:
\[f(\mathbf{x})=\alpha f_{2}(\mathbf{x})+\underbrace{f_{1}(\mathbf{x})1_{f_{1}( \mathbf{x})\leq\frac{1}{2}}}_{g_{1}(\mathbf{x})}+\underbrace{(1-f_{1}(\mathbf{ x}))1_{\frac{1}{2}<f_{1}(\mathbf{x})<1}}_{g_{2}(\mathbf{x})} \tag{17}\]
where \(f_{1}(\mathbf{x})=x_{1}+x_{2}\) is the additive term and \(f_{2}(\mathbf{x})=x_{1}x_{3}\) is the interaction term. We evaluate RHALE and PDP-ICE when (a) there is no heterogeneity (\(\alpha=0\)) and (b) there is heterogeneity implied by the interaction term (\(\alpha>0\)). We use this simple example because it allows us to establish a ground truth for the main effect and the heterogeneity. For the main effect, we use the fact that, due to \(x_{2}\approx x_{1}\), we can determine the intervals where \(g_{1}\) or \(g_{2}\) are active. For the heterogeneity, we use the fact that under no interactions (\(\alpha=0\)) the heterogeneity must be zero, and we discuss separately the case with \(\alpha>0\). Detailed derivations are provided in Appendix B.1.
**Case a: Interaction term disabled.** Given that \(x_{2}\approx x_{1}\), when \(0\leq x_{1}<\frac{1}{4}\), then \(f_{1}(\mathbf{x})<\frac{1}{2}\), so the effect is \(x_{1}\) and, similarly, when \(\frac{1}{4}\leq x_{1}<\frac{1}{2}\) the effect is \(-x_{1}\). Therefore, the ground truth effect is \(f^{\text{GT}}(x_{1})=x_{1}\,1_{x_{1}<\frac{1}{4}}+\left(\frac{1}{2}-x_{1}\right)1_{\frac{1}{4}\leq x_{1}<\frac{1}{2}}\). Since \(x_{1}\) does not interact with any other feature, the heterogeneity is zero. In Figure 5, we observe that PDP's main effect is wrong and ICE plots show heterogeneous effects. In contrast, RHALE estimates correctly both the average effect and the heterogeneity. Finally, we observe that RHALE's bin-splitting optimally creates three wide bins, \([0,\frac{1}{4})\), \([\frac{1}{4},\frac{1}{2})\), \([\frac{1}{2},1)\), in the regions with linear effect.
**Case b: Interaction term enabled.** The main effects are \(f^{\text{GT}}(x_{j})=x_{j}1_{x_{j}<\frac{1}{4}}+\left(\frac{1}{2}-x_{j}\right)1_{\frac{1}{4}\leq x_{j}<\frac{1}{2}}\) for features \(j=1,2\) and \(f^{\text{GT}}(x_{3})=\frac{1}{2}x_{3}\) for feature \(x_{3}\). The interaction term \(x_{1}x_{3}\) induces heterogeneous effects for features \(x_{1}\) and \(x_{3}\), and since the two variables are independent, the heterogeneity is \(\sigma_{3}=\frac{1}{2}\) for \(x_{1}\) and \(\sigma_{1}=\frac{1}{4}\) for \(x_{3}\). In Figure 6, we observe that RHALE correctly estimates the main effect and the heterogeneity of all features. In contrast, PDP-ICE estimates correctly only the effect and the heterogeneity of \(x_{3}\). This confirms our previous knowledge that PDP-ICE performs well only when the interaction term includes non-correlated features, like the term \(f_{2}(\mathbf{x})\). For the correlated features \(x_{1}\) and \(x_{2}\), both the average effect and the heterogeneity are erroneously estimated by PDP-ICE.
**Discussion.** The study shows RHALE's superiority under correlated features, where PDP and ICE plots can provide highly misleading results. Additionally, RHALE's automatic bin splitting leads to a robust estimation of the average effect and of the heterogeneity, favoring wider bins in regions with (near) constant effects.
Figure 5: No interaction, Equal weights: Feature effect for \(x_{1}\) using RHALE (Left) and PDP-ICE (Right).
### RHALE vs ALE
In this simulation, we compare the performance of RHALE's automatic partitioning with ALE's fixed-size bin-splitting. To assess the accuracy of these approximations, we first estimate the ground truth average effect \(\mu\) and heterogeneity \(\sigma\) using a large dataset (\(N=10^{6}\)) with dense fixed-size binning (\(K=10^{3}\)). We then generate a smaller dataset (\(N=50\)) and compare the estimation of \(\hat{\mu},\hat{\sigma}\), using (a) fixed-size bins for several values of \(K\) against (b) RHALE's automatic partitioning. Our objective is to show that RHALE provides better estimates of \(\mu\) and \(\sigma\) compared to any fixed-size alternative.
The dataset is generated by sampling from \(p(\mathbf{x})=p(x_{2}|x_{1})p(x_{1})\) where \(x_{1}\sim\mathcal{U}(0,1)\) and \(x_{2}\sim\mathcal{N}(x_{1},\sigma_{2}^{2}=0.5)\). RHALE's approximation is denoted with \(\mathcal{Z}^{*}\) and the fixed-size partition with \(K\) bins as \(\mathcal{Z}^{K}\). The evaluation is based on the Mean Absolute Error (MAE) of the bin effect \(\mu\) and of the heterogeneity \(\sigma\) across bins, i.e.,
\[\mathcal{L}^{\mu}=\frac{1}{|\mathcal{Z}|-1}\sum_{k\in\mathcal{Z}}|\mu(z_{k-1}, z_{k})-\hat{\mu}(z_{k-1},z_{k})| \tag{18}\]
\[\mathcal{L}^{\sigma}=\frac{1}{|\mathcal{Z}|-1}\sum_{k\in\mathcal{Z}}|\sigma(z_ {k-1},z_{k})-\hat{\sigma}(z_{k-1},z_{k})| \tag{19}\]
The ground truth bin effect, \(\mu(z_{k-1},z_{k})\), and heterogeneity, \(\sigma(z_{k-1},z_{k})\) are obtained by averaging the dense fixed-size bins within the interval \([z_{k-1},z_{k}]\). We also calculate the mean residual error \(\mathcal{L}^{\rho}=\frac{1}{|\mathcal{Z}|}\sum_{k\in\mathcal{Z}}\mathcal{E}(z _{k-1},z_{k})\) to interpret cases where the bin standard deviation is biased.
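As a sketch of how Eqs. (18) and (19) are evaluated, the snippet below recomputes the ground-truth bin statistics on the edges of a candidate partition and averages the absolute errors over its bins (the mean over the \(|\mathcal{Z}|-1\) bins matches the normalisation above). It assumes the local effects have been computed beforehand and that every bin contains samples; all names are ours:

```python
import numpy as np

def bin_stats(edges, x, dfdx):
    """Per-bin mean and std of the local effects dfdx of samples x."""
    mu, sg = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x < hi)
        mu.append(dfdx[m].mean())
        sg.append(dfdx[m].std())
    return np.array(mu), np.array(sg)

def mae_vs_ground_truth(edges, mu_hat, sg_hat, x_gt, dfdx_gt):
    """Eqs. (18)-(19): MAE of bin effect and heterogeneity against a dense
    ground-truth estimate, averaged over the |Z|-1 bins of the partition."""
    mu_gt, sg_gt = bin_stats(edges, x_gt, dfdx_gt)
    return np.mean(np.abs(mu_gt - mu_hat)), np.mean(np.abs(sg_gt - sg_hat))
```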
We compare RHALE vs ALE in two different scenarios: when \(f\) is piecewise linear and when \(f\) is non-linear. We execute \(t=30\) independent runs, using each time \(N=500\) different samples, and report the mean values of the metrics.
Piecewise Linear Function. Here, the black-box function is \(f(\mathbf{x})=a_{1}x_{1}+x_{1}x_{2}\), with \(5\) piecewise linear regions, i.e., \(a_{1}\) takes the values \(\{2,-2,5,-10,0.5\}\) in the intervals defined by the sequence \(\{0,0.2,0.4,0.45,0.5,1\}\). The effect of \(x_{1}\) is \(f^{\text{GT}}(x_{1})=a_{1}x_{1}\) and the heterogeneity \(\sigma_{2}=\sqrt{0.5}\). As we observe in the top left of Figure 7, RHALE splits the intervals \([0.4,0.45]\) and \([0.45,0.5]\) into fine-grained bins and unites most of the constant-effect regions, e.g. the region \([0.5,1]\), in a single bin. Therefore, RHALE's estimation is better than any fixed-size binning in terms of both \(\mathcal{L}^{\mu}\) and \(\mathcal{L}^{\sigma}\).
Non-Linear Function. Here, the black-box function is \(f(\mathbf{x})=4x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}\), so the effect of \(x_{1}\) is \(f^{\text{GT}}(x_{1})=4x_{1}^{2}\) and the heterogeneity is \(\sigma_{2}\). When using a wide binning (low \(K\)) there is an increase in the mean residual error \(\mathcal{L}^{\rho}\) (bottom-right of Figure 8), resulting in a biased approximation of \(\sigma\). In contrast, narrow bins (high \(K\)) lead to a worse approximation due to the small number of samples per bin. However, RHALE manages to balance these competing objectives and achieves an (almost) optimal approximation of both \(\mu\) (top-right) and \(\sigma\) (bottom-left), as illustrated in Figure 8.
Figure 6: With interaction, equal weights: From top to bottom, feature effect for features \(\{x_{1},x_{2},x_{3}\}\) using RHALE (left column) and PDP-ICE (right column).
Figure 7: Bin-Splitting, piecewise linear function: RHALE’s approximation (Top-Left). RHALE vs fixed-size approximations in terms of: \(\mathcal{L}^{\mu}\) (Top-Right), \(\mathcal{L}^{\sigma}\) (Bottom-Left), \(\mathcal{L}^{\rho}\) (Bottom-Right).
Figure 8: Bin-Splitting, non-linear function: RHALE’s approximation (Top-Left). RHALE vs fixed-size approximations in terms of: \(\mathcal{L}^{\mu}\) (Top-Right), \(\mathcal{L}^{\sigma}\) (Bottom-Left), \(\mathcal{L}^{\rho}\) (Bottom-Right).
## 5 Real-world example
Here, since it is infeasible to access the ground-truth FE, we simply demonstrate the usefulness of quantifying the heterogeneity and the advantages of RHALE's approximation on the real-world California Housing dataset [22].
**ML setup.** The California Housing is a widely studied dataset with approximately \(20000\) training instances, making it appropriate for robust approximation with large \(K\). The dataset contains \(D=8\) numerical features with characteristics about the building blocks of California, e.g., latitude, longitude, population of the block or median age of houses in the block. The target variable is the median value of the houses inside the block in dollars, ranging between \(15\cdot 10^{3}\) and \(500\cdot 10^{3}\), with a mean value of \(\mu_{Y}\approx 201\cdot 10^{3}\) and a standard deviation of \(\sigma_{Y}\approx 110\cdot 10^{3}\). We exclude instances with missing or outlier values and we normalize all features to zero mean and unit standard deviation. We split the dataset into \(N_{tr}=15639\) training and \(N_{test}=3910\) test examples (80/20 split) and we fit a Neural Network with 3 hidden layers of 256, 128, and 32 units, respectively. After 15 epochs using the Adam optimizer with learning rate \(\eta=0.02\), the model achieves an MAE of \(37\cdot 10^{3}\) dollars.
Below, we illustrate the feature effect for two features: latitude \(x_{2}\) and median income \(x_{8}\). These particular features cover the main FE cases, e.g. positive/negative trend and linear/non-linear curve, and are therefore appropriate for illustration purposes. Results for all features, along with details about the preprocessing, training and evaluation parts, are provided in the Appendix B2.
Heterogeneity Quantification. Figure 9 illustrates the significance of RHALE's heterogeneity quantification for a comprehensive interpretation of feature effects. We observe that both features exhibit significant interactions with other features, leading to high heterogeneity. However, despite the high heterogeneity, we can confidently infer that (a) the latitude of the house (\(x_{2}\)) negatively impacts the price, and (b) the median income (\(x_{8}\)) has a positive influence on the price, for almost all instances.
Bin Splitting. We evaluate the robustness of RHALE's approximation, following the same methodology as described in Section 4.2. We consider as ground truth the effects computed on the entire training set (\(N_{tr}=15639\)) with dense fixed-size bin-splitting (\(K=80\)). Given the sufficient number of samples, we make the hypothesis that the approximation with dense binning is close enough to the ground truth. Next, we randomly select fewer samples, \(N=1000\), and compare RHALE's approximation against the fixed-size approximation (for all \(K\)). We repeat this process for \(t=30\) independent runs and we report the mean values of \(\mathcal{L}^{\mu}\) and \(\mathcal{L}^{\sigma}\). In Figure 10, we observe that RHALE achieves accurate approximations in all cases; \(\mathcal{L}^{\mu}\) and \(\mathcal{L}^{\sigma}\) are close to the best among the fixed-size approximations.
## 6 Conclusion and further work
In this paper, we have introduced Robust and Heterogeneity-aware ALE (RHALE), a global feature effect method that addresses two major limitations of ALE. First, it quantifies the heterogeneity of local effects, which is essential for a complete interpretation of the feature effect. Second, it automates the bin-splitting process to improve the approximation of both the average effect and the heterogeneity. To achieve the latter, we proposed an automatic bin-splitting algorithm that balances estimation bias and variance by creating wider bins only when the underlying local effects are (near) constant. Our experiments on synthetic and real-world examples demonstrate RHALE's superiority over PDP-ICE, which struggles with correlated features, and traditional ALE, as automatic bin-splitting provides more accurate estimates than fixed-size splitting.
Limitations. While the standard deviation of local effects is a good way to express the _level_ of heterogeneity, it is challenging to interpret the _type_ of heterogeneity. Therefore, we use violin plots to provide the distribution of local effects (type of heterogeneity), but their explanatory power is limited within each bin. At a global level, i.e., between the bins, the user can only determine the magnitude of heterogeneity. Finally, the automatic-binning algorithm comes with three hyperparameters, \(K_{\max},\alpha,N_{\text{PPB}}\). Although their default values work well in most cases, in exceptional scenarios, such as a very small dataset, they may need to be adjusted appropriately for an optimal bin splitting.
Figure 10: Lower is better. RHALE (red line) vs ALE fixed-size bins (blue crosses) in terms of \(\mathcal{L}^{\mu}\) (left column), \(\mathcal{L}^{\sigma}\) (right column) for features \(x_{2}\) (top) \(x_{8}\) (bottom). We observe that RHALE’s estimation is better than (almost) any fixed-size alternative.
Figure 9: RHALE plot for features \(x_{2}\) (latitude) and \(x_{8}\) (median income). Apart from the average effects, i.e., negative for \(x_{2}\) and positive for \(x_{8}\), the heterogeneity (\(\pm\)STD and BIN-STD) shows that instance-level effects are more heterogeneous in the \(x_{2}\) case.
## Acknowledgements
This work was supported by the XMANAI project (grant agreement No 957362), which has received funding from the European Regional Development Fund of the EU (EU 2020 Programme, ICT-38-2020 - Artificial intelligence for manufacturing).
|
2307.16350 | SiO Outflows in the Most Luminous and Massive Protostellar Sources of
the Southern Sky | (Abridged) High-mass star formation is far less understood than low-mass star
formation. It entails molecular outflows, which disturb the protostellar clump.
Studying these outflows and the shocked gas they cause is key for a better
understanding of this process. This study aims to characterise the behaviour of
molecular outflows in the most massive protostellar sources in the Southern
Galaxy by looking for evolutionary trends and associating shocked gas with
outflow activity. We present APEX SEPIA180 observations (beamwidth $\sim$36")
of SiO outflow candidates of a sample of 32 luminous and dense clumps,
candidates for harbouring Hot Molecular Cores. We study the SiO(4-3) line
emission, an unambiguous tracer of shocked gas and recent outflow activity, the
HCO$^+$(2-1) and H$^{13}$CO$^+$(2-1) lines. 78% of our sample present SiO
emission. Nine of these also have wings in the HCO$^+$ line, indicating outflow
activity. The SiO emission of these 9 sources is more intense and wider than
the rest, suggesting that the outflows in this group are faster and more
energetic. Three positive correlations between the outflow properties were
found, which suggest that more energetic outflows are able to mobilise more
material. No correlation was found between the evolutionary stage indicator
$L/M$ and SiO outflow properties, supporting that outflows happen throughout
the whole high-mass star formation process. We conclude that sources with both
SiO emission and HCO$^+$ wings and sources with only SiO emission are in
virtually the same advanced stage of evolution in the high-mass star formation
process. The former present more massive and more powerful SiO outflows than
the latter. Thus, looking for more outflow signatures such as HCO$^+$ wings
could help identify more massive and active massive star-forming regions in
samples of similarly evolved sources, as well as sources with older outflow
activity. | N. Guerra-Varas, M. Merello, L. Bronfman, N. Duronea, D. Elia, R. Finger, E. Mendoza | 2023-07-31T00:01:36Z | http://arxiv.org/abs/2307.16350v1 | # SiO Outflows in the Most Luminous and Massive Protostellar Sources of the Southern Sky
###### Abstract
Context:High-mass star formation remains far less understood than low-mass star formation. It entails the ejection of matter through molecular outflows, which disturb the protostellar clump. Studying these outflows and the shocked gas caused by them is key for a better understanding of this process.
Aims:The present study aims to characterise the behaviour of molecular outflows in the most massive protostellar sources in the Southern Galaxy by looking for evolutionary trends and associating the presence of shocked gas with outflow activity.
Methods:We present APEX SEPLA180 (Band 5) observations (beamwidth \(\sim\)36\({}^{\circ}\)) of SiO(4-3) molecular outflow candidates towards a well-selected sample of 32 luminous and dense clumps, which are candidates to harbouring Hot Molecular Cores. We study the emission of the SiO(4-3) line, which is an unambiguous tracer of shocked gas, and recent and active outflow activity, as well as the HCO\({}^{\circ}\) (2-1) and H\({}^{13}\)CO\({}^{\circ}\)(2-1) lines.
Results:Results show that 78% of our sample (25 sources) present SiO emission, revealing the presence of shocked gas. Nine of these sources are also found to have wings in the HCO\({}^{\circ}\) (2-1) line, indicating outflow activity. The SiO emission of these 9 sources is generally more intense (\(T_{a}>1\) K) and wider (\(\sim 61\) km s\({}^{-1}\) FWZP) than the rest of the clumps with SiO detection (\(\sim 42\) km s\({}^{-1}\) FWZP), suggesting that the outflows in this group are faster and more energetic. Three positive linear correlations are found: a weak one between the bolometric luminosity and outflow power, and two strong ones: between the outflow power and the rate of matter expulsion, and between the kinetic energy and outflow mass. These correlations suggest that more energetic outflows bear to mobilise more material. No correlation was found between the evolutionary stage indicator \(L/M\) and SiO outflow properties, supporting that molecular outflows happen throughout the whole high-mass star formation process.
Conclusions:We conclude that sources with SiO emission and HCO\({}^{\circ}\) wings and sources with only SiO emission are in an advanced stage of evolution in the high-mass star formation process, and there is no clear evolutionary difference between them. The former present more massive and more powerful SiO outflows than the latter. Therefore, looking for more outflow signatures such as HCO\({}^{\circ}\) wings could help identify more massive and active massive star-forming regions in samples of similarly evolved sources, and could also help identify sources with older outflow activity.
## 1 Introduction
Massive stars are crucial to the evolution of the interstellar medium (ISM) and galaxies. However, they are difficult to observe and study because they are very scarce (about 1% of stellar populations), have short timescales, large heliocentric distances, and are embedded in very complex environments, with high extinction and turbulence (e.g. Motte et al. 2018). Thus, high-mass star formation remains much less understood than low-mass star formation.
In the current picture, high-mass star formation takes place in massive dense cores (MDCs) embedded in massive clouds, called Massive Star-Forming (MSF) regions, and can be described by four main phases (van der Tak 2004; Motte et al. 2018; Elia et al. 2021; Urquhart et al. 2022; Jiao et al. 2023). Initially, in the quiescent or starless phase, the clump has no embedded objects and is not visible at 70 \(\mu\)m. Later, in the Young Stellar Object (YSO) phase, the clump has warmed up enough to be detected at 70 \(\mu\)m. In this stage (\(\sim 10^{4}\) yrs), there is an embedded cold and collapsing prestellar core. When a protostar appears (protostellar phase, \(\sim 3\times 10^{5}\) yrs), it feeds on inflowing material and its mass increases until it becomes a high-mass protostar. Within this phase, the Hot Molecular Core (HMC) stage can be identified. Then, as the protostar evolves and its temperature increases, it starts emitting ultraviolet (UV) radiation, which quickly ionises the surrounding gas. This starts the Ultra-Compact (UC) HII region phase (\(\sim 10^{5}\) - \(10^{6}\) yrs).
Massive star formation entails a greatly relevant feedback process: molecular outflows and jets, i.e., matter expulsion at
high velocities due to angular momentum conservation during matter accretion (Arce et al., 2007; Bally, 2016; Motte et al., 2018). This feedback process occurs throughout the whole of the high-mass formation process (Li et al., 2020; Yang et al., 2022; Urquhart et al., 2022), and outflow features have been used as an indication of massive star formation activity (Li et al., 2019; Liu et al., 2020). In order to understand how massive stars form, the study of molecular outflows in MSF regions is imperative.
The study of outflows heavily depends on the tracer used. There is no such thing as a perfect outflow tracer (Bally, 2016). However, the SiO molecule has been found to be a very good tracer of outflow activity and shocked material and has been broadly used for that purpose (e.g. Beuther et al., 2002; Lopez-Sepulcre, A. et al., 2011; Bally, 2016; Li et al., 2019; Liu et al., 2021; Liu et al., 2022; De Simone et al., 2022). Si falls onto the icy mantles of dust grains of the ISM. When hit by shocks of gas, the dust grains can sublimate to the gaseous state, releasing the Si. Thus, the Si in the gas phase can react with O to form SiO, making it observable in the millimetre (mm), sub-mm, and centimeter (cm) regimes via its rotational transitions (Schilke et al., 1997; Klaassen and Wilson, 2007; Gusdorf, A. et al., 2008). Unlike other molecules, SiO has a key advantage. Its emission is not easily contaminated by excited ambient material, making it an unambiguous tracer of shocked material and thus outflow activity. Studying SiO emission can shed light on the kinematics of outflows, as well as relevant chemical processes that occur in shock environments such as the formation of complex organic molecules (COMs) (Bally, 2016; Li et al., 2020; Rojas-Garcia et al., 2022). Both broad (Full Width Zero Power (FWZP) \(\geq\) 20 km s\({}^{-1}\)) and narrow (FWZP \(\leq\) 10 km s\({}^{-1}\)) spectral width SiO emission have been observed (Garay et al., 2010; Leurini et al., 2014; Bally, 2016; Csengeri et al., 2016; Li et al., 2019; Zhu et al., 2020). It is thought that broad-width emission is due to high-velocity collimated shocks, while narrow-width emission is associated with less collimated low-velocity shocks, such as cloud-cloud collisions. It is possible to identify wings in the SiO spectral profile when it has a broad width, which are manifestations of SiO outflows. These have been extensively studied (Beuther et al., 2002; Liu et al., 2021; Liu, D.J. et al., 2021). Moreover, SiO outflow activity has been found to decrease and get less collimated over time (Arce et al., 2007; Sakai et al., 2010; Lopez-Sepulcre, A. et al., 2011). However, there is currently no consensus on whether SiO abundance decreases over time (Csengeri et al., 2016; Li et al., 2019; Liu et al., 2022), and some works even conclude that SiO emission is much harder to interpret than just a shock tracer (Widmann et al., 2016).
Another very helpful outflow tracer is HCO\({}^{+}\) emission (Myers et al., 1996; Rawlings et al., 2004; Klaassen and Wilson, 2007; Bally, 2016; Li et al., 2019; Liu et al., 2020; He et al., 2021). This species traces the surrounding material of protostar regions (Rawlings et al., 2004) and traces the disk material in low-mass star formation (Dutrey et al., 1997). This species can trace both inflow and outflow motions (Klaassen and Wilson, 2007). When outflows are strong enough, they can be observed in broad high-velocity wings in the spectral profile (Wu et al., 2005; He et al., 2021). When HCO\({}^{+}\), and other species such as CO, are dragged outwards due to the outflow, their spectral profiles present wings. This is caused by the greater velocity gradient of the dragged material. However, the spectral profile of this species experiences significant absorption, which often complicates its analysis. Moreover, studying the emission of an HCO\({}^{+}\) isotopologue, H\({}^{13}\)CO\({}^{+}\), can help distinguish between outflows and ambient dense gas, as it traces dense gas only (see Section 3.1).
SiO line emissions are associated with recent and active outflow of matter, as shock chemistry processes happen in 10\({}^{2}\) to 10\({}^{4}\) yrs, whilst the HCO\({}^{+}\) wing observations are associated with old 'fossil' outflows (Arce et al., 2007; Klaassen and Wilson, 2007; Lopez-Sepulcre et al., 2016; Li et al., 2019). The detection of only one of these phenomena is enough to indicate the presence of a molecular outflow. If both are detected, then one can confirm without ambiguity that there is ongoing matter expulsion (Klaassen and Wilson, 2007). Outflows are associated with an advanced stage of evolution. If a molecular core does not have an outflow, it may be because it has not reached this point yet. However, it is also possible that it has already experienced a significant loss of material, and/or accretion and outflow activity has ceased.
In this work, we present SiO(4-3), HCO\({}^{+}\)(2-1) and H\({}^{13}\)CO\({}^{+}\)(2-1) APEX Band 5 observations toward a well-selected sample of 32 very massive and luminous protostellar sources, which are amongst the brightest clumps in the Southern Milky Way. This study aims to answer the following research questions: How do SiO outflows behave in the most massive protostellar sources? Do any SiO outflow properties exhibit an evolutionary trend? Is the presence of shocked gas associated with outflow activity? What can different outflow signatures tell us about similarly evolved sources?
This paper is organised as follows: in Sect. 2 we describe the studied sample, the selection criteria and the observations. In Sect. 3, we analyse the data and describe how outflows and SiO emission were detected. In Sect. 4, we calculate the outflow properties of the clumps and of the SiO emission. In Sect. 5, we present relevant correlations, carry out a cross-check between SiO and outflow detections, and discuss possible evolutionary trends. Finally, in Sect. 6, we provide a summary of our main results and present our conclusions.
## 2 Observations
### Source Selection
The studied sample consists of 32 protostellar clumps (HMC candidates) from the Hi-GAL catalogue of compact sources (Elia et al., 2021), associated with CS(2-1) line emission from IRAS PSC sources with far infrared colours of Ultra Compact HII regions by Bronfman et al. (1996). Their characteristics are presented in Table 1 (sources marked with a black diamond \(\blacklozenge\) have saturated Hi-GAL observations; see Appendix A). They have been named with 'HC' (Hot Core) and a number in increasing order for convenience. The sources were selected as follows:
1. Kinematic distances \(<\) 6 kpc (these were obtained from Reid et al. (2009), Hi-GAL distances were not used; see Appendix B).
2. Strong CS emission (Bronfman et al., 1996): Main-beam temperature in CS \(T_{MB}\)(CS) \(\geq\) 1.5 K.
3. Surface densities above the threshold for the formation of massive stars: \(\Sigma>\) 0.2 g cm\({}^{-2}\)(e.g. Butler and Tan, 2012; Merello et al., 2018).
4. Masses \(>\) 100 \(M_{\odot}\)(Elia et al., 2021).
5. High values of the evolutionary stage indicator luminosity-to-mass ratio \(L/M\). Values larger than 1 are associated with internal protostellar heating and the appearance of new stars in massive clumps (Lopez-Sepulcre, A. et al., 2011; Molinari et al., 2016).
Although low-mass star-forming clumps may also have a large value of \(L/M\), the lower limit set on the mass of the clump
together with a large \(L/M\) ensures our sample will only consist of massive protostellar clumps.
The sources in our sample are in an advanced stage of evolution, either in the protostellar or compact HII region phase. A histogram of the luminosity-to-mass ratio \(L/M\), which acts as an evolutionary stage indicator, is presented in Fig. 1 (see Table 1). This parameter \(L/M\) has a minimum value of 3 \(L_{\odot}/M_{\odot}\), a maximum of 154 \(L_{\odot}/M_{\odot}\), a mean of 45 \(L_{\odot}/M_{\odot}\) and a median of 30 \(L_{\odot}/M_{\odot}\).
The sources are very energetic, with bolometric luminosities up to \(\sim 2.3\times 10^{5}L_{\odot}\). The selection criteria of the sources allowed us to work with luminous, strong and relatively near clumps only. They are among the most luminous sources in the southern sky, and some of them have been extensively studied (see Appendix C). The continuum dust properties of the sources are shown in Table 2 (sources marked with a black diamond \(\blacklozenge\) have saturated Hi-GAL observations; see Appendix A) and were obtained from a SED fit (Elia et al. 2021). Compared with the general population of Hi-GAL protostellar sources (Elia et al. 2017, 2021), we see that our sample is much more massive and luminous. The median value for H\({}_{2}\) mass derived from dust \(M_{dust}\) and bolometric luminosity \(L_{Bol}\) for our sample is 828 \(M_{\odot}\) and 19839 \(L_{\odot}\) respectively, whilst for Hi-GAL protostellar sources, these values are equal to 464 \(M_{\odot}\) and 1071 \(L_{\odot}\) respectively. When comparing with the rest of the Hi-GAL sources within 6 kpc, the contrast is even bigger, since the mean bolometric luminosity is 206 \(L_{\odot}\) and the mean dust mass is 88 \(M_{\odot}\) in this region of the Galaxy. Finally, Fig. 2 shows the distribution in the Galactic plane of the sources presented in this work.
### APEX Observations
Observations were made using the SEPIA180 instrument at the Atacama Pathfinder Experiment (APEX)1 in the Atacama Desert in Chile (Belitsky et al. 2018), for 10 nights between November 2018 and October 2019. The observations were made in single-pointing mode. For each source, a region free from emission was taken as reference from the dust maps at 250 \(\mu\)m by Herschel, typically at \(300-500^{\prime\prime}\) from the source. The system temperature was 174.30 K on average.
Footnote 1: [https://www.apex-telescope.org/ns/instruments/sepia/sepia180/](https://www.apex-telescope.org/ns/instruments/sepia/sepia180/)
The band was tuned at 172 GHz in the LSB to observe the H\({}^{13}\)CO\({}^{+}\)(2-1) (173.507 GHz) and SiO(4-3) (173.688 GHz) lines, and centred at the HCO\({}^{+}\)(2-1) line (178.375 GHz). The spectral resolution used was of \(\Delta V=0.2\) km s\({}^{-1}\), which gives a typical RMS noise temperature of 25 mK, and the main beam efficiency \(\eta_{MB}\) was equal to 0.83. APEX observations have an antenna temperature uncertainty of about 20 % (APEX staff, private communication). The frequency of the transitions used was obtained from the SPLATALOGUE2 database. This frequency range corresponds to the Band 5 of the Atacama Large Millimetre/Sub-millimetre Array.
Footnote 2: [https://splatalogue.online/advanced1.php](https://splatalogue.online/advanced1.php)
The beam angular size can be calculated from the frequency \(f\) (in GHz) of a spectral line as \(\theta^{\prime\prime}=7.8^{\prime\prime}\times 800/f\). For these observations, this results in a beam angular size of approximately 36\({}^{\prime\prime}\).
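As a quick check of the quoted beamwidth, the rule of thumb can be evaluated at the three observed frequencies; a small sketch with our naming:

```python
def beam_fwhm_arcsec(freq_ghz):
    """APEX beam size rule of thumb: theta('') = 7.8'' x 800 / f(GHz)."""
    return 7.8 * 800.0 / freq_ghz

for name, nu in [("H13CO+(2-1)", 173.507), ("SiO(4-3)", 173.688), ("HCO+(2-1)", 178.375)]:
    print(f"{name}: {beam_fwhm_arcsec(nu):.1f} arcsec")  # all ~35-36''
```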
The CLASS software from the GILDAS3 software package was used to reduce the spectra. It is used to process single-dish spectra and is oriented towards the processing of a large number of spectra. For every spectral line, the baseline was subtracted at first order and then centred to the \(V_{LSR}\) of the corresponding source.
Figure 1: Histogram for the luminosity to mass ratio \(L/M\) in logarithmic scale. The red and purple vertical lines denote the median (30 \(L_{\odot}/M_{\odot}\)) and the mean (45 \(L_{\odot}/M_{\odot}\)) respectively. Groups A, B and C are defined in Table 6.
Figure 2: Galactic distribution of the sources. The orange point corresponds to the Sun and the green point to the Galactic centre. The red crosses are the sources studied in this paper. The blue circles are centred in the Sun and the black ones in the Galactic centre. They mark the 1 kpc, 5 kpc and 10 kpc distances.
## 3 Analysis
The SiO(4-3), H\({}^{13}\)CO\({}^{+}\)(2-1) and HCO\({}^{+}\)(2-1) spectral profiles of the HC02 source are presented in Fig. 3 as an example, where the red dotted line is at FWZP, and the blue line in the H\({}^{13}\)CO\({}^{+}\) panel shows the Gaussian fit (see Appendix D for the rest). The HCO\({}^{+}\)(2-1) spectral profile presents significant
absorption (Klaassen & Wilson 2007), thus it is not possible to quantify properties of the clump using this spectral profile only. In addition, H\({}^{13}\)CO\({}^{+}\) emission is optically thin and does not present absorption. Therefore, the Gaussian parameters fitted to the H\({}^{13}\)CO\({}^{+}\)(2-1) spectral profiles were used to calculate the properties of the cores for all the sources, and the velocity of the H\({}^{13}\)CO\({}^{+}\)(2-1) Gaussian peak was used to centre the standard of rest velocity V\({}_{LSR}\) of the sources. The parameters of the HCO\({}^{+}\)(2-1) spectral lines are displayed in Table 3, and the results from the Gaussian fit to the H\({}^{13}\)CO\({}^{+}\)(2-1) spectral profile are displayed in Table 4.
We note that if the outflow is strong enough, it is possible that the H\({}^{13}\)CO\({}^{+}\) line presents wings as well, as is the case for e.g. the source HC20 (see Fig. D.19). Since the sources in our sample are extreme, it is reasonable to see this feature. The wings in the H\({}^{13}\)CO\({}^{+}\) line are much smaller than the ones in the HCO\({}^{+}\) line, and it is still possible to properly fit a Gaussian curve.
Since wings in the HCO\({}^{+}\) spectral profile are a very good indicator of outflows, its (2-1) spectral line was used to determine the presence of molecular outflows through the detection of wings. In the literature, spectral lines with wings are usually modelled by a Lorentz distribution. However, this line presents significant absorption in our data, which makes any curve fit far too inaccurate. Thus, HCO\({}^{+}\) wings were checked for as follows: the sum of the widths of the red and blue wings was obtained by subtracting the baseline width of the Gaussian fit of the H\({}^{13}\)CO\({}^{+}\) line from the HCO\({}^{+}\) FWZP; if the sum of the widths is larger than 25 km s\({}^{-1}\), then the given source is deemed to have HCO\({}^{+}\) wings. As modelled by Gusdorf, A. et al. (2008) (see also Leurini et al. (2014)), shocks trigger erosion of grains and SiO can be efficiently produced only if shock speeds are above 25 km s\({}^{-1}\). Results show (see Table 3) that 28% of our sample (9 sources) present wings in the HCO\({}^{+}\) spectral profile (e.g. see Fig. D.7). However, we note that, due to the lack of spatial information, there is significant uncertainty in these results.
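The wing criterion reduces to a one-line test. The sketch below assumes that the 'baseline width' of the Gaussian fit is the fitted H\({}^{13}\)CO\({}^{+}\) width listed in Table 4, an interpretation that reproduces the wing flags of Table 3 for the sources we spot-checked:

```python
def has_hco_wings(fwzp_hco_kms, width_h13co_kms, threshold_kms=25.0):
    """Wing criterion of this section: the summed red + blue wing width is the
    HCO+ FWZP minus the baseline width of the H13CO+ Gaussian fit; a source is
    deemed to have wings if this exceeds 25 km/s.
    Assumption: the 'baseline width' is the fitted Gaussian width of Table 4."""
    return (fwzp_hco_kms - width_h13co_kms) > threshold_kms

# e.g. HC02 (Tables 3 and 4): FWZP = 92.12 km/s, H13CO+ width = 4.68 km/s
print(has_hco_wings(92.12, 4.68))   # True, consistent with its wing flag
print(has_hco_wings(23.97, 2.29))   # False for HC01, also consistent
```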
### SiO Detection
We checked whether the sources present SiO emission, which directly traces outflows and shocked gas. Emission at 5\(\sigma\) or more was considered. SiO emission was found in 78% of our sample (25 sources). Results are displayed in Table 5.
The material ejected by bipolar outflows forms two lobes, as modelled in Rawlings et al. (2004). Depending on the inclination angle, this produces mainly two features: the red and blue wings. While centred at the velocity of each source, the width from the Gaussian fit to the H\({}^{13}\)CO\({}^{+}\)(2-1) line was subtracted from the SiO(4-3) spectral profile width to obtain the width of the red and blue wings. With these measurements (see Table 5) the velocity-integrated intensities of the SiO wings were obtained and further used to calculate the physical properties of the outflows.
There are a few sources that have particularly intense SiO emission, such as the source HC02 (associated with IRAS 17574-2403). The peak of the emission of these sources is \(\geq\) 20 times the RMS and they have \(T_{peak}\)\(>\) 0.7 K. For a more detailed description and summary of previous works done on these sources, see Appendix C.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Source & \(L_{bol}\) & \(M_{dust}\) & \(T_{dust}\) & \(L/M\) \\ & (10\({}^{3}\)\(L_{\odot}\)) & (\(M_{\odot}\)) & (K) & (\(L_{\odot}/M_{\odot}\)) \\ \hline HC01 & 11.14 & 394.0 & 25.8 & 28.0 \\ HC02 \(\blacklozenge\) & 16.75 & 2017.0 & 25.0 & 72.0 \\ HC03 & 20.57 & 1141.0 & 23.1 & 18.0 \\ HC04 \(\blacklozenge\) & 0.77 & 8675.0 & 25.0 & 25.0 \\ HC05 & 22.04 & 540.0 & 33.4 & 41.0 \\ HC06 & 42.28 & 1328.0 & 22.6 & 32.0 \\ HC07 & 17.47 & 535.0 & 31.5 & 33.0 \\ HC08 & 227.44 & 2803.0 & 33.5 & 81.0 \\ HC09 & 222.95 & 1477.0 & 34.4 & 151.0 \\ HC10 & 121.44 & 1458.0 & 30.4 & 83.0 \\ HC11 & 68.39 & 701.0 & 25.6 & 98.0 \\ HC12 & 2.70 & 785.0 & 18.3 & 3.0 \\ HC13 & 142.72 & 929.0 & 31.5 & 154.0 \\ HC14 & 8.67 & 799.0 & 19.4 & 11.0 \\ HC15 & 13.55 & 1074.0 & 22.9 & 13.0 \\ HC16 \(\blacklozenge\) & 56.05 & 1962.0 & 25.0 & 3.0 \\ HC17 & 100.00 & 1400.0 & 32.0 & 71.0 \\ HC18 & 14.98 & 949.0 & 22.4 & 16.0 \\ HC19 & 2.22 & 756.0 & 18.8 & 3.0 \\ HC20 \(\blacklozenge\) & 114.35 & 4980.0 & 25.0 & 3.0 \\ HC21 & 19.10 & 1608.0 & 23.3 & 12.0 \\ HC22 & 13.86 & 321.0 & 24.1 & 43.0 \\ HC23 & 4.19 & 341.0 & 33.4 & 12.0 \\ HC24 & 6.12 & 390.0 & 20.0 & 16.0 \\ HC25 & 23.05 & 1001.0 & 23.2 & 23.0 \\ HC26 & 24.66 & 585.0 & 28.8 & 42.0 \\ HC27 & 67.66 & 1030.0 & 34.0 & 66.0 \\ HC28 & 33.62 & 230.0 & 40.0 & 146.0 \\ HC29 & 162.23 & 2083.0 & 27.5 & 78.0 \\ HC30 & 11.50 & 456.0 & 27.0 & 25.0 \\ HC31 & 4.56 & 105.0 & 25.9 & 43.0 \\ HC32 & 6.34 & 859.0 & 20.6 & 7.0 \\ \hline \end{tabular}
\end{table}
Table 2: Clump properties.
Figure 3: Spectral Profiles for the source HC02, Group A. The red dotted line is at FWZP, and the blue line in the H\({}^{13}\)CO\({}^{+}\) panel shows the Gaussian fit.
According to these results, we have divided the sample into three groups, as seen in Table 6: 9 sources present both SiO detection and HCO\({}^{+}\) wings, 16 sources present SiO emission and no HCO\({}^{+}\) wings, and the remaining 7 present neither.
## 4 Results
We calculated the following properties of the clumps and their outflows.
### Properties of the Clump
#### 4.1.1 Virial Masses
The Gaussian width of the H\({}^{13}\)CO\({}^{+}\)(2-1) spectral line was used to calculate the virial mass of the clump. Assuming that it is gravitationally bound, so that the virial theorem applies, the virial mass can be calculated as follows (e.g. Merello et al. 2013):
\[\left(\frac{M_{\rm virial}}{M_{\odot}}\right)=210\left(\frac{\Delta V}{\rm km~{ }s^{-1}}\right)^{2}\left(\frac{R}{\rm pc}\right) \tag{2}\]
where \(\Delta V\) is the line's Gaussian width, and \(R\) is the spatial beam size of the source.
Results show (see Table 7) that these sources have very high virial masses, reaching up to \(4.9\times 10^{3}M_{\odot}\). However, the beam of the observation instrument APEX is larger than the clump, thus the observations include more material.
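For reference, Eq. (2) can be evaluated directly; a minimal sketch with our naming, spot-checked against HC02:

```python
def virial_mass_msun(delta_v_kms, radius_pc):
    """Eq. (2): M_vir / M_sun = 210 (DeltaV / km s^-1)^2 (R / pc)."""
    return 210.0 * delta_v_kms**2 * radius_pc

# HC02 (Table 4): DeltaV = 4.68 km/s, R = 0.26 pc
print(f"{virial_mass_msun(4.68, 0.26):.0f} M_sun")  # ~1196, vs 1181 in Table 7
```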
#### 4.1.2 LTE Mass
The Local Thermodynamic Equilibrium (LTE) mass of the clump was calculated using the LTE formalism, assuming the emission is optically thin. The H\({}^{13}\)CO\({}^{+}\) is a linear and rigid rotor molecule. Thus, the column density \(N_{J}\) was calculated as follows (e.g. Garden et al. 1991; Sanhueza et al. 2012):
\[\begin{split} N_{J}&=\frac{3h}{8\pi^{3}\mu^{2}}\frac{U(T_{ex})}{(J+1)}\frac{\exp\left(\frac{E_{J}}{kT_{ex}}\right)}{\left[1-\exp\left(\frac{-h\nu}{kT_{ex}}\right)\right]}\\ &\times\frac{\int T_{MB}\,dv}{J(T_{ex})-J(T_{bg})}\end{split} \tag{3}\]
Here, \(k\) is the Boltzmann constant, \(h\) is the Planck constant, \(\mu=3.89\times 10^{-18}\) esu cm (3.89 D) is the electric dipole moment and \(E_{J}=hBJ(J+1)\) is the energy in the level \(J\) (in this case, \(J=1\)), where \(B=43377.165\) (MHz) is the rotational constant. The intensity \(J(T)\) is defined for a given frequency \(\nu\) as:
\[J(T)=\frac{h\nu}{k}\frac{1}{\exp\left(\frac{h\nu}{kT}\right)-1} \tag{4}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Source & RMS & Area & FWZP & Peak \(T_{MB}\) & Wings \\ & (K) & (K km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (K) & \\ \hline HC01 & 0.04 & 11.9 & 23.97 & 2.73 & \\ HC02 & 0.10 & 124.7 & 92.12 & 13.00 & ✓ \\ HC03 & 0.09 & 11.7 & 19.86 & 2.35 & \\ HC04 & 0.10 & 50.8 & 29.45 & 4.74 & \\ HC05 & 0.11 & 7.3 & 19.86 & 2.15 & \\ HC06 & 0.11 & 25.5 & 46.58 & 2.37 & ✓ \\ HC07 & 0.12 & 16.1 & 16.10 & 4.52 & \\ HC08 & 0.03 & 61.4 & 53.43 & 6.83 & ✓ \\ HC09 & 0.03 & 13.9 & 26.37 & 3.29 & \\ HC10 & 0.03 & 6.4 & 27.06 & 2.17 & \\ HC11 & 0.03 & 5.1 & 18.84 & 1.24 & \\ HC12 & 0.04 & 9.0 & 19.86 & 2.49 & \\ HC13 & 0.05 & 28.7 & 27.06 & 5.74 & \\ HC14 & 0.03 & 17.3 & 28.13 & 4.12 & \\ HC15 & 0.02 & 24.8 & 31.17 & 3.46 & \\ HC16 & 0.02 & 44.2 & 55.82 & 4.58 & ✓ \\ HC17 & 0.03 & 23.7 & 53.77 & 2.20 & ✓ \\ HC18 & 0.04 & 11.4 & 40.75 & 2.02 & ✓ \\ HC19 & 0.05 & 8.3 & 10.96 & 2.55 & \\ HC20 & 0.05 & 104.3 & 45.89 & 11.14 & ✓ \\ HC21 & 0.05 & 45.2 & 35.27 & 9.37 & ✓ \\ HC22 & 0.09 & 17.8 & 23.29 & 3.32 & \\ HC23 & 0.03 & 17.0 & 27.06 & 3.25 & \\ HC24 & 0.04 & 9.8 & 19.18 & 2.67 & \\ HC25 & 0.05 & 4.6 & 19.52 & 1.21 & \\ HC26 & 0.03 & 21.5 & 20.21 & 5.43 & \\ HC27 & 0.04 & 20.1 & 52.40 & 2.44 & ✓ \\ HC28 & 0.02 & 15.5 & 24.66 & 2.28 & \\ HC29 & 0.02 & 33.3 & 29.45 & 4.51 & \\ HC30 & 0.03 & 19.3 & 28.08 & 4.11 & \\ HC31 & 0.03 & 14.4 & 14.73 & 2.88 & \\ HC32 & 0.03 & 12.4 & 20.20 & 3.45 & \\ \hline \end{tabular}
\end{table}
Table 3: HCO\({}^{+}\) spectral line data.
Finally, \(U(T)\) is the partition function:
\[U(T)=\sum_{J=0}^{\infty}g_{J}\exp\left[\frac{-hBJ(J+1)}{kT_{ex}}\right]\approx \frac{k}{hB}\left(T_{ex}+\frac{hB}{3k}\right), \tag{5}\]
where \(g_{J}=2J+1\) is the rotational degeneracy. A cosmic background temperature \(T_{bg}\) of 2.75 K was considered. The excitation temperature \(T_{ex}\) was set to a standard value of 30 K, which is close to the median of the dust temperatures (e.g. Elia et al. 2017) (\(26.4\pm 6.0\) K). The total column density in units of cm\({}^{-2}\) can be obtained by multiplying \(N_{J}\) by the abundance of H\({}^{13}\)CO\({}^{+}\): \(\left[\mathrm{H_{2}/H^{13}CO^{+}}\right]=3.0\times 10^{10}\)(Blake et al. 1987). Finally, the LTE mass is obtained by multiplying the total column density by the area of the source:
\[M_{\mathrm{LTE}}=N_{\mathrm{tot}}\pi R^{2} \tag{6}\]
where \(R\) is the radius of the beam. These results are also shown in Table 7.
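To make the chain of Eqs. (3)-(6) concrete, the sketch below evaluates it in CGS units for HC02. The helper names are ours, and the conversion from H\({}_{2}\) column density to mass assumes a mean molecular weight of 2.8 \(m_{H}\) per H\({}_{2}\) molecule, an assumption since the adopted value is not stated in the text; with it, the Table 7 entries are reproduced to within a few percent:

```python
import numpy as np

h, k = 6.626e-27, 1.381e-16                      # Planck, Boltzmann (CGS)
M_SUN, PC, M_H = 1.989e33, 3.086e18, 1.674e-24   # g, cm, g

def J_T(T, nu):
    """Radiation temperature J(T) of Eq. (4), in K."""
    return (h * nu / k) / (np.exp(h * nu / (k * T)) - 1.0)

def column_density(area_Kkms, nu, B, mu_debye, J=1, T_ex=30.0, T_bg=2.75):
    """Optically thin LTE column density of a linear rotor, Eqs. (3)-(5).
    area_Kkms in K km/s; nu and B in Hz; mu_debye in Debye (1 D = 1e-18 esu cm)."""
    mu = mu_debye * 1e-18
    U = (k / (h * B)) * (T_ex + h * B / (3.0 * k))   # partition function, Eq. (5)
    E_J = h * B * J * (J + 1)                        # lower-level energy
    pre = (3.0 * h / (8.0 * np.pi**3 * mu**2)) * U / (J + 1)
    pre *= np.exp(E_J / (k * T_ex)) / (1.0 - np.exp(-h * nu / (k * T_ex)))
    return pre * (area_Kkms * 1.0e5) / (J_T(T_ex, nu) - J_T(T_bg, nu))

# HC02 (Table 4): area = 18.7 K km/s, R = 0.26 pc
N_tot = 3.0e10 * column_density(18.7, 173.507e9, 43377.165e6, 3.89)
M_lte = N_tot * np.pi * (0.26 * PC)**2 * 2.8 * M_H / M_SUN  # assumed 2.8 m_H per H2
print(f"N_tot = {N_tot:.2e} cm^-2, M_LTE = {M_lte:.0f} M_sun")
# ~4.1e23 cm^-2 and ~1.9e3 M_sun, vs 40.99e22 and 1839 in Table 7
```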
We compared our LTE mass estimates to the clump mass derived from dust emission obtained by Hi-GAL (Elia et al. 2017, 2021). The ratio between the dust and LTE masses has a mean of 2.11, a median of 1.64 and a standard deviation of 1.34. The origin of this uncertainty can come from the assumed temperature as well as the H13CO+ abundance. Furthermore, the dust masses from Hi-GAL used here were calculated by fitting a SED which was scaled to the angular size at 250 \(\mu\)m (Elia et al. 2013; Motte, F. et al. 2010) (we note that for the four saturated sources of the sample, the mass used here was obtained using the flux at 500 \(\mu\)m instead; see Appendix A for the treatment of saturated sources). The H\({}^{13}\)CO\({}^{+}\) line and emission at 250 \(\mu\)m trace mostly the same dense gas and their angular sizes are within a factor of 2.35 on average (see Section 3), which is consistent with the difference seen between the masses. This indicates our estimates are sound.
The uncertainty of the molecular observations is \(\sim\) 20 %, which affects the Gaussian fit parameters, and propagates to the rest of the clump's properties. Because of the propagating error, the column density can be considered to have 10 - 20 % of uncertainty, whilst the mass uncertainty is \(<\) 30 %. The distances are also a source of uncertainty, which affects the beam radius and masses (see Appendix B).
#### 4.1.3 Virial and LTE Mass Comparison
The ratio between the virial and LTE mass can provide information about the gravitational stability of the source. High values of this ratio indicate that the internal kinetic energy of the source is too high and that it is not massive enough to be gravitationally stable. Since the material traced by H\({}^{13}\)CO\({}^{+}\) emission should be both gravitationally bound and in local thermodynamic equilibrium, the virial and LTE mass should match within a factor of \(\sim 2\) when magnetic effects are not considered. Results are presented in Table 7. The ratio between the virial and LTE mass has a mean value of 1.54, a median of 1.45, a minimum of 0.64 and a maximum of 3.07. Only 8 sources have a ratio of 2.00 or higher. Thus, the virial and LTE masses are in good agreement.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Source & \(R\) & RMS & Area & Width & Position & Peak \(T_{MB}\) \\ & (pc) & (K) & (K km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (K) \\ \hline HC01 & 0.26 & 0.02 & 2.4 & 2.29 & 8.4 & 1.00 \\ HC02 & 0.26 & 0.03 & 18.7 & 4.68 & 9.3 & 3.75 \\ HC03 & 0.31 & 0.02 & 3.9 & 2.83 & 21.0 & 1.29 \\ HC04 & 0.46 & 0.03 & 7.5 & 7.09 & 66.8 & 0.99 \\ HC05 & 0.25 & 0.02 & 4.0 & 3.80 & 33.1 & 0.99 \\ HC06 & 0.26 & 0.02 & 7.0 & 3.91 & 37.3 & 1.67 \\ HC07 & 0.33 & 0.02 & 2.9 & 3.23 & 59.5 & 0.84 \\ HC08 & 0.38 & 0.01 & 6.8 & 4.58 & -39.5 & 1.40 \\ HC09 & 0.47 & 0.02 & 3.8 & 3.09 & -57.9 & 1.16 \\ HC10 & 0.40 & 0.02 & 4.4 & 3.19 & -54.9 & 1.30 \\ HC11 & 0.31 & 0.01 & 1.3 & 2.96 & -48.1 & 0.43 \\ HC12 & 0.23 & 0.02 & 1.8 & 1.99 & -40.2 & 0.86 \\ HC13 & 0.36 & 0.03 & 4.2 & 2.94 & -68.0 & 1.34 \\ HC14 & 0.34 & 0.03 & 1.6 & 2.52 & -64.9 & 0.59 \\ HC15 & 0.44 & 0.03 & 3.1 & 3.80 & -88.7 & 0.77 \\ HC16 & 0.33 & 0.03 & 5.8 & 4.40 & -62.7 & 1.24 \\ HC17 & 0.425 & 0.03 & 4.0 & 4.60 & -87.8 & 3.84 \\ HC18 & 0.28 & 0.03 & 2.8 & 2.99 & -48.5 & 0.88 \\ HC19 & 0.27 & 0.03 & 1.1 & 1.67 & -47.0 & 0.60 \\ HC20 & 0.29 & 0.03 & 22.4 & 5.48 & -53.2 & 3.84 \\ HC21 & 0.28 & 0.02 & 7.8 & 3.54 & -46.6 & 2.07 \\ HC22 & 0.23 & 0.02 & 2.7 & 2.44 & -34.3 & 1.04 \\ HC23 & 0.24 & 0.01 & 2.8 & 2.85 & -46.2 & 0.92 \\ HC24 & 0.24 & 0.02 & 1.9 & 2.23 & -40.8 & 0.78 \\ HC25 & 0.40 & 0.01 & 2.2 & 4.19 & -71.0 & 0.48 \\ HC26 & 0.24 & 0.02 & 4.1 & 3.04 & -40.8 & 1.27 \\ HC27 & 0.25 & 0.02 & 7.4 & 6.30 & -27.0 & 1.10 \\ HC28 & 0.34 & 0.01 & 2.6 & 3.74 & -31.5 & 0.65 \\ HC29 & 0.48 & 0.01 & 5.1 & 3.72 & -69.4 & 1.28 \\ HC30 & 0.24 & 0.01 & 3.8 & 3.09 & -18.0 & 1.16 \\ HC31 & 0.12 & 0.01 & 4.1 & 3.05 & -10.6 & 1.27 \\ HC32 & 0.325 & 0.01 & 1.1 & 2.30 & -20.5 & 0.44 \\ \hline \end{tabular}
\end{table}
Table 4: H\({}^{13}\)CO\({}^{+}\) Spectral Line Data.
### Properties of the SiO Outflows
#### 4.2.1 Outflow Mass
We calculated the LTE mass for the SiO outflows in the red and blue wings. With an assumption of optically thin thermal SiO(4-3) emission in LTE (e.g. Liu et al., 2021), the column densities \(N_{J}\), \(N_{tot}\) and the LTE mass were calculated as described in Sect. 4.1.2 using a rotational constant \(B\) of 21711.96 (MHz), an electric dipole moment \(\mu\) of \(3.10\times 10^{-18}\) esu cm (3.10 D), an excitation temperature \(T_{ex}\) of 18 K and an SiO abundance [H\({}_{2}\)/SiO] of \(10^{9}\) (Liu et al., 2021; Sanhueza et al., 2012), standard values for this type of source. If a higher \(T_{ex}\) of 25 K or 30 K were used, the values of the outflow masses would increase by 16.32% and 29.42% respectively. Since SiO abundances can drastically vary in different MSF regions (by up to six orders of magnitude, Ziurys et al. (1989); Martin-Pintado et al. (1992); Li et al. (2019b)), using an inadequate choice for this parameter could artificially modify the results. Here, we use a standard value for [H\({}_{2}\)/SiO], accepted for protostellar objects. Results are displayed in Table 8.
#### 4.2.2 Dynamical Properties
Further characteristics of the outflow were computed as follows (Beuther et al., 2002): momentum \(P_{out}\), kinetic energy \(E_{kin}\), characteristic timescale \(t\), mechanical force or flow of momentum \(F\), rate of matter expulsion \(\dot{M}\) and mechanical luminosity or power \(L_{m}\).
\[P_{out}=M_{r}V_{r}+M_{b}V_{b} \tag{7}\]
\[E_{kin}=\frac{1}{2}M_{r}V_{r}^{2}+\frac{1}{2}M_{b}V_{b}^{2} \tag{8}\]
\begin{table}
\begin{tabular}{c c c c} \hline Group & SiO Detection & HCO\({}^{+}\) Wings & No. of Sources \\ \hline A & \(\checkmark\) & \(\checkmark\) & 9 \\ B & \(\checkmark\) & - & 16 \\ C & - & - & 7 \\ \hline \end{tabular}
\end{table}
Table 6: Grouping of the sources.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Source & \(R\) & RMS & \multicolumn{3}{c}{Area} & Width & Position & Peak \\ & (pc) & (mK) & \multicolumn{3}{c}{(K km s\({}^{-1}\))} & \multicolumn{3}{c}{(km s\({}^{-1}\))} & (km s\({}^{-1}\)) & (K) \\ \hline & & & Full & Red Wing & Blue Wing & FWZP & Red Wing & Blue Wing & & \\ HC01 & 0.26 & 0.02 & 0.4 & - & - & 33.90 & - & - & 10.1 & 0.05 \\ HC02 & 0.26 & 0.028 & 26.8 & 7.5 & 14.6 & 92.12 & 40.11 & 47.33 & 10.0 & 1.11 \\ HC03 & 0.31 & 0.02 & 1.0 & 0.4 & 0.5 & 27.05 & 13.59 & 10.64 & 21.1 & 0.17 \\ HC04 & 0.46 & 0.03 & 8.4 & 1.1 & 3.8 & 45.55 & 11.51 & 26.95 & 67.2 & 0.59 \\ HC05 & 0.25 & 0.02 & 2.3 & 0.5 & 1.2 & 43.15 & 16.95 & 22.41 & 33.1 & 0.22 \\ HC06 & 0.26 & 0.02 & 5.0 & 2.1 & 1.7 & 49.32 & 24.06 & 21.35 & 36.9 & 0.39 \\ HC07 & 0.33 & 0.02 & 2.5 & 0.6 & 1.2 & 47.26 & 16.56 & 27.47 & 59.4 & 0.26 \\ HC08 & 0.38 & 0.01 & 13.5 & 5.5 & 6.1 & 74.66 & 35.58 & 34.50 & -39.1 & 1.57 \\ HC09 & 0.47 & 0.01 & 1.0 & 0.4 & 0.3 & 36.64 & 19.13 & 14.42 & -58.1 & 0.12 \\ HC10 & 0.40 & 0.01 & 0.8 & 0.1 & 0.3 & 19.18 & 4.88 & 11.11 & -53.5 & 0.15 \\ HC11 & 0.31 & 0.014 & 0.1 & - & - & 14.73 & - & - & -50.1 & 0.06 \\ HC12 & 0.23 & 0.02 & 1.3 & 0.4 & 0.6 & 30.82 & 11.74 & 17.09 & -39.2 & 0.25 \\ HC13 & 0.36 & 0.03 & 0.0 & - & - & 0.00 & - & - & -67.5 & 0.08 \\ HC14 & 0.34 & 0.03 & 0.0 & - & - & 0.00 & - & - & -64.7 & 0.08 \\ HC15 & 0.44 & 0.03 & 2.4 & 0.6 & 0.8 & 29.80 & 11.34 & 14.66 & -87.9 & 0.34 \\ HC16 & 0.33 & 0.03 & 6.5 & 2.4 & 2.2 & 42.81 & 20.25 & 18.17 & -62.9 & 0.48 \\ HC17 & 0.425 & 0.03 & 12.3 & 3.3 & 5.9 & 56.85 & 26.33 & 25.93 & -86.7 & 0.78 \\ HC18 & 0.28 & 0.03 & 15.6 & 1.3 & 0.5 & 36.99 & 19.59 & 14.40 & -49.4 & 0.21 \\ HC19 & 0.27 & 0.03 & 0.0 & - & - & 0.00 & - & - & 0.0 & 0.07 \\ HC20 & 0.29 & 0.02 & 11.8 & 1.3 & 6.4 & 50.34 & 18.90 & 25.97 & -53.3 & 0.94 \\ HC21 & 0.28 & 0.02 & 2.3 & 2.9 & 4.5 & 60.96 & 23.95 & 33.46 & -46.3 & 0.97 \\ HC22 & 0.23 & 0.02 & 19.1 & 0.8 & 2.0 & 87.67 & 27.45 & 57.78 & -34.6 & 0.15 \\ HC23 & 0.24 & 0.01 & 1.6 & 0.6 & 0.7 & 44.86 & 19.48 & 22.53 & -45.8 & 0.15 \\ HC24 & 0.24 & 0.01 & 1.5 & 0.6 & 0.6 & 46.92 & 21.63 & 23.06 & -40.3 & 0.15 \\ HC25 & 0.40 & 0.02 & 0.9 & 0.3 & 0.3 & 31.51 & 13.29 & 14.02 & -70.3 & 0.09 \\ HC26 & 0.24 & 0.02 & 1.5 & 0.4 & 0.6 & 30.14 & 9.84 & 17.25 & -40.5 & 0.22 \\ HC27 & 0.25 & 0.02 & 17.6 & 6.1 & 6.9 & 74.66 & 34.50 & 33.86 & -25.3 & 1.02 \\ HC28 & 0.34 & 0.01 & 0.8 & 0.3 & 0.2 & 31.85 & 15.56 & 12.55 & -30.8 & 0.10 \\ HC29 & 0.48 & 0.01 & 2.1 & 0.8 & 0.6 & 42.47 & 22.89 & 15.86 & -68.5 & 0.16 \\ HC30 & 0.24 & 0.01 & 0.0 & 0.9 & 0.2 & 45.21 & 29.28 & 12.83 & 18.7 & 0.15 \\ HC31 & 0.12 & 0.013 & 0.3 & - & - & 17.81 & - & -12.5 & 0.05 \\ HC32 & 0.325 & 0.01 & 0.3 & - & - & 33.56 & - & - & -19.2 & 0.04 \\ \hline \end{tabular}
\end{table}
Table 5: SiO Spectral Line Data.
\[t=\frac{2R}{(V_{r}+V_{b})} \tag{9}\]
\[F_{m}=\frac{P_{\rm out}}{t} \tag{10}\]
\[\dot{M}_{out}=\frac{M_{\rm out}}{t} \tag{11}\]
\[L_{m}=\frac{E_{\rm kin}}{t} \tag{12}\]
Here, \(R\) is the radius of the source, \(M_{\rm out}=M_{r}+M_{b}\) is the total outflow mass, and \(V_{b}\) and \(V_{r}\) are the maximum velocities of the outflows, which correspond to \(|V_{LSR}-V_{i}|\) (Beuther et al., 2002; Liu, D.J. et al., 2021), where \(V_{i}\) is the velocity at each extreme of the SiO emission baseline. Results are displayed in Table 8.
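A compact sketch of Eqs. (7)-(12), converting to the units used in Tables 8 and 9; the naming, the constants and the example values are ours:

```python
def outflow_dynamics(M_r, M_b, V_r, V_b, R_pc):
    """Eqs. (7)-(12): masses in M_sun, velocities in km/s, radius in pc."""
    KM_PER_PC, S_PER_YR = 3.086e13, 3.156e7
    P = M_r * V_r + M_b * V_b                                      # M_sun km/s
    E_erg = 0.5 * (M_r * V_r**2 + M_b * V_b**2) * 1.989e33 * 1e10  # erg
    t_s = 2.0 * R_pc * KM_PER_PC / (V_r + V_b)                     # t = 2R/(Vr+Vb)
    t_yr = t_s / S_PER_YR
    return {
        "M_out (M_sun)": M_r + M_b,
        "P_out (M_sun km/s)": P,
        "E_kin (erg)": E_erg,
        "t (yr)": t_yr,
        "Mdot (M_sun/yr)": (M_r + M_b) / t_yr,
        "F (M_sun km/s / yr)": P / t_yr,
        "L_m (L_sun)": E_erg / t_s / 3.828e33,
    }

# Hypothetical example values (Table 8 lists the per-source results):
print(outflow_dynamics(M_r=10.0, M_b=15.0, V_r=40.0, V_b=47.0, R_pc=0.26))
```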
The uncertainty of the observations (20%; see Section 2.2) propagates to these calculated properties. The outflow column density and mass have about 10 - 20% and \(<\) 30% of uncertainty, respectively. The momentum and kinetic energy have the same uncertainty as the mass, whilst the force, rate of matter expulsion and mechanical luminosity have an additional source of uncertainty due to the dynamical timescales, which comes from the distances and the instrumental error (\(\sim\)20%).
A statistical description of these results is shown in Table 9. The typical rate of matter expulsion for mid and early massive protostars ranges from 10\({}^{-5}\) to a few times 10\({}^{-3}\)\(M_{\odot}\) yr\({}^{-1}\), and the typical momentum from 10\({}^{-4}\) to 10\({}^{-2}\)\(M_{\odot}\) km s\({}^{-1}\) (Arce et al., 2007). Our results for the rate of matter expulsion are well within this typical range. The momentum is generally higher, with a mean of 922 \(M_{\odot}\) km s\({}^{-1}\), indicating that the outflows in our sample are generally very fast and/or massive. Furthermore, the SiO outflows studied here present properties that are within the range of results presented in other works but are more massive and powerful. This is the case for the sample of protostellar candidates studied by Beuther et al. (2002), which were traced with CO(2-1), as well as the sample of massive young stellar objects (MYSOs) and compact HII regions traced with \({}^{12}\)CO(2-1) and \({}^{13}\)CO(2-1) by Maud et al. (2015). Similarly, our sample has more massive and powerful SiO outflows than other studies that have traced them with the SiO(5-4) spectral line, such as the sample of infrared dark clouds (IRDCs) studied by Liu et al. (2021), for which our results are larger by up to three orders of magnitude. Thus, not only are our results in good accordance with other works in the literature, but our sources also stand out because of their powerful outflows.
## 5 Discussions
### Correlations
We looked for correlations between the properties of the outflows and the properties of their hosting clump. We found three relevant correlations: between the bolometric luminosity and outflow power, between the outflow power and the rate of matter expulsion, and between the kinetic energy of the outflow and the outflow mass. Their rank correlation coefficients were calculated and the Python package sklearn.linear_model (Pedregosa et al., 2012) was used to perform a linear regression in order to find the slope of these correlations. Our results show that, for these three correlations, the rank correlation coefficients are positive, as well as the slopes.
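A sketch of this analysis is given below. The text names sklearn.linear_model; performing the fit in log-log space is our assumption, motivated by the logarithmic axes of Figs. 4-6 and by the quoted slopes:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

def rank_and_slope(x, y):
    """Spearman rank coefficient plus the slope of a linear fit.
    Log-log fit is our assumption (Figs. 4-6 use logarithmic axes)."""
    rho, _ = spearmanr(x, y)
    X = np.log10(np.asarray(x)).reshape(-1, 1)
    slope = LinearRegression().fit(X, np.log10(y)).coef_[0]
    return rho, slope
```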
A weak linear trend between the bolometric luminosity and the outflow power, shown in Fig. 4, was found. This correlation tells us that, given that a source has a molecular outflow, the more luminous the source, the more energetic its outflow will be. Therefore, it is possible that it will power more massive and energetic outflows. However, since this correlation shows a large amount of dispersion, it is not possible to draw definitive conclusions. Other studies have also found this correlation: Wu, Y. et al. (2004), Liu, D.J. et al. (2021) and Maud et al. (2015) found it in the \({}^{12}\)CO and \({}^{13}\)CO lines, and Liu, D.J. et al. (2021) also found it in the HCO\({}^{+}\) and CS lines. It has also been found in other SiO transitions by Csengeri et al. (2016); Li et al. (2019b); Liu et al. (2021). All of these species are outflow tracers and the linear regressions done have always given a positive slope. Thus, in spite of being a weak correlation in our sample, it is still positive and is in agreement with the trends found by these other works. Hence, the idea that more luminous sources are able to produce more powerful shocks (Codella et al., 1999) cannot be discarded. Since this trend has been found in various SiO transitions and outflow tracers, and now in the SiO(4-3) one as well, it is possibly a universal trend for molecular outflows in MSF regions.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Source & \(M_{vir}\) & \(N_{tot}\) & \(M_{LTE}\) & \(M_{vir}/M_{LTE}\) & Group \\ & (\(M_{\odot}\)) & (10\({}^{22}\) cm\({}^{-2}\)) & (\(M_{\odot}\)) & & \\ \hline HC01 & 284 & 5.36 & 242 & 1.17 & C \\ HC02 & 1181 & 40.99 & 1839 & 0.64 & A \\ HC03 & 521 & 8.50 & 559 & 0.93 & B \\ HC04 & 4867 & 16.39 & 2381 & 2.04 & B \\ HC05 & 760 & 8.73 & 376 & 2.02 & B \\ HC06 & 828 & 15.24 & 693 & 1.19 & A \\ HC07 & 735 & 6.34 & 485 & 1.51 & B \\ HC08 & 1656 & 14.91 & 1438 & 1.15 & A \\ HC09 & 936 & 8.37 & 1245 & 0.75 & B \\ HC10 & 849 & 9.66 & 1047 & 0.81 & B \\ HC11 & 579 & 2.95 & 199 & 2.91 & C \\ HC12 & 195 & 4.00 & 150 & 1.30 & B \\ HC13 & 645 & 9.17 & 796 & 0.81 & C \\ HC14 & 450 & 3.49 & 271 & 1.66 & C \\ HC15 & 1332 & 6.81 & 901 & 1.48 & B \\ HC16 & 1334 & 12.69 & 937 & 1.42 & A \\ HC17 & 1887 & 8.78 & 1086 & 1.74 & A \\ HC18 & 519 & 6.16 & 322 & 1.61 & A \\ HC19 & 158 & 2.35 & 117 & 1.35 & C \\ HC20 & 1829 & 49.04 & 2823 & 0.65 & A \\ HC21 & 741 & 178 & 920 & 0.81 & A \\ HC22 & 284 & 5.92 & 208 & 1.37 & B \\ HC23 & 414 & 6.14 & 247 & 1.68 & B \\ HC24 & 247 & 4.07 & 156 & 1.58 & B \\ HC25 & 1488 & 4.73 & 526 & 2.83 & B \\ HC26 & 465 & 9.02 & 352 & 1.32 & B \\ HC27 & 2050 & 16.19 & 669 & 3.07 & A \\ HC28 & 993 & 5.67 & 441 & 2.25 & B \\ HC29 & 1400 & 11.12 & 1772 & 0.79 & B \\ HC30 & 473 & 8.39 & 318 & 1.49 & B \\ HC31 & 227 & 9.06 & 83 & 2.73 & C \\ HC32 & 360 & 2.38 & 172 & 2.09 & C \\ \hline \end{tabular}
\end{table}
Table 7: Properties obtained from H\({}^{13}\)CO+ emission.
As shown in Fig. 5, there is a close linear relationship between the kinetic energy and the outflow mass. This correlation has a Spearman \(\rho\) coefficient of 0.93 and a slope of 0.69. It is strong and suggests that the more massive an outflow is, the faster it will be.
Note that this correlation is physically equivalent to one between the outflow power and the rate of mass expulsion, as these parameters are proportional to the inverse of the characteristic timescale (see Equations 11 and 12):
\[t=\frac{M_{out}}{\dot{M}_{out}}=\frac{E_{kin}}{L_{m}} \tag{13}\]
Thus, this correlation carries the uncertainties of the dynamical timescale previously discussed. However, it is still useful to analyse this trend as it provides intuitive physical significance and a direct comparison with other works is possible. In our sample, this is a strong relation: its Spearman \(\rho\) rank correlation coefficient is equal to 0.97 and its slope is equal to 0.70 (see Fig. 6). This relationship tells us that the more powerful the outflow is, the faster it ejects matter. Therefore, in addition to expelling more material, energetic outflows expel this material faster. This trend has also been found by other studies, such as Beuther et al. (2002) and Liu, D.J. et al. (2021) in the CO line and its isotopologues. Liu, D.J. et al. (2021) found this trend in other outflow tracers too, such as HCO\({}^{+}\) and CS, indicating that this trend is common to outflows independent of the tracer used. Finding this trend in SiO here further supports this idea.
The correlations found here suggest that, since sources with a larger bolometric luminosity are more energetic, their outflows also are. These energetic outflows are able to mobilise more material. Thus, their rate of mass loss is high, which translates into large outflow masses. These correlations agree with the dependency suggested by Liu, D.J. et al. (2021) (see also Wu, Y. et al. 2004): bolometric luminosity \(\rightarrow\) outflow energy \(\rightarrow\) rate of mass loss \(\rightarrow\) outflow mass.
Figure 4: Bolometric luminosity (\(L_{\odot}\)) vs. outflow power \(L_{m}\) (\(L_{\odot}\)) in logarithmic scale.
Figure 5: Kinetic energy (\(10^{46}\) erg) vs. outflow mass (\(M_{\odot}\)) in logarithmic scale. The Spearman \(\rho\) rank correlation coefficient is equal to 0.93 and the slope is equal to 0.69.
Figure 6: Outflow power \(L_{m}\) (\(L_{\odot}\)) vs. rate of matter expulsion \(\dot{M}\) (\(10^{-4}M_{\odot}\) yr\({}^{-1}\)) in logarithmic scale. The Spearman \(\rho\) rank correlation coefficient is equal to 0.97 and the slope is equal to 0.70.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
 & \(M_{out}\) & \(P_{out}\) & \(E_{kin}\) & \(t\) & \(\dot{M}\) & \(F\) & \(L_{m}\) \\
 & (\(M_{\odot}\)) & (\(M_{\odot}\) km s\({}^{-1}\)) & (\(10^{46}\) erg) & (\(10^{4}\) yr) & (\(10^{-4}M_{\odot}\) yr\({}^{-1}\)) & (\(10^{-3}M_{\odot}\) km s\({}^{-1}\) yr\({}^{-1}\)) & (\(L_{\odot}\)) \\ \hline
Mean & 30 & 928 & 27 & 1.6 & 29 & 102.69 & 271 \\
STD. & 36 & 1358 & 50 & 0.8 & 45 & 196.34 & 622 \\
Min. & 3 & 48 & 0.2 & 0.5 & 1 & 1.19 & 0.4 \\
25\% & 5 & 112 & 2 & 1.0 & 4 & 7.81 & 9 \\
Median & 11 & 271 & 4 & 1.5 & 7 & 14.42 & 29 \\
75\% & 41 & 1255 & 17 & 2.1 & 37 & 100.10 & 135 \\
Max. & 116 & 4813 & 183 & 4.1 & 187 & 884.91 & 2784 \\ \hline
\end{tabular}
\end{table}
Table 9: Statistics of the properties of the outflow.
### Evolutionary Stage Indicator L/M
The luminosity to mass ratio \(L/M\) can be used as an evolutionary stage indicator (e.g. Molinari et al. 2016; Elia et al. 2021; Urquhart et al. 2022). In the first stages of star formation, the core is cold and actively accreting matter, so \(L/M\leq 1\). Eventually, the temperature and luminosity increase, while the mass remains virtually unchanged. This results in a value of \(L/M\geq 1\). It has been found that the more luminous sources in the Galaxy usually have \(L/M\geq 10\)(Lopez-Sepulcre, A. et al. 2011; Molinari et al. 2016). The sources in the sample studied here are in an advanced stage of the star-forming process, as \(L/M\) has values ranging from 3 \(L_{\odot}/M_{\odot}\) up to 177 \(L_{\odot}/M_{\odot}\), with a mean of 58 \(L_{\odot}/M_{\odot}\) and a median of 41 \(L_{\odot}/M_{\odot}\) (see Table 2). Furthermore, we looked for correlations between \(L/M\) and the SiO outflow properties and found that they are not correlated. Other works have also failed to find such correlations (Liu et al. 2022; Maud et al. 2015; Liu, D.J. et al. 2021). This strongly suggests that molecular outflows are present throughout the whole of the high-mass star formation process (Csengeri et al. 2016; Urquhart et al. 2022).
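As a simple illustration of how this indicator is applied (a sketch of ours with invented numbers, not the survey data), one could classify clumps as follows:

```python
import numpy as np

# Invented (L, M) pairs for illustration only
L = np.array([55.0, 1200.0, 9000.0, 40.0])   # bolometric luminosity [L_sun]
M = np.array([110.0, 300.0, 90.0, 160.0])    # clump mass [M_sun]

LM = L / M                                   # evolutionary stage indicator
stage = np.where(LM <= 1, "cold, accreting",
        np.where(LM < 10, "intermediate", "evolved (L/M >= 10)"))
for ratio, label in zip(LM, stage):
    print(f"L/M = {ratio:6.2f} -> {label}")
```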
### Cross-Check between SiO and HCO\({}^{+}\) Wings Detections
All of the sources that present wings in the HCO\({}^{+}\) line, also present SiO emission above 5\(\sigma\). We compared these sources (Group A) with the rest of the clumps with SiO detections (Group B) (see Table 6). Table 10 shows a statistical comparison of the spectral parameters, outflow properties and the evolutionary stage indicator \(L/M\) between these groups.
SiO spectral profiles with FWZP \(>20\) km s\({}^{-1}\) are considered broad line widths due to high-velocity shocks (Duarte-Cabral, A. et al. 2014; Zhu et al. 2020). We note that all the sources in both Groups A and B exhibit broad line widths, with a minimum FWZP of 19.18 km s\({}^{-1}\), a mean value of 47.15 km s\({}^{-1}\), a median value of 44.86 km s\({}^{-1}\) and a maximum of 92.12 km s\({}^{-1}\). This means that all of the sources where SiO was detected have experienced recent outflow activity, which manifests in high-velocity shocks, typically associated with collimated jets, and is strong enough to produce SiO outflows (Csengeri et al. 2016; Li et al. 2019b; Liu et al. 2022). This confirms the sources in both Groups A and B are protostellar (Zhu et al. 2020).
Furthermore, the SiO spectral profile of the sources in Group A is wider and more intense, indicating that the outflows in this group are more massive, faster and more energetic than those in Group B (Schilke et al. 1997; Duarte-Cabral, A. et al. 2014; Liu et al. 2022) (see Table 10). Their mean outflow mass is larger than that of Group B by a factor of 4.8, their mean momentum by a factor of 7, their mean kinetic energy by a factor of 14.7, their rate of matter expulsion by a factor of 8.7, their outflow force by a factor of 12, and their outflow power by a factor of 25.1, while their mean dynamical timescale is shorter, by a factor of 0.58. Sources in Group A are possibly more collimated too, as broad-width SiO detection is associated with them. Further studies of the spatial morphology of these outflows could confirm this trend.
Even though SiO outflow activity has been found to decrease and get less collimated over time (Arce et al. 2007; Sakai et al. 2010; Lopez-Sepulcre, A. et al. 2011), results show that sources in Group A and B are virtually in the same stage of evolution because their evolutionary stage indicator \(L/M\) has similar values. Table 10 shows that the evolutionary stage indicator \(L/M\) has a mean value of 40 (\(L_{\odot}/M_{\odot}\)) and 47 (\(L_{\odot}/M_{\odot}\)), and a median of 32 (\(L_{\odot}/M_{\odot}\)) and 29 (\(L_{\odot}/M_{\odot}\)) for Groups A and B, respectively. Thus, sources in Group A and B are most likely equally evolved, even within the protostellar phase (Motte et al. 2018). The fact that sources in Group A have more intense and massive SiO outflows, but both groups are in the same evolutionary stage, suggests that the presence of wings in the HCO\({}^{+}\) spectral profile could help identify sources with stronger SiO outflows. Therefore, checking for wings in the HCO\({}^{+}\) line could help identify more massive and active MSF regions in samples of similarly evolved sources. Alternatively, since HCO\({}^{+}\) emission is associated with old outflow activity (Li et al. 2019a), sources in Group B might have just recently started exhibiting strong outflow activity, allowing for SiO outflows to be detected, and are too new to produce wings in the HCO\({}^{+}\) spectral profile. Meanwhile, sources in Group A might have started exhibiting outflow activity long enough ago for wings in the HCO\({}^{+}\) spectral profile, a signal of old outflows, to form. In this scenario, SiO outflows detected from the sources in Group A have been active for a longer time, which might be a consequence of their large mass, power and luminosity. Thus, checking for wings in the HCO\({}^{+}\) line might also help identify sources whose outflows have been active for longer. Further research with larger samples and better resolution could provide insight into which of these two scenarios is more likely and significant.
## 6 Summary and Conclusions
The aim of this work was to study shocked matter and SiO outflows in very massive, luminous and powerful protostellar sources. We characterised them, searched for any evolutionary trends they might exhibit, associated the presence of shocked gas with outflow activity, and cross-checked SiO emission with other outflow signatures. We used single-dish APEX/SEPIA Band 5 observations of 32 of the brightest massive protostellar sources in the Southern Galaxy. We studied their outflow activity using the SiO and HCO\({}^{+}\) tracers, as well as their general clump properties with H\({}^{13}\)CO\({}^{+}\) emission. The following is a summary of our main results and conclusions:
1. The SiO emission detection rate above 5\(\sigma\) is 78% (25 sources). All SiO detections have a broad line width due to high-velocity shocks, which confirms these sources are well-evolved into the protostellar phase. Furthermore, 28% of our sample (9 sources) shows wings in the HCO\({}^{+}\) spectral profile.
2. We calculated the dynamical properties of the SiO outflows. Results show they are very massive, fast and energetic. Outflow mass has a minimum value of 3 \(M_{\odot}\), a median of 10 \(M_{\odot}\), a mean of 30 \(M_{\odot}\), and a maximum value of 116 \(M_{\odot}\). Outflow momentum has a minimum value of 50 \(M_{\odot}\) km s\({}^{-1}\), a median of 271 \(M_{\odot}\) km s\({}^{-1}\), a mean of 922 \(M_{\odot}\) km s\({}^{-1}\), and a maximum value of 4812 \(M_{\odot}\) km s\({}^{-1}\). The kinetic energy of the outflow has a minimum value of \(0.1\times 10^{46}\) erg, a median of \(4\times 10^{46}\) erg, a mean of \(27\times 10^{46}\) erg, and a maximum value of \(183\times 10^{46}\) erg. The outflow force has a minimum value of \(1.24\times 10^{-3}M_{\odot}\) km s\({}^{-1}\) yr\({}^{-1}\), a median of \(102.25\times 10^{-3}M_{\odot}\) km s\({}^{-1}\) yr\({}^{-1}\), and a maximum value of \(884.85\times 10^{-3}M_{\odot}\) km s\({}^{-1}\) yr\({}^{-1}\).
3. We found three positive linear correlations involving SiO outflow properties: a weak one between the bolometric luminosity and the outflow power (Fig. 4), and strong ones between the outflow power and the rate of matter expulsion (Fig. 6) and between the kinetic energy and the outflow mass (Fig. 5). The latter two have very low dispersion, while the first has high dispersion. These correlations suggest that the more energetic the outflow, the more material it is able to mobilise.
4. We did not find any correlations between the evolutionary stage indicator \(L/M\) and SiO outflow properties. This agrees with the idea that molecular outflows are a ubiquitous phenomenon throughout the process of high-mass star formation.
5. We performed a cross-check between SiO detection and the presence of wings in the HCO\({}^{+}\) line. Sources in Group A (sources that present both features; see Table 6) have more massive, faster, more energetic and more collimated SiO outflows than sources in Group B (sources that only exhibit SiO emission). Sources in both groups are in an advanced stage of evolution in the high-mass star formation process, and there is no clear evolutionary difference between them. Thus, since SiO emission is such a good tracer of outflow activity, checking for wings in the HCO\({}^{+}\) line could help identify more massive and active MSF regions in samples of similarly evolved sources. Alternatively, checking for wings in the HCO\({}^{+}\) line might help identify sources whose outflows have been active for longer, since HCO\({}^{+}\) traces old outflow activity, whilst SiO traces recent and active outflows.
Our findings show potential for further studies of sources in an advanced stage of evolution (\(L/M>10\)) and their molecular outflows.
###### Acknowledgements.
We thank the anonymous referee for their helpful and insightful comments. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX) under programme ID [TPC-NNNN(R)]. APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. M.M. acknowledges support from ANID, Programa de Astronomía - Fondo ALMA-CONICYT, project 3119A5001. L.B. and R.F. gratefully acknowledge support by the ANID BASAL projects ACE210002 and FB210003, FONDEF ID2110359 and FONDECYT 1221662. E.M. acknowledges support under the grant "María Zambrano" from the UHU funded by the Spanish Ministry of Universities and the "European Union NextGenerationEU".
|
2310.20205 | The differential properties of certain permutation polynomials over
finite fields | Finding functions, particularly permutations, with good differential
properties has received a lot of attention due to their possible applications.
For instance, in combinatorial design theory, a correspondence between perfect
$c$-nonlinear functions and difference sets in some quasigroups was recently
shown [1]. Additionally, in a recent manuscript by Pal and Stanica [20], a very
interesting connection between the $c$-differential uniformity and boomerang
uniformity when $c=-1$ was pointed out, showing that they are the same for
odd APN permutations. This makes the construction of functions with low
$c$-differential uniformity an intriguing problem. We investigate the
$c$-differential uniformity of some classes of permutation polynomials. As a
result, we add four more classes of permutation polynomials to the family of
functions that only contains a few (non-trivial) perfect $c$-nonlinear
functions over finite fields of even characteristic. Moreover, we include a
class of permutation polynomials with low $c$-differential uniformity over the
field of characteristic~$3$. As a byproduct, our proofs show the permutation
property of these classes. To solve the involved equations over finite fields,
we use various techniques, in particular, we find explicitly many Walsh
transform coefficients and Weil sums that may be of independent interest. | Kirpa Garg, Sartaj Ul Hasan, Pantelimon Stanica | 2023-10-31T06:06:41Z | http://arxiv.org/abs/2310.20205v1 | # The differential properties of certain permutation polynomials over finite fields
###### Abstract.
Finding functions, particularly permutations, with good differential properties has received a lot of attention due to their possible applications. For instance, in combinatorial design theory, a correspondence between perfect \(c\)-nonlinear functions and difference sets in some quasigroups was recently shown [1]. Additionally, in a recent manuscript by Pal and Stanica [20], a very interesting connection between the \(c\)-differential uniformity and the boomerang uniformity when \(c=-1\) was pointed out, showing that they are the same for odd APN permutations. This makes the construction of functions with low \(c\)-differential uniformity an intriguing problem. We investigate the \(c\)-differential uniformity of some classes of permutation polynomials. As a result, we add four more classes of permutation polynomials to the family of functions that only contains a few (non-trivial) perfect \(c\)-nonlinear functions over finite fields of even characteristic. Moreover, we include a class of permutation polynomials with low \(c\)-differential uniformity over the field of characteristic \(3\). As a byproduct, our proofs show the permutation property of these classes. To solve the involved equations over finite fields, we use various techniques; in particular, we find explicitly many Walsh transform coefficients and Weil sums that may be of independent interest.
Key words and phrases:Finite fields, permutation polynomials, \(c\)-differential uniformity 2020 Mathematics Subject Classification: 12E20, 11T06, 94A60
## 1. Introduction
Let \(\mathbb{F}_{q}\) be a finite field with \(q=p^{n}\) elements, where \(p\) is a prime number and \(n\) is a positive integer. We use \(\mathbb{F}_{q}[X]\) to denote the ring of polynomials in one variable \(X\) with coefficients in \(\mathbb{F}_{q}\), and \(\mathbb{F}_{q}^{*}\) to denote the multiplicative group of nonzero elements of \(\mathbb{F}_{q}\). If \(F\) is a function from \(\mathbb{F}_{q}\) to itself, then, by using Lagrange's interpolation formula, one can uniquely express it as a polynomial in \(\mathbb{F}_{q}[X]\) of degree at most \(q-1\). A polynomial \(F\in\mathbb{F}_{q}[X]\) is a permutation polynomial of \(\mathbb{F}_{q}\) if the mapping \(X\mapsto F(X)\) is a bijection on \(\mathbb{F}_{q}\). Permutation polynomials over finite fields are of great interest due to their numerous applications in coding theory [5, 14], combinatorial design theory [6], cryptography [17, 21], and other areas of mathematics and engineering. Such polynomials with desired properties, such as low differential uniformity, high algebraic degree and high nonlinearity, act as important candidates for designing cryptographically strong S-boxes and hence for providing secure communication.
Block ciphers are susceptible to a wide variety of attacks. One of the most effective cryptanalytic tools for attacking block ciphers is the differential attack introduced by Biham and Shamir in their paper [2]. To measure the resistance of a given function over a finite field (i.e., of a given S-box) against the differential attack, Nyberg [19] introduced the notion of differential uniformity as follows. Let \(F\) be a function, \(F:\mathbb{F}_{q}\to\mathbb{F}_{q}\). For any
\(a,b\in\mathbb{F}_{q}\), the Difference Distribution Table (DDT) entry of \(F\) at \((a,b)\) is \(\Delta_{F}(a,b)=|\{X\in\mathbb{F}_{q}:F(X+a)-F(X)=b\}|\), and the differential uniformity of \(F\) is \(\Delta_{F}=\max\{\Delta_{F}(a,b):a\in\mathbb{F}_{q}^{*},b\in\mathbb{F}_{q}\}\). Functions with \(\Delta_{F}=1\) are called perfect nonlinear (PN), and those with \(\Delta_{F}=2\) are called almost perfect nonlinear (APN). Extending this notion, for \(c\in\mathbb{F}_{q}\), the \(c\)-DDT entry \({}_{c}\Delta_{F}(a,b)\) is the number of solutions \(X\in\mathbb{F}_{q}\) of \(F(X+a)-cF(X)=b\), and the \(c\)-differential uniformity of \(F\), denoted \({}_{c}\Delta_{F}\), is the maximum of \({}_{c}\Delta_{F}(a,b)\) over all \(a,b\in\mathbb{F}_{q}\) (with \(a\neq 0\) when \(c=1\)). A function \(F\) is perfect \(c\)-nonlinear (P\(c\)N) if \({}_{c}\Delta_{F}=1\), and almost perfect \(c\)-nonlinear (AP\(c\)N) if \({}_{c}\Delta_{F}=2\). These are the notions we use throughout the paper.
## 2. Preliminaries
In this section, we first review a definition and provide some lemmas to be used later in subsequent sections. Throughout the paper, we shall use \(\operatorname{Tr}_{m}^{n}\) to denote the (relative) trace function from \(\mathbb{F}_{p^{n}}\to\mathbb{F}_{p^{m}}\), i.e., \(\operatorname{Tr}_{m}^{n}(X)=\sum_{i=0}^{\frac{n-m}{m}}X^{p^{mi}}\), where \(m\) and \(n\) are positive integers and \(m|n\). For \(m=1\), we use \(\operatorname{Tr}\) to denote the absolute trace. Also \(v_{p}(n)\) will denote the highest nonnegative exponent \(v\) such that \(p^{v}\) divides \(n\) (that is, the \(p\)-adic valuation).
We first recall the definition of the Walsh transform of a function.
**Definition 2.1**.: [11] For a function \(F:\mathbb{F}_{p^{n}}\to\mathbb{F}_{p}\), the Walsh transform of \(F\) at \(v\in\mathbb{F}_{p^{n}}\), is defined as
\[\mathcal{W}_{F}(v)=\sum_{X\in\mathbb{F}_{p^{n}}}\omega^{F(X)-\operatorname{ Tr}(vX)},\]
where \(\omega=e^{\frac{2\pi i}{p}}\) is a complex primitive \(p\)th root of unity.
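For intuition, here is a small Python check of the definition (a toy example of ours, not from the paper): over the prime field \(\mathbb{F}_{7}\) the absolute trace is the identity map, and for the quadratic function \(f(X)=X^{2}\) every Walsh coefficient is a Gauss sum of absolute value \(\sqrt{7}\):

```python
import cmath

p = 7                                   # prime field F_p, where Tr(x) = x
omega = cmath.exp(2j * cmath.pi / p)    # primitive p-th root of unity

def walsh(f, v):
    # W_f(v) = sum over x of omega^{f(x) - v*x}  (Definition 2.1 with n = 1)
    return sum(omega ** ((f(x) - v * x) % p) for x in range(p))

f = lambda x: x * x % p
for v in range(p):
    print(v, abs(walsh(f, v)))          # each modulus equals sqrt(7) ~ 2.6458
```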
We now present a lemma dealing with solutions of a cubic equation over a finite field with even characteristic. We take the form from [29, Lemma 2.2], which is derived from the original paper of Williams [27].
**Lemma 2.2**.: _[_29_, Lemma 2.2]_ _For a positive integer \(n\) and \(a\in\mathbb{F}_{2^{n}}^{*}\), the cubic equation \(X^{3}+X+a=0\) has_
1. _a unique solution in_ \(\mathbb{F}_{2^{n}}\) _if and only if_ \(\operatorname{Tr}_{1}^{n}(a^{-1}+1)=1\)_;_
2. _three distinct solutions in_ \(\mathbb{F}_{2^{n}}\) _if and only if_ \(p_{n}(a)=0\)_, where the polynomial_ \(p_{n}(X)\) _is recursively defined by the equations_ \(p_{1}(X)=p_{2}(X)=X,p_{k}(X)=p_{k-1}(X)+X^{2^{k-3}}p_{k-2}(X)\) _for_ \(k\geq 3\)_;_
3. _no solutions in_ \(\mathbb{F}_{2^{n}}\)_, otherwise._
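This trichotomy is easy to confirm by brute force in a small field. The Python sketch below is ours (the model \(\mathbb{F}_{2^{4}}=\mathbb{F}_{2}[x]/(x^{4}+x+1)\) and the helper names are illustrative choices); it counts the roots of \(X^{3}+X+a\) for every \(a\in\mathbb{F}_{2^{4}}^{*}\) and checks part (1):

```python
N, IRRED = 4, 0b10011              # F_16 = F_2[x]/(x^4 + x + 1)

def mul(a, b):                     # multiplication modulo IRRED
    r = 0
    for _ in range(N):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRRED
    return r

def power(a, e):                   # square-and-multiply exponentiation
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a, e = mul(a, a), e >> 1
    return r

def tr(a):                         # absolute trace a + a^2 + a^4 + a^8
    return a ^ power(a, 2) ^ power(a, 4) ^ power(a, 8)

for a in range(1, 1 << N):
    roots = sum(1 for x in range(1 << N) if mul(mul(x, x), x) ^ x ^ a == 0)
    inv = power(a, (1 << N) - 2)   # a^{-1} = a^{2^n - 2}
    assert roots in (0, 1, 3)
    assert (roots == 1) == (tr(inv ^ 1) == 1)   # Lemma 2.2 (1)
print("Lemma 2.2 verified over F_16")
```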
Next, we recall five classes of permutation polynomials for which we investigate their \(c\)-differential uniformity. For an element \(\delta\) in a finite field \(\mathbb{F}_{p^{2m}}\), we let \(\bar{\delta}=\delta^{p^{m}}\).
**Lemma 2.3**.: _[_29_, Theorem 3.1]_ _Let \(m\) be a positive integer and \(p_{m}(X)\) be defined in Lemma 2.2. Let \(\delta\in\mathbb{F}_{2^{2m}}\) satisfy either \(\delta\in\mathbb{F}_{2^{m}}\), or \(\delta\not\in\mathbb{F}_{2^{m}}\) with \(\operatorname{Tr}_{1}^{2m}(\delta)=\operatorname{Tr}_{1}^{m}(1)\) and \(p_{m}((\delta+\bar{\delta})^{-1})\neq 0\). The polynomial_
\[F(X)=(X^{2^{m}}+X+\delta)^{2^{2m-2}+2^{m-2}+1}+X\]
_permutes \(\mathbb{F}_{2^{2m}}\)._
**Lemma 2.4**.: _[_29_, Theorem 2.1]_ _For a positive integer \(m\not\equiv 0\pmod{3}\) and any element \(\delta\in\mathbb{F}_{2^{2m}}\), the polynomial_
\[F(X)=(X^{2^{m}}+X+\delta)^{3\cdot 2^{2m-2}+2^{m-2}}+X\]
_permutes \(\mathbb{F}_{2^{2m}}\)._
**Lemma 2.5**.: _[_25_, Theorem 3.1]_ _For a positive integer \(m\not\equiv 0\pmod{3}\) and any element \(\delta\in\mathbb{F}_{2^{2m}}\), the polynomial_
\[F(X)=(X^{2^{m}}+X+\delta)^{3\cdot 2^{m-2}+2^{2m-2}}+X\]
_permutes \(\mathbb{F}_{2^{2m}}\)._
**Lemma 2.6**.: _[_16_, Proposition 8]_ _For a positive integer \(m\) and a fixed \(\delta\in\mathbb{F}_{p^{3m}}\) with \(\operatorname{Tr}_{m}^{n}(\delta)=0\), where \(n=3m\), the polynomial_
\[F(X)=(X^{p^{m}}-X+\delta)^{2p^{2m}+p^{m}}+(X^{p^{m}}-X+\delta)^{p^{2m}+2p^{m}}+X\]
_permutes \(\mathbb{F}_{p^{3m}}\)._
**Lemma 2.7**.: _[_16_, Proposition 10]_ _For a positive integer \(m\) and a fixed \(\delta\in\mathbb{F}_{3^{2m}}\), if \((1-[\operatorname{Tr}_{m}^{2m}(\delta)]^{4})\) is a square element in \(\mathbb{F}_{3^{m}}\), then the polynomial_
\[F(X)=(X^{3^{m}}-X+\delta)^{3^{m}+4}+(X^{3^{m}}-X+\delta)^{5}+X\]
_is a permutation of \(\mathbb{F}_{3^{2m}}\)._
We also recall some results providing the Walsh transform coefficients of certain functions, which will be required later in our results.
**Lemma 2.8**.: _[_4_, Theorem 2]_ _Let \(K=\mathbb{F}_{2^{k}}\) and \(f(X)=\operatorname{Tr}(X^{2^{a}+1}+X^{2^{b}+1})\), \(0\leq a<b\). Furthermore, let \(d_{1}=\gcd(b-a,k)\), \(d_{2}=\gcd(b+a,k)\) and \(\nu=\max\{v_{2}(b-a),v_{2}(b+a)\}\), where \(v_{2}\) is the \(2\)-valuation, that is, the largest power of \(2\) dividing the argument. Also, let \(S_{d_{i}}=\{X\in K:\operatorname{Tr}_{K/\mathbb{F}_{2^{d_{i}}}}(X)=0\}\) and \(L_{i}=X+X^{2^{d_{i}}}+X^{2^{2d_{i}}}+\cdots+X^{2^{\frac{k}{2}-d_{i}}}\) for \(i=1,2\). Then we have the following cases_:__
_Case_ \(1\):__\(v_{2}(b-a)=v_{2}(b+a)=v_{2}(k)-1\) _does not hold. Then_:__
1. \(v_{2}(k)\leq\nu\)_. Then_ \(\mathcal{W}_{f}(\alpha)=0\) _if_ \(\alpha\not\in S_{d_{1}}\cap S_{d_{2}}\)_._
2. \(v_{2}(k)>\nu\)_. Then_ \(\mathcal{W}_{f}(\alpha)=0\) _if_ \(\left(L_{1}\circ L_{2}\right)(\alpha)\neq 0\)_._
_Case_ \(2\):__\(v_{2}(b-a)=v_{2}(b+a)=v_{2}(k)-1\)_. Then_ \(\mathcal{W}_{f}(\alpha)=0\) _if_ \(\left(L_{1}\circ L_{2}\right)(\alpha+1)\neq 0\)_._
Further, we also need a lemma given in [4] to evaluate \(\mathcal{W}_{f_{u}}(\alpha)\) for any \(\alpha\in K\), where \(f_{u}\) is the more general function \(f_{u}(X)=\operatorname{Tr}(uX^{2^{a}+1}+uX^{2^{b}+1})\), with \(u\in\mathbb{F}_{2^{d_{1}}}\), where \(d_{1}=\gcd(b-a,k)\).
**Lemma 2.9**.: _[_4_, Theorem 3]_ _Let \(K=\mathbb{F}_{2^{k}}\) and \(f(X)=\operatorname{Tr}(X^{2^{a}+1}+X^{2^{b}+1})\), \(0\leq a<b\). Furthermore, let \(d_{1}=\gcd(b-a,k)\), \(d_{2}=\gcd(b+a,k)\) and \(\nu=\max\{v_{2}(b-a),v_{2}(b+a)\}\). Also, let \(S_{d_{i}}=\{X\in K:\operatorname{Tr}_{K/\mathbb{F}_{2^{d_{i}}}}(X)=0\}\) and \(L_{i}=X+X^{2^{d_{i}}}+X^{2^{2d_{i}}}+\cdots+X^{2^{\frac{k}{2}-d_{i}}}\) for \(i=1,2\). Moreover, if \(f_{u}(x)=\operatorname{Tr}(uX^{2^{a}+1}+uX^{2^{b}+1})\) where \(u\in\mathbb{F}_{2^{d_{1}}}\), then we have the following cases_:__
_Case_ \(1\):__\(v_{2}(a)=v_{2}(b)<v_{2}(k)\) _does not hold. Then_:__
_For any_ \(u\in\mathbb{F}_{2^{d_{1}}}\) _there exists_ \(\beta\in\mathbb{F}_{2^{d_{1}}}\) _such that_ \(u=\beta^{2^{a}+1}\) _and_
\[\mathcal{W}_{f_{u}}(\alpha)=\mathcal{W}_{f}\left(\frac{\alpha}{\beta}\right).\]
_Case_ \(2\):__\(v_{2}(a)=v_{2}(b)<v_{2}(k)\)_. Then_:__
1. \(v_{2}(k)\leq\nu\)_. If_ \(\frac{\alpha}{u^{2^{-b}}}\not\in S_{d_{1}}\cap S_{d_{2}}\)_, then_ \(\mathcal{W}_{f_{u}}(\alpha)=0\)_._
2. \(v_{2}(k)>\nu\)_. Then_ \(\mathcal{W}_{f_{u}}(\alpha)=0\) _if_ \(\left(L_{1}\circ L_{2}\right)\left(\frac{\alpha}{u^{2^{-b}}}\right)\neq 0\)_._
The following lemma can be gleaned from the proof of [11, Proposition 2].
**Lemma 2.10**.: _Let \(m\) be a positive integer and \(n=2m\). Also, let \(a_{i}\in\mathbb{F}_{p^{n}}(i=0,\ldots,m)\) for an odd prime \(p\). Then the absolute square of Walsh transform coefficient of the function \(f(X)=\operatorname{Tr}\left(\sum_{i=0}^{m}a_{i}X^{p^{i}+1}\right)\) at \(v\in\mathbb{F}_{p^{n}}\) is given by_
\[|\mathcal{W}_{f}(v)|^{2}=\begin{cases}p^{n+\ell}&\text{ if }f(X)+\operatorname{ Tr}(-vX)\equiv 0\text{ on Ker }(L)\\ 0&\text{ otherwise},\end{cases}\]
_where \(\ell\) is the dimension of the kernel of the linearized polynomial \(L(X)=\sum_{i=0}^{m}(a_{i}X^{p^{i}}+(a_{i}X)^{p^{n-i}})\)._
For our first theorem, we will be using the following lemma that we will now prove. The result for the case of finite fields with odd characteristic is already mentioned in the above Lemma 2.10. We show that this result also holds in the case of \(p=2\), and we add its proof here for completeness.
**Lemma 2.11**.: _Let \(m\) be a positive integer and \(n=2m\). Also, let \(a_{i}\in\mathbb{F}_{2^{n}}(i=0,\ldots,m)\). Then the square of Walsh transform coefficient of the function \(f(X)=\operatorname{Tr}\left(\sum_{i=0}^{m}a_{i}X^{2^{i}+1}\right)\) at \(w\in\mathbb{F}_{2^{n}}\) is given by_
\[\mathcal{W}_{f}(w)^{2}=\begin{cases}2^{n+\ell}&\text{ if }f(X)+\operatorname{ Tr}(wX)\equiv 0\text{ on Ker }(L)\\ 0&\text{ otherwise},\end{cases}\]
_where \(\ell\) is the dimension of the kernel of the linearized polynomial \(L(X)=\sum_{i=0}^{m}(a_{i}X^{2^{i}}+a_{i}^{2^{n-i}}X^{2^{n-i}})\)._
Proof.: We can easily write the square of Walsh transform coefficient of the function \(f\) : \(X\mapsto\operatorname{Tr}\left(\sum_{i=0}^{m}a_{i}X^{2^{i}+1}\right)\) at \(w\) as
\[\mathcal{W}_{f}(w)^{2} =\sum_{X,Y}(-1)^{f(X)+\operatorname{Tr}(wX)}(-1)^{f(Y)+ \operatorname{Tr}(wY)}\] \[=\sum_{Y,Z}(-1)^{f(Y+Z)+\operatorname{Tr}(w(Y+Z))}(-1)^{f(Y)+ \operatorname{Tr}(wY)}\text{ (where }X=Y+Z)\] \[=\sum_{Z}(-1)^{f(Z)+\operatorname{Tr}(wZ)}\sum_{Y}(-1)^{f(Y)+f(Z )+f(Y+Z)}.\]
We first simplify \(f(Y)+f(Z)+f(Y+Z)\) as follows:
\[f(Y)+f(Z)+f(Y+Z) =\operatorname{Tr}\left(\sum_{i=0}^{m}a_{i}\left(Y^{2^{i}+1}+Z^{ 2^{i}+1}+(Y+Z)^{2^{i}+1}\right)\right)\] \[=\operatorname{Tr}\left(\sum_{i=0}^{m}a_{i}\left(YZ^{2^{i}}+ZY^{2 ^{i}}\right)\right)\] \[=\operatorname{Tr}\left(Y\sum_{i=0}^{m}\left(a_{i}Z^{2^{i}}+(a_{i }Z)^{2^{n-i}}\right)\right)\] \[=\operatorname{Tr}(YL(Z)),\]
where \(L(Z)=\sum_{i=0}^{m}a_{i}Z^{2^{i}}+a_{i}^{2^{n-i}}Z^{2^{n-i}}\) is the linearized polynomial over \(\mathbb{F}_{2^{n}}\) whose kernel, denoted by \(\mathrm{Ker}(L)\), has dimension \(\ell\). This will give us
\[\mathcal{W}_{f}(w)^{2} =\sum_{Z}(-1)^{f(Z)+\mathrm{Tr}(wZ)}\sum_{Y}(-1)^{\mathrm{Tr}(YL( Z))}\] \[=2^{n}\sum_{Z\in Ker(L)}(-1)^{f(Z)+\mathrm{Tr}(wZ)}.\]
The above equality holds because, for \(Z\not\in\mathrm{Ker}(L)\), the map \(Y\mapsto YL(Z)\) is a permutation of \(\mathbb{F}_{2^{n}}\), making the inner sum over \(Y\) in the square of the Walsh transform zero. To proceed further, we consider \(\mathbb{F}_{2^{n}}\) as an \(n\)-dimensional vector space over \(\mathbb{F}_{2}\) and hence \(L(Z)\) as a linear transformation of \(\mathbb{F}_{2^{n}}\). As \(f(Y)+f(Z)+f(Y+Z)=\mathrm{Tr}(YL(Z))\), we get that \(f(Z)+\mathrm{Tr}(wZ)\) is linear on the kernel of \(L\). This implies that either \(f(Z)+\mathrm{Tr}(wZ)\) is identically zero on \(\mathrm{Ker}(L)\), or \(f(Z)+\mathrm{Tr}(wZ)\) takes the values \(0\) and \(1\) equally often on \(\mathrm{Ker}(L)\), in which case the sum vanishes. Hence, the claim is shown.
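A small numerical check of Lemma 2.11 (a sketch of ours; the field model and the choice \(f(X)=\operatorname{Tr}(X^{3})\), i.e. \(a_{1}=1\) and all other \(a_{i}=0\), with \(n=4\), \(m=2\), are illustrative assumptions):

```python
N, IRRED = 4, 0b10011                    # F_16 = F_2[x]/(x^4 + x + 1); n = 4, m = 2

def mul(a, b):
    r = 0
    for _ in range(N):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRRED
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a, e = mul(a, a), e >> 1
    return r

def tr(a):                               # absolute trace of F_16 over F_2
    return a ^ power(a, 2) ^ power(a, 4) ^ power(a, 8)

f = lambda x: tr(power(x, 3))            # f(X) = Tr(X^{2^1 + 1})
L = lambda x: power(x, 2) ^ power(x, 8)  # L(X) = X^{2^1} + X^{2^{n-1}}
ker = [z for z in range(16) if L(z) == 0]
ell = len(ker).bit_length() - 1          # dimension of Ker(L); here ell = 2

for w in range(16):
    W = sum((-1) ** (f(x) ^ tr(mul(w, x))) for x in range(16))
    vanish = all(f(z) ^ tr(mul(w, z)) == 0 for z in ker)
    assert W * W == (2 ** (N + ell) if vanish else 0)
print(f"W_f(w)^2 lies in {{0, 2^{N + ell}}} for all w, as claimed")
```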
Next, we recall the general technique given in [23] using the expression for the number of solutions to a given equation over finite fields in terms of Weil sums. The authors used this technique to compute the \(c\)-DDT entries. Let \(\chi_{1}:\mathbb{F}_{q}\to\mathbb{C}\) be the canonical additive character of the additive group of \(\mathbb{F}_{q}\) defined as follows
\[\chi_{1}(X):=\exp\left(\frac{2\pi i\mathrm{Tr}(X)}{p}\right).\]
One can easily observe (see, for instance, [22]) that the number of solutions \((X_{1},X_{2},\ldots,X_{n})\in\mathbb{F}_{q}^{n}\) of the equation \(F(X_{1},X_{2},\ldots,X_{n})=b\), denoted by \(N(b)\), is given by
\[N(b)=\frac{1}{q}\sum_{X_{1},X_{2},\ldots,X_{n}\in\mathbb{F}_{q}}\sum_{\beta\in \mathbb{F}_{q}}\chi_{1}(\beta(F(X_{1},X_{2},\ldots,X_{n})-b)). \tag{2.1}\]
We will use the above expression to calculate the \(c\)-differential uniformity of a few permutations over finite fields in the forthcoming sections.
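Equation (2.1) is character orthogonality in disguise and is easy to sanity-check numerically. The sketch below is ours (the toy polynomial \(F(X_{1},X_{2})=X_{1}^{2}+2X_{2}\) over \(\mathbb{F}_{5}\) is an arbitrary choice), comparing the character-sum count with a brute-force count:

```python
import cmath
from itertools import product

p = 5                                      # q = p prime, so Tr is the identity
omega = cmath.exp(2j * cmath.pi / p)
chi = lambda x: omega ** (x % p)           # canonical additive character chi_1

F = lambda x1, x2: (x1 * x1 + 2 * x2) % p  # an arbitrary toy polynomial

for b in range(p):
    brute = sum(1 for x1, x2 in product(range(p), repeat=2) if F(x1, x2) == b)
    char_sum = sum(chi(beta * (F(x1, x2) - b))
                   for x1, x2 in product(range(p), repeat=2)
                   for beta in range(p)) / p
    assert abs(char_sum - brute) < 1e-9    # Equation (2.1)
print("The count N(b) from (2.1) matches brute force over F_5")
```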
## 3. Permutations over \(\mathbb{F}_{2^{n}}\) with low \(c\)-differential uniformity
We first consider the \(c\)-differential uniformity of the function \(F(X)=(X^{2^{m}}+X+\delta)^{2^{2m-2}+2^{m-2}+1}+X\) over \(\mathbb{F}_{2^{n}}\), where \(n=2m\) and \(\delta\in\mathbb{F}_{2^{n}}\). From Lemma 2.3, we know that \(F\) is a permutation polynomial over \(\mathbb{F}_{2^{n}}\). In the following theorem, we give conditions on \(\delta\) and \(c\) for which \(F\) turns out to be either a P\(c\)N or an AP\(c\)N function.
**Theorem 3.1**.: _Let \(F(X)=(X^{2^{m}}+X+\delta)^{2^{2m-2}+2^{m-2}+1}+X\) over \(\mathbb{F}_{2^{n}}\), where \(n=2m\)._
1. _Let_ \(\delta\in\mathbb{F}_{2^{m}}\)_. Then_ \(F\) _is PcN for_ \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\)_. Moreover,_ \(F\) _is APcN for_ \(c\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\)_._
2. _Let_ \(\delta\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\) _with_ \(\mathrm{Tr}_{1}^{2m}(\delta)=\mathrm{Tr}_{1}^{m}(1)\) _and_ \(p_{m}((\delta+\bar{\delta})^{-1})\neq 0\)_, where_ \(p_{m}(X)\) _is the polynomial defined in Lemma_ 2.2_. Then_ \(F\) _is PcN for_ \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\) _and is of_ \(c\)_-differential uniformity_ \(\leq 4\) _for_ \(c\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\)_._
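Before the proof, the statement can be checked by brute force in the smallest case \(m=2\), where \(n=4\) and the exponent \(2^{2m-2}+2^{m-2}+1\) equals \(6\). The Python sketch below is ours (the field model and helper names are illustrative choices) and serves as a sanity check, not as part of the argument:

```python
N, IRRED = 4, 0b10011                     # F_16 = F_2[x]/(x^4 + x + 1); m = 2, n = 4

def mul(a, b):
    r = 0
    for _ in range(N):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= IRRED
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a, e = mul(a, a), e >> 1
    return r

F4 = [x for x in range(16) if power(x, 4) == x]       # the subfield F_{2^m} = F_4
delta = F4[2]                                         # any delta in F_4 (case 1)
F = lambda x: power(power(x, 4) ^ x ^ delta, 6) ^ x   # exponent 2^2 + 2^0 + 1 = 6

def c_uniformity(c):
    best = 0
    for a in range(16):
        counts = [0] * 16
        for x in range(16):
            counts[F(x ^ a) ^ mul(c, F(x))] += 1      # char 2: "+" is XOR
        best = max(best, max(counts))
    return best

for c in range(16):
    if c != 1:
        assert c_uniformity(c) == (1 if c in F4 else 2)   # PcN vs APcN
print("Theorem 3.1(1) confirmed by brute force for m = 2")
```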
Proof.: By expanding the given trinomial, one can simplify \(F(X)\) to obtain the expression below:
\[F(X) =\mathrm{Tr}_{m}^{2m}(X^{2^{m-1}+1}+X^{2^{2m-1}+1})+\delta\mathrm{ Tr}_{m}^{2m}(X^{2^{m-1}})+(\delta^{2^{m-2}+1}+\delta^{2^{2m-2}+1})\mathrm{Tr}_{m}^{2 m}(X^{2^{m-2}})\] \[\qquad+\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2 m}(X^{2^{m-2}+1}+X^{2^{2m-2}+1})+\delta^{2^{m-2}+2^{2m-2}}\mathrm{Tr}_{m}^{2 m}(X)\] \[\qquad\qquad\qquad+X+\delta^{2^{2m-2}+2^{m-2}+1}.\]
Recall that, for any \((a,b)\in\mathbb{F}_{2^{n}}\times\mathbb{F}_{2^{n}}\), the \(c\)-DDT entry \({}_{c}\Delta_{F}(a,b)\) is given by the number of solutions \(X\in\mathbb{F}_{2^{n}}\) of the following equation,
\[F(X+a)+cF(X)=b, \tag{3.1}\]
which is the same as,
\[(1+c)F(X)+\mathrm{Tr}_{m}^{2m}((a^{2^{m-1}}+a^{2^{2m-1}})X+a(X^{2 ^{m-1}}+X^{2^{2m-1}}))+F(a)+\delta^{2^{2m-2}+2^{m-2}+1}\] \[\qquad+\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2 m}((a^{2^{m-2}}+a^{2^{2m-2}})X+a(X^{2^{m-2}}+X^{2^{2m-2}}))=b.\]
Now, by using Equation (2.1), the number of solutions \(X\in\mathbb{F}_{2^{n}}\) of the above equation, that is, \({}_{c}\Delta_{F}(a,b)\), is given by
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\beta\in\mathbb{F}_{2^{n}}}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta((1+c)F(X)+\mathrm{Tr}_{m}^{2m}((a^{2^{m-1}}+a^{2^{2m-1}})X+a(X^{2^{m-1}}+X^{2^{2m-1}}))+b))}\\ \qquad(-1)^{\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X+a(X^{2^{m-2}}+X^{2^{2m-2}}))))}\\ \qquad\qquad(-1)^{\mathrm{Tr}(\beta(F(a)+\delta^{2^{2m-2}+2^{m-2}+1}))},\]
or, equivalently,
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta\left(F(a)+b+\delta^{2^{2m-2}+2^{m-2}+1}\right)\right)}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta(1+c)F(X)\right)}\\ \qquad(-1)^{\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}((a^{2^{m-1}}+a^{2^{2m-1}})X+a(X^{2^{m-1}}+X^{2^{2m-1}}))))}\\ \qquad(-1)^{\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X+a(X^{2^{m-2}}+X^{2^{2m-2}}))))}.\]
We now use the following notations
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X))\] \[T_{1} =\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}((a^{2^{m-1}}+a^{2^{2m-1}}) X+a(X^{2^{m-1}}+X^{2^{2m-1}}))))\] \[\qquad\qquad+\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}(\delta^{2^{ m-2}})\mathrm{Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X+a(X^{2^{m-2}}+X^{2^{2m-2}})))).\]
Therefore,
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta\left(F(a)+b+\delta^{2^{2m-2}+2^{m-2}+1}\right)\right)}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{T_{0}+T_{1}}. \tag{3.2}\]
**Case 1.** Let \(c\in\mathbb{F}_{2^{n}}\) and \(\delta\in\mathbb{F}_{2^{m}}\). To compute \(T_{0}\) and \(T_{1}\), we first write
\[T_{1} =\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}((a^{2^{m-1}}+a^{2^{2m-1}}) X+a(X^{2^{m-1}}+X^{2^{2m-1}}))))\] \[=\mathrm{Tr}((\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m}^{ 2m}(\beta)+\mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{2}))X),\]
and
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X))\\ =\mathrm{Tr}\left(\beta(1+c)(\mathrm{Tr}_{m}^{2m}(X^{2^{m-1}+1}+X^{2^{2m-1}+1})+\delta\mathrm{Tr}_{m}^{2m}(X^{2^{m-1}})+X+\delta^{2^{m-1}+1})\right)\\ \qquad\qquad+\mathrm{Tr}\left(\beta(1+c)\delta^{2^{m-1}}\mathrm{Tr}_{m}^{2m}(X)\right)\\ =\mathrm{Tr}(\beta(1+c)\delta^{2^{m-1}+1})+\mathrm{Tr}(\mathrm{Tr}_{m}^{2m}(\beta(1+c))(X^{2^{m-1}+1}+X^{2^{2m-1}+1}))\\ \qquad+\mathrm{Tr}\left((\beta(1+c)+\mathrm{Tr}_{m}^{2m}(\delta^{2}\beta^{2}(1+c)^{2})+\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-1}}\beta(1+c)))X\right).\]
Now Equation (3.2) reduces to
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{2^{m-1}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u(X^{2^{m-1}+1}+X^{2^{2m-1}+1})+wX)},\]
where,
\[u =\mathrm{Tr}_{m}^{2m}((1+c)\beta)\] \[w =\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m}^{2m}(\beta)+ \mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{2})\] \[\qquad+\beta(1+c)+\delta^{2}\mathrm{Tr}_{m}^{2m}((1+c)^{2}\beta^ {2})+\delta^{2^{m-1}}\mathrm{Tr}_{m}^{2m}((1+c)\beta).\]
Further, splitting the above sum depending on whether \(\mathrm{Tr}_{m}^{2m}((1+c)\beta)\) is \(0\) or not, we get
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta(1+c))=0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{2^{m-1}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left((\beta(1+c))X\right)}(-1)^{\mathrm{Tr}\left((\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m}^{2m}(\beta)+\mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{2}))X\right)}\\ +\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta(1+c))\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{2^{m-1}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u(X^{2^{m-1}+1}+X^{2^{2m-1}+1})+wX)}\\ =S_{0}+S_{1},\]
where \(S_{0}\) and \(S_{1}\) denote the first and the second of the two sums above, respectively. We first consider the sum \(S_{0}\) below:
\[S_{0}=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \operatorname{Tr}_{m}^{2m}(\beta(1+c))=0\end{subarray}}(-1)^{\operatorname{Tr}(\beta(F(a)+b+c\delta^{2^{m-1}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\operatorname{Tr}\left((\beta(1+c))X\right)}(-1)^{\operatorname{Tr}\left((\operatorname{Tr}_{m}^{2m}(a^{2^{m-1}})\operatorname{Tr}_{m}^{2m}(\beta)+\operatorname{Tr}_{m}^{2m}(a^{2})\operatorname{Tr}_{m}^{2m}(\beta^{2}))X\right)}.\]
To compute \(S_{0}\), we need to find the number of solutions of the following equation in \(\mathbb{F}_{2^{n}}\).
\[(1+c)\beta+\operatorname{Tr}_{m}^{2m}(a^{2^{m-1}})\text{Tr}_{m}^{2m}(\beta)+ \operatorname{Tr}_{m}^{2m}(a^{2})\text{Tr}_{m}^{2m}(\beta^{2})=0. \tag{3.3}\]
If \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\), then \(\operatorname{Tr}_{m}^{2m}((1+c)\beta)=(1+c)\text{Tr}_{m}^{2m}(\beta)=0\), and hence Equation (3.3) reduces to \((1+c)\beta=0\). Thus, the inner sum in \(S_{0}\) is zero for all \(\beta\in\mathbb{F}_{2^{n}}^{*}\) and so \(S_{0}=2^{n}\).
Next, we let \(c\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\). Then we observe that for \(\beta\)'s satisfying \(\operatorname{Tr}_{m}^{2m}((1+c)\beta)=0\), we have \(\beta^{2^{m}}=\tilde{c}\beta\), where \(\tilde{c}=(1-c)^{1-2^{m}}\). If \(\operatorname{Tr}_{m}^{2m}(a)=0\), then Equation (3.3) vanishes only for \(\beta=0\). Now assume that \(\operatorname{Tr}_{m}^{2m}(a)\neq 0\), then after substituting the value of \(\beta^{2^{m}}\) in Equation (3.3), it follows that Equation (3.3) can have at most two solutions, namely, \(\beta_{1}=0\) and \(\beta_{2}=\dfrac{\operatorname{Tr}_{m}^{2m}(a^{2^{m-1}})(1+\tilde{c})+(1+c)}{ \operatorname{Tr}_{m}^{2m}(a^{2})(1+\tilde{c})^{2}}\), which is clearly nonzero for some \(a\in\mathbb{F}_{2^{n}}\) with \(\operatorname{Tr}_{m}^{2m}(a^{2^{m-1}})\neq\dfrac{1+c}{1+\tilde{c}}\). Hence, we get that the inner sum in \(S_{0}\) is zero for all \(\beta\in\mathbb{F}_{2^{n}}\setminus\{\beta_{1},\beta_{2}\}\). Thus, for \((a,b)=(a,F(a)+c\delta^{2^{m-1}+1})\) along with \(\operatorname{Tr}_{m}^{2m}(a^{2^{m-1}})\neq\dfrac{1+c}{1+\tilde{c}}\), we have \(S_{0}=2^{n+1}\); and otherwise \(S_{0}=2^{n}\).
Next, we consider
\[S_{1} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \operatorname{Tr}_{m}^{2m}((1+c)\beta)\neq 0\end{subarray}}(-1)^{\operatorname{Tr}(\beta(F(a)+b+c\delta^{2^{m-1}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\operatorname{Tr}\left(u(X^{2^{m-1}+1}+X^{2^{2m-1}+1})+wX\right)}\\ =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \operatorname{Tr}_{m}^{2m}((1+c)\beta)\neq 0\end{subarray}}(-1)^{\operatorname{Tr}(\beta(F(a)+b+c\delta^{2^{m-1}+1}))}\,\mathcal{W}_{f_{u}}(w),\]
where \(\mathcal{W}_{f_{u}}(w)\) is the Walsh transform of the function \(f_{u}(X)=\text{Tr}(u(X^{2^{m-1}+1}+X^{2^{2m-1}+1}))\) at \(w\). We split our analysis in three cases depending on the value of \(m\). Also, we have \(k=2m\), \(a=m-1\) and \(b=2m-1\).
**Subcase 1(a).** Let \(m\) be odd; then \(\gcd(b-a,k)=m\) and \(\gcd(b+a,k)=1\). Also, notice that \(v_{2}(a)=v_{2}(m-1)\), \(v_{2}(b)=v_{2}(2m-1)=0\) and \(v_{2}(k)=v_{2}(2m)=1\). Summarizing, we observe that \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) does not hold. From Lemma 2.9, one can see that \(\mathcal{W}_{f_{u}}(w)=\mathcal{W}_{f}(\frac{w}{\eta})\), where \(f(X)=\text{Tr}(X^{2^{m-1}+1}+X^{2^{2m-1}+1})\) and \(\eta\in\mathbb{F}_{2^{m}}\) is such that \(u=\eta^{2^{m-1}+1}\). We now show that \(\mathcal{W}_{f}(\frac{w}{\eta})=0\) using Lemma 2.8. One can clearly observe that \(0=v_{2}(b-a)=v_{2}(b+a)=v_{2}(k)-1\). Hence it is sufficient to show \((L_{1}\circ L_{2})(\frac{w}{\eta}+1)\neq 0\).
Now,
\[(L_{1}\circ L_{2})(X)=X+X^{2}+X^{2^{2}}+\cdots+X^{2^{m-1}}.\]
Hence,
\[(L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right) =m+\frac{\beta(1+c)}{\eta}+\left(\frac{\beta(1+c)}{\eta}\right)^{2 }+\cdots+\left(\frac{\beta(1+c)}{\eta}\right)^{2^{m-1}}\] \[+\mathrm{Tr}_{1}^{m}\left(\frac{\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}} )\mathrm{Tr}_{m}^{2m}(\beta)+\mathrm{Tr}_{m}^{2m}(a)^{2}\mathrm{Tr}_{m}^{2m}( \beta)^{2}}{\eta}\right)\] \[+\mathrm{Tr}_{1}^{m}\left(\frac{\delta^{2}(1+c)^{2}\mathrm{Tr}_{ m}^{2m}(\beta)^{2}+\delta^{2^{m-1}}(1+c)\mathrm{Tr}_{m}^{2m}(\beta)}{\eta} \right).\]
Now, if \((L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)=0\), then we have \(\left((L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)\right)^{2}=0\). This will give us
\[(L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)+\left((L_{1} \circ L_{2})\left(\frac{w}{\eta}+1\right)\right)^{2} =\frac{1}{\eta}\mathrm{Tr}_{m}^{2m}(\beta(1+c))+m+m^{2}\] \[=\frac{u}{\eta}+m+m^{2}=0,\]
Since \(m+m^{2}\equiv 0\pmod{2}\), this forces \(u=0\), that is, \(\mathrm{Tr}_{m}^{2m}((1+c)\beta)=0\), which is obviously not true, and hence we are done.
**Subcase 1(b).** Let \(m\equiv 0\pmod{4}\). If \(v_{2}(m)=v\,(\geq 2)\), then \(v_{2}(a)=v_{2}(b)=0\) and \(v_{2}(k)=v_{2}(2m)=v+1\). Summarizing, we have \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) and \(v_{2}(k)>\nu\). Thus, by Lemma 2.9, if we show that \((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)\neq 0\), then \(\mathcal{W}_{f_{u}}(w)=0\), so \(S_{1}=0\) and the claim follows. So, our goal is now to show \((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)\neq 0\). As \(d_{1}=m\) and \(d_{2}=2\), we have
\[(L_{1}\circ L_{2})(X)=X+X^{2^{2}}+X^{2^{(2\cdot 2)}}+X^{2^{(3\cdot 2)}}+\cdots+X^ {2^{m-2}}.\]
Now, if \((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)=0\), then we have \(\left((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)\right)^{2^{2}}=0\). This will give us
\[(L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)+\left((L_{1}\circ L_{2})\left( \frac{w}{u^{2}}\right)\right)^{2^{2}}=\mathrm{Tr}_{m}^{2m}\left(\frac{w}{u^{2} }\right)=\frac{1}{u^{2}}\mathrm{Tr}_{m}^{2m}(w)=0,\]
which is a contradiction to the assumption that \(\mathrm{Tr}_{m}^{2m}(\beta(1+c))\neq 0\).
**Subcase 1(c).** Let \(m\equiv 2\pmod{4}\); then \(d_{1}=m\) and \(d_{2}=4\). In this subcase, we have \(v_{2}(a)=v_{2}(b)=0\) and \(v_{2}(k)=v_{2}(2m)=2\). Summarizing, we observe that \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) holds and \(v_{2}(k)\leq\nu\). Then again from Lemma 2.9, if we show that \(\frac{w}{u^{2}}\not\in S_{d_{1}}\cap S_{d_{2}}\), then we are done. In this subcase, \(S_{d_{1}}=\{X\in\mathbb{F}_{2^{n}}:\mathrm{Tr}_{m}^{2m}(X)=0\}\) and \(S_{d_{2}}=\{X\in\mathbb{F}_{2^{n}}:\mathrm{Tr}_{4}^{n}(X)=0\}\). Suppose \(\frac{w}{u^{2}}\in S_{d_{1}}\cap S_{d_{2}}\). Then we have \(\mathrm{Tr}_{m}^{2m}\left(\frac{w}{u^{2}}\right)=0\), which implies that \(\mathrm{Tr}_{m}^{2m}(\beta(1+c))=0\), which is not possible. Hence, \(\mathcal{W}_{f_{u}}(w)=0\), which renders \(S_{1}=0\).
The above analysis shows the claim that \(F\) is P\(c\)N for \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\) and AP\(c\)N for \(c\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\).
**Case 2.** Let \(\delta\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\) with \(\mathrm{Tr}_{1}^{2m}(\delta)=\mathrm{Tr}_{1}^{m}(1)\) and \(p_{m}((\delta+\bar{\delta})^{-1})\neq 0\). Then \(T_{0},T_{1}\) become
\[T_{1} =\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}((a^{2^{m-1}}+a^{2^{2m-1}} )X+a(X^{2^{m-1}}+X^{2^{2m-1}}))))\] \[\qquad\qquad+\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{2m}(\delta^{2^{m -2}})\mathrm{Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X+a(X^{2^{m-2}}+X^{2^{2m-2} }))))\] \[=\mathrm{Tr}((\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m}^{ 2m}(\beta)+\mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{2}))X)\] \[\qquad\qquad+\mathrm{Tr}((\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}}) \mathrm{Tr}_{m}^{2m}(a^{2^{m-2}})\mathrm{Tr}_{m}^{2m}(\beta)+\mathrm{Tr}_{m}^{ 2m}(\delta)\mathrm{Tr}_{m}^{2m}(a^{2^{2}})\mathrm{Tr}_{m}^{2m}(\beta)^{2^{2}} )X),\]
and,
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X))\] \[=\mathrm{Tr}\left(\beta(1+c)(\mathrm{Tr}_{m}^{2m}(X^{2^{m-1}+1}+X ^{2^{2m-1}+1})+\delta\mathrm{Tr}_{m}^{2m}(X^{2^{m-1}})+X+\delta^{2^{2m-2}+2^{ m-2}+1})\right)\] \[\quad+\mathrm{Tr}\left(\beta(1+c)(\delta^{2^{m-2}+2^{2m-2}} \mathrm{Tr}_{m}^{2m}(X)+\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^ {2m}(X^{2^{m-2}+1}+X^{2^{2m-2}+1}))\right)\] \[\qquad\qquad\qquad+\mathrm{Tr}\left(\beta(1+c)(\delta^{2^{m-2}+ 1}+\delta^{2^{2m-2}+1})\mathrm{Tr}_{m}^{2m}(X^{2^{m-2}})\right)\] \[=\mathrm{Tr}\left(\mathrm{Tr}_{m}^{2m}(\beta(1+c))(X^{2^{m-1}+1}+ \mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})X^{2^{m-2}+1})\right)\] \[\qquad\qquad+\mathrm{Tr}\left(\mathrm{Tr}_{m}^{2m}(\beta(1+c))^ {2}X^{2^{1}+1}+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\beta(1+c))^{2 ^{2}}X^{2^{2}+1}))\right)\] \[\qquad\qquad+\mathrm{Tr}\left((\mathrm{Tr}_{m}^{2m}(\delta^{2} \beta^{2}(1+c)^{2})+\beta(1+c)+\mathrm{Tr}_{m}^{2m}(\beta(1+c)(\delta^{2^{m-2 }+2^{2m-2}})))X\right)\] \[\qquad\qquad+\mathrm{Tr}\left(\mathrm{Tr}_{m}^{2m}(\beta^{2^{2}} (1+c)^{2^{2}}(\delta^{2^{m-2}+1}+\delta^{2^{2m-2}+1})^{2^{2}})X\right)+ \mathrm{Tr}\left(\beta(1+c)\delta^{2^{2m-2}+2^{m-2}+1}\right).\]
Hence, we can rewrite Equation (3.2) as
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{2^{2m-2}+2^{m-2}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u_{1}X^{2^{m-1}+1}+u_{2}X^{2^{1}+1}+u_{3}X^{2^{m-2}+1}+u_{4}X^{2^{2}+1}+wX)},\]
where,
\[u_{1} =\mathrm{Tr}_{m}^{2m}((1+c)\beta)\] \[u_{2} =\mathrm{Tr}_{m}^{2m}((1+c)\beta)^{2}=u_{1}^{2}\] \[u_{3} =\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}((1+c )\beta)\] \[u_{4} =\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}((1+c)\beta)^{2^ {2}}=u_{3}^{2^{2}}\] \[w =\mathrm{Tr}_{m}^{2m}(\delta^{2}\beta^{2}(1+c)^{2})+\beta(1+c)+ \mathrm{Tr}_{m}^{2m}(\beta(1+c)(\delta^{2^{m-2}+2^{2m-2}}))\] \[+\mathrm{Tr}_{m}^{2m}(\beta^{2^{2}}(1+c)^{2^{2}}(\delta^{2^{m-2}+ 1}+\delta^{2^{2m-2}+1})^{2^{2}})+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^ {2m}(a^{2^{2}})\mathrm{Tr}_{m}^{2m}(\beta)^{2^{2}}\] \[+\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m}^{2m}(\beta)+ \mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{2})+\mathrm{Tr}_{m}^{2m }(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}(a^{2^{m-2}})\mathrm{Tr}_{m}^{2m}(\beta).\]
We now split our analysis in two cases and define \(S_{0}\) and \(S_{1}\) depending on whether \(\operatorname{Tr}_{m}^{2m}((1+c)\beta)=0\) or not, respectively. We first compute \(S_{0}\) as follows:
\[S_{0}=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \operatorname{Tr}_{m}^{2m}(\beta(1+c))=0\end{subarray}}(-1)^{\operatorname{ Tr}(\beta(F(a)+b+c\delta^{2^{2m-2}+2^{m-2}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{ \operatorname{Tr}\left(\left(\beta(1+c)\right)X\right)}\] \[(-1)^{\operatorname{Tr}\left(\left(\operatorname{Tr}_{m}^{2m}( \beta^{2^{2}}(1+c)^{2^{2}}(\delta^{2^{m-2}+1}+\delta^{2^{2m-2}+1})^{2^{2}} \right)+\operatorname{Tr}_{m}^{2m}(a^{2^{m-1}})\operatorname{Tr}_{m}^{2m}( \beta))X\right)}\] \[(-1)^{\operatorname{Tr}\left(\left(\operatorname{Tr}_{m}^{2m}( \delta^{2^{m-2}})\operatorname{Tr}_{m}^{2m}(a^{2^{m-2}})\operatorname{Tr}_{m}^ {2m}(\beta)+\operatorname{Tr}_{m}^{2m}(\delta)\operatorname{Tr}_{m}^{2m}(a^{2 ^{2}})\operatorname{Tr}_{m}^{2m}(\beta)^{2^{2}}\right)X\right)}\] \[(-1)^{\operatorname{Tr}\left(\left(\operatorname{Tr}_{m}^{2m}( \delta^{2}\beta^{2}(1+c)^{2})+\operatorname{Tr}_{m}^{2m}(\beta(1+c)(\delta^{2^ {m-2}+2^{2m-2}}))+\operatorname{Tr}_{m}^{2m}(a^{2})\operatorname{Tr}_{m}^{2m}( \beta^{2}))X\right)}.\]
If \(c\in\mathbb{F}_{2^{m}}\), then \(\operatorname{Tr}_{m}^{2m}((1+c)\beta)=0\) implies \(\operatorname{Tr}_{m}^{2m}(\beta)=0\) and thus \(\beta^{2^{m}}=\beta\). This would further reduce \(S_{0}\) as follows:
\[S_{0}=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \operatorname{Tr}_{m}^{2m}(\beta(1+c))=0\end{subarray}}(-1)^{\operatorname{ Tr}(\beta(F(a)+b+c\delta^{2^{2m-2}+2^{m-2}+1}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{ \operatorname{Tr}\left(\left(\beta(1+c)\right)X\right)}\] \[(-1)^{\operatorname{Tr}\left(\left(\operatorname{Tr}_{m}^{2m}(( \delta^{2^{m-2}+1}+\delta^{2^{2m-2}+1})^{2^{2}})\beta^{2^{2}}(1+c)^{2^{2}}+ \operatorname{Tr}_{m}^{2m}(\delta^{2})\beta^{2}(1+c)^{2})X\right)}.\]
To compute \(S_{0}\), we need to find the number of solutions \(\beta\in\mathbb{F}_{2^{n}}\) of the following equation
\[\operatorname{Tr}_{m}^{2m}((\delta^{2^{m-2}+1}+\delta^{2^{2m-2}+1})^{2^{2}})( 1+c)^{4}\beta^{4}+\operatorname{Tr}_{m}^{2m}(\delta^{2})(1+c)^{2}\beta^{2}+(1 +c)\beta=0,\]
or equivalently, to find the number of solutions \(\beta\in\mathbb{F}_{2^{m}}^{*}\) of the equation given below:
\[\operatorname{Tr}_{m}^{2m}(\delta^{2^{m}+2^{2}}+\delta^{1+2^{2}})(1+c)^{3} \beta^{3}+\operatorname{Tr}_{m}^{2m}(\delta^{2})(1+c)\beta+1=0.\]
Further, multiplying the above equation by \(\operatorname{Tr}_{m}^{2m}(\delta)\) and using \(Z=\operatorname{Tr}_{m}^{2m}(\delta^{2})(1+c)\beta\), one can rewrite the above equation as follows,
\[Z^{3}+\operatorname{Tr}_{m}^{2m}(\delta^{2^{m-1}})^{2}Z+\operatorname{Tr}_{m} ^{2m}(\delta^{2^{m-1}})^{2}=0.\]
Substituting \(Z\) with \(\operatorname{Tr}_{m}^{2m}(\delta^{2^{m-1}})Z\) in the above equation, we therefore get \(Z^{3}+Z+\operatorname{Tr}_{m}^{2m}(\delta^{2^{m-1}})^{-1}=0.\) From Lemma 2.2, the cubic equation \(Z^{3}+Z+\operatorname{Tr}_{m}^{2m}(\delta^{2^{m-1}})^{-1}=0\) cannot have a unique solution, as \(\operatorname{Tr}_{1}^{m}(\operatorname{Tr}_{m}^{2m}(\delta^{2^{m-1}})+1)=\operatorname{Tr}_{1}^{m}(\operatorname{Tr}_{m}^{2m}(\delta)+1)=\operatorname{Tr}_{1}^{n}(\delta)+\operatorname{Tr}_{1}^{m}(1)=0\). If \(p_{m}(\operatorname{Tr}_{m}^{2m}(\delta^{2^{m-1}})^{-1})=0\), then this cubic equation has three distinct solutions in \(\mathbb{F}_{2^{m}}\), and hence the cubic equation \(Z^{3}+Z+\operatorname{Tr}_{m}^{2m}(\delta)^{-1}=0\) also has three distinct solutions in \(\mathbb{F}_{2^{m}}\), implying that \(p_{m}((\delta+\bar{\delta})^{-1})=0\), a contradiction to the given assumption. Hence \(S_{0}=2^{n}\).
Let \(c\in\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{m}}\). Using a similar technique as above, we are led to finding the number of solutions \(\beta\in\mathbb{F}_{2^{n}}\) of the following equation,
\[\mathrm{Tr}_{m}^{2m}(\beta^{2^{2}}(1+c)^{2^{2}}(\delta^{2^{m-2}+1} +\delta^{2^{2m-2}+1})^{2^{2}})+\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m }^{2m}(\beta)+\mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{2})\\ +\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}(a^{2^ {m-2}})\mathrm{Tr}_{m}^{2m}(\beta)+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^ {2m}(a^{2^{2}})\mathrm{Tr}_{m}^{2m}(\beta)^{2^{2}}+(1+c)\beta\\ +\mathrm{Tr}_{m}^{2m}(\delta^{2}\beta^{2}(1+c)^{2})+\mathrm{Tr}_{ m}^{2m}(\beta(1+c)(\delta^{2^{m-2}+2^{2m-2}}))=0. \tag{3.4}\]
Further, we reduce the above equation by substituting \(\beta^{2^{m}}=\dfrac{(1+c)}{(1+c)^{2^{m}}}\beta=\tilde{c}\beta\), where \(\tilde{c}=\dfrac{(1+c)}{(1+c)^{2^{m}}}\), to get
\[(1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta^{2^{m}+2^{2}}+\delta^{2^{2}+1})\beta^{4}+(1+\tilde{c})\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\beta+(1+\tilde{c})^{2}\mathrm{Tr}_{m}^{2m}(a^{2})\beta^{2}\\ +(1+\tilde{c})\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}(a^{2^{m-2}})\beta+(1+\tilde{c})^{4}\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(a^{2^{2}})\beta^{4}+(1+c)\beta\\ +(1+c)^{2}\mathrm{Tr}_{m}^{2m}(\delta)^{2}\beta^{2}=0.\]
Combining the coefficients of \(\beta^{i}\) for \(i=1,2\) and \(4\), we rewrite the above equation in the following simplified way, \(A(1+c)^{4}\beta^{4}+B(1+c)^{2}\beta^{2}+C(1+c)\beta=0\), where
\[A =\mathrm{Tr}_{m}^{2m}(\delta)^{5}+\dfrac{(1+\tilde{c})^{4}}{(1+c )^{4}}\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(a^{2^{2}}),\] \[B =\dfrac{(1+\tilde{c})^{2}}{(1+c)^{2}}\mathrm{Tr}_{m}^{2m}(a^{2})+ \mathrm{Tr}_{m}^{2m}(\delta)^{2},\] \[C =\dfrac{(1+\tilde{c})}{(1+c)}\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})+ \dfrac{(1+\tilde{c})}{(1+c)}\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr }_{m}^{2m}(a^{2^{m-2}})+1.\]
Notice that \(A=\mathrm{Tr}_{m}^{2m}(\delta)B^{2}\), and hence substituting \(u=(1+c)\beta\), we have the equation \(\mathrm{Tr}_{m}^{2m}(\delta)B^{2}u^{4}+Bu^{2}+Cu=0\). Thus, it is sufficient to consider the solutions of the cubic equation \(\mathrm{Tr}_{m}^{2m}(\delta)B^{2}u^{3}+Bu+C=0\). Observe that \(B\) and \(C\) are in \(\mathbb{F}_{2^{m}}\), which implies that \(\mathrm{Tr}_{m}^{2m}(\delta)B^{2}u^{3}+Bu+C=0\) is over \(\mathbb{F}_{2^{m}}\). Hence our goal is to find the number of solutions of the above cubic equation in \(\mathbb{F}_{2^{m}}\). Multiplying the cubic equation by \(\mathrm{Tr}_{m}^{2m}(\delta^{2})B\) and substituting \(Z=(\mathrm{Tr}_{m}^{2m}(\delta)B)u\), we have \(Z^{3}+\mathrm{Tr}_{m}^{2m}(\delta)BZ+\mathrm{Tr}_{m}^{2m}(\delta)^{2}BC=0\), or equivalently
\[Z^{3}+\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-1}})^{2}(B^{\prime})^{2}Z+\mathrm{Tr}_ {m}^{2m}(\delta)^{2}BC=0,\]
where \((B^{\prime})^{2}=B\). Now replacing \(Z\) by \(\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-1}})B^{\prime}Z\), we have
\[Z^{3}+Z+\dfrac{\mathrm{Tr}_{m}^{2m}(\delta)^{2}BC}{\mathrm{Tr}_{m}^{2m}( \delta^{2^{m-1}})^{3}(B^{\prime})^{3}}=0,\]
which is the same as
\[Z^{3}+Z+\dfrac{\mathrm{Tr}_{m}^{2m}(\delta)^{2^{m-1}}C}{B^{\prime}}=0. \tag{3.5}\]
Notice that for \(a\in\mathbb{F}_{2^{m}}\), the above equation reduces to \(Z^{3}+Z+\frac{\mathrm{Tr}_{m}^{2m}(\delta)^{2^{m-1}}}{\mathrm{Tr}_{m}^{2m}(\delta)}=0\). If the reduced equation has three solutions in \(\mathbb{F}_{2^{m}}\), then \(Z^{3}+Z+\frac{1}{\mathrm{Tr}_{m}^{2m}(\delta)}=0\) also has three distinct solutions, which is not true as \(p_{m}((\delta+\bar{\delta})^{-1})\neq 0\). Also, \(\mathrm{Tr}_{1}^{m}(\mathrm{Tr}_{m}^{2m}(\delta)+1)\neq 1\); thus for \(a\in\mathbb{F}_{2^{m}}\), we have no solution of Equation (3.5) in \(\mathbb{F}_{2^{m}}\). If \(a\not\in\mathbb{F}_{2^{m}}\), then we may have at most three solutions to Equation (3.5). Thus we have \(S_{0}\leq 2^{n+2}\). We now consider \(S_{1}\) as follows.
\[S_{1} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}((1+c)\beta)\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{2^{2m-2}+2^{m-2}+1}))}\] \[\qquad\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u_{1}X^{2^{m-1}+1}+u_{2}X^{2^{1}+1}+u_{3}X^{2^{m-2}+1}+u_{4}X^{2^{2}+1}+wX)}\] \[=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}((1+c)\beta)\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{2^{2m-2}+2^{m-2}+1}))}\mathcal{W}_{G}(w),\]
where \(\mathcal{W}_{G}(w)\) is the Walsh transform of the trace of the function \(G(X)=u_{1}X^{2^{m-1}+1}+u_{2}X^{2^{1}+1}+u_{3}X^{2^{m-2}+1}+u_{4}X^{2^{2}+1}\) at \(w\). Now from Lemma 2.11, the square of the Walsh transform coefficient of \(G\) is given by
\[\mathcal{W}_{G}(w)^{2}=\begin{cases}2^{n+\ell}&\text{if $G(X)+\mathrm{Tr}(wX) \equiv 0$ on $\mathrm{Ker}\ (L)$}\\ 0&\text{otherwise,}\end{cases}\]
where \(\ell\) is the dimension of the kernel of the linearized polynomial
\[L(X)=u_{1}(X^{2^{m}}+X)^{2^{m-1}}+u_{2}(X^{2^{m}}+X)^{2}+u_{3}(X^{2^{m}}+X)^{2 ^{m-2}}+u_{4}(X^{2^{m}}+X)^{2^{2}}.\]
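Here and in what follows, the Walsh transform of (the trace of) a function \(G\) is taken in its standard unnormalized form, which is the normalization the rewriting of \(S_{1}\) above relies on:

\[\mathcal{W}_{G}(w)=\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\operatorname{Tr}(G(X)+wX)}.\]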
It is easy to see that \(\mathbb{F}_{2^{m}}\subseteq\mathrm{Ker}(L)\). Thus, if we can show that \(G(X)+\mathrm{Tr}(wX)\) is not identically zero on \(\mathbb{F}_{2^{m}}\), then \(S_{1}=0\). We shall now make efforts to prove that
\[G(X)+\mathrm{Tr}(wX)=u_{1}X^{2^{m-1}+1}+u_{2}X^{2^{1}+1}+u_{3}X^{2^{m-2}+1}+u_ {4}X^{2^{2}+1}+\mathrm{Tr}(wX)\]
is not identically zero on \(\mathbb{F}_{2^{m}}\). Observe that for \(X\in\mathbb{F}_{2^{m}}\), \(G(X)+\mathrm{Tr}(wX)\) reduces to the following polynomial over \(\mathbb{F}_{2^{m}}\),
\[u_{1}X^{2^{m-1}+1}+u_{2}X^{2^{1}+1}+u_{3}X^{2^{m-2}+1}+u_{4}X^{2^{2}+1}+ \mathrm{Tr}_{m}^{2m}(\beta(1+c))X+\cdots+\left(\mathrm{Tr}_{m}^{2m}(\beta(1+c ))X\right)^{2^{m-1}}.\]
If \(m=1\), then \(G(X)+\mathrm{Tr}(wX)=0\) implies that \(\mathrm{Tr}_{m}^{2m}(\beta(1+c))=0\), which is not possible. For \(m\geq 2\), the degree of \(G(X)+\mathrm{Tr}(wX)\) is strictly less than \(2^{m}-1\). Hence the claim.
Next, we discuss the \(c\)-differential uniformity of \(F(X)=(X^{2^{m}}+X+\delta)^{3\cdot 2^{2m-2}+2^{m-2}}+X\) for some fixed values of \(c\) and \(\delta\).
**Theorem 3.2**.: _Let \(F(X)=(X^{2^{m}}+X+\delta)^{3\cdot 2^{2m-2}+2^{m-2}}+X\) over \(\mathbb{F}_{2^{n}}\), where \(n=2m\) and \(m\not\equiv 0\pmod{3}\). Then \(F\) is P\(c\)N for all \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\) and \(\delta\in\mathbb{F}_{2^{n}}\)._
Proof.: We know \(F(X)\) is a permutation polynomial from Lemma 2.4. One can easily simplify \(F(X)\) to get the expression below
\[F(X) =\mathrm{Tr}_{m}^{2m}(\delta)^{2^{m-2}}\mathrm{Tr}_{m}^{2m}(X^{2^{m- 1}+2^{m-2}}+X^{2^{m-1}+2^{2m-2}})+\delta^{2^{2m-1}}\mathrm{Tr}_{m}^{2m}(\delta) ^{2^{m-2}}\mathrm{Tr}_{m}^{2m}(X^{2^{m-2}})\] \[\qquad+(\delta^{2^{m-2}+2^{2m-2}}+\delta^{2^{2m-1}})\mathrm{Tr}_{ m}^{2m}(X)^{2^{m-1}}+X^{2^{m}}+\delta^{3\cdot 2^{2m-2}+2^{m-2}}.\]
Notice that for \(\delta\in\mathbb{F}_{2^{m}}\), \(F(X)=X^{2^{m}}+\delta^{2^{m}}\) would have \(c\)-differential uniformity \(1\). Let us assume \(\mathrm{Tr}_{m}^{2m}(\delta)\neq 0\). Recall that, for any \((a,b)\in\mathbb{F}_{2^{n}}\times\mathbb{F}_{2^{n}}\), the \(c\)-DDT entry \({}_{c}\Delta_{F}(a,b)\) is given by the number of solutions \(X\in\mathbb{F}_{2^{n}}\) of the equation \(F(X+a)+cF(X)=b\), or, equivalently,
\[(1+c)F(X) +\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}((a^{ 2^{m-2}}+a^{2^{2m-2}})X^{2^{m-1}}+a^{2^{m-1}}(X^{2^{m-2}}+X^{2^{2m-2}}))\] \[\qquad+F(a)+\delta^{3\cdot 2^{2m-2}+2^{m-2}}=b.\]
Now, by using Equation (2.1), the number of solutions \(X\in\mathbb{F}_{2^{n}}\) of the above equation, \({}_{c}\Delta_{F}(a,b)\), is given by
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\beta\in\mathbb{F}_{2^{n}}}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta((1+c)F(X)+F(a)+\delta^{3\cdot 2^{2m-2}+2^{m-2}}+b))}\] \[\qquad(-1)^{\mathrm{Tr}(\beta\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X^{2^{m-1}}+a^{2^{m-1}}(X^{2^{m-2}}+X^{2^{2m-2}})))}\] \[=\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta(F(a)+\delta^{3\cdot 2^{2m-2}+2^{m-2}}+b))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta((1+c)F(X)))}\] \[(-1)^{\mathrm{Tr}(\beta\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X^{2^{m-1}}+a^{2^{m-1}}(X^{2^{m-2}}+X^{2^{2m-2}})))}.\]
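The counting here rests on the standard orthogonality relation for additive characters, which is what the appeal to Equation (2.1) encodes: for \(y\in\mathbb{F}_{2^{n}}\),

\[\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta y)}=\begin{cases}2^{n}&\text{if }y=0,\\ 0&\text{otherwise,}\end{cases}\]

so that the double sum over \(\beta\) and \(X\) counts exactly \(2^{n}\) times the number of solutions \(X\) of \(F(X+a)+cF(X)=b\).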
With,
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X)),\] \[T_{1} =\mathrm{Tr}(\beta\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{ Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X^{2^{m-1}}+a^{2^{m-1}}(X^{2^{m-2}}+X^{2^{2 m-2}}))),\]
the above equation becomes
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta\left(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}\right)\right)}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{T_{0}+T_{1}}. \tag{3.6}\]
Further, we simplify \(T_{0}\) and \(T_{1}\) as follows:
\[T_{1} =\mathrm{Tr}(\beta\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-2}})\mathrm{ Tr}_{m}^{2m}((a^{2^{m-2}}+a^{2^{2m-2}})X^{2^{m-1}}+a^{2^{m-1}}(X^{2^{m-2}}+X^{2^{2 m-2}})))\] \[=\mathrm{Tr}(\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-1}})(a^{2^{m-1}}+a ^{2^{2m-1}})\mathrm{Tr}_{m}^{2m}(\beta^{2})X+\mathrm{Tr}_{m}^{2m}(\delta) \mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{4})X),\]
and,
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X))\] \[=\mathrm{Tr}\left(\beta(1+c)(\mathrm{Tr}_{m}^{2m}(\delta)^{2^{m-2 }}\mathrm{Tr}_{m}^{2m}(X^{2^{m-1}+2^{m-2}}+X^{2^{m-1}+2^{2m-2}}))\right)\] \[\qquad+\mathrm{Tr}(\beta(1+c)\delta^{2^{2m-1}}\mathrm{Tr}_{m}^{2m }(\delta)^{2^{m-2}}\mathrm{Tr}_{m}^{2m}(X^{2^{m-2}}))\] \[\qquad+\mathrm{Tr}\left(\beta(1+c)((\delta^{2^{m-2}+2^{2m-2}}+ \delta^{2^{2m-1}})\mathrm{Tr}_{m}^{2m}(X)^{2^{m-1}}+X^{2^{m}}+\delta^{3\cdot 2 ^{2m-2}+2^{m-2}})\right)\]
\[=\mathrm{Tr}(\beta(1+c)\delta^{3\cdot 2^{2m-2}+2^{m-2}})+\mathrm{Tr}(( \beta(1+c))^{2^{m}}X+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\delta^{2} (\beta(1+c))^{4})X)\] \[\qquad\qquad+\mathrm{Tr}(\mathrm{Tr}_{m}^{2m}((\beta(1+c))^{2}( \delta+\delta^{2^{m-1}+2^{2m-1}}))X\] \[\qquad\qquad+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(( \beta(1+c))^{4})(X^{3}+X^{2^{m+1}+1}).\]
Now Equation (3.6) reduces to
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta\left(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}\right)\right)}\]
\[\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u(X^{2^{1}+1}+X^{2^{m+1}+1})+wX)},\]
where,
\[u =\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}((\beta(1+c))^{ 4})=(1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\beta)^{4}\] \[w =\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-1}})\mathrm{Tr}_{m}^{2m}(a^{2^ {m-1}})\mathrm{Tr}_{m}^{2m}(\beta^{2})+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr }_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{4})\] \[\quad+(\beta(1+c))^{2^{m}}+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{ Tr}_{m}^{2m}(\delta^{2}(\beta(1+c))^{4})+\mathrm{Tr}_{m}^{2m}((\beta(1+c))^{2}( \delta+\delta^{2^{m-1}+2^{2m-1}}))\] \[=\mathrm{Tr}_{m}^{2m}(\delta^{2^{m-1}})\mathrm{Tr}_{m}^{2m}(a^{2^ {m-1}})\mathrm{Tr}_{m}^{2m}(\beta^{2})+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr }_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{4})\] \[\quad+(1+c)\beta^{2^{m}}+(1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta) \mathrm{Tr}_{m}^{2m}(\delta^{2}\beta^{4})+(1+c)^{2}\mathrm{Tr}_{m}^{2m}(\beta ^{2}(\delta+\delta^{2^{m-1}+2^{2m-1}})).\]
Further, splitting the above sum depending on whether \(\mathrm{Tr}_{m}^{2m}(\beta)\) is \(0\) or not, we get
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)=0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}((1+c)\beta^{2^{m}}X)}\] \[(-1)^{\mathrm{Tr}(((1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\delta^{2}\beta^{4})+(1+c)^{2}\mathrm{Tr}_{m}^{2m}(\beta^{2}(\delta+\delta^{2^{m-1}+2^{2m-1}})))X)}\] \[\quad+\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}))}\] \[\qquad\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u(X^{2^{1}+1}+X^{2^{m+1}+1})+wX)}\] \[=S_{0}+S_{1},\]
where \(S_{0},S_{1}\) are the two inner sums. We first consider the sum \(S_{0}\) below. We write
\[S_{0}=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)=0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left((1+c)\beta^{2^{m}}X\right)}\] \[\qquad\qquad(-1)^{\mathrm{Tr}\left(((1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\delta^{2}\beta^{4})+(1+c)^{2}\mathrm{Tr}_{m}^{2m}(\beta^{2}(\delta+\delta^{2^{m-1}+2^{2m-1}})))X\right)}\]
\[=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)=0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta(1+c)X\right)}\] \[\qquad(-1)^{\mathrm{Tr}\left(((1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta)^{3}\beta^{4}+(1+c)^{2}\mathrm{Tr}_{m}^{2m}(\delta)\beta^{2})X\right)}=2^{n}.\]
The last identity follows by analyzing the number of solutions \(\beta\) of the following equation:
\[\beta(1+c)+(1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta)^{3}\beta^{4}+(1+c)^{2}\mathrm{ Tr}_{m}^{2m}(\delta)\beta^{2}=0, \tag{3.7}\]
or equivalently, nonzero solutions \(\beta\in\mathbb{F}_{2^{m}}\) of \(Z^{3}+Z+1=0\), where \(Z=(1+c)\mathrm{Tr}_{m}^{2m}(\delta)\beta\). From Lemma 2.2, it is clear that the polynomial \(p_{m}(X)\) has an odd number of terms if \(m\not\equiv 0\pmod{3}\), and each of its terms is a monomial in \(X\). Hence, it cannot have three distinct solutions in \(\mathbb{F}_{2^{m}}\). Also, since \(\mathrm{Tr}_{1}^{m}(1+1)=0\), it cannot have a unique solution. Thus, Equation (3.7) has only one solution \(\beta=0\) in \(\mathbb{F}_{2^{m}}\), and that gives us \(S_{0}=2^{n}\).
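Explicitly, dividing Equation (3.7) by \((1+c)\beta\) for \(\beta\neq 0\) gives

\[(1+c)^{3}\mathrm{Tr}_{m}^{2m}(\delta)^{3}\beta^{3}+(1+c)\mathrm{Tr}_{m}^{2m}(\delta)\beta+1=0,\]

which is precisely \(Z^{3}+Z+1=0\) under the stated substitution \(Z=(1+c)\mathrm{Tr}_{m}^{2m}(\delta)\beta\).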
\[S_{1} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}))}\] \[\qquad\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(u(X^{2^{1}+1}+X^{2^{m+1}+1})+wX\right)}\] \[=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c\delta^{3\cdot 2^{2m-2}+2^{m-2}}))}\mathcal{W}_{f_{u}}(w),\]
where \(\mathcal{W}_{f_{u}}(w)\) is the Walsh transform of the function \(f_{u}(X)=\mathrm{Tr}(u(X^{2^{1}+1}+X^{2^{m+1}+1}))\) at \(w\). We split our analysis into cases depending on the value of \(m\). Also, we have \(k=2m\), \(a=1\) and \(b=m+1\).
**Subcase 1(a).** If \(m\) is odd, then \(\gcd(b-a,k)=m\) and \(\gcd(b+a,k)=1\). Also, notice that \(v_{2}(a)=v_{2}(1)=0\), \(v_{2}(b)=v_{2}(m+1)\) and \(v_{2}(k)=v_{2}(2m)=1\). Summarizing, we observe that \(v_{2}(a)=v_{2}(b)\leq v_{2}(k)\) does not hold. From Lemma 2.9, one can see that \(\mathcal{W}_{f_{u}}(w)=\mathcal{W}_{f}(\frac{w}{\eta})\), where \(f(X)=\mathrm{Tr}(X^{2^{1}+1}+X^{2^{m+1}+1})\) and \(\eta\in\mathbb{F}_{2^{m}}\) such that \(u=\eta^{2^{1}+1}\). We now show that \(\mathcal{W}_{f}(\frac{w}{\eta})=0\) via Lemma 2.8. One can observe that \(0=v_{2}(b-a)=v_{2}(b+a)=v_{2}(k)-1\). Hence it is sufficient to show that \((L_{1}\circ L_{2})(\frac{w}{\eta}+1)\neq 0\). Now,
\[(L_{1}\circ L_{2})(X)=X+X^{2}+X^{2^{2}}+\cdots+X^{2^{m-1}}.\]
Hence,
\[(L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)=m+\frac{\beta^{2^ {m}}(1+c)}{\eta}+\left(\frac{\beta^{2^{m}}(1+c)}{\eta}\right)^{2}+\cdots+ \left(\frac{\beta^{2^{m}}(1+c)}{\eta}\right)^{2^{m-1}}\\ +\mathrm{Tr}_{1}^{m}\left(\frac{\mathrm{Tr}_{m}^{2m}(\delta^{2^{m- 1}})\mathrm{Tr}_{m}^{2m}(a^{2^{m-1}})\mathrm{Tr}_{m}^{2m}(\beta^{2})+\mathrm{ Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(a^{2})\mathrm{Tr}_{m}^{2m}(\beta^{4})}{ \eta}\right)\]
\[+\mathrm{Tr}_{1}^{m}\left(\frac{(1+c)^{4}\mathrm{Tr}_{m}^{2m}(\delta) \mathrm{Tr}_{m}^{2m}(\delta^{2^{2m+1}}\beta^{4})+(1+c)^{2}\mathrm{Tr}_{m}^{2m}( \beta^{2}(\delta+\delta^{2^{m-1}+2^{2m-1}}))}{\eta}\right).\]
Now, if \((L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)=0\), then we have \(\left((L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)\right)^{2}=0\). This will give us
\[\begin{split}(L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)+ \left((L_{1}\circ L_{2})\left(\frac{w}{\eta}+1\right)\right)^{2}& =\frac{1}{\eta}\mathrm{Tr}_{m}^{2m}(\beta^{2^{m}}(1+c))+m+m^{2} \\ &=\frac{(1+c)}{\eta}\mathrm{Tr}_{m}^{2m}(\beta)+m+m^{2}=0.\end{split} \tag{3.8}\]
As \(u=\eta^{3}\), we have \(\eta\neq 0\), and Equation (3.8) then gives us \(\mathrm{Tr}_{m}^{2m}(\beta)=0\), which is obviously not true, and hence we are done.
**Subcase 1(b).** Let \(m\equiv 0\pmod{4}\). If we have \(v_{2}(m)=v\,(\geq 2)\), then \(v_{2}(a)=v_{2}(b)=0\) and \(v_{2}(k)=v_{2}(2m)=v+1\). Summarizing, we have \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) and \(v_{2}(k)>\nu\). Thus, by using Lemma 2.9, if we show that \((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)\neq 0\), then we have \(\mathcal{W}_{f_{u}}(w)=0\), which gives us \(S_{1}=0\), and then we are done with the claim. So, our goal is now to show that \((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)\neq 0\). As \(d_{1}=m\) and \(d_{2}=2\), we have
\[(L_{1}\circ L_{2})(X)=X+X^{2^{2}}+X^{2^{(2\cdot 2)}}+X^{2^{(3\cdot 2)}}+\cdots+X^ {2^{m-2}}.\]
Now, if \((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)=0\), then we have \(\left((L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)\right)^{2^{2}}=0\). This will give us
\[(L_{1}\circ L_{2})\left(\frac{w}{u^{2}}\right)+\left((L_{1}\circ L_{2})\left( \frac{w}{u^{2}}\right)\right)^{2^{2}}=\mathrm{Tr}_{m}^{2m}\left(\frac{w}{u^{2 }}\right)=\frac{1}{u^{2}}\mathrm{Tr}_{m}^{2m}(w)=0,\]
which is a contradiction to the assumption that \(\mathrm{Tr}_{m}^{2m}(\beta)\neq 0\).
**Subcase 1(c).** Let \(m\equiv 2\pmod{4}\); then \(d_{1}=m\) and \(d_{2}=4\). In this subcase, we have \(v_{2}(a)=v_{2}(b)=0\) and \(v_{2}(k)=v_{2}(2m)=2\). Summarizing, we observe that \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) holds and \(v_{2}(k)\leq\nu\). Then again from Lemma 2.9, if we show that \(\frac{w}{u^{2}}\not\in S_{d_{1}}\cap S_{d_{2}}\), then we are done. In this subcase, \(S_{d_{1}}=\{X\in\mathbb{F}_{2^{n}}:\mathrm{Tr}_{m}^{2m}(X)=0\}\) and \(S_{d_{2}}=\{X\in\mathbb{F}_{2^{n}}:\mathrm{Tr}_{4}^{n}(X)=0\}\). Suppose \(\frac{w}{u^{2}}\in S_{d_{1}}\cap S_{d_{2}}\). Then we have \(\mathrm{Tr}_{m}^{2m}\left(\frac{w}{u^{2}}\right)=0\), which implies that \(\mathrm{Tr}_{m}^{2m}(\beta)=0\), which is not possible. Hence \(\mathcal{W}_{f_{u}}(w)=0\), giving us \(S_{1}=0\). This shows the claim that \(F\) is P\(c\)N for \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\), and the proof is done.
**Theorem 3.3**.: _Let \(F(X)=(X^{2^{m}}+X+\delta)^{3\cdot 2^{m-2}+2^{2m-2}}+X\) over \(\mathbb{F}_{2^{n}}\), where \(n=2m\) and \(m\not\equiv 0\pmod{3}\). Then \(F\) is P\(c\)N for all \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\) and \(\delta\in\mathbb{F}_{2^{n}}\)._
Proof.: Notice that \(F(X)\) is a permutation polynomial over \(\mathbb{F}_{2^{n}}\) from Lemma 2.5. The proof follows along similar lines to that of Theorem 3.2.
Next, we compute the \(c\)-differential uniformity of the permutation polynomial \(F(X)=(X^{2^{m}}+X+\delta)^{2^{2m+1}+2^{m}}+(X^{2^{m}}+X+\delta)^{2^{2m}+2^{m+1 }}+X\) and show that it is P\(c\)N for some values of \(c\) and \(\delta\).
**Theorem 3.4**.: _Let \(F(X)=(X^{2^{m}}+X+\delta)^{2^{2m+1}+2^{m}}+(X^{2^{m}}+X+\delta)^{2^{2m}+2^{m+1 }}+X\) over \(\mathbb{F}_{2^{n}}\), where \(n=3m\). Let \(\delta\in\mathbb{F}_{2^{n}}\) and \(\mathrm{Tr}_{m}^{3m}(\delta)=0\). Then \(F\) is P\(c\)N for \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\)._
Proof.: First, we know that \(F\) is a permutation polynomial via Lemma 2.6. Next, by expanding the trinomial, one can easily simplify \(F(X)\) and get the expression below
\[F(X) =\mathrm{Tr}_{m}^{3m}(X^{2^{m+1}+1}+X^{2^{2m+1}+1})+(\delta^{2^{m+1} }+\delta^{2^{2m+1}})X^{2^{2m}}+\delta^{2^{2m+1}}X^{2^{m}}+\delta^{2^{2m}}X^{2^ {m+1}}\] \[+(\delta^{2^{m}}+\delta^{2^{2m}})X^{2^{2m+1}}+\delta^{2^{m}}X^{2}+ (\delta^{2^{m+1}}+1)X+\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}.\]
Since \(\mathrm{Tr}_{m}^{3m}(\delta)=0\), we can write the above equation as
\[F(X) =\mathrm{Tr}_{m}^{3m}(X^{2^{m+1}+1}+X^{2^{2m+1}+1})+\delta^{2}X^{ 2^{2m}}+\delta^{2^{2m+1}}X^{2^{m}}+\delta^{2^{2m}}X^{2^{m+1}}\] \[+\delta X^{2^{2m+1}}+\delta^{2^{m}}X^{2}+(\delta^{2^{m+1}}+1)X+ \delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}.\]
Recall that, for any \((a,b)\in\mathbb{F}_{2^{n}}\times\mathbb{F}_{2^{n}}\), the \(c\)-DDT entry \({}_{c}\Delta_{F}(a,b)\) is given by the number of solutions \(X\in\mathbb{F}_{2^{n}}\) of the equation \(F(X+a)+cF(X)=b\), or, equivalently,
\[(1+c)F(X)+F(a)+\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}\] \[\qquad\qquad\qquad\qquad+\mathrm{Tr}_{m}^{3m}((a^{2^{m+1}}+a^{2^{2m+1}})X+a(X^{2^{m+1}}+X^{2^{2m+1}}))=b.\]
From Equation (2.1), the number of solutions \(X\in\mathbb{F}_{2^{n}}\) of the above equation is given by
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\beta\in\mathbb{F}_{2^{n}}}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta((1+c)F(X)+F(a)+\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}))}\] \[(-1)^{\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{3m}((a^{2^{m+1}}+a^{2^{2m+1}})X+a(X^{2^{m+1}}+X^{2^{2m+1}}))+b))},\]
or, equivalently,
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta\left(F(a)+b+\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}\right)\right)}\] \[\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta(1+c)F(X)+\beta\mathrm{Tr}_{m}^{3m}((a^{2^{m+1}}+a^{2^{2m+1}})X+a(X^{2^{m+1}}+X^{2^{2m+1}}))\right)}.\]
Using the notation
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X))\] \[T_{1} =\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{3m}((a^{2^{m+1}}+a^{2^{2m+1} })X+a(X^{2^{m+1}}+X^{2^{2m+1}})))),\]
the above equation becomes
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\beta\left(F(a)+b+\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}\right)\right)}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{T_{0}+T_{1}}. \tag{3.9}\]
Let \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\). To compute \(T_{0}\) and \(T_{1}\), we simplify them further,
\[T_{1} =\mathrm{Tr}(\beta(\mathrm{Tr}_{m}^{3m}((a^{2^{m+1}}+a^{2^{2m+1}})X+a(X^{2^{m+1}}+X^{2^{2m+1}}))))\] \[=\mathrm{Tr}\left((a^{2^{m+1}}+a^{2^{2m+1}})\mathrm{Tr}_{m}^{3m}(\beta)X+(a^{2^{m-1}}+a^{2^{2m-1}})\mathrm{Tr}_{m}^{3m}(\beta^{2^{m-1}})X\right),\]
and,
\[T_{0} =\mathrm{Tr}(\beta(1+c)F(X))\] \[=\mathrm{Tr}\left(\beta(1+c)(\mathrm{Tr}_{m}^{3m}(X^{2^{m+1}+1}+X^{ 2^{2m+1}+1})+\delta^{2}X^{2^{2m}}+\delta^{2^{2m+1}}X^{2^{m}}+\delta^{2^{2m}}X^{ 2^{m+1}})\right)\] \[\qquad+\mathrm{Tr}\left(\beta(1+c)(\delta X^{2^{2m+1}}+\delta^{2 ^{m}}X^{2}+(\delta^{2^{m+1}}+1)X+\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m }})\right)\] \[=\mathrm{Tr}\left(\mathrm{Tr}_{m}^{3m}(\beta(1+c))(X^{2^{m+1}+1} +X^{2^{2m+1}+1})+\delta^{2^{m+1}}((\beta(1+c))^{2^{m}}+(\beta(1+c))^{2^{2m}})X\right)\] \[\qquad+\mathrm{Tr}\left((\beta(1+c))^{2^{2m-1}}\delta^{2^{m-1}}X+ (\beta(1+c)\delta)^{2^{m-1}}X+(\beta(1+c)\delta^{2^{m}})^{2^{3m-1}}X\right)\] \[\qquad+\mathrm{Tr}\left(\beta(1+c)(\delta^{2^{m+1}}+1)X+\beta(1+ c)(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}})\right)\] \[=\mathrm{Tr}\left(\mathrm{Tr}_{m}^{3m}(\beta(1+c))(X^{2^{m+1}+1} +X^{2^{2m+1}+1})+\beta(1+c)X+\mathrm{Tr}_{m}^{3m}(\beta(1+c))\delta^{2^{m+1}}X\right)\] \[\qquad+\mathrm{Tr}\left(\mathrm{Tr}_{m}^{3m}(\beta(1+c))^{2^{m-1} }\delta^{2^{m-1}}X+\beta(1+c)(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}}) \right).\]
Now Equation (3.9) reduces to
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{2^{n}}\sum_{\beta\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}})))}\]
\[\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u(X^{2^{m+1}+1}+X^{2^{2m+1}+1})+wX)},\]
where,
\[u =\mathrm{Tr}_{m}^{3m}((1+c)\beta)\] \[w =(a^{2^{m+1}}+a^{2^{2m+1}})\mathrm{Tr}_{m}^{3m}(\beta)+(a^{2^{m-1 }}+a^{2^{2m-1}})\mathrm{Tr}_{m}^{3m}(\beta^{2^{m-1}})+\beta(1+c)\] \[\qquad+\mathrm{Tr}_{m}^{3m}(\beta(1+c))\delta^{2^{m+1}}+\mathrm{ Tr}_{m}^{3m}(\beta(1+c))^{2^{m-1}}\delta^{2^{m-1}}.\]
As \(c\in\mathbb{F}_{2^{m}}\setminus\{1\}\), we can split the above sum depending on whether \(\mathrm{Tr}_{m}^{3m}((1+c)\beta)\) is \(0\) or not, or equivalently, \(\mathrm{Tr}_{m}^{3m}(\beta)\) is \(0\) or not. We write
\[2^{n}\ {}_{c}\Delta_{F}(a,b)=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{3m}(\beta)=0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}})))}\] \[\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}\left(\left(\beta(1+c)+(a^{2^{m+1}}+a^{2^{2m+1}})\mathrm{Tr}_{m}^{3m}(\beta)\right)X\right)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad(-1)^{\mathrm{Tr}\left((a^{2^{m-1}}+a^{2^{2m-1}})\mathrm{Tr}_{m}^{3m}(\beta^{2^{m-1}})X\right)}\] \[+\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \mathrm{Tr}_{m}^{3m}(\beta)\neq 0\end{subarray}}(-1)^{\mathrm{Tr}(\beta(F(a)+b+c(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}})))}\]
\[\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\mathrm{Tr}(u(X^{2^{m+1}+1}+X^{2^{2m+1}+1})+wX)}\]
\[=S_{0}+S_{1},\]
where \(S_{0},S_{1}\) are the two inner sums. We first consider the sum \(S_{0}\):
\[S_{0}=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \text{Tr}_{m}^{3m}(\beta)=0\end{subarray}}(-1)^{\text{Tr}(\beta(F(a)+b+c(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}})))}\sum_{X\in\mathbb{F}_{2^{n}}}(-1)^{\text{Tr}\left(\beta(1+c)X\right)}.\]
Clearly, the inner sum in \(S_{0}\) is nonzero only for \(\beta=0\), and hence we have \(S_{0}=2^{n}\). Next, we consider \(S_{1}\),
\[S_{1}=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{2^{n}}\\ \text{Tr}_{m}^{3m}(\beta)\neq 0\end{subarray}}(-1)^{\text{Tr}(\beta(F(a)+b+c(\delta^{2^{m+1}+2^{2m}}+\delta^{2^{2m+1}+2^{m}})))}\mathcal{W}_{f_{u}}(w),\]
where \(\mathcal{W}_{f_{u}}(w)\) is the Walsh transform of the function \(f_{u}(X)=\text{Tr}(u(X^{2^{m+1}+1}+X^{2^{2m+1}+1}))\) at \(w\). We split our analysis into three cases depending on the value of \(m\). Also, we have \(k=3m\), \(a=m+1\) and \(b=2m+1\).
**Subcase 1(a).** If \(m\) is odd, then \(\gcd(b-a,k)=m\) and \(\gcd(b+a,k)=1\). Also, notice that \(v_{2}(a)=v_{2}(m+1)\), \(v_{2}(b)=v_{2}(2m+1)=0\) and \(v_{2}(k)=v_{2}(3m)=0\). Summarizing, we observe that \(v_{2}(a)=v_{2}(b)\leq v_{2}(k)\) does not hold. From Lemma 2.9, one can see that \(\mathcal{W}_{f_{u}}(w)=\mathcal{W}_{f}(\frac{w}{\eta})\), where \(f(X)=\text{Tr}(X^{2^{m+1}+1}+X^{2^{2m+1}+1})\) and \(\eta\in\mathbb{F}_{2^{m}}\) such that \(u=\eta^{2^{m-1}+1}\). Also, observe that \(0=v_{2}(b-a)=v_{2}(b+a)\neq v_{2}(k)-1\) and \(v_{2}(k)=\nu\). Hence it is sufficient to show that \(\frac{w}{\eta}\not\in S_{d_{1}}\cap S_{d_{2}}\), where \(S_{d_{1}}=\{X\in\mathbb{F}_{2^{n}}:\text{Tr}_{m}^{3m}(X)=0\}\) and \(S_{d_{2}}=\{X\in\mathbb{F}_{2^{n}}:\text{Tr}_{1}^{n}(X)=0\}\). It is easy to observe that \(\text{Tr}_{m}^{3m}\left(\frac{w}{\eta}\right)=\frac{1}{\eta}\text{Tr}_{m}^{3m}(\beta(1+c))\neq 0\), as \(\text{Tr}_{m}^{3m}(\beta(1+c))\neq 0\). Hence \(\mathcal{W}_{f_{u}}(w)=\mathcal{W}_{f}(\frac{w}{\eta})=0\).
**Subcase 1(b).** Let \(m\equiv 0\pmod{4}\). If we have \(v_{2}(m)=v\,(\geq 2)\), then \(v_{2}(a)=v_{2}(b)=0\) and \(v_{2}(k)=v_{2}(3m)=v_{2}(m)\). Summarizing, we have \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) and \(v_{2}(k)=\nu\). Thus, by using Lemma 2.9, it suffices to show that \(\frac{w}{u^{2^{-(2m+1)}}}=\frac{w}{u^{2^{m-1}}}\not\in S_{d_{1}}\cap S_{d_{2}}\), where \(S_{d_{1}}=\{X\in\mathbb{F}_{2^{n}}:\text{Tr}_{m}^{3m}(X)=0\}\) and \(S_{d_{2}}=\{X\in\mathbb{F}_{2^{n}}:\text{Tr}_{2}^{n}(X)=0\}\). As \(u\in\mathbb{F}_{2^{m}}\), \(\text{Tr}_{m}^{3m}\left(\frac{w}{u^{2^{m-1}}}\right)=0\) would imply \(\text{Tr}_{m}^{3m}(\beta(1+c))=0\), which gives us a contradiction to the assumption that \(\text{Tr}_{m}^{3m}(\beta(1+c))\neq 0\). Thus, \(S_{1}=0\) in this case too.
**Subcase 1(c).** Let \(m\equiv 2\pmod{4}\). Then we have \(v_{2}(m)=1\), \(v_{2}(a)=v_{2}(b)=0\) and \(v_{2}(k)=v_{2}(3m)=v_{2}(m)=1\). Summarizing, we have \(v_{2}(a)=v_{2}(b)<v_{2}(k)\) and \(v_{2}(k)<\nu\). Then, by using the same argument as in Subcase 1(b), we have \(S_{1}=0\). This completes the proof.
## 4. Permutation polynomial over \(\mathbb{F}_{3^{n}}\) with low \(c\)-differential uniformity
In this section, we consider the \(c\)-differential uniformity of the polynomial \(F(X)=(X^{3^{m}}-X+\delta)^{3^{m}+4}+(X^{3^{m}}-X+\delta)^{5}+X\), which is a permutation of \(\mathbb{F}_{3^{n}}\), as stated in Lemma 2.7, where \(\delta\in\mathbb{F}_{3^{n}}\) and \(n=2m\).
**Theorem 4.1**.: _Let \(F(X)=(X^{3^{m}}-X+\delta)^{3^{m}+4}+(X^{3^{m}}-X+\delta)^{5}+X\) over \(\mathbb{F}_{3^{n}}\), where \(n=2m\) and \(\delta\in\mathbb{F}_{3^{n}}\) such that \((1-[\mathrm{Tr}_{m}^{2m}(\delta)]^{4})\) is a square element in \(\mathbb{F}_{3^{m}}^{*}\). Then_:__
1. \(F\) _is P\(c\)N for all_ \(c\in\mathbb{F}_{3^{m}}\setminus\{1\}\)_._
2. _Moreover,_ \(F\) _is of_ \(c\)_-differential uniformity_ \(3\) _for all_ \(c\in\mathbb{F}_{3^{n}}\setminus\mathbb{F}_{3^{m}}\)_._
Proof.: Clearly, after simplifying, we have
\[F(X)= \mathrm{Tr}_{m}^{2m}(\delta)(X^{4\cdot 3^{m}}+X^{4}-X^{3^{m+1}+1}-X^ {3^{m}+3}+\delta X^{3^{m+1}}+\delta^{3}X^{3^{m}}+\delta^{4})\] \[\qquad\qquad+\mathrm{Tr}_{m}^{2m}(\delta)(-\delta X^{3}-\delta^{ 3}X)+X.\]
We know that for any \((a,b)\in\mathbb{F}_{3^{n}}\times\mathbb{F}_{3^{n}}\), the \(c\)-DDT entry \({}_{c}\Delta_{F}(a,b)\) is given by the number of solutions \(X\in\mathbb{F}_{3^{n}}\) of the following equation
\[F(X+a)-cF(X)=b, \tag{4.1}\]
or, equivalently,
\[(1-c)F(X)+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}((a^{3}-a^{3^{m+1}}) X+a(X^{3}-X^{3^{m+1}}))+F(a)-\delta^{3^{m}+4}-\delta^{5}=b.\]
Now, by using Equation (2.1), the number of solutions \(X\in\mathbb{F}_{3^{n}}\) of Equation (4.1), \({}_{c}\Delta_{F}(a,b)\), is given by
\[\frac{1}{3^{n}}\sum_{\beta\in\mathbb{F}_{3^{n}}}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(\beta((1-c)F(X)+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}((a^{3}-a^{3^{m+1}})X+a(X^{3}-X^{3^{m+1}})))\right)}\] \[\omega^{\mathrm{Tr}(\beta(F(a)-\delta^{3^{m}+4}-\delta^{5}-b))},\]
where \(\omega=e^{2\pi i/3}\). Equivalently,
\[{}_{c}\Delta_{F}(a,b) =\frac{1}{3^{n}}\sum_{\beta\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(\beta\left(F(a)-b-\delta^{3^{m}+4}-\delta^{5}\right)\right)}\] \[\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(\beta((1-c)F(X)+\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}((a^{3}-a^{3^{m+1}})X+a(X^{3}-X^{3^{m+1}})))\right)}.\]
Let
\[T_{0} =\mathrm{Tr}(\beta(1-c)F(X)),\] \[T_{1} =\mathrm{Tr}(\beta\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m }((a^{3}-a^{3^{m+1}})X+a(X^{3}-X^{3^{m+1}}))).\]
Then,
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{3^{n}}\sum_{\beta\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(\beta\left(F(a)-b-\delta^{3^{m}+4}-\delta^{5}\right)\right)}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{T_{0}+T_{1}}. \tag{4.2}\]
**Case 1.** Let \(c\in\mathbb{F}_{3^{m}}\setminus\{1\}\) and \(\delta\in\mathbb{F}_{3^{n}}\). To compute \(T_{0}\) and \(T_{1}\), we first write
\[T_{1} =\mathrm{Tr}(\beta\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m }((a^{3}-a^{3^{m+1}})X+a(X^{3}-X^{3^{m+1}})))\] \[=\mathrm{Tr}\left(\mathrm{Tr}_{m}^{2m}(\beta)\mathrm{Tr}_{m}^{2m }(\delta)(a^{3}-a^{3^{m+1}})X+\mathrm{Tr}_{m}^{2m}(\beta^{3^{m-1}})\mathrm{Tr} _{m}^{2m}(\delta^{3^{m-1}})(a^{3^{m}}-a)^{3^{m-1}}X\right),\]
and
\[T_{0} =\mathrm{Tr}(\beta(1-c)F(X))\] \[=\mathrm{Tr}\left(\beta(1-c)(\mathrm{Tr}_{m}^{2m}(\delta)(X^{4\cdot 3^{m}}+X^{4}-X^{3^{m+1}+1}-X^{3^{m}+3}+\delta X^{3^{m+1}}+\delta^{3}X^{3^{m}}+\delta^{4})\right.\] \[\qquad\qquad\left.+\mathrm{Tr}_{m}^{2m}(\delta)(-\delta X^{3}-\delta^{3}X)+X)\right)\] \[=\mathrm{Tr}\left(\beta(1-c)(\delta^{3^{m}+4}+\delta^{5})\right)+\mathrm{Tr}\left(\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\beta(1-c))X^{4}+\beta(1-c)X\right.\] \[\qquad\left.-\mathrm{Tr}_{m}^{2m}(\delta)^{3^{m-1}}\mathrm{Tr}_{m}^{2m}(\beta(1-c))^{3^{m-1}}X^{3^{m-1}+1}+(\mathrm{Tr}_{m}^{2m}(\delta)\delta\beta(1-c))^{3^{m-1}}X\right.\] \[\left.+(-(\mathrm{Tr}_{m}^{2m}(\delta)\delta\beta(1-c))^{3^{2m-1}}+(\mathrm{Tr}_{m}^{2m}(\delta)\delta^{3}\beta(1-c))^{3^{m}}-\mathrm{Tr}_{m}^{2m}(\delta)\delta^{3}\beta(1-c))X\right).\]
Now Equation (4.2) reduces to
\[{}_{c}\Delta_{F}(a,b)=\frac{1}{3^{n}}\sum_{\beta\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}(u_{1}X^{4}-u_{2}X^{3^{m-1}+1}+vX)},\]
where,
\[u_{1} =\mathrm{Tr}_{m}^{2m}(\delta)\mathrm{Tr}_{m}^{2m}(\beta(1-c))\] \[u_{2} =\mathrm{Tr}_{m}^{2m}(\delta)^{3^{m-1}}\mathrm{Tr}_{m}^{2m}(\beta (1-c))^{3^{m-1}}=u_{1}^{3^{m-1}}\] \[v =\mathrm{Tr}_{m}^{2m}(\beta)\mathrm{Tr}_{m}^{2m}(\delta)(a^{3}-a^ {3^{m+1}})+\mathrm{Tr}_{m}^{2m}(\beta^{3^{m-1}})\mathrm{Tr}_{m}^{2m}(\delta^{3 ^{m-1}})(a^{3^{m}}-a)^{3^{m-1}}+\beta(1-c)\] \[+(\mathrm{Tr}_{m}^{2m}(\delta)\delta\beta(1-c))^{3^{m-1}}-( \mathrm{Tr}_{m}^{2m}(\delta)\delta\beta(1-c))^{3^{2m-1}}+(\mathrm{Tr}_{m}^{2m} (\delta)\delta^{3}\beta(1-c))^{3^{m}}\] \[\qquad\qquad-\mathrm{Tr}_{m}^{2m}(\delta)\delta^{3}\beta(1-c).\]
Further, splitting the above sum depending on whether \(\mathrm{Tr}_{m}^{2m}(\beta)\) is \(0\) or not, we get
\[S_{0} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)=0\end{subarray}}\omega^{\mathrm{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\] \[\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(\left((1-c)\beta-(1-c)\beta\mathrm{Tr}_{m}^{2m}(\delta)^{4}+(1-c)^{3^{m-1}}\beta^{3^{m-1}}\mathrm{Tr}_{m}^{2m}(\delta)^{2\cdot 3^{m-1}}\right)X\right)}\] \[=3^{n}+\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}^{*}\\ \mathrm{Tr}_{m}^{2m}(\beta)=0\end{subarray}}\omega^{\mathrm{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\] \[\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(\left((1-c)\beta-(1-c)\beta\mathrm{Tr}_{m}^{2m}(\delta)^{4}+(1-c)^{3^{m-1}}\beta^{3^{m-1}}\mathrm{Tr}_{m}^{2m}(\delta)^{2\cdot 3^{m-1}}\right)X\right)}.\]
To compute \(S_{0}\), we need to determine the number of solutions \(\beta\in\mathbb{F}_{3^{n}}\) of the following equation,
\[(1-c)\beta-(1-c)\beta\mathrm{Tr}_{m}^{2m}(\delta)^{4}+(1-c)^{3^{m-1}}\beta^{3^ {m-1}}\mathrm{Tr}_{m}^{2m}(\delta)^{2\cdot 3^{m-1}}=0.\]
It is clear that when \(\mathrm{Tr}_{m}^{2m}(\delta)=0\), the above equation has only one solution, \(\beta=0\). Let us assume that \(\mathrm{Tr}_{m}^{2m}(\delta)\neq 0\). Cubing the above equation, we get
\[(1-c)^{3}\beta^{3}-(1-c)^{3}\beta^{3}(\mathrm{Tr}_{m}^{2m}(\delta)^{4})^{3}-(1- c)\beta\mathrm{Tr}_{m}^{2m}(\delta)^{2}=0,\]
or equivalently, \(A\beta^{3}-B\beta=0\), where \(A=(1-c)^{3}\left(1-\mathrm{Tr}_{m}^{2m}(\delta)^{4}\right)^{3}\) and \(B=(1-c)\mathrm{Tr}_{m}^{2m}(\delta)^{2}\). One can clearly observe that \(A\beta^{3}-B\beta=0\) has a solution \(\beta\in\mathbb{F}_{3^{n}}^{*}\) if and only if \(\dfrac{B}{A}\) is a square in \(\mathbb{F}_{3^{m}}\). Now \(\dfrac{B}{A}=\dfrac{\mathrm{Tr}_{m}^{2m}(\delta)^{2}}{(1-c)^{2}(1-\mathrm{Tr}_{m}^{2m}(\delta)^{4})^{3}}=\left(\dfrac{\mathrm{Tr}_{m}^{2m}(\delta)}{(1-c)\nu^{3}}\right)^{2}\), where \(\nu\in\mathbb{F}_{3^{m}}^{*}\) and \(\nu^{2}=1-\mathrm{Tr}_{m}^{2m}(\delta)^{4}\). Hence \(\beta=\pm\dfrac{\mathrm{Tr}_{m}^{2m}(\delta)}{(1-c)\nu^{3}}\); however, then \(\mathrm{Tr}_{m}^{2m}(\beta)=\pm 2\dfrac{\mathrm{Tr}_{m}^{2m}(\delta)}{(1-c)\nu^{3}}\neq 0\), contradicting the condition that \(\mathrm{Tr}_{m}^{2m}(\beta)=0\). Hence \(S_{0}=3^{n}\).
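For the record, the square root used above is immediate from the Frobenius identity \((1-t)^{3}=1-t^{3}\) in characteristic \(3\):

\[A=(1-c)^{3}\left(1-\mathrm{Tr}_{m}^{2m}(\delta)^{4}\right)^{3}=(1-c)^{3}\nu^{6},\qquad\frac{B}{A}=\frac{\mathrm{Tr}_{m}^{2m}(\delta)^{2}}{(1-c)^{2}\nu^{6}}=\left(\frac{\mathrm{Tr}_{m}^{2m}(\delta)}{(1-c)\nu^{3}}\right)^{2}.\]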
Next, we compute the sum \(S_{1}\):
\[S_{1} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)\neq 0\end{subarray}}\omega^{\mathrm{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\mathrm{Tr}\left(u_{1}X^{4}-u_{2}X^{3^{m-1}+1}+vX\right)}\] \[=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}\\ \mathrm{Tr}_{m}^{2m}(\beta)\neq 0\end{subarray}}\omega^{\mathrm{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\mathcal{W}_{G}(-v),\]
where \(\mathcal{W}_{G}(-v)\) is the Walsh transform of the trace of the function \(G(X)=u_{1}X^{4}-u_{2}X^{3^{m-1}+1}\) at \(-v\). Now, from Lemma 2.10, the absolute square of the Walsh transform coefficient of \(G\) is given by
\[|\mathcal{W}_{G}(-v)|^{2}=\begin{cases}3^{n+\ell}&\text{ if }G(X)+\mathrm{Tr}(vX) \equiv 0\text{ on }\mathrm{Ker }(L),\\ 0&\text{ otherwise,}\end{cases}\]
where \(\ell\) is the dimension of the kernel of the linearized polynomial
\[L(X)=u_{1}(X-X^{3^{m}})^{3}-u_{2}(X-X^{3^{m}})^{3^{m-1}}.\]
It is easy to see that \(\mathbb{F}_{3^{m}}\subseteq\mathrm{Ker}(L)\). Thus, if we can show that \(G(X)+\mathrm{Tr}(vX)\) is not identically zero on \(\mathbb{F}_{3^{m}}\), then \(S_{1}=0\). For \(X\in\mathbb{F}_{3^{m}}\), the polynomial \(G(X)+\mathrm{Tr}(vX)\) reduces to the polynomial
\[u_{1}X^{4}-u_{2}X^{3^{m-1}+1}+\mathrm{Tr}_{m}^{2m}(\beta(1-c))X+\mathrm{Tr}_{ m}^{2m}(\beta(1-c))^{3}X^{3}+\cdots+\mathrm{Tr}_{m}^{2m}(\beta(1-c))^{3^{m-1}}X^{3^{m-1 }},\]
which is a polynomial over \(\mathbb{F}_{3^{m}}\). For \(m=1\), \(G(X)+\mathrm{Tr}(vX)\) is not identically zero on \(\mathbb{F}_{3}\) as \(\mathrm{Tr}_{m}^{2m}(\beta(1-c))\neq 0\). For \(m\geq 2\), it is easy to observe that the degree of \(G(X)+\mathrm{Tr}(vX)\) is strictly less than \(3^{m}-1\), and hence the claim.
**Case 2.** Let \(c\in\mathbb{F}_{3^{n}}\setminus\mathbb{F}_{3^{m}}\). Then, using the same values of \(u_{1},u_{2}\) and \(v\) as above, one can define the sums \(S_{0}\) and \(S_{1}\) depending upon whether \(\mathrm{Tr}_{m}^{2m}(\beta(1-c))=0\) or \(\mathrm{Tr}_{m}^{2m}(\beta(1-c))\neq 0\).
First,
\[S_{0} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}\\ \operatorname{Tr}_{m}^{2m}(\beta(1-c))=0\end{subarray}}\omega^{\text{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\text{Tr}\left(vX\right)}\] \[=3^{n}+\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}^{*}\\ \operatorname{Tr}_{m}^{2m}(\beta(1-c))=0\end{subarray}}\omega^{\text{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\text{Tr}\left(vX\right)}.\]
To compute \(S_{0}\), we need to find the number of solutions for
\[\operatorname{Tr}_{m}^{2m}(\beta)\text{Tr}_{m}^{2m}(\delta)(a^{ 3}-a^{3^{m+1}})+\operatorname{Tr}_{m}^{2m}(\beta^{3^{m-1}})\text{Tr}_{m}^{2m}( \delta^{3^{m-1}})(a^{3^{m}}-a)^{3^{m-1}}+\beta(1-c)\] \[+(\operatorname{Tr}_{m}^{2m}(\delta)\delta\beta(1-c))^{3^{m-1}}- (\operatorname{Tr}_{m}^{2m}(\delta)\delta\beta(1-c))^{3^{2m-1}}+( \operatorname{Tr}_{m}^{2m}(\delta)\delta^{3}\beta(1-c))^{3^{m}}\] \[\qquad\qquad-\operatorname{Tr}_{m}^{2m}(\delta)\delta^{3}\beta(1 -c)=0,\]
or equivalently, using \(\operatorname{Tr}_{m}^{2m}(\beta(1-c))=0\), we have
\[(1-\tilde{c})\text{Tr}_{m}^{2m}(\delta)(a-a^{3^{m}})^{3}\beta+(1-\tilde{c})^{3^{m-1}}\text{Tr}_{m}^{2m}(\delta^{3^{m-1}})(a^{3^{m}}-a)^{3^{m-1}}\beta^{3^{m-1}}+\beta(1-c)\] \[\qquad\qquad+\text{Tr}_{m}^{2m}(\delta)^{2\cdot 3^{m-1}}(1-c)^{3^{m-1}}\beta^{3^{m-1}}-\text{Tr}_{m}^{2m}(\delta)^{4}(1-c)\beta=0,\]
where \(\tilde{c}=(1-c)^{1-3^{m}}\). Cubing the above equation, we get \(A\beta^{3}-B\beta=0\), where
\[A=(1-\tilde{c})^{3}\text{Tr}_{m}^{2m}(\delta^{3})(a-a^{3^{m}})^{9}+(1-c)^{3}-( 1-c)^{3}(\text{Tr}_{m}^{2m}(\delta))^{12}\]
and
\[B=\tilde{c}(1-\tilde{c})^{3^{m}}\text{Tr}_{m}^{2m}(\delta)(a-a^{3^{m}})+ \tilde{c}(1-c)^{3^{m}}\text{Tr}_{m}^{2m}(\delta)^{2}.\]
It is easy to see that, except for \(\beta=0\), \(A\beta^{3}-B\beta=0\) has a solution \(\beta\in\mathbb{F}_{3^{n}}^{*}\) if \(\dfrac{B}{A}\) is a square (notice that for \(a\in\mathbb{F}_{3^{m}}\), \(\dfrac{B}{A}\) is always a square). If it is a square, then \(A\beta^{3}-B\beta=0\) has three solutions in \(\mathbb{F}_{3^{n}}\), namely, \(\beta=0\), \(\beta=\beta_{1}\) and \(\beta=-\beta_{1}\). Hence, \(S_{0}\) becomes
\[3^{n}\left(1+\omega^{\text{Tr}(\beta_{1}(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}+\omega^{\text{Tr}(-\beta_{1}(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\right).\]
Clearly, for those pairs \((a,b)\in\mathbb{F}_{3^{n}}\times\mathbb{F}_{3^{n}}\) for which \(b=F(a)-c(\delta^{3^{m}+4}+\delta^{5})\), we have \(S_{0}=3^{n+1}\); and for the other pairs \((a,b)\in\mathbb{F}_{3^{n}}\times\mathbb{F}_{3^{n}}\), we have \(S_{0}=3^{n}(1+\omega^{\text{Tr}(\alpha)}+\omega^{\text{Tr}(-\alpha)})\), where \(\alpha=\beta_{1}(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5}))\). Hence, the maximum value that \(S_{0}=3^{n}(1+\omega^{\text{Tr}(\alpha)}+\omega^{\text{Tr}(-\alpha)})\) can attain is \(3^{n+1}\), as \(\text{Tr}(-\alpha)=-\text{Tr}(\alpha)\). This yields that \(S_{0}\leq 3^{n+1}\), with equality attained for the pairs noted above.
Next, we analyze the second sum,
\[S_{1} =\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}\\ \operatorname{Tr}_{m}^{2m}(\beta(1-c))\neq 0\end{subarray}}\omega^{\text{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\sum_{X\in\mathbb{F}_{3^{n}}}\omega^{\text{Tr}(u_{1}X^{4}-u_{2}X^{3^{m-1}+1}+vX)}\] \[=\sum_{\begin{subarray}{c}\beta\in\mathbb{F}_{3^{n}}\\ \operatorname{Tr}_{m}^{2m}(\beta(1-c))\neq 0\end{subarray}}\omega^{\text{Tr}(\beta(F(a)-b-c(\delta^{3^{m}+4}+\delta^{5})))}\mathcal{W}_{G}(-v),\]
where \(\mathcal{W}_{G}(-v)\) is the Walsh transform of the trace of the function \(G:X\mapsto u_{1}X^{4}-u_{2}X^{3^{m-1}+1}\) at \(-v\). By following similar arguments to those in Case 1 above, one can show that \(S_{1}=0\). This completes the proof.
## 5. Conclusions
In this paper, we show that some permutation polynomials are P\(c\)N over finite fields of even characteristic and even dimension, for \(c\neq 1\) in the subfield of half dimension. This adds to the small list of known (non-trivial) P\(c\)N functions. We also find a class of permutation polynomials over finite fields of characteristic 3, of even dimension \(n=2m\), which is P\(c\)N for \(c\in\mathbb{F}_{3^{m}}\setminus\{1\}\), and has \(c\)-differential uniformity 3 for all \(c\notin\mathbb{F}_{3^{m}}\).
|
2301.00694 | Redundancies and Profundities | We reevaluate the status of the gauge principle and reposition it as an
intermediary structure dependent on the initial conditions we endow on our
theory. We explore how the gauge symmetry manifests in the context of basic
quantum electrodynamics, spontaneous symmetry breaking and the modern
scattering amplitudes program. We also investigate the addition of an auxiliary
field in $\phi^4$ theory and see how the dynamics are altered. Modal language
is pointed to and utilized as a convenient way to articulate the weight gauge
symmetry demands in our theories as well as the principles of locality and
Lorentz invariance. A shifting scale ontology is introduced with regards to the
gauge principle and other structures of Quantum Field Theory in general. | Kyle Singh | 2022-12-30T05:50:20Z | http://arxiv.org/abs/2301.00694v1 | # Redundancies
###### Abstract
We reevaluate the status of the gauge principle and reposition it as an intermediary structure dependent on the initial conditions we endow on our theory. We explore how the gauge symmetry manifests in the context of basic quantum electrodynamics, spontaneous symmetry breaking and the modern scattering amplitudes program. We also investigate the addition of an auxiliary field in \(\phi^{4}\) theory and see how the dynamics are altered. Modal language is pointed to and utilized as a convenient way to articulate the weight gauge symmetry demands in our theories as well as the principles of locality and Lorentz invariance. A shifting scale ontology is introduced with regards to the gauge principle and other structures of Quantum Field Theory in general.
## I Introduction
The gauge principle is widely regarded as the cornerstone of fundamental physics. However, the modern scattering amplitudes program, as well as the discovery of the AdS/CFT correspondence, have highlighted the redundancy that comes from the implementation of the symmetry in Quantum Field Theory and in general. Moreover, independent of such recent developments the ontological status of the gauge principle has been brought into question. This places the gauge principle in a unique position. It is a concept that is fundamental; one that no doubt leads us to profound physical insight by fixing the structure of fundamental interactions in the standard model while also revealing previously hidden degrees of freedom, elementary particles, that are present in nature. However, in other regimes, its presence masks the simplicity underlying our theory and certain physical aspects of our systems. We need to find a way to categorize that which is redundant and also profound in our physical theory.
More generally, our basic theories are such that intermediary concepts, as we will call them, map onto physical data and are often most important. Consider, for example, the fact that in General Relativity one is free to make a choice of coordinates based on what the physical system demands. Particular coordinate choices, such as in the distinction between the Schwarzschild and Kruskal coordinates for black holes, reveal different aspects of the spacetime structure. In this case, for example, the Kruskal coordinates resolve the non physical coordinate singularities endemic to the Schwarzschild geometry. Such freedom can easily allow us to question the role of a background space itself. Here space is instead in an intermediary position. It is fundamental as a background but wholly arbitrary based on the physical system and initial conditions we impose.
Not only is gauge symmetry profound in our fundamental theories, it is unavoidable. Carlo Rovelli establishes the ubiquity of gauge and claims that "Gauge invariance is not just mathematical redundancy; it is an indication of the relational character of fundamental observables in physics" (Rovelli, 7). He sets up a coupled dynamical system and shows that gauge variables are necessary in order for one to capture the full dynamics. In other words, one can not decouple the system without the presence of gauge variables. 1 For Rovelli, the gauge symmetry reveals the "relational structure of our world", as gauge interactions describe relative quantities with more than one object (Rovelli, 7). For example, consider the fact that in ordinary quantum mechanics, we can only measure differences in energy. Rovelli insists that treating gauge as pure redundancy ignores this relational structure.
Footnote 1: See Ref. [2] for a full discussion of this.
Indeed, claiming that the gauge principle results in pure redundancy or that it is somehow inherent to our fundamental theory does not capture the unique position it operates within. We are then left with a view of the gauge principle that is purely intermediary. Our options for how we should treat the gauge symmetry were aptly categorized by Michael Redhead in what has now come to be known as Redhead's trilemma. Redhead purports that we have three options for how we choose to treat the gauge principle. We can either claim that gauge symmetry is physical and motivate physical structures directly representative of gauge fields, try to reformulate the entire theory in terms of quantities that are purely gauge-invariant, or we can let non-gauge-invariant quantities enter as surplus structure and develop the theory accordingly, adding further surplus structure if necessary to make the theory work.
The third option is one often taken by physicists for practical purposes and is the stance we will undertake. The question then becomes one in which we ask ourselves
how we should categorize such intermediary concepts in theoretical physics and more broadly in mathematical representation. Redhead's three propositions do not fully articulate such a position; rather, it seems that they capture some aspects of how we wish to treat the gauge symmetry in each of them. His distinctions seem to arise within the context of a particular theory or set of calculations. For example, with respect to his second proposition, we can look to Wilson loops, which are completely gauge invariant quantities, and work out the dynamics of our QFT in terms of them; however, they lead to non-local physics, as is well evidenced by the Aharonov-Bohm effect.
Indeed, parsing out various physical systems and regimes of our QFT and observing what role the gauge principle has within them will be central to how we choose to speak about its status. Furthermore, this will allow us to set up a shifting scale ontology and classify various pillars of our theory in an ontological way.
Not only do we need to deal with this status of gauge symmetry, we must also find a way to incorporate its inherent ambiguity both in its implementation and with respect to the physical objects the symmetry comes to represent. We do not directly discuss surplus structure broadly in mathematics, as Guay has suggested we ought to do; however, it is not so much of a concern in the particular way we choose to position gauge symmetry, since we are teasing out the physical data that it leads us to and treating its inherent redundancy as a triviality: we can easily choose a particular gauge even though we are given infinite choices from which we can begin our computations. The redundancies will only be important if they mask the underlying simplicity of our theory. This will be discussed within the context of the modern scattering amplitudes program, and it is within this context that the discussion of surplus structure in general may be apt.
We now begin with a brief summary of the gauge principle and its operation in the context of electrodynamics.
## II The gauge principle
Let us briefly review the gauge principle in the context of classical electromagnetism. We begin with the Lagrangian for complex scalar field theory
\[\mathcal{L}=\left(\partial_{\mu}\psi\right)^{\dagger}(\partial^{\mu}\psi)-m^{2}\psi^{\dagger}\psi \tag{1}\]
The Lagrangian possesses a U(1) symmetry under the following replacement
\[\psi(x)\rightarrow\psi(x)e^{i\alpha} \tag{2}\]
Note that this symmetry is a global one, namely the field is changed by the same amount at every spacetime point. As is standard, we can then work out the consequences if we impose the notion that our Lagrangian remain invariant under local transformations with the following replacement
\[\psi(x)\rightarrow\psi(x)e^{i\alpha(x)} \tag{3}\]
Clearly, under such a prescription the Lagrangian is not invariant. In order to restore the local symmetry, we introduce a new field \(A_{\mu}(x)\) which cancels out the extra terms resulting from the requirement of local gauge invariance. We must require that \(A_{\mu}\) transforms as \(A_{\mu}(x)\to A_{\mu}(x)-\frac{1}{q}\partial_{\mu}\alpha(x)\).2We then introduce the covariant derivative defined as follows
Footnote 2: Here, \(q\) is the coupling strength and in this case is just an extra parameter which will have physical significance in other theories.
\[D_{\mu}=\partial_{\mu}+iqA_{\mu}(x) \tag{4}\]
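It is worth checking directly that this derivative transforms covariantly under the combined replacements above, since this is what makes the construction work:

\[D_{\mu}\psi\rightarrow\left(\partial_{\mu}+iqA_{\mu}-i\partial_{\mu}\alpha\right)\left(\psi e^{i\alpha}\right)=e^{i\alpha(x)}\left(\partial_{\mu}+iqA_{\mu}\right)\psi=e^{i\alpha(x)}D_{\mu}\psi,\]

so that any Lagrangian built out of \(\psi^{\dagger}\) and \(D_{\mu}\psi\) is locally invariant.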
Given this set of transformations, our theory is locally gauge invariant. The introduction of an additional field in order to obtain local gauge invariance is a ubiquitous feature of gauge theories. In the context of electromagnetism, we utilize the following Lagrangian. 3
Footnote 3: Note that this is the same as writing \(\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-J^{\mu}A_{\mu}\)
\[\mathcal{L}=-\frac{1}{4}(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu})( \partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu})-J^{\mu}A_{\mu} \tag{5}\]
The equations of motion are the first two Maxwell equations.
\[\partial^{2}A^{\nu}-\partial^{\nu}(\partial_{\mu}A^{\mu})=J^{\nu} \tag{6}\]
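A quick sketch of how these follow from the Euler-Lagrange equations for \(A_{\nu}\):

\[\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}A_{\nu})}=-F^{\mu\nu},\qquad\frac{\partial\mathcal{L}}{\partial A_{\nu}}=-J^{\nu}\quad\Longrightarrow\quad\partial_{\mu}F^{\mu\nu}=J^{\nu},\]

which is Equation (6) once \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}\) is written out.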
We can rewrite the transformation of our gauge field in the following more generalized form
\[A_{\mu}(x)\to A_{\mu}(x)-\partial_{\mu}\chi(x) \tag{7}\]
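Note that the field strength itself is blind to this shift, since partial derivatives commute:

\[F_{\mu\nu}\rightarrow\partial_{\mu}\left(A_{\nu}-\partial_{\nu}\chi\right)-\partial_{\nu}\left(A_{\mu}-\partial_{\mu}\chi\right)=F_{\mu\nu}.\]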
Of course, in any physical theory we want the fields we introduce to hold physical significance. The transformation parametrized by \(\chi(x)\) yields, for all intents and purposes, an infinite number of symmetries, one for each function.4 Therefore, we must take further steps in constraining our gauge field to facilitate its representation of a physical object. In the case of electrodynamics, the gauge field contains the dynamics of the photon. The following two conditions, known as choosing a gauge, are imposed to ensure that the gauge field has two degrees of freedom, since the photon can only have two polarizations
Footnote 4: This conundrum in itself is indicative of gauge as redundancy. It would be startling to say that the presence of infinite symmetries would yield an infinite number of conservation laws following from Noether’s theorem, for example! Here two states related by a gauge transformation are indeed the same physical state.
\[\partial_{\mu}A^{\mu}(x)=0 \tag{8}\]
\[\partial_{0}\chi=A_{0}^{\prime} \tag{9}\]
These particular gauges are known as Lorentz gauge and Coulomb gauge, respectively. The gauge principle then
refers to the procedure of introducing this additional dynamical field to maintain local gauge symmetry. Moreover, as we have just seen, the gauge field dictates the form of the coupling.
_Prima facie_, there seems to be no reason at all to impose local gauge invariance. Moreover, it seems quite contrived to insist that the gauge field introduced to preserve our desired symmetry must have a precise set of dynamics correspondent to a particular object in a theory; in other words our designation of the gauge field as a photon was put in by hand and not derived from the gauge principle itself. Even more striking perhaps, is the fact that we can never measure \(A_{\mu}\). That nature seems to require us to introduce such objects is truly incredible.5
Footnote 5: All of this may be a false problem, of course, and Rovelli aptly calls into question whether or not we should ask our mathematical procedures to have a purpose; asking us whether or not we should conclude that it was the purpose of humans to kill large mammals if we were indeed responsible for their deaths.
Indeed, Martin calls out these difficulties with the gauge principle and writes that the idea that the gauge principle "'dictates' [or] 'determines' the form of fundamental interactions as well as the existence of certain physical fields must be taken with a large grain of salt" (Martin, 233). He argues that, at best, the gauge principle should be taken as heuristic, and he offers a differing approach to the logic of nature, viewing the gauge symmetry not as a fundamental physical principle but rather as a relic of a theory that is more fundamental (in particular, of renormalizable theories), one that works in conjunction with other physical requirements such as Lorentz invariance.
It is clear that a proper reevaluation of the status of the gauge principle in our physical theories is necessary. We will expand on Martin's thesis and seek to make more general modal statements relating the conception of local gauge symmetry to our physical theory as one apparatus. In doing so, we do not contradict Rovelli's argument on the ubiquity of gauge or the arbitrariness inherent to our imposition of the gauge principle in itself. Instead, we reconsider its position as a principle in our physical theory altogether.
## III Pure redundancy
Consider the following Lagrangian in \(\phi^{4}\) theory.
\[\mathcal{L}=\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{m^{2}}{2}\phi^{2}-\frac {g}{8}\phi^{4} \tag{10}\]
We shift the Lagrangian by adding a new \(\sigma\) field in the following way6
Footnote 6: Shifting our Lagrangian in such a fashion is commonly referred to as a _Hubbard-Stratonovich_ transformation.
\[\mathcal{L}^{\prime}=\mathcal{L}+\frac{1}{2g}\left(\sigma-\frac{g}{2}\phi^{2} \right)^{2} \tag{11}\]
The Lagrangian is then
\[\mathcal{L}^{\prime}=\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{m^{2}}{2}\phi^ {2}-\frac{g}{8}\phi^{4}+\frac{1}{2g}\left(\sigma-\frac{g}{2}\phi^{2}\right)^ {2} \tag{12}\]
In QFT, the Green's functions of our theory tell us about the dynamics. We can compute these via the generating functional
\[\mathcal{Z}[J]=\frac{\int\mathcal{D}\phi\mathcal{D}\sigma{\rm exp}\left[i\int d ^{4}x\left(\mathcal{L}^{\prime}+J\phi\right)\right]}{\int\mathcal{D}\phi \mathcal{D}\sigma{\rm exp}\left[i\int d^{4}x\mathcal{L}^{\prime}\right]} \tag{13}\]
If one carries out the above functional integral, it is straightforward to show that the expression written above for our newly defined theory is the same as the original one. To emphasize this, we can compute the equation of motion for the \(\sigma\) field
\[\frac{\partial\mathcal{L}^{\prime}}{\partial\sigma}-\partial_{\mu}\frac{ \partial\mathcal{L}^{\prime}}{\partial(\partial_{\mu}\sigma)}=0 \tag{14}\]
and find that
\[\sigma=\frac{g}{2}\phi^{2} \tag{15}\]
There are no time derivatives present in the computed equation of motion. Therefore, our newly added field does not contribute to the dynamics of this system. Furthermore, we can eliminate the additional field since it can only provide a constraint on the theory.
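To make this concrete, here is a minimal numerical sketch in a zero-dimensional Euclidean toy analogue, where the functional integrals collapse to ordinary integrals; the parameter values, grids, and variable names are illustrative choices, not part of the original discussion. The \(\sigma\) integral contributes only a \(\phi\)-independent Gaussian constant, so moments of \(\phi\) come out the same with or without the auxiliary field.

```python
import numpy as np

# Zero-dimensional Euclidean toy analogue of the Hubbard-Stratonovich shift.
# Original weight: exp(-S), with S = m^2 phi^2 / 2 + g phi^4 / 8. The shifted
# weight adds (sigma - g phi^2 / 2)^2 / (2g), a Gaussian in sigma whose
# integral is a phi-independent constant, so <phi^2> must be unchanged.
m2, g = 1.0, 0.5

phi = np.linspace(-8.0, 8.0, 2001)
w = np.exp(-(0.5 * m2 * phi**2 + g * phi**4 / 8))

# <phi^2> in the original theory (one-dimensional quadrature; the grid
# spacing cancels in the ratio).
orig = np.sum(phi**2 * w) / np.sum(w)

# <phi^2> in the shifted theory (two-dimensional quadrature over phi, sigma).
sigma = np.linspace(-30.0, 30.0, 1501)
P, S = np.meshgrid(phi, sigma, indexing="ij")
w2 = np.exp(-(0.5 * m2 * P**2 + g * P**4 / 8
              + (S - 0.5 * g * P**2) ** 2 / (2 * g)))
shifted = np.sum(P**2 * w2) / np.sum(w2)

print(orig, shifted)  # the two expectation values agree to quadrature error
```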
We can promote the fields of our system to operators and write down the vacuum-to-vacuum expansion of the \(\mathcal{S}\)-matrix for our theory. This yields the relevant Feynman diagrams corresponding to various Wick contractions of the fields. This further tells us about the role of the \(\sigma\) field. The \(\mathcal{S}\)-matrix operator reads as follows
\[\bra{0}\mathcal{S}\ket{0}=\bra{0}T\left[{\rm exp}\left(-i\int d^{4}x\frac{1} {2}\sigma\phi^{2}\right)\right]\ket{0} \tag{16}\]
Writing out the first couple of terms in the perturbation expansion yields
\[\begin{split}-\frac{i}{2}\int d^{4}x\bra{0}T\left[\sigma_{x}\phi_{x}\phi_{x}\right]\ket{0}+\frac{1}{2!}\left(-\frac{i}{2}\right)^{2}\\ \int d^{4}x\,d^{4}y\bra{0}T\left[\sigma_{x}\phi_{x}\phi_{x}\sigma_{y}\phi_{y}\phi_{y}\right]\ket{0}+...\end{split} \tag{17}\]
Contractions of the \(\phi\) field yield the standard free-field propagator. As we have shown, since the \(\sigma\) field does not contribute to the dynamics of the theory, nothing can propagate through it. Contractions such as \(\sigma_{x}\sigma_{y}\) instead take two spacetime points and identify them with one another, playing the role of interaction vertices in the various diagrams which arise. This calculation is an example of a procedure resulting in a trivial redundancy. It is clearly distinct from the procedure undertaken in incorporating gauge symmetry into our field equations.
Treating the gauge principle as simply an artifact of pure redundancy does not capture its importance to the dynamics of, say, Quantum Electrodynamics and to the construction of the standard model. The \(\sigma\) field's importance in the constructed example and the role of the gauge field are wholly different in terms of what they represent and what role they play in the theory, although both were introduced in an arbitrary fashion. We must then seek to reposition the status of gauge as it relates to QFT and see what we can say about scientific theories in full generality.
## IV Pure gauge
As we have reviewed, any Lagrangian with local symmetry must harbor gauge fields. Consider the following Lagrangian for gauged complex scalar field theory
\[\begin{split}\mathcal{L}=\left(\partial^{\mu}\psi^{\dagger}-iqA^{\mu}\psi^{\dagger}\right)(\partial_{\mu}\psi+iqA_{\mu}\psi)+\\ \mu^{2}\psi^{\dagger}\psi-\lambda(\psi^{\dagger}\psi)^{2}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\end{split} \tag{18}\]
It is crucial to note that the sign of the mass term has been flipped. This allows us to invoke the standard symmetry-breaking procedure. We insist, again, on local gauge invariance and work in polar coordinates by letting \(\psi(x)=\sigma(x)e^{i\theta(x)}\) for a unique ground-state phase set at \(\theta(x)=\theta_{0}\). The fact that we cannot now change the phase of the ground state either locally or globally means that the symmetry is broken in both regimes. Now we observe, as is commonly done, what physical consequences can be derived by computing the particle spectrum of this system. In polar coordinates
\[\partial_{\mu}\psi+iqA_{\mu}\psi=(\partial_{\mu}\sigma)e^{i\theta}+i( \partial_{\mu}\theta+qA_{\mu})\sigma e^{i\theta} \tag{19}\]
where the gauge field is represented in our theory in the following way
\[A_{\mu}+\frac{1}{q}\partial_{\mu}\theta\equiv B_{\mu} \tag{20}\]
Therefore
\[(\partial^{\mu}\psi^{\dagger}-iqA^{\mu}\psi^{\dagger})(\partial_{\mu}\psi+iqA_{\mu}\psi)=(\partial_{\mu}\sigma)^{2}+\sigma^{2}q^{2}B_{\mu}B^{\mu} \tag{21}\]
The Lagrangian becomes
\[\mathcal{L}=(\partial_{\mu}\sigma)^{2}+\sigma^{2}q^{2}B^{2}+\mu^{2}\sigma^{2} -\lambda\sigma^{4}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} \tag{22}\]
We now invoke the standard symmetry-breaking procedure. The minima of the potential are at \(\sigma=\sqrt{\frac{\mu^{2}}{2\lambda}}\). We break the symmetry by setting \(\sigma_{0}=\sqrt{\frac{\mu^{2}}{2\lambda}}\) and \(\theta_{0}=0\). Expanding the Lagrangian in terms of a new field \(\delta\), defined as \(\frac{\delta}{\sqrt{2}}=\sigma-\sigma_{0}\), and ignoring constants yields the following
\[\begin{split}\mathcal{L}=\frac{1}{2}&(\partial_{\mu }\delta)^{2}-\mu^{2}\delta^{2}-\sqrt{\lambda}\mu\delta^{3}-\frac{\lambda}{4} \delta^{4}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}\\ &+\frac{A^{2}}{2}B^{2}+q^{2}\left(\frac{\mu^{2}}{\lambda}\right)^ {\frac{1}{2}}\delta B^{2}+\frac{1}{2}q^{2}\delta^{2}B^{2}+...\end{split} \tag{23}\]
Here, \(A=q\sqrt{\frac{\mu^{2}}{\lambda}}\). Breaking the symmetry surprisingly results in our theory containing the massive vector field \(B_{\mu}\). Meanwhile, the massless excitations of the \(\theta\) field have disappeared. Note that these excitations were only present in the case of global symmetry breaking. Our theory, which once described two massive scalars and two massless photons now describes one massive scalar and three massive vector particles. Imposing the gauge transformation \(A_{\mu}+\frac{1}{q}\partial_{\mu}\theta=B_{\mu}\) removes the Goldstone modes. This removal of the Goldstone mode via the prescribed gauge transformations means that it is pure gauge.
Local symmetry breaking yields the massive physical degree of freedom while removing the nonphysical massless one. This procedure, as is well known, is crucial to the Higgs mechanism and places the Higgs boson correctly within the standard model. Local gauge symmetry yields new physical insight, fixing the redundancy of global symmetry breaking. This manifestation of gauge invariance is distinct from the pure redundancy discussed with regard to the addition of an auxiliary scalar field; however, it is an indication of the intermediary role that gauge symmetry plays in a particular regime, a role we wish to probe in the context of our field theories.
## V Modal considerations
Given the preceding discussions, it is now natural to ask ourselves what we can say about modality with respect to the gauge principle. Modal language, even if used loosely for our purposes currently, gives us a convenient way to categorize the gauge principle in relation to the other mechanisms in our QFTs. In order to make modal statements on gauge symmetry, it seems apt to first take one of Redhead's positions on how we should treat it functionally and proceed from there in addressing its status. That the gauge symmetry results in surplus structure, redundant degrees of freedom, means that only a subset of these degrees of freedom can be recognized as physical representations. We can take Redhead's third proposition that theories should keep the gauge invariance for as long as possible. This means that we do not dispose of the local gauge symmetry and take into account its physical predicative power. 7
Footnote 7: Again, as discussed earlier, we are not so concerned with this subtlety and work with the gauge principle, in many ways, after the question of what gauge to work in has been decided upon.
This conveniently allows us to split up the gauge symmetry into a local piece and a global one.8 This factorization allows us to evaluate any claims of necessity independently of one another.
Footnote 8: It is safe to say that the use of the word "principle" in "gauge principle" is misleading and not indicative of how we treat gauge symmetry at large. Although the gauge principle in itself refers to the local gauge symmetry, the term would be better applied when looking at the role of gauge symmetry as a whole.
Global gauge invariance has physical necessity because it carries through as a symmetry in all of our foundational theories, although it is, in most cases, trivially a part of our theories. Any principles that are essential to the structure of the theory are metaphysically necessary, for example Lorentz invariance. Local gauge symmetry then takes on an intermediary role. Since it is not wholly essential in our theories which are more fundamental (this will be discussed in the context of the modern scattering amplitudes program), we can posit that it is metaphysically possible. It is possible that local gauge invariance is required for the prediction of a photon coupling to an electron; however, it may not be. Perhaps it may carry nomic necessity, but such a claim would require that we know the true origin of why we must impose local gauge symmetry at all. This would mean that it would be attached to some underlying law of nature that we clearly do not know of now. For now, in the context of how we are exploring and seeking to categorize the gauge symmetry, it is enough to make the modal distinction we have made above in the hopes of clarifying how we wish to treat gauge within the broader construction of QFT.
## VI Scattering amplitudes and simplicity
The modern scattering amplitudes movement has given us a method to compute amplitudes in a way that forgoes gauge redundancy altogether, revealing aspects of a more foundational QFT. The standard polarization vectors responsible for describing redundant massless particle states are replaced by spinor-helicity variables, which are trivially gauge invariant.10 This modern incarnation of the \(\mathcal{S}\)-matrix bootstrap imposes the fundamental principles of locality and unitarity to determine amplitudes. What one finds is that calculations which were once extremely complicated in the traditional Feynman diagrammatic approach become tremendously simplified and almost trivial, thanks to a cancellation of redundancies. Taking the fermion states to be massless for these calculations, we are working in the high-energy scattering limit, which constitutes a theory at a more fundamental energy scale.
Footnote 10: It would be interesting to explore whether such physical concepts, including gauge, tied to physical objects can be categorized within a more rigorous formal modal structure.
Let us compute the color-ordered 4-gluon amplitude, \(A_{4}[1^{-}2^{-}3^{+}4^{+}]\), at tree level. Recall that such partial amplitudes are trivially gauge invariant. Utilizing the standard Feynman rule for the 3-gluon vertex allows us to write11
Footnote 11: For a review of the spinor-helicity formalism refer to [4]
\[A_{4}=\frac{(-i\sqrt{2}g^{2})((\epsilon_{1}\cdot p_{2})\epsilon_{2}-(p_{1}\cdot\epsilon_{2})\epsilon_{1})\cdot((\epsilon_{3}\cdot p_{4})\epsilon_{4}-(p_{3}\cdot\epsilon_{4})\epsilon_{3})}{(p_{1}+p_{2})^{2}} \tag{24}\]
Translating this into the spinor-helicity formalism, one finds the following expression12
\[\begin{split} A_{4}&=\frac{-2g^{2}}{\langle 12\rangle\,[12]}\frac{\langle 12\rangle\,[34]}{\langle 13\rangle\,[24]}\\ &\frac{\langle 12\rangle\,[24]}{\sqrt{2}[14]}\frac{\langle 13\rangle\,[34]}{\sqrt{2}\,\langle 14\rangle}\end{split} \tag{25}\]
Applying momentum conservation and simplifying this expression gives us the following simple amplitude
\[A_{4}=\frac{\langle 12\rangle^{4}}{\langle 12\rangle\,\langle 23\rangle\, \langle 34\rangle\,\langle 41\rangle} \tag{26}\]
Indeed, one finds the following simple expression for all tree-level maximally-helicity-violating (MHV) Yang-Mills amplitudes.13
Footnote 12: Note that one needs to specify a particular set of reference spinors. We have chosen \(q\)’s such that the \(t\)-channel diagram vanishes and all \(\epsilon_{i}\cdot\epsilon_{j}\)’s vanish except \(\epsilon_{2}\cdot\epsilon_{3}\)
Footnote 13: This can be derived using the standard BCFW recursion relations
\[A_{n}[1^{+}...i^{-}...j^{-}...n^{+}]=\frac{\langle ij\rangle^{4}}{\langle 12 \rangle\,\langle 23\rangle\,...\,\langle n1\rangle} \tag{27}\]
It is a remarkably simple expression that is fully general. In the traditional perturbative formalism, computing a seven-gluon amplitude would require the calculation of 154 separate diagrams, with the amplitude still boiling down to the result above. Without the extra gauge redundancies clouding the fundamental structure of the scattering amplitudes, we can ask ourselves what principles we are left with. Consider the following ansatz for 3-particle amplitudes
\[A_{3}(1^{h_{1}}2^{h_{2}}3^{h_{3}})=c\,\langle 12\rangle^{x_{12}}\,\langle 13 \rangle^{x_{13}}\,\langle 23\rangle^{x_{23}} \tag{28}\]
Under little group scaling on-shell amplitudes transform in the following way, with helicity \(h_{i}\)
\[A_{n}(\left|1\right\rangle,|1],h_{1},...,t_{i}\left|i\right\rangle,t_{i}^{-1}|i],h_{i},...)=t_{i}^{-2h_{i}}A_{n}(...,|i\rangle,|i],h_{i},...) \tag{29}\]
This fixes the following
\[-2h_{1}=x_{12}+x_{13} \tag{30a}\]
\[-2h_{2}=x_{12}+x_{23} \tag{30b}\]
\[-2h_{3}=x_{13}+x_{23} \tag{30c}\]
Solving the system of equations we can rewrite the ansatz as follows
\[A_{3}(1^{h_{1}}2^{h_{2}}3^{h_{3}})=c\left\langle 12\right\rangle^{h_{3}-h_{1}-h_{2} }\left\langle 13\right\rangle^{h_{2}-h_{1}-h_{3}}\left\langle 23\right\rangle^{h_{1}-h_{2 }-h_{3}} \tag{31}\]
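As a quick cross-check, the linear system (30) can be handed to a computer algebra system; the following sketch (plain Python with sympy, not part of the original analysis) reproduces the exponents of equation (31).

```python
import sympy as sp

h1, h2, h3, x12, x13, x23 = sp.symbols('h1 h2 h3 x12 x13 x23')

# Little-group scaling constraints of equation (30)
constraints = [
    sp.Eq(x12 + x13, -2 * h1),
    sp.Eq(x12 + x23, -2 * h2),
    sp.Eq(x13 + x23, -2 * h3),
]
solution = sp.solve(constraints, [x12, x13, x23])
print(solution)
# {x12: -h1 - h2 + h3, x13: -h1 + h2 - h3, x23: h1 - h2 - h3}
```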
Now, we can consider a 3-gluon amplitude with the following helicity configuration
\[A_{3}(g_{1}^{-}g_{2}^{-}g_{3}^{+})=g\frac{\left\langle 12\right\rangle^{3}}{\left\langle 13\right\rangle\left\langle 23\right\rangle} \tag{32}\]
Little group scaling fixes the form of the amplitude. Moreover, the amplitude is fixed by locality, namely that it is compatible with a term of the form \(AA\partial A\) in the Lagrangian \(\mathrm{Tr}F_{\mu\nu}F^{\mu\nu}\) and not a term that goes like \(g^{\prime}AA\frac{\partial}{\Box}A\).
We are now in position to ask ourselves what we are left with in this high energy theory. We are left with the principles of locality, unitarity, and Lorentz invariance, as outlined in the simple calculations above. Gauge symmetry plays a trivial role in this regime, where calculations are simplified and where a more foundational structure of our amplitudes, and perhaps our QFT, is revealed. Coupling this insight with our previous modal claims, we can revise and categorize a new ontological status for the gauge principle.
## VII The ontological status of gauge symmetry
We begin with a set of principles and regard them as the basis for our ontology. Then the gauge principle, as is customarily defined, cannot fit into our ontological construction. Instead, we can take its factorized local piece to be one step removed from the fundamental principles of locality, unitarity and Lorentz invariance. Local gauge symmetry, stated as a principle in the way we utilize it, is a part of our ontology in the theory that is less fundamental, at lower energy scales. It is a projection onto the more fundamental theory that becomes necessary at lower energy and resolves itself by exiting the picture at higher energies.
We have, then, a direct manifestation of Occam's razor at higher energy, whereby the theory seems to become simpler and where our ontological stakes become more defined. Indeed, as we have seen, we are left with a set of principles embodied within the higher energy theory that dictate all of its tenets. It is quite likely that such a theory is wholly inaccessible to experiment, as the exploration of String Theory over the past several decades has suggested. And so, it is not simply enough to say that our ontology, as determined by principles, should be determined by the theory which inhabits higher energy scales. Instead, there is a sense in which our ontology resolves itself at various scales. The base principles carry throughout the scales. Physics is local for quarks, for baseballs and for nuclei. It must be local even for strings if they exist empirically. Certain principles, then, get added to our ontology as we lower our energy scale.
The gauge principle, then, is a principle in the sense that its imposition seems to be necessary to obtain the relevant physical phenomena in the effective renormalizable theory, and it thus becomes a part of our ontology in that setting. However, our classical field equations, for example, only carry global gauge invariance. Local gauge invariance does not carry through, nor is it necessary for extracting the physical content of our classical equations.
It is important to reconcile the fact that there are an infinite number of ways we could utilize our gauge freedom in setting up our equations. This brings us, somewhat loosely, to Quine's idea of the proxy function, in which the various choices of gauge can yield the same correct physical result. There is no "true gauge" specified by even more fundamental principles. In other words, local gauge symmetry in particular can be treated as a proxy for all the various gauge constraints we impose on our equations in the theory where local gauge invariance maps onto physical entities. Therefore, it is not strictly ontological, as global gauge symmetry may be taken to be; rather, it is a proxy which masks the fundamental principles behind redundancies, but it also plays an ontological role in a particular regime.13
Footnote 13: As arbiters of Quantum Field Theory, we can make the claim that photons exist independent of whether or not they arise as a result of gauge symmetry. That being said, the mathematical representations of the photon all exhibit this symmetry and thus we take the abstract entities that map onto physical data to be one and the same.
We can think of this shifting scale ontology as if we were trying to resolve the pixels of a computer screen. Various details will come in and out of focus as one zooms in and out of the screen. Our ontological commitments must be modified accordingly. If we are committed to principles, those principles will shift and resolve themselves in accordance to what the physical phenomena necessitate.
## VIII Conclusions
The gauge principle as an intermediary has been explored. We have set up a variety of systems, in various contexts and exhibited the importance of the gauge principle in each instance. We have also shown an instance where our additions to the theory, in the case of an auxiliary field, result in no new physical insight whatsoever, exemplifying the difference between the gauge principle and simply adding extra degrees of freedom to our system
with the hopes of new physical information. Moreover, in the case of symmetry breaking, our basic example shows that the Higgs mechanism is also a result of following our nose after realizing the importance of local gauge symmetry.
A consideration of this principle poses great foundational problems that have yet to be resolved and warrant further exploration. It is an open question if we can extend the idea of intermediaries and shifting scale ontology to a larger system; therefore, it would be worthwhile to explore these ideas as they relate to the broader architecture of the standard model. An application to accidental symmetries and group symmetries in QFT immediately comes to mind, as well as an application to the renormalization group, where our equations are fully derived from various scalings in energy. This is also particularly relevant given the current landscape of theoretical physics, in which QFTs are seen as effective field theories only relevant up to a certain energy scale. This has resulted in a long search for the theory that is more fundamental and which will resolve the decades-old problem, still outstanding, of quantizing the gravitational force. Moreover, such an approach will surely have consequences for a broader metaphysical set-up which can extend itself into larger epistemological considerations.
One can conceive of a philosophical system that is spurred by the conception of intermediaries, which present themselves as fundamental as various initial conditions are presented.
###### Acknowledgements.
We wish to thank Aden Evens and Erkki Wilho Mackey for useful conversations and for reading the initial draft of this work. We also wish to thank Carlo Rovelli for useful clarifications via e-mail correspondence as well as Laura Reutsche for helpful resources as this work was being completed.
## Appendix A Spinor Helicity Conventions
We introduce
\[\sigma^{\mu}=(1,\sigma^{i}),\qquad\bar{\sigma}^{\mu}=(1,-\sigma^{i}), \tag{A1}\]
where \(\sigma^{i}\) are the standard Pauli matrices and
\[\gamma^{\mu}=\begin{pmatrix}0&(\sigma^{\mu})_{\dot{a}b}\\ (\bar{\sigma}^{\mu})^{a\dot{b}}&0\end{pmatrix}. \tag{A2}\]
Here, \(\gamma^{\mu}\) are the usual gamma matrices obeying the Clifford algebra
\[\{\gamma^{\mu},\gamma^{\nu}\}=-2\eta^{\mu\nu}. \tag{A3}\]
Defining
\[\begin{split} p_{\dot{a}b}&\equiv\frac{1}{\sqrt{2}}p_{\mu}(\sigma^{\mu})_{\dot{a}b}=\frac{1}{\sqrt{2}}\begin{pmatrix}-p^{0}+p^{3}&p^{1}-ip^{2}\\ p^{1}+ip^{2}&-p^{0}-p^{3}\end{pmatrix},\\ p^{a\dot{b}}&\equiv\frac{1}{\sqrt{2}}p_{\mu}(\bar{\sigma}^{\mu})^{a\dot{b}}=-\frac{1}{\sqrt{2}}\begin{pmatrix}p^{0}+p^{3}&p^{1}-ip^{2}\\ p^{1}+ip^{2}&p^{0}-p^{3}\end{pmatrix},\end{split} \tag{A4}\]
we obtain expressions for null momenta in terms of two-component spinor helicity variables:
\[\begin{split} p_{\dot{a}b}&=-|p]_{\dot{a}}\langle p|_{b}=-\tilde{\lambda}_{\dot{a}}\lambda_{b},\\ p^{a\dot{b}}&=-|p\rangle^{a}[p]^{\dot{b}}=-\lambda^{a}\tilde{\lambda}^{\dot{b}}.\end{split} \tag{A5}\]
Indices are raised and lowered with the Levi-Civita symbol:
\[[p]^{\dot{a}}=\epsilon^{\dot{a}\dot{b}}[p]_{\dot{b}},\qquad|p\rangle^{a}=\epsilon^{ab}\langle p|_{b} \tag{A6}\]
where
\[\epsilon^{ab}=\epsilon^{\dot{a}\dot{b}}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}. \tag{A7}\]
Finally, in these conventions
\[p_{i}\cdot p_{j}=\langle ij\rangle[ij], \tag{A8}\]
which can be readily verified using the identity
\[\sigma^{\mu}_{\dot{a}a}\ \bar{\sigma}^{\nu a\dot{a}}=-2\eta^{\mu\nu}. \tag{A9}\]
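For completeness, this trace identity can also be checked numerically; the snippet below is an illustrative sketch (it assumes the mostly-plus metric \(\eta={\rm diag}(-1,1,1,1)\) implicit in these conventions).

```python
import numpy as np

# sigma^mu = (1, sigma^i) and sigmabar^mu = (1, -sigma^i)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = [np.eye(2, dtype=complex)] + pauli
sigmabar = [np.eye(2, dtype=complex)] + [-s for s in pauli]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # mostly-plus metric (assumed)
for mu in range(4):
    for nu in range(4):
        trace = np.trace(sigma[mu] @ sigmabar[nu])
        assert np.isclose(trace, -2.0 * eta[mu, nu])
print("trace identity verified")
```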
2307.16378 | Discovery of the shell structure via break radii in the outer halo of the Milky Way | Based on the _Gaia_ DR3 RR Lyrae catalog, we use two methods to fit the density profiles with an improved broken power law, and find that there are two break radii coinciding with the two apocenter pile-ups of the high-eccentricity Gaia-Sausage-Enceladus (GSE) merger. Also, there is a break caused by the Sagittarius (Sgr) stream. Combining the positions of all breaks, we briefly analyze the metallicity and its dispersion as a function of \(r\) as well as its distribution in cylindrical coordinates. For the clean sample, the \(z\)-to-\(x\) ellipsoid axial ratio \(q\) in \(36\,{\rm kpc}<r<96\,{\rm kpc}\) becomes much smaller than that of the inner halo (\(r<36\,{\rm kpc}\)), while the major axis has a large uncertainty in the region of \(36-66\,{\rm kpc}\) and the one in the region of \(66-96\,{\rm kpc}\) is obviously different from that dominated by the Hercules-Aquila Cloud (HAC) and the Virgo Overdensity (VOD) in the inner halo, which indicates that there is an over-density structure distributed at low zenithal angles. Finally, we find that the over-density structure in the outer halo (\(r>50\,{\rm kpc}\)) is shell-shaped and relatively metal-rich compared to the outer background halo. We conclude that the shells could be the apocenter pile-ups of the high-eccentricity GSE merger, which is supported by previous numerical simulations. | Dashuang Ye, Cuihua Du, Jianrong Shi, Jun Ma | 2023-07-31T03:07:26Z | http://arxiv.org/abs/2307.16378v1 |
# Discovery of the shell structure via break radii in the outer halo of the Milky Way
###### Abstract
Based on the _Gaia_ DR3 RR Lyrae catalog, we use two methods to fit the density profiles with an improved broken power law, and find that there are two break radii coinciding with the two apocenter pile-ups of the high-eccentricity Gaia-Sausage-Enceladus (GSE) merger. Also, there is a break caused by the Sagittarius (Sgr) stream. Combining the positions of all breaks, we briefly analyze the metallicity and its dispersion as a function of \(r\) as well as its distribution in cylindrical coordinates. For the clean sample, the \(z\)-to-\(x\) ellipsoid axial ratio \(q\) in \(36\,{\rm kpc}<r<96\,{\rm kpc}\) becomes much smaller than that of the inner halo (\(r<36\,{\rm kpc}\)), while the major axis has a large uncertainty in the region of \(36-66\,{\rm kpc}\) and the one in the region of \(66-96\,{\rm kpc}\) is obviously different from that dominated by the Hercules-Aquila Cloud (HAC) and the Virgo Overdensity (VOD) in the inner halo, which indicates that there is an over-density structure distributed at low zenithal angles. Finally, we find that the over-density structure in the outer halo (\(r>50\,{\rm kpc}\)) is shell-shaped and relatively metal-rich compared to the outer background halo. We conclude that the shells could be the apocenter pile-ups of the high-eccentricity GSE merger, which is supported by previous numerical simulations.
keywords: Galaxy: halos - Galaxy: structure - stars: variable: RR Lyrae
## 1 Introduction
Accounting for \(\sim 1\%\) of the total stellar mass, the stellar halo plays a particularly crucial role in understanding the early formation of the Galaxy due to the dynamical and chemical fossil record that its stars preserve. Astronomers have long sought to constrain models for the formation and evolution of the Milky Way (MW) on the basis of the stellar and globular cluster populations and unrelaxed substructures that it contains. In the MW, the observed substructures, such as overdensities and clusters in integrals of motion (Helmi et al., 2018; Myeong et al., 2019; Koppelman et al., 2019; Naidu et al., 2020; Malhan et al., 2022), indicate that the halo was likely built purely via mergers. One significant merger, the Sgr merger (Ibata et al., 1994; Majewski et al., 2003; Belokurov et al., 2006; Hernitschek et al., 2017; Sesar et al., 2017; Li et al., 2019; Bellazzini et al., 2020), generates the Sgr stream under the influence of tidal gravity, which can be divided into leading and trailing tails with distinct apocenters and pericenters; the metallicity distributions of the trailing tail in the south and north are significantly different (Belokurov et al., 2014). After the data releases of the _Gaia_ satellite, our understanding of the Galactic formation history has been revolutionized. One of the most insightful findings is a major merger of a dwarf galaxy with the MW progenitor at redshift \(z=1\sim 2\) (GSE, Belokurov et al., 2018; Helmi et al., 2018). The new insight brought by GSE is that the inner (\(<30\,{\rm kpc}\)) halo was largely built from one accretion event. Members of GSE are the major constituents of the inner halo; they have high eccentricities, strongly radial orbits and near-zero rotation (Belokurov et al., 2018), and the distribution of their metallicities ([Fe/H]) has a peak ranging from -1.4 to -1.2 dex (Mackereth and Bovy, 2020; Naidu et al., 2020; Das et al., 2020).
However, GSE additionally includes many retrograde stars, which are actually derived from another merger called Sequoia (Myeong et al., 2019). In addition to the above three significant accretion events, there are many other accretion events, such as Cetus (Newberg et al., 2009; Yuan et al., 2019), Pontus (Malhan et al., 2022), LMS-1/Wukong (Naidu et al., 2020; Yuan et al., 2020; Malhan et al., 2021), Arjuna (Naidu et al., 2020), I'itoi (Naidu et al., 2020), Thamnos 1 and 2 (Koppelman et al., 2019), Kraken/Koala (Kruijssen et al., 2020; Forbes, 2020) and the Helmi Streams (Helmi et al., 1999). At the moment, the composition of the distant (\(r>40\,{\rm kpc}\)) halo is largely unknown due to the low quality of astrometry and photometry at such large distances, although it is important for understanding the assembly history of the MW.
In general, the density profile of the Galactic stellar halo can reflect the accretion of the MW through apocentric pile-ups. For example, a coincidence between the apocenter of large-eccentricity metal-rich halo stars (likely belonging to the GSE) and the stellar halo "break radius" at galactocentric distance \(r\sim 20\,{\rm kpc}\) was found by Deason et al. (2018). Based on this fact, they suggest that the break resulted from the apocenter pile-up of high-eccentricity stars through a massive accretion event, which is exactly the GSE subsequently identified by Helmi et al. (2018) and Belokurov et al. (2018). So far, many models have been used to constrain the shape of the Galactic halo, such as single power-law (SPL, Sesar et al., 2013), broken power-law (BPL, Xue et al., 2015), double power-law (DPL), cored power-law (CPL, Iorio et al., 2018) and Einasto (EIN, Hernitschek et al., 2018) profiles. In addition, a twice-broken power-law (TBPL) profile has also been adopted in several works (e.g. Deason et al., 2014; Xue et al., 2015; Han et al., 2022). A census of previous studies of the density profile was presented in the review by Bland-Hawthorn and Gerhard (2016). However, there are considerable spreads in the flattening (\(q\)) and the slope (\(n\)) of the best-fitting models reported previously, for example, \(q=0.2\longrightarrow 0.8,n=-4.2\) (Xue et al., 2015), \(q=0.39\longrightarrow 0.81,n=-4.7\) (Das et al., 2016), \(q=0.57\longrightarrow 0.84,n=-2.96\) (Iorio et al., 2018) and \(q=0.5\longrightarrow 1.0,n=-3.15\) (Miceli et al., 2008), which suggests that the SPL and BPL cannot reflect the true underlying distribution.
Despite these significant advancements, the density profile of the outer halo has not yet been well constrained due to a lack of adequate and accurate datasets. Various tracers, such as blue horizontal branch stars (Deason et al., 2011; Das et al., 2016), K giants (Xue et al., 2015) and main sequence turnoff stars (Juric et al., 2008), have been used to study the radial density profile. Pulsating horizontal branch stars, also known as RR Lyrae (RRL), are mostly metal-poor and old (age \(>9-10\) Gyr) with low masses (\(0.6-0.8\,\mathrm{M}_{\odot}\)); because their surfaces contract and expand regularly with periods shorter than a day, their radial velocities are biased by a range of \(40-70\,\mathrm{km}\,\mathrm{s}^{-1}\) (Liu, 1991; Drake et al., 2013). RRL serve as standard candles to measure distance (Muraveva et al., 2018; Li et al., 2022) and are intrinsically bright tracers of the Galactic halo (Sesar et al., 2013, 2017; Hernitschek et al., 2017; Iorio et al., 2018; Hernitschek et al., 2018; Iorio and Belokurov, 2019; Ramos et al., 2020; Iorio and Belokurov, 2021). The _Gaia_ DR3 (Gaia Collaboration et al., 2022) from the _Gaia_ space observatory (Gaia Collaboration et al., 2016) provides a large sample of 270,905 RRL with high completeness, including 174,947 RRab, 93,952 RRc and 2,006 RRd (Clementini et al., 2022); based on this dataset, we therefore study the accretion history of the halo by analyzing the fitting results of the density profile and metallicity.
It is well known that stars on very large circular orbits keep nearly constant speeds, while stars on extremely flat elliptical orbits move rapidly through their points of closest approach and slow down at their furthest extent. The inevitable slow-down leads to a build-up of stars at apocenter. With the aid of a simulation in the context of the cold dark matter (CDM) model, Cooper et al. (2011) found that the disruption of a satellite system on a near-radial orbit can create the observed "shell" structures in the halo of the nearby galaxy NGC 7600. The two apocenter pile-ups, corresponding to a 'double-break' at 15-18 kpc and 28-30 kpc, are attributed to the Hercules-Aquila Cloud (HAC, Belokurov et al., 2007) and the Virgo Overdensity (VOD, Vivas et al., 2001), which are both from the GSE merger (Perottoni et al., 2022). Furthermore, based on the radial density and angular momentum distributions, Naidu et al. (2021) applied N-body galaxy simulations of a merger similar to GSE to study its accretion by the MW, and predicted that there are two breaks, at 15-18 kpc and 30 kpc, when using a TBPL to fit the density profile, which coincide with the final two apocenters of the GSE before it fully merged with the MW.
In this paper, we explore the shell-shaped structure by fitting the density profile of the RRab sample, with precise distances, using the improved broken power law. In Section 2, we describe the selection of the two initial samples, the new broken density profile and the two fitting methods, namely the goodness-of-fit fitting method (GFFM) and the classical fitting method, as well as how we obtain the candidate members of the shells. We analyze the best-fitting results and discuss the metallicity distribution and its dispersion as a function of radius in Section 3. The conclusions and summary are presented in Section 4.
## 2 Methods
### Dataset and selection criteria
In this study, we adopt the left-handed Galactocentric reference frame (Iorio et al., 2018). The Cartesian coordinates are indicated by \(x\), \(y\), and \(z\); the cylindrical and spherical radii are \(R\) and \(r\), respectively; the zenithal and azimuthal angles are described by \(\theta\) and \(\phi\); and the Galactic longitude and latitude are denoted \(l\) and \(b\), respectively. The Sun is located at \(x_{\odot}=R_{\odot}=8.13\,\mathrm{kpc}\) (GRAVITY Collaboration et al., 2018), and \(z_{\odot}=0\,\mathrm{kpc}\) (Iorio et al., 2018). We adopt the Sun's peculiar motion as \((U_{\odot},V_{\odot},W_{\odot})=(-11.10\pm 1.23,12.24\pm 2.05,7.25\pm 0.63)\,\mathrm{km}\,\mathrm{s}^{-1}\) (Schonrich et al., 2010), and the local standard of rest (LSR) velocity as \(V_{\mathrm{LSR}}=238\pm 9\,\mathrm{km}\,\mathrm{s}^{-1}\) (Schonrich, 2012).
Here, only the RRab stars have been considered due to their good completeness. Since various checks on the derived photometric metallicities and distances have been done for the globular clusters and Magellanic Clouds (e.g. Li et al., 2022), they are sufficiently accurate to investigate structures in the stellar halo. We use their method to calculate the metallicities and 3D positions of our sample stars, but we apply the selection criteria related to the proper motions and positions of the Magellanic Clouds in Iorio and Belokurov (2019) to select our member stars and correct the _Gaia_ G-magnitudes for dust reddening. The median error of our distances is \(0.068^{+0.016}_{-0.006}\), and the distances and metallicities of the Magellanic Clouds are consistent with those presented by Li et al. (2022). Derivation of metallicity requires the fundamental period \(P\) and the phase difference between the third and the first harmonics of the light-curve decomposition, \(\Phi_{31}\); the [Fe/H]-\(P\)-\(\Phi_{31}\) relation for RRab stars (Li et al., 2022) is expressed in equation (1).
Figure 1: The \(R-|z|\) distributions of our total sample (top) and clean sample (bottom). The color bar on the right side indicates the number density.
The absolute magnitudes (\(M_{\rm G}\)) can be estimated using the \(M_{\rm G}-[{\rm Fe}/{\rm H}]\) relation in Li et al. (2022), namely equation (2). Iorio & Belokurov (2019) and Vasiliev et al. (2021) assumed absolute magnitudes to be constant when calculating distances. Ramos et al. (2020) assigned an average metallicity of -1.61 dex to those stars without available metallicities. In our study, for the \(\sim 60,000\) stars without \(\Phi_{31}\), these values are drawn from their overall 2D distribution over the whole _Gaia_ catalogue, similar to the approach described in Iorio & Belokurov (2021), using the extreme-deconvolution algorithm (XDGMM, Bovy, 2015).
\[\begin{split}[{\rm Fe}/{\rm H}]=&(2.669\pm 0.039) +(-8.641\pm 0.076)\times P\\ +&(-0.907\pm 0.022)\times\Phi_{31}+(0.794\pm 0.034) \times P\times\Phi_{31}\\ +&(0.363\pm 0.005)\times(\Phi_{31})^{2},\end{split} \tag{1}\]
\[M_{\rm G}=(0.254\pm 0.009)[{\rm Fe}/{\rm H}]+(1.002\pm 0.012). \tag{2}\]
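To make the pipeline concrete, the sketch below implements equations (1) and (2) together with the standard distance modulus; the example star's numbers are hypothetical, and the dereddened magnitude \(G_{0}\) is assumed to be available.

```python
import numpy as np

def feh_from_light_curve(P, phi31):
    """Photometric metallicity from the [Fe/H]-P-Phi31 relation, equation (1)."""
    return (2.669 - 8.641 * P - 0.907 * phi31
            + 0.794 * P * phi31 + 0.363 * phi31 ** 2)

def distance_kpc(G0, feh):
    """Heliocentric distance from the dereddened G magnitude,
    using the M_G-[Fe/H] relation of equation (2)."""
    MG = 0.254 * feh + 1.002
    return 10.0 ** ((G0 - MG + 5.0) / 5.0) / 1000.0  # distance modulus, pc -> kpc

# hypothetical RRab star: P = 0.56 d, Phi31 = 2.4 rad, G0 = 15.8 mag
feh = feh_from_light_curve(0.56, 2.4)
print(f"[Fe/H] = {feh:.2f}, d = {distance_kpc(15.8, feh):.1f} kpc")
```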
In this work, the turning points of metallicity derived from these \(\sim 60,000\) stars without \(\Phi_{31}\) are included among those turning points from stars with \(\Phi_{31}\). It needs to be noted that the mock metallicities (\([{\rm Fe}/{\rm H}]_{\rm mean}\sim-1.43\)) are generally higher than those derived from \(P\) and \(\Phi_{31}\) (\([{\rm Fe}/{\rm H}]_{\rm mean}\sim-1.64\)); however, this does not affect the accuracy of the distances. We make the mock metallicities close to the fitting curve between the metallicity and \(r\), which only results in a relative change of \(0.0239^{+0.0149}_{-0.0089}\) in distance, so the distances derived from the mock metallicities are accurate enough to explore all obvious break radii caused by apocenter pile-ups.
Regions with high dust extinction can bring some uncertainties in the study of the stellar distribution in the Galaxy. The spatial cuts to exclude the Galactic disc and the regions with poor completeness and high dust extinction at low latitudes are as follows:
\[S_{b}=\begin{cases}0,&|b|\ <\ 10^{\circ}\\ 1,&\text{else}.\end{cases} \tag{3}\]
\[S_{z}=\begin{cases}0,&|z|\ <\ 6\,{\rm kpc}\\ 1,&\text{else}.\end{cases} \tag{4}\]
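In code, these two cuts reduce to simple boolean masks; a minimal sketch (the array names are placeholders):

```python
import numpy as np

def disc_extinction_mask(b_deg, z_kpc):
    """Keep stars passing the cuts of equations (3) and (4):
    |b| >= 10 deg and |z| >= 6 kpc."""
    return (np.abs(b_deg) >= 10.0) & (np.abs(z_kpc) >= 6.0)
```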
We apply the selection criteria from Iorio & Belokurov (2021) to remove the most obvious compact structures, including artifacts/contaminants, globular clusters, dwarf satellites (Mateu et al., 2018), the Sgr dwarf (not including the Sgr stream), Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC), but we additionally consider the selection criteria for globular clusters:
\[\left\{\begin{array}{l}|\mu_{\alpha}-\mu_{\alpha,\rm gc}|\ <\ 10\sigma_{\mu_{\alpha,\rm gc}},\\ |\mu_{\delta}-\mu_{\delta,\rm gc}|\ <\ 10\sigma_{\mu_{\delta,\rm gc}}.\end{array}\right. \tag{5}\]
where \(\mu_{\alpha,\rm gc}\), \(\mu_{\delta,\rm gc}\) are the proper motions, and \(\sigma_{\mu_{\alpha,\rm gc}}\), \(\sigma_{\mu_{\delta,\rm gc}}\) are their errors, respectively. The spatial positions, tidal radii and half-light radii of the globular clusters used in this work come from the Harris catalogue1(Harris, 1996), while their proper motions and uncertainties are taken from the catalogue2 from Vasiliev & Baumgardt (2021).
Footnote 1: [https://physics.mcmaster.ca/-harris/Databases.html](https://physics.mcmaster.ca/-harris/Databases.html)
Footnote 2: [https://github.com/Galacticdynamics-Oxford/GaiaTools](https://github.com/Galacticdynamics-Oxford/GaiaTools)
In order to excise the globular clusters without _Gaia_ proper motions (Vasiliev & Baumgardt, 2021) and all dwarf galaxies (Mateu et al., 2018), the selection function is as follows:
\[S_{\rm dwgc}=\begin{cases}0,&\text{spatial conditions (dwgc)}\\ 1,&\text{else}.\end{cases} \tag{6}\]
For the globular clusters with proper motions, the selection function is given as:
\[S_{\rm gc}=\begin{cases}1-f_{\rm gc},&\text{spatial conditions (gc)}\\ 1,&\text{else}.\end{cases} \tag{7}\]
where \(f_{\rm gc}\) is the number-ratio of selected stars of all conditions versus only under spatial conditions for each globular cluster. The selection function to cut the Sgr dwarf galaxy is given as:
\[S_{\rm Sgr}=\begin{cases}1-f_{\rm Sgr},&\text{spatial conditions (Sgr)}\\ 1,&\text{else}.\end{cases} \tag{8}\]
where \(f_{\rm Sgr}\) is the ratio of selected stars considering all conditions versus only under spatial conditions in each 2D (\(\widetilde{B}-\widetilde{B}_{\rm Sgr}\)) versus \(\left(\widetilde{\Lambda}-\widetilde{\Lambda}_{\rm Sgr}\right)\) bin (\(1^{\circ}\times 1^{\circ}\)) within \(|\widetilde{B}-\widetilde{B}_{\rm Sgr}|<\ 9^{\circ}\) and \(|\widetilde{\Lambda}-\widetilde{\Lambda}_{\rm Sgr}|<50^{\circ}\), where \(\widetilde{B}\) and \(\widetilde{\Lambda}\) are the latitude and longitude in the coordinate system aligned with the Sgr stream as defined in Belokurov et al. (2014), and \(\widetilde{B}_{\rm Sgr}=4.24^{\circ}\) and \(\widetilde{\Lambda}_{\rm Sgr}=-1.55^{\circ}\) represent the position of the Sgr dwarf. The selection function of outer LMC and SMC is given as:
\[S_{\rm outer\ LMC,SMC}=\begin{cases}1-f_{\rm LMC},&\text{spatial conditions (outer LMC)}\\ 1-f_{\rm SMC},&\text{spatial conditions (outer SMC)}\\ 1,&\text{else}.\end{cases} \tag{9}\]
where \(f_{\rm LMC}\) is the ratio of selected stars considering all conditions versus only under spatial conditions in each 1D bin (\(1^{\circ}\)) within \(5^{\circ}<\eta_{\rm LMC}<16^{\circ}\) (\(\eta_{\rm LMC}\) is the angle with the LMC core), and \(f_{\rm SMC}\) is the similar ratio in each 1D bin (\(1^{\circ}\)) within \(5^{\circ}<\eta_{\rm SMC}<12^{\circ}\) (\(\eta_{\rm SMC}\) is the angle with the SMC core). Note that the spatial conditions (outer SMC) denote the spatial region of the outer SMC minus its intersection with that of the outer LMC. The selection function to excise the inner LMC and SMC is given as:
\[S_{\rm inner\ LMC,SMC}=\begin{cases}0,&\text{spatial conditions (inner LMC)}\\ 0,&\text{spatial conditions (inner SMC)}\\ 1,&\text{else}.\end{cases} \tag{10}\]
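The selection functions above share one pattern: stars inside a structure's spatial footprint are either removed outright or down-weighted by \(1-f\). A minimal sketch of that pattern (the inputs and the way \(f\) is tabulated are simplified placeholders):

```python
import numpy as np

def downweight(in_region, f):
    """Equations (7)-(9): weight 1 - f inside the spatial region, 1 elsewhere."""
    return np.where(in_region, 1.0 - f, 1.0)

def excise(in_region):
    """Equations (6) and (10): weight 0 inside the spatial region, 1 elsewhere."""
    return np.where(in_region, 0.0, 1.0)

def f_ratio(selected_all, selected_spatial):
    """Number ratio of stars passing all cuts versus the spatial cuts only."""
    n_spatial = np.count_nonzero(selected_spatial)
    return np.count_nonzero(selected_all) / n_spatial if n_spatial else 0.0
```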
After the aforementioned selections, we obtain 32,019 stars (31,707 RRab stars with the _Gaia_ proper motions) within 116 kpc from the Galactic centre as our total sample, as shown in the top panel of Figure 1, which will be used in the fitting based on the goodness-of-fit method (see Section 2.3).
In order to obtain a complete sample, we remove the globular clusters, the Sgr dwarf galaxy, the SMC and the LMC not only with spatial cuts but also with proper-motion and G-magnitude cuts. However, some breaks in the density profile may be induced by the proportional reductions in their selection functions. In order to rule out these potential possibilities, we only use spatial cuts to remove them. Therefore, for the globular clusters, the SMC and the LMC, the corresponding number-ratio of selected stars \(f\) in their selection functions will be set to 1, and the selection function of the Sgr dwarf galaxy and stream is as follows:
\[S_{\rm Sgr}=\begin{cases}0,&|\widetilde{B}|\,<\,11^{\circ}\ {\rm and}\ D_{\rm sun}\,>\,15\,{\rm kpc}\\ 1,&\text{else}.\end{cases} \tag{11}\]
The best-fitting parameters are reported as the 16th, 50th and 84th percentiles, as listed in Table 1, which can provide a reference for the classical fitting method.
### Classical fitting method
This method is generally used in previous studies, and we find that it reaches the same conclusions as GFFM, which excludes the influence of the choice of fitting method and several of the possible explanations for the breaks mentioned in Section 2.1. We divide our clean sample into five bins to ensure that each bin contains the possible break radii obtained by GFFM. Furthermore, we use SPL, SBPL and the new doubly broken power law (DBPL) to fit the data in order to further confirm that the determined breaks are real, and \((a,p,\gamma,\eta,\beta)\) are free parameters due to the acceptable computation time.
Figure 4: Goodness of fit as a function of \(r_{\rm break}\) and \(q\). The gray dashed lines represent the boundaries of all bins.
Figure 3: Break radius \(r_{\rm break}\) as a function of Goodness of fit and its uncertainty (gray background). Each dashed line represents the highest goodness-of-fit (\(\rm GF_{max}\)) of SPL in each bin, which is listed in Table 1 as the threshold of goodness-of-fit (\(\rm GF_{i}\)). Note that the \(\rm GF_{max}\) value of SPL in the bin of \(36-46\,\rm kpc\) is negative and therefore not used.
Figure 2: Classical BPL density profile (red dashed line) and SBPL density profile with various \(a\) (black solid lines with various degrees of transparency) as a function of the elliptical radius \(r_{e}\). Among them, the parameters we substitute into the SBPL model are listed in Table 1.
The normalized likelihood for the clean sample is as follows:
\[\ln\mathcal{L}\left(r,\theta,\phi|\Theta\right)=\sum_{i=1}^{n}\ln\mathcal{L}(r_{i },\theta_{i},\phi_{i}|\Theta), \tag{14}\]
\[\ln\mathcal{L}(r_{i},\theta_{i},\phi_{i}|\Theta)=\ln\frac{\rho_{\rm halo}(r_{i},\theta_{i},\phi_{i}|\Theta)S_{\rm all}(r_{i},\theta_{i},\phi_{i})|\mathbf{J}_{i}|}{\iiint_{V}\rho_{\rm halo}(r,\theta,\phi|\Theta)S_{\rm all}(r,\theta,\phi)|\mathbf{J}|\,dr\,d\theta\,d\phi}, \tag{15}\]
\[S_{\rm all}=\prod_{j=1}^{k}S_{j}. \tag{16}\]
where \(S_{\rm all}\) represents the product of all the above-mentioned selection functions in Section 2.1. The Jacobian term \(|\mathbf{J}|=r^{2}\sin\theta\) reflects the transformation from (\(x\), \(y\), \(z\)) to (\(r\), \(\theta\), \(\phi\)) coordinates. Note that \(f=1\) in the selection functions of the SMC, LMC and globular clusters, and the selection function of the Sgr dwarf galaxy and stream follows equation (11). Here, a prior is adopted on the basis of previous studies (e.g., Hernitschek et al., 2018; Iorio et al., 2018) as follows:
\[\ln p\left(\Theta\right)=\left\{\begin{array}{rl}0,&n\in(1,6),q\in(0,1),r_{ \rm break}\in(r_{\rm down},r_{\rm up}),\\ &\beta,\,\eta\,\text{and}\,\lambda\in(-0.5\pi,0.5\pi),a\,\text{and}\,p\in(1,+ \infty)\\ -\infty,&\text{else}.\end{array}\right. \tag{17}\]
where \(r_{\rm down}\) and \(r_{\rm up}\) denote the lower and upper boundaries of each bin, respectively. We perform all integrations in this work using the _vegas_ algorithm (Lepage, 1978) through its Python implementation3, in which _nitn_ is set to 10 and _neval_ is set to 1000. The final estimate of the integral is obtained from the average of _nitn_ _vegas_ runs with _neval_ integrand evaluations each. We sample the posterior probability over the parameter space with Goodman & Weare's Affine Invariant Markov Chain Monte Carlo (MCMC, Goodman & Weare, 2010) using the Python module _emcee_ (Foreman-Mackey et al., 2013). The final results are summarized in Table 1.
Footnote 3: [https://github.com/gplepage/vegas](https://github.com/gplepage/vegas)
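The following sketch shows how the two ingredients fit together: vegas evaluates the normalization integral in the denominator of equation (15), and emcee samples the posterior. The density model, selection function, prior, initial guess and bin boundaries (`rho_halo`, `S_all`, `log_prior`, `theta0`, `r_down`, `r_up`, `stars`) are placeholders standing in for the quantities defined above, so this is an illustrative outline rather than the exact implementation.

```python
import numpy as np
import vegas
import emcee

def norm_integral(theta):
    """Denominator of equation (15): the model integrated over the bin volume."""
    def integrand(x):
        r, th, ph = x
        # rho_halo and S_all are placeholders for the density model and selection
        return rho_halo(r, th, ph, theta) * S_all(r, th, ph) * r**2 * np.sin(th)
    integ = vegas.Integrator([[r_down, r_up], [0.0, np.pi], [0.0, 2.0 * np.pi]])
    return integ(integrand, nitn=10, neval=1000).mean

def log_posterior(theta, stars):
    if not np.isfinite(log_prior(theta)):       # flat prior of equation (17)
        return -np.inf
    r, th, ph = stars
    ln_num = np.sum(np.log(rho_halo(r, th, ph, theta) * S_all(r, th, ph)
                           * r**2 * np.sin(th)))
    return ln_num - len(r) * np.log(norm_integral(theta))

nwalkers, ndim = 32, len(theta0)                # theta0: initial parameter guess
p0 = theta0 + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(stars,))
sampler.run_mcmc(p0, 5000, progress=True)
samples = sampler.get_chain(discard=1000, flat=True)
```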
### Selection of the new merger candidates
In each bin, we use the new broken power law to fit the data with the two methods, and preliminarily determine the distribution of shell-like structures beyond \(r\sim 50\) kpc according to the break radius in the density profile. To explore high-purity candidates of shells from our clean sample, we cluster the objects in each bin twice with the HDBSCAN clustering algorithm (Hierarchical Density-Based Spatial Clustering of Applications with Noise, Campello et al., 2013), for which we use the HDBSCAN Python package4 (McInnes et al., 2017). (When assigning stars to bins in spherical \(r\), the bin edges are selected so that each bin contains \(\rm N_{stars}=500\) objects, because the clustering algorithm requires enough spatial information as input.) The specific steps, sketched in code below, are as follows: (i) we cluster the stars in the Galactic longitude and latitude (\(l,b\)) space, and then remove stars labelled as noise (\(-1\)) or with clustering probabilities less than 0.6; (ii) in the heliocentric distance space, we cluster the stars retained from step (i) in each bin, and then remove those stars labelled as noise (\(-1\)) or with clustering probabilities less than 0.8; (iii) among the remaining stars from step (ii), we select the stars located outside \(r\sim 50\) kpc.
Footnote 4: [https://github.com/scikit-learn-contrib/hdbscan](https://github.com/scikit-learn-contrib/hdbscan)
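A compact sketch of steps (i)-(iii); the minimum cluster size is an assumed tuning parameter, and the input arrays are placeholders:

```python
import numpy as np
import hdbscan

def shell_candidates(l_deg, b_deg, d_sun_kpc, r_kpc, min_cluster_size=10):
    # step (i): cluster on the sky in (l, b)
    sky = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size)
    sky.fit(np.column_stack([l_deg, b_deg]))
    keep = (sky.labels_ != -1) & (sky.probabilities_ >= 0.6)

    # step (ii): cluster the survivors in heliocentric distance
    dist = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size)
    dist.fit(d_sun_kpc[keep].reshape(-1, 1))
    keep2 = (dist.labels_ != -1) & (dist.probabilities_ >= 0.8)

    # step (iii): retain candidates beyond r ~ 50 kpc
    idx = np.flatnonzero(keep)[keep2]
    return idx[r_kpc[idx] > 50.0]
```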
## 3 Results
Previous studies have shown that an accreted galaxy on a near-radial orbit, such as GSE, will result in ring-like or shell-like structures, corresponding to apocenter pile-ups in the density map or breaks in the density profile. These are located, respectively, at the penultimate and final apocentric distances before the satellite fully merged with the MW, as well as at the edges of the ultra-dense cores in the substructures after merging. Therefore, we may use the breaks in the broken power law to find shell-shaped structures, which are remnants of an accreted galaxy on a near-radial orbit.
### Analysis of results by GFFM
The six breaks explored by GFFM are shown in the top panel of Figure 5. We consider that the inner and outer boundaries of the apocentric distance for the GSE merger in Malhan et al. (2022) are equivalent to the GSE penultimate and final apocentric distances in Naidu et al. (2021) for the following reasons: Malhan et al. (2022) used globular clusters, dwarf galaxies, and stellar streams that well preserve the dynamical properties of their progenitor galaxies as their sample; Naidu et al. (2021) predicted that two breaks occur at the penultimate and final apocenters of the GSE, which are consistent with the upper and lower boundaries found by Malhan et al. (2022); and Han et al. (2022) recently confirmed the predictions of Naidu et al. (2021) by fitting a tilted triaxial ellipsoid with a doubly broken power law
\begin{table}
\begin{tabular}{l c c c c c} \hline range (kpc) & \(r_{\rm break}\) (kpc) & \(n\) & \(q\) & \(\delta n\) & \(\text{GF}_{\rm t}\) \\ \hline (6,26) & \(24^{+2}_{-2}\) & \(2.1^{+0.3}_{-0.2}\) & \(0.68^{+0.08}_{-0.10}\) & \(0.5^{+0.3}_{-0.2}\) & 0.811 \\ (26,36) & \(31^{+3}_{-3}\) & \(2.9^{+0.3}_{-0.2}\) & \(0.86^{+0.10}_{-0.11}\) & \(0.5^{+0.2}_{-0.2}\) & 0.628 \\ (36,46) & \(43^{+2}_{-2}\) & \(3.4^{+0.1}_{-0.1}\) & \(0.81^{+0.11}_{-0.09}\) & \(-0.3^{+0.1}_{-0.1}\) & 0 \\ (46,76) & \(57^{+7}_{-7}\) & \(3.2^{+0.1}_{-0.1}\) & \(0.84^{+0.10}_{-0.10}\) & \(0.2^{+0.1}_{-0.1}\) & 0.746 \\ (76,96) & \(91^{+3}_{-7}\) & \(3.4^{+0.1}_{-0.1}\) & \(0.91^{+0.07}_{-0.07}\) & \(0.2^{+0.2}_{-0.1}\) & 0.673 \\ (96,116) & \(107^{+6}_{-7}\) & \(3.8^{+0.1}_{-0.2}\) & \(0.90^{+0.07}_{-0.07}\) & \(0.4^{+0.2}_{-0.2}\) & 0.281 \\ \hline \end{tabular}
\end{table}
Table 1: The best-fitting parameters obtained by GFFM.
Figure 5: Cylindrical map showing all break radii and the distribution of the number density of the total sample. The black dashed lines represent all break radii explored by GFFM, and the white dashed lines represent the inner and outer boundaries of the GSE apocenters (Malhan et al., 2022).
along its flattened radius and performing N-body simulations; the apocentric distance should move closer to the Galactic centre over time before the satellite is completely accreted. As shown in Figure 5, the two apocenters of the high-eccentricity GSE merger (white dashed lines) closely match the two breaks at \(r_{\rm e}=24\) and \(31\,\)kpc. It is clear that a reverse break at \(r_{\rm e}\sim 43\,\)kpc with negative \(\delta n\) is caused by the Sgr stream at large zenithal angles. A structure composed of several overdensities at low zenithal angles is shown in the black box in Figure 5. After removing the Sgr stream, the intersecting area between the inner over-density area and the outer structure at low zenithal angles (\(40\,\)kpc \(<R<60\,\)kpc and \(|z|<20\,\)kpc) is still visible, as shown in the bottom panel of Figure 1, so this structure may lead to the break at \(r_{\rm e}\sim 57\,\)kpc in the density profile. However, in Figure 5, it can be found that the break at \(r_{\rm e}\sim 57\,\)kpc is just at the outer boundary of the over-density region in the Sgr stream, so the break could also be attributed to the Sgr stream. The break at \(r_{\rm e}\sim 91\,\)kpc could also derive from the outer boundary of the structure at low zenithal angles. From Figure 5 we can see that the density beyond the farthest break (\(r_{\rm e}\sim 107\,\)kpc) is extremely sparse, which is caused by the observation limit. In addition, from Table 1 it can be seen that the inner halo is very flat (small value of \(q\)).
### Analysis of results by the classical fitting method
It is found that the flattening in the outer halo is strongly affected by overdensities due to the low-density background halo, so we also fit the clean sample. In order to further confirm that the structure does not belong to the Sgr stream, which extends from 20 to over 100 kpc (Belokurov et al., 2014; Sesar et al., 2017; Hernitschek et al., 2017; Ramos et al., 2020), and to rule out the influences of remnants of the LMC, SMC and globular clusters, we apply the classical fitting method to the clean sample and study the variation of the various parameters with \(r\) to confirm the spatial extent occupied by the structure at low zenithal angles. In Table 1 we summarize the results obtained by fitting the clean sample with the SPL, SBPL and DBPL density profiles including the additional five free parameters (\(a\), \(p\), \(\gamma\), \(\eta\), \(\beta\)). In Figure 6, we show the members of all substructures obtained by the HDBSCAN clustering algorithm and all triaxial ellipsoids, namely all breaks. We find that the direction of the major axis in the region of \(r<66\,\)kpc seems to be consistent with the elongation direction of the diffuse substructure in the inner halo (\(<36\,\)kpc), which is in agreement with previous studies. Iorio and Belokurov (2019) found that the two large diffuse overdensities, namely HAC and VOD, are aligned with the semi-major axis of the halo within \(\sim 30\,\)kpc from the Galactic centre. The major axis of the triaxial ellipsoid is nearly coincident with the final two apocenters of the GSE in the N-body simulation (Naidu et al., 2021), indicating that the major axis is dominated by the penultimate and final apocenters of the GSE merger.
Here we evaluate the validity and reliability of breaks in each interval and compare the results of different models to determine which of them gives the best description of the data by the Bayesian evidence. Assuming that the posterior distributions are approximately Gaussian, the Bayesian evidence can be estimated by the Bayesian Information Criterion (BIC, Schwarz, 1978) defined as
\[{\rm BIC}=-2{\rm ln}({\cal L}_{\rm max})+k{\rm ln}N_{\rm s} \tag{18}\]
where \(k\) is the number of free parameters, \(N_{\rm s}\) is the data sample size and \({\cal L}_{\rm max}\) is the maximum value of the likelihood. The BIC is commonly utilized for comparing models of varying parameter dimensions, with preference given to the model exhibiting the lowest BIC value. In Table 1, we notice that the BIC values for SBPL in \((6,26)\,\)kpc, \((26,36)\,\)kpc and \((66,96)\,\)kpc are smaller than those for SPL, which indicates that the SBPL model is more suitable for the data than SPL in these intervals, resulting in increased confidence about the presence of breaks within these intervals.
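As a worked illustration of equation (18), a model comparison reduces to a few lines of code; the parameter counts and likelihood values below are hypothetical:

```python
import numpy as np

def bic(lnL_max, k, N):
    """Bayesian Information Criterion, equation (18)."""
    return -2.0 * lnL_max + k * np.log(N)

# e.g. SPL with k = 5 versus SBPL with k = 9 on the same N stars;
# the model with the smaller BIC is preferred
print(bic(-1000.0, 5, 2000) - bic(-990.0, 9, 2000))
```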
It is known that the break at \(r\sim 20\,\)kpc confirmed by Deason et al. (2018) is caused by the GSE apocenter pile-up; however, some previous studies (Xue et al., 2015; Iorio et al., 2018) have shown that the BIC value obtained for BPL is larger than that for SPL. The BIC is similar to the maximum-likelihood criterion, but it takes into account a penalty depending on the number of free parameters \(k\), such that for two models with the same likelihood, the one with more parameters is penalized. In other words, a model with more parameters will be unnecessary if two models, with distinct numbers of free parameters, fit the data comparably well. Therefore, we cannot deny the presence of breaks just based on a larger BIC. In the region of \(36\,{\rm kpc}<r<66\,{\rm kpc}\), the BIC of the SPL model is smaller than those obtained for SBPL and DBPL due to large break scales; for example, the break scales with \(a=-0.87^{+0.41}_{-0.35}\) in SBPL and \(a_{1}=-0.75^{+0.62}_{-0.48}\), \(a_{2}=-0.66^{+0.88}_{-0.64}\) in DBPL (\(a\sim-0.87\), \(-0.75\) and \(-0.66\) mean that \(n\) can only increase by \(0.4\delta n\) within \(r_{\rm break}\pm 5.39\), \(4.09\) and \(3.32\,\)kpc, respectively) are much larger than those of all breaks except the farthest break caused by the observation limit. In Figures 7-9, it can be found that the major axis direction in SBPL, namely \((\gamma,\eta)=(0.42^{+0.37}_{-0.42},0.01^{+0.08}_{-0.09})\), is obviously different from those of SPL (\(-0.22^{+0.20}_{-0.20},0.11^{+0.06}_{-0.04}\)) and DBPL (\(-0.41^{+0.53}_{-0.50},0.16^{+0.06}_{-0.09}\)). In addition, the distribution of \(\gamma\) in DBPL is quite wide, ranging from \(-0.91\) to \(0.12\). Furthermore, we find that the density distribution in the region of \(36\,{\rm kpc}<r<66\,{\rm kpc}\) is significantly different from that in the inner halo (\(r<36\,{\rm kpc}\)); that is, its flattening suddenly increases in this interval, which is consistent with the results of the three models (\(q\sim 0.69\)). Combining the above-mentioned results shows that the breaks at \(r_{\rm e}\sim 45.25\,\)kpc and \(48.05\,\)kpc are caused by two diffuse substructures distributed in different directions, namely (\(\gamma>0\), \(\eta\sim 0\)) and (\(\gamma<0\), \(\eta\sim 0.1\)), one of which is the debris of GSE due to the tendency towards the major axis direction in the inner halo (\(r<36\,\)kpc); we also plot these two breaks in Figure 6.
Here we analyze the fitting results in \(66\,{\rm kpc}<r<96\,{\rm kpc}\). In Figure 6, we notice that the major axis with small uncertainties in \(66\,{\rm kpc}<r<96\,{\rm kpc}\) (\(\gamma>0\), \(\eta\sim 0\)) is obviously different from that caused by GSE in the inner halo (\(\gamma<0\), \(\eta\sim 0.1\)). Therefore, the diffuse overdensity corresponding to this major axis (\(\gamma>0\), \(\eta\sim 0\)) is dominant within \(66\,{\rm kpc}<r<96\,{\rm kpc}\). It could be the over-density structure in the black box, as shown in Figure 1. Its distribution at low zenithal angles produces a larger flattening in the region of \(66\,{\rm kpc}<r<96\,{\rm kpc}\) than in the inner halo (\(r<36\,{\rm kpc}\)), namely a smaller \(q\), as shown by the black ellipsoid in Figure 6. In the region of \(36\,{\rm kpc}<r<66\,{\rm kpc}\), the flattening of \(\sim 0.69\) results from the fact that the number density is extremely high at low zenithal angles but very sparse in the region without the Sgr stream at large zenithal angles, as shown in the bottom panel of Figure 1. The over-density structure distributed at lower zenithal angles could be the 'eccentric Cloudy' suggested by Johnston et al. (2008), which occupies a larger range with distances of \(20-100\,\)kpc. In Figure 6, we can see that, within \(96\,{\rm kpc}<r<116\,{\rm kpc}\), the value of \(q\) reaches a reasonable level based on the relationship between \(q\) and \(r\) in the inner halo (\(r<36\,{\rm kpc}\)), which indicates that there are only a few members of the over-density structure at low zenithal angles in this region.
We now analyze all the breaks in Figure 6: (i) The farthest break at \(r_{\rm e}\sim 104\,\)kpc could be caused by the observation limit due to its largest scale (\(a\sim-1.02\) means that \(n\) just increases by \(0.4\delta n\), i.e. \(0.4\times 1.54\), within \(104\pm 7.61\,\)kpc). (ii) The break at \(r_{\rm e}\sim 93\,\)kpc (\(a\sim-0.44\) means that \(n\) increases by \(0.8\delta n\), i.e. \(0.8\times(-0.22)\), just within \(93\pm 1.12\) kpc) could correspond to some compact substructures due to its smallest scale. (iii) For the best-fitting results of DBPL in \(36\,{\rm kpc}<r<66\,{\rm kpc}\), the reverse break at \(r_{\rm e}\sim 45\,{\rm kpc}\) could be created by debris of GSE, and the other break at \(r_{\rm e}\sim 48\) kpc could be caused by some diffuse substructures at low zenithal angles. The best-fitting results for SBPL show that the major axis (i.e., \(\gamma\sim 0.42,\ \eta\sim 0.01\)) obviously deviates from that dominated by HAC and VOD in the inner halo (\(\gamma<0,\ \eta\sim 0.1\)), and that the break at \(r_{\rm e}\sim 47\,{\rm kpc}\) with \(\delta n\sim 0.21\) is very similar to the break at \(r_{\rm e}\sim 48\,{\rm kpc}\) with \(\delta n\sim 0.31\) in DBPL, indicating that unknown substructures at low zenithal angles could lead to the break around 48 kpc in the density profile and to the distinct major axis in SBPL. Based on the two breaks at \(r_{\rm e}\sim 48\,{\rm kpc}\) and \(\sim 93\,{\rm kpc}\), we are able to preliminarily locate these unknown overdensities at low zenithal angles, which contribute to the flattened halo and to the orientation of the major axis approaching (\(\gamma>0,\ \eta\sim 0\)). However, the best-fitting results for SPL show that the major axis is consistent with that dominated by HAC and VOD, indicating that there is debris from them in this range. In addition, the best-fitting results for DBPL show that the major axis is roughly consistent with that dominated by HAC and VOD, while the uncertainty is significant (i.e., \(\gamma=-0.41^{+0.53}_{-0.50}\)). This could be caused by the combined effects of unknown substructures and fragments from HAC and VOD.
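The role of the break-scale parameter can be illustrated with a generic smoothly broken power law. The sketch below uses astropy's `SmoothlyBrokenPowerLaw1D`, whose `delta` parameter plays a role analogous to our \(a\); the parameterization differs from our profile and all numbers are placeholders, not our fits.

```python
import numpy as np
from astropy.modeling.models import SmoothlyBrokenPowerLaw1D

# 'delta' controls how wide (in log r) the transition between the two
# slopes is; illustrative values only, not the paper's fitted parameters
profile = SmoothlyBrokenPowerLaw1D(amplitude=1.0, x_break=45.0,
                                   alpha_1=4.2, alpha_2=5.0, delta=0.1)
r = np.logspace(np.log10(20), np.log10(100), 200)  # kpc
rho = profile(r)
# a larger delta smears the slope change over a wider radial range,
# which is how a broad break can become hard to distinguish from a SPL
```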
Finally, we compare the results of the two methods. In the GFFM approach, the setting of \(f\) in the selection function of LMC, SMC,
Figure 6: All break radii explored by the classical fitting method, including \(r_{\rm break}\sim 22.99\) kpc, \(27.18\) kpc, \(45.25\) kpc, \(48.05\) kpc, \(92.99\,{\rm kpc}\) and \(103.77\,{\rm kpc}\), are represented by triaxial ellipsoids with major axes indicated by black solid lines. We plot the ellipsoid corresponding to the break at \(92.99\,{\rm kpc}\) in black, which has a clearly distinct major axis and flattening at a larger distance. The black dots represent all overdensities explored by the HDBSCAN clustering algorithm.
Figure 7: The probability distribution shown in the figure is obtained by fitting the SPL model to the clean sample within \(36\,{\rm kpc}<r<66\,{\rm kpc}\) by our application of the classical fitting method.
the Sgr dwarf galaxy and globular clusters is equivalent to including those substructures in the corresponding space regions, and GFFM is used for the total sample, so the slope \(n\) in the fitting results of GFFM is lower than that of the classical fitting method. In addition, fitting the total sample inevitably yields a larger \(z\)-to-\(x\) ellipsoid axial ratio \(q\), because of the clear decrease of density at large zenithal angles after removing the Sgr stream, as shown in Figure 1. Furthermore, if we fit the total sample with the classical fitting method, a smaller \(y\)-to-\(x\) ellipsoid axial ratio \(p\) should be derived in the region of \(D_{\rm sun}>15\,\)kpc, since its stellar density distribution elongates along the \(x\)-axis on the \(x\)-\(y\) plane, or equivalently flattens along the \(y\)-axis.
### The metallicity distribution as a function of radius
It is important to explore the correlation of metallicity with breaks. The metallicity derived from \(P\) and \(\Phi_{31}\) as a function of \(r\), as shown in the top panel of Figure 10, can also imply the existence of some substructures. Since the majority of stars in the region of \(r<20\,\)kpc belong to the metal-rich GSE, high-\(\alpha\) disk and in-situ halo (Naidu et al., 2020), the median metallicity is about \(\left[{\rm Fe/H}\right]\sim-1.58\,\)dex out to \(20\,\)kpc. In the region of \(20\,\)kpc \(<r<30\,\)kpc, there are many metal-poor stars belonging to LMS-1/Wukong, Sequoia, I'itoi and other small mergers, while the fraction of metal-rich stars belonging to the high-\(\alpha\) disk, in-situ halo and GSE has decreased in this
Figure 8: Similar to Figure 7, but using the SBPL model here.
range (Naidu et al., 2020), which together could determine a gentle metallicity gradient. The apocenter of the metal-rich Sgr leading tail (\(r_{\rm apo}=47.8\pm 0.5\) kpc, Belokurov et al., 2014) lies between 30 kpc and 50 kpc, and its members could be more dispersed due to their lower eccentricities. Therefore, for the clean sample without the Sgr stream (red), we can clearly see a lower metallicity than that of the total sample (black) in this range, as shown in the top panel of Figure 10. We attribute the turning points at \(r\sim 50\) kpc and 80 kpc to the over-density structure at low zenithal angles originating from the apocenter pile-ups of its progenitor galaxy. It can be inferred that there may be one apocenter pile-up at each turning point of the metallicity, such as \(r\sim 20\) kpc, 30 kpc (GSE), 40 kpc (the Sgr leading stream), 50 kpc and 80 kpc (the over-density structure at low zenithal angles). In other words, each turning point can imply a potential change in the fraction of substructure due to its apocenter pile-up, but this needs to be further validated using complete data with 6D phase-space measurements and chemical information.
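For reference, a period-\(\Phi_{31}\) metallicity calibration of the kind used here can be sketched as follows. We show the classic V-band relation of Jurcsik & Kovács (1996) purely as an illustration; the calibration adopted for the Gaia G-band light curves may differ.

```python
def feh_from_fourier(period_days, phi31):
    """Photometric metallicity of an RRab star from its pulsation period
    and Fourier phase phi31 (V band, sine convention, radians), following
    the classic Jurcsik & Kovacs (1996) relation. Shown for illustration;
    the Gaia G-band calibration used in the text may differ."""
    return -5.038 - 5.394 * period_days + 1.345 * phi31

print(feh_from_fourier(0.55, 5.1))  # about -1.15 dex, a typical halo RRab
```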
Here, we analyze the metallicity dispersion as a function of \(r\) shown in the bottom panel of Figure 10. The metallicity dispersion reaches a minimum around both \(r\sim 20\) kpc and \(\sim 32\) kpc, which also indicates that the stars there are dominated by one component, namely the apocenter pile-ups of GSE. In contrast, a mixture of multiple components with very distinct metallicities in approximately equal proportions will lead to a large dispersion. Three predominant com
Figure 9: Similar to Figure 7, but using the DBPL model here.
ponents in the region of \(6\,{\rm kpc}\,<\,r\,<\,20\,{\rm kpc}\), namely the in-situ halo, the high-\(\alpha\) disk and GSE, lead to a gentle drop in metallicity dispersion, reflecting the increase in the relative fraction of GSE and the decrease in the relative fractions of the in-situ halo and high-\(\alpha\) disk. In the region of \(20\,{\rm kpc}\,<\,r\,<\,30\,{\rm kpc}\), the mixture of the metal-poor background halo with a fraction of metal-rich GSE smaller than that in the region of \(r\approx 15-20\,{\rm kpc}\) leads to a prominent dispersion. For both the total (black) and clean (red) samples, the peak around \(r\sim 40\,{\rm kpc}\) and the rise around \(80\,{\rm kpc}\) could be attributed to some unknown structures or selection effects.
In order to visualize the metallicities of some structures, such as the Sgr stream and the over-density structure at low zenithal angles, and the metallicity distributions around breaks, we show the metallicity distribution of stars from the total sample (top) and clean sample (bottom) in the \(R-|z|\) space in Figure 11. We separate the total sample and clean sample into many cylindrical \(R\), \(|z|\) bins with an average Poisson signal-to-noise ratio of 10 using the _vorbin_ Python package (Cappellari & Copin, 2003). The top panel of Figure 11 shows that the break at \(24\,{\rm kpc}\) is close to the edge of the metal-rich area (\(r\,<\,20\,{\rm kpc}\)), and that seven metal-rich bins are located around the break at \(31\,{\rm kpc}\). These results could be the imprints of two apocenter pile-ups of GSE (Naidu et al., 2021) and lead to a gentle metallicity gradient in \(20\,{\rm kpc}<\,r\,<\,30\,{\rm kpc}\). In Figure 11, the Sgr stream at large zenithal angles between the two breaks at \(r_{\rm e}\sim 43\,{\rm kpc}\) and \(57\,{\rm kpc}\) is clearly metal-rich, and after it is removed, a large number of metal-poor background halo stars with \([{\rm Fe}/{\rm H}]\sim-1.85\) dex is exposed. Minor mergers drive faster size growth; they contribute little mass to the Galactic centre but a great deal to the outer halo (Karademir et al., 2019). In the bottom panel of Figure 11, it can be seen that the metallicity in the region corresponding to the over-density structure at low zenithal angles is significantly higher than that of the background halo, so the structure could be metal-rich, which leads to a rapid drop in metallicity beyond \(r\sim 80\,{\rm kpc}\) (i.e., its outer boundary).
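The adaptive binning step can be sketched as below; this is our illustration, with a hypothetical input file, treating each star as a "pixel" of signal 1 and Poisson noise 1, so that a bin of \(N\) stars has \({\rm S/N}=\sqrt{N}\) and a target of 10 gives roughly 100 stars per bin.

```python
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

R, z, feh = np.loadtxt("stars.txt", unpack=True)  # hypothetical input table
signal = np.ones_like(R)   # one count per star
noise = np.ones_like(R)    # Poisson noise per star

# fifth argument is the target S/N per Voronoi bin
bin_num, *_ = voronoi_2d_binning(R, np.abs(z), signal, noise, 10,
                                 plot=False, quiet=True)
# median metallicity per Voronoi bin, as mapped in Figure 11
med = [np.median(feh[bin_num == b]) for b in np.unique(bin_num)]
```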
In Figure 11, for the clean sample (bottom), the interface between the metal-rich and metal-poor regions is located at \(r\sim 40\,{\rm kpc}\), while for the total sample (top), three components, including a small structure with \([{\rm Fe}/{\rm H}]\sim-1.85\) at \((R,|z|)\sim(26,30)\) kpc, the over-density structure with \([{\rm Fe}/{\rm H}]\sim-1.70\) at low zenithal angles and the metal-rich Sgr stream with \([{\rm Fe}/{\rm H}]\sim-1.65\), are distributed around \(r\sim 40\,{\rm kpc}\). Therefore, a rapid growth of the metallicity dispersion in the region of \(30\,{\rm kpc}\,<\,r\,<\,40\,{\rm kpc}\) and a rapid drop beyond \(r\sim 40\,{\rm kpc}\) can be seen for the two initial samples, respectively. The extremely metal-rich structure at \((R,|z|)\sim(60,40)\) kpc in the total sample (top) and the interface between the over-dense, metal-rich structure at low zenithal angles and the background halo beyond \(r\sim 80\,{\rm kpc}\) in both initial samples (top and bottom) lead to a growth in the metallicity dispersion around \(r\sim 80\,{\rm kpc}\). Although the scenario described above can account for the trends of metallicity with \(r\) and in \(R-|z|\) space, as well as its dispersion with \(r\), in Figures 10 and 11, these trends could also be attributed to measurement errors or to debris of undiscovered mergers in the outer halo.
Figure 11: Cylindrical maps showing the distributions of metallicities of stars with \(\Phi_{31}\) in the total sample (top) and clean sample (bottom). The black dashed lines represent all break radii explored by GFFM, including \(r_{\rm break}\sim 24\,{\rm kpc}\), \(31\,{\rm kpc}\), \(43\,{\rm kpc}\), \(57\,{\rm kpc}\), \(91\,{\rm kpc}\) and \(107\,{\rm kpc}\). The white dashed lines represent the inner and outer boundaries of the GSE apocenters (Mathan et al., 2022).
Figure 10: Metallicity (top) and its dispersion (bottom) as functions of spherical radius \(r\). The black (red) dots represent the metallicity derived from \(P\) and \(\Phi_{31}\) of the total (clean) sample. The black (red) dashed line denotes the second-order polynomial fitted to the black (red) dots using the least-squares method. We assign stars with \(Gaia\) \(\Phi_{31}\) measurements to bins in spherical \(r\) so that each bin contains \({\rm N_{stars}}=500\) objects. We obtain the statistical errors by computing 95% confidence intervals from 10000 bootstrap resamples of the data in each bin.
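The binning-and-bootstrap procedure described in the caption of Figure 10 can be re-implemented in a few lines; the sketch below is our illustration, with placeholder variable names.

```python
import numpy as np

rng = np.random.default_rng(1)

def binned_median_feh(r, feh, nstars=500, nboot=10000):
    """Sort stars in r, group into bins of nstars, and return for each
    bin (median r, median [Fe/H], 2.5th and 97.5th bootstrap percentiles)."""
    order = np.argsort(r)
    r, feh = r[order], feh[order]
    out = []
    for i in range(0, len(r) - nstars + 1, nstars):
        chunk = feh[i:i + nstars]
        boots = np.median(rng.choice(chunk, (nboot, nstars)), axis=1)
        lo, hi = np.percentile(boots, [2.5, 97.5])
        out.append((np.median(r[i:i + nstars]), np.median(chunk), lo, hi))
    return np.array(out)
```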
### Shell-shaped structure
In order to intuitively illustrate that the over-density structure does not belong to distant streams, we show the overdensities with \(r>50\,\mathrm{kpc}\) found by the HDBSCAN clustering algorithm and the distant streams with heliocentric distances larger than \(30\,\mathrm{kpc}\) given in the Python package _galstreams_ (Mateu, 2022) in Figures 12 and 13. We find that the overdensities are mostly distributed at low latitudes, as expected, and differ from the shapes of stellar streams. In addition, their proper motions, depicted by arrows, reveal a complex velocity field.
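The clustering step itself is short; a minimal sketch with the _hdbscan_ package follows, in which the input file and the `min_cluster_size` value are placeholders rather than our adopted settings.

```python
import numpy as np
import hdbscan

# cluster candidate stars in Cartesian position space; points labelled
# -1 are noise and are discarded
xyz = np.load("rrab_xyz.npy")  # hypothetical input, shape (N, 3), in kpc
labels = hdbscan.HDBSCAN(min_cluster_size=15).fit_predict(xyz)
overdensities = [xyz[labels == k] for k in range(labels.max() + 1)]
```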
In order to focus on the over-density structure, we only show stars at low vertical distances in Figure 14, and it can be found that the structure is shell-shaped. We infer that the flattening density distribution in the range of \(36-96\,\mathrm{kpc}\), the deviation of the major
Figure 12: The distribution of the candidates of the shell-shaped structure and distant streams (heliocentric distances greater than \(30\,\mathrm{kpc}\)) in Galactic coordinates. The arrows indicate the direction of the proper motion and are color-coded according to the stars’ distance.
Figure 13: Spatial locations of the candidates of the shell-shaped structure plotted in Figure 12 and distant streams in Cartesian coordinates.
axis (\(\gamma>0\), \(\eta\sim 0\)) from the direction dominated by HAC and VOD, and the two breaks at \(r_{\rm e}\sim 48\) kpc and \(\sim 93\) kpc are attributed to the two shells in Figure 14. Some numerical simulations have shown that shells are made by mergers on radial orbits, while streams arise from more circular orbits (Johnston et al., 2008; Amorisco, 2015; Karademir et al., 2019; Pop et al., 2018). As expected, the candidates shown in Figure 14 clearly exhibit a shell-shaped or ring-shaped structure at low zenithal angles, which is very similar to the case of small orbital angle (edge-on to the host galaxy) and small impact angle (radial orbit; Karademir et al., 2019). In this case, more stars are distributed at lower zenithal angles, forming a ring, while accretion in the direction perpendicular to the host disk would give stars large vertical velocities and distribute them at higher zenithal angles. Therefore, the flattening in the outer halo indicates that accretion close to the disk plane is the more plausible scenario. Pop et al. (2018) found that shell-forming progenitors are usually accreted with a high stellar mass ratio (\(>0.1\)) on approximately radial orbits about 4-8 Gyr ago, and stripped over about 1-4 Gyr. Most of the resulting shells are phase-mixed if the satellites are accreted too early, while satellites accreted recently will not have had enough time to be stripped and form shells. Thus, the clear shell shown in Figure 14 indicates that its accretion time could be intermediate and its progenitor may be a massive galaxy, such as GSE (\(\rm M_{GSE}/\rm M_{MW}\sim 0.24\), \(8-11\) Gyr ago; Helmi et al., 2018; Belokurov et al., 2018). Considering that a major merger contributes a large stellar mass to the centre of our host galaxy (Karademir et al., 2019), and that the high-eccentricity stars in the inner halo mostly belong to GSE except for a small amount of Splash stars (Belokurov et al., 2020), the shells in Figure 14 may be caused by the radial collision of GSE, similar to the shells found by Donlon et al. (2020) in the HAC and VOD regions. Chandra et al. (2022) recently found the apocentric shells of GSE debris, forming \(60-90\) kpc counterparts to the \(15-20\) kpc shells that are known to dominate the inner halo. Therefore, we infer that the shells are likely to be the apocenter pile-ups of GSE in the outer halo, and could be associated with the Outer Virgo Overdensity (Sesar et al., 2017) and with a coherent stream of retrograde stars encircling the Milky Way from 50-100 kpc, in the same plane as the Sgr stream but moving in the opposite direction, found by Chandra et al. (2022).
## 4 Conclusion and Summary
In this paper, we use a sample of RRab stars released by Gaia DR3 to study the relationship between break radii and mergers in the Milky Way, and to explore new mergers. We apply two methods (GFFM and the classical fitting method) to fit the two initial samples with a new broken power law that includes an additional parameter describing the break scale, in order to probe all breaks. We found that \(q\) in \(36\,\rm kpc\,<\,r\,<\,96\,\rm kpc\) is much smaller than expected from the relation describing its increase with \(r\) in the region of \(r\,<\,36\) kpc. In addition, the major axis deviates significantly from the direction dominated by HAC and VOD in the region of \(66-96\) kpc, and has significant uncertainty in the range of \(36-66\) kpc. Therefore, we attribute the breaks at both \(r_{\rm e}\sim 48\) kpc and \(93\) kpc to two unknown overdensities at low zenithal angles. In the study of metallicity as a function of \(r\), we found that some turning points of metallicity have corresponding breaks, such as \(r\sim 20\) kpc and \(30\) kpc, suggesting that apocenter pile-ups of GSE are likely to lie at these locations. We also analyzed the metallicity distribution in \(R-|z|\) space, and found that the metallicity in the region of the over-density structure at low zenithal angles is richer than that of the outer background halo, indicating that the two overdensities responsible for the breaks at \(r_{\rm e}\sim 48\) kpc and \(93\) kpc are likely to be metal-rich.
Finally, we apply HDBSCAN to select the candidates belonging to the two overdensities that lead to the breaks at \(r_{\rm e}\sim 48\) kpc and \(93\) kpc, and infer that the two overdensities are shell-shaped or ring-shaped, which is consistent with previous numerical simulations of mergers on high-eccentricity orbits. We conclude that the two shells are the apocenter pile-ups of GSE in the outer halo, and are associated with the Outer Virgo Overdensity (Sesar et al., 2017) and with the coherent stream of retrograde stars encircling the Milky Way from 50-100 kpc, in the same plane as the Sgr stream but moving in the opposite direction, found by Chandra et al. (2022). Due to the lack of high-quality observational data, such as radial velocities and chemical abundances, we still do not know the dynamical properties of this structure. Therefore, here we only tentatively propose its existence; with the release of more high-precision stellar data at large distances, it can be studied in depth.
Figure 14: In order to show the shell-shaped structure at low zenithal angles more clearly, we further select them to be members in the outer halo in Figures 12 and 13. The black dots represent members satisfying \(50\,\rm kpc\,<\,r\,<\,80\,\rm kpc\) and \(|z|\,<\,30\,\rm kpc\) in Figure 13. The royalblue dots represent members satisfying \(r\,>\,80\,\rm kpc\) and \(|z|\,<\,50\,\rm kpc\) in Figure 13.
## Acknowledgements
We thank the referee for the insightful comments and suggestions, which have improved the paper significantly. This work was supported by the National Natural Science Foundation of China (NSFC Nos: 11973042, 12090040, 12090044, 11973052 and 11873053). It was also supported by the Fundamental Research Funds for the Central Universities and the National Key R&D Program of China No. 2019YFA0405501. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data Availability
The data supporting this article will be shared upon reasonable request sent to the corresponding authors.
|
2305.00363 | Plateau's problem via the Allen--Cahn functional | Let $\Gamma$ be a compact codimension-two submanifold of $\mathbb{R}^n$, and
let $L$ be a nontrivial real line bundle over $X = \mathbb{R}^n \setminus
\Gamma$. We study the Allen--Cahn functional, \[E_\varepsilon(u) = \int_X
\varepsilon \frac{|\nabla u|^2}{2} + \frac{(1-|u|^2)^2}{4\varepsilon}\,dx,\] on
the space of sections $u$ of $L$. Specifically, we are interested in critical
sections for this functional and their relation to minimal hypersurfaces with
boundary equal to $\Gamma$. We first show that, for a family of critical
sections with uniformly bounded energy, in the limit as $\varepsilon \to 0$,
the associated family of energy measures converges to an integer rectifiable
$(n-1)$-varifold $V$. Moreover, $V$ is stationary with respect to any variation
which leaves $\Gamma$ fixed. Away from $\Gamma$, this follows from work of
Hutchinson--Tonegawa; our result extends their interior theory up to the
boundary $\Gamma$.
Under additional hypotheses, we can say more about $V$. When $V$ arises as a
limit of critical sections with uniformly bounded Morse index, $\Sigma :=
\operatorname{supp} \|V\|$ is a minimal hypersurface, smooth away from $\Gamma$
and a singular set of Hausdorff dimension at most $n-8$. If the sections are
globally energy minimizing and $n = 3$, then $\Sigma$ is a smooth surface with
boundary, $\partial \Sigma = \Gamma$ (at least if $L$ is chosen correctly), and
$\Sigma$ has least area among all surfaces with these properties. We thus
obtain a new proof (originally suggested in a paper of Fr\"{o}hlich and Struwe)
that the smooth version of Plateau's problem admits a solution for every
boundary curve in $\mathbb{R}^3$. This also works if $4 \leq n\leq 7$ and
$\Gamma$ is assumed to lie in a strictly convex hypersurface. | Marco A. M. Guaraco, Stephen Lynch | 2023-04-30T01:11:43Z | http://arxiv.org/abs/2305.00363v2 | # Plateau's problem via the Allen-Cahn functional
###### Abstract.
We use a variant of the Allen-Cahn energy to produce minimal hypersurfaces with prescribed boundary. The energy we consider is defined on the space of sections of a real line bundle over the complement of a smooth boundary. By studying minimizing sections in the sharp interface limit, we give a new proof that the smooth Plateau problem admits a solution if the boundary is any curve in \(3\)-dimensional space, or lies in a strictly convex hypersurface in dimensions \(4\) through to \(7\). The method also gives a new proof that each nontrivial \((n-1)\)-homology class on a closed Riemannian \(n\)-manifold contains an area minimizing hypersurface, provided that \(2\leq n\leq 7\).
## 1. Introduction
Plateau's problem is to establish the existence of a surface whose area is minimal among all those which span a prescribed boundary. Lagrange posed the problem in 1760, and Plateau later demonstrated that solutions arise experimentally as soap films clinging to a wire frame. Plateau's problem is far more subtle than it appears; to even formulate it precisely, appropriate notions of surface and area must be decided upon, and one must specify what it means for a surface to span a given boundary. Over the last century a number of frameworks have been developed to solve Plateau's problem and its generalisations to higher dimensions. We refer to Section 2 of [1] for an extensive overview of these different approaches, and point to the particularly relevant references [11, 12, 13, 14, 15, 16, 17, 18, 19, 20].
In this paper we complete a new proof that the _smooth version of Plateau's problem_ admits a solution in \(\mathbb{R}^{3}\). That is, we show that for every smooth closed curve \(\Gamma\) in \(\mathbb{R}^{3}\) there is a surface \(\Sigma\), smooth up to its boundary, such that \(\partial\Sigma=\Gamma\) and the \(2\)-dimensional Hausdorff measure of \(\Sigma\) is minimal among all such surfaces. This result was originally established using the theory of flat chains mod \(2\) [14] and Allard's regularity theorems [10, 11]. Following an idea of Fröhlich and Struwe [15], we show that a solution can also be obtained from sections of a twisted line bundle over \(\mathbb{R}^{3}\setminus\Gamma\) which minimize a generalisation of the Allen-Cahn energy. At present, we rely on Allard's theorems to prove regularity at the boundary, but we expect this to be achievable using PDE arguments as in the interior case [10], leading to a resolution of Plateau's problem that relies only on level set estimates for solutions of semilinear elliptic equations. Our work can be seen as a continuation of many others which study and exploit the link between energies of Allen-Cahn type and minimal surfaces (see for example [10], the recent works [1, 1, 12] and, for a more extensive overview, the articles [1, 16, 17]).
### Main results
Let \(\Gamma\) be a compact \((n-2)\)-dimensional submanifold of \(\mathbb{R}^{n}\) and consider the space \(X=\mathbb{R}^{n}\setminus\Gamma\). We allow \(\Gamma\) to have many connected components. Let \(L\) be a real line bundle over \(X\). We consider the Allen-Cahn functional
\[E_{\varepsilon}(u):=\int_{X}\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{(1-|u| ^{2})^{2}}{4\varepsilon}\,dx\]
on the space of sections \(u\) of \(L\). Here and throughout, \(L\) is equipped with a metric and a flat metric connection. Critical points of the functional \(E_{\varepsilon}\) are solutions to the Euler-Lagrange
equation

\[\varepsilon^{2}\Delta u=(|u|^{2}-1)u.\]

Our first main result is a solution of the smooth version of Plateau's problem:

**Theorem 1.1**.: _Let \(\Gamma\) be a smooth closed curve in \(\mathbb{R}^{3}\) and let \(L\) be the spanning bundle over \(X=\mathbb{R}^{3}\setminus\Gamma\). Then there is a surface \(\Sigma\), smooth up to its boundary, such that \(\partial\Sigma=\Gamma\) and \(\Sigma\) has least area among all such surfaces._

Our second main result describes the limits of critical sections with uniformly bounded energy:

**Theorem 1.2**.: _Let \(u_{k}\) be critical sections for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\), satisfying the uniform bounds_

\[\sup_{k}\Big{(}\sup_{X}|u_{k}|+E_{\varepsilon_{k}}(u_{k})\Big{)}<\infty, \tag{1}\]

_and let \(\mu_{k}\) denote the associated energy measures. Then, after passing to a subsequence:_

1. _The varifolds associated with the_ \(u_{k}\) _converge to an integer rectifiable_ \((n-1)\)_-varifold_ \(V\)_; we write_ \(\Sigma:=\operatorname{supp}\|V\|\)_._
2. \(V\) _is stationary with respect to any variation which leaves_ \(\Gamma\) _fixed._
3. _The renormalised energy measures_ \(\frac{1}{2\sigma}\mu_{k}\) _weak*-converge to_ \(\|V\|\)_, where_ \(2\sigma\) _is the energy of the one-dimensional solution_ \(s\mapsto\tanh(s/\varepsilon\sqrt{2})\)_._
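The one-dimensional profile and the constant \(\sigma\) can be verified symbolically; the following sympy sketch (illustrative, using only the definitions above) checks the Euler-Lagrange equation and evaluates the total energy, giving \(2\sigma=2\sqrt{2}/3\) for this potential.

```python
import sympy as sp

s = sp.symbols('s', real=True)
eps = sp.Integer(1)  # the total 1D energy is independent of eps (scale s -> eps*s)
u = sp.tanh(s / (eps * sp.sqrt(2)))

# Euler-Lagrange equation eps^2 u'' = (u^2 - 1) u:
print(sp.simplify(eps**2 * sp.diff(u, s, 2) - (u**2 - 1) * u))  # 0

# total energy of the profile equals 2*sigma:
E = sp.integrate(eps * sp.diff(u, s)**2 / 2 + (1 - u**2)**2 / (4 * eps),
                 (s, -sp.oo, sp.oo))
print(sp.simplify(E - 2 * sp.sqrt(2) / 3))  # 0, i.e. sigma = sqrt(2)/3
```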
We also prove an analogue of Theorem 1.1 in higher dimensions, but the statement is somewhat weaker. This is unavoidable since, for \(n\geq 4\), it is possible to prescribe boundary data for which there can be no spanning hypersurface which is smooth up to the boundary (e.g., take \(\Gamma\simeq\mathbb{RP}^{2}\) in \(\mathbb{R}^{4}\)). The situation is different, however, if \(\Gamma\) lies in a strictly convex hypersurface:
**Theorem 1.3**.: _Suppose \(n\geq 4\) and that the sections \(u_{k}\) in Theorem 1.2 are minimizers of \(E_{\varepsilon_{k}}\). The bounds (1) then hold automatically, \(\Sigma\setminus\Gamma\) is a smooth hypersurface outside a set of Hausdorff dimension at most \(n-8\), and \(V\) is the unit-multiplicity varifold induced by \(\Sigma\). If, in addition, \(\Gamma\) lies in a strictly convex hypersurface, then there is a tubular neighbourhood of \(\Gamma\) in which \(\Sigma\) is a smooth hypersurface with boundary, and \(\partial\Sigma=\Gamma\). Consequently, if \(4\leq n\leq 7\) and \(\Gamma\) lies in a strictly convex hypersurface, then \(\Sigma\) is a smooth hypersurface with boundary, \(\partial\Sigma=\Gamma\), and \(\Sigma\) has minimal area among all such hypersurfaces._
In addition to the above results, in Section 7, we outline a new proof that each nontrivial \((n-1)\)-homology class on a closed Riemannian \(n\)-manifold, \(2\leq n\leq 7\), contains a smooth area minimizing hypersurface. Once again, the key observation is that the constraint (in this case prescribed homology instead of prescribed boundary) can be imposed by working in a nontrivial line bundle over the manifold. An area minimizer is then constructed as the energy concentration set for a sequence of minimizing sections in the limit \(\varepsilon\to 0\).
### Key steps in the proofs
Let us first describe some of the main steps in the proof of Theorem 1.2. Here much of our analysis follows the work of Hutchinson and Tonegawa [10], who dealt with the interior case, but there are key differences at the boundary.
We first recall (and generalise to higher dimensions) some estimates from [11] which provide control on the derivatives of \(u\) in a tubular neighbourhood of size \(\sim\varepsilon\) around \(\Gamma\). These are then used to derive an almost-monotonicity formula for the rescaled energy
\[\frac{1}{r^{n-1}}\int_{B_{r}(p)}\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{(1- |u|^{2})^{2}}{4\varepsilon}\,dx\]
in case \(p\in\Gamma\), which complements a similar formula for interior balls derived in [10]. As in [10], we find that the rescaled energy is not quite monotone in \(r\), because of a term involving the _discrepancy_,
\[\xi:=\varepsilon\frac{|\nabla u|^{2}}{2}-\frac{(1-|u|^{2})^{2}}{4\varepsilon}.\]
The interior estimates in [10] imply that the discrepancy is bounded from above on compact subsets of \(X\), and hence decays to \(0\) in \(L^{1}_{\mathrm{loc}}(X)\) as \(\varepsilon\to 0\). But to prove Theorem 1.2, we need to show that \(\xi\) decays to \(0\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\). This global statement is more delicate than its interior counterpart because, in the boundary setting, \(\xi\) is typically unbounded from above near \(\Gamma\), even for fixed \(\varepsilon\) (see the example at the conclusion of this introduction). Despite this, by carefully combining an improved interior estimate for \(\xi\) with the boundary derivative estimates proven in [11], we obtain the necessary \(L^{1}_{\mathrm{loc}}\)-decay over \(\mathbb{R}^{n}\). This ensures that the almost-monotonicity improves to a genuine monotonicity in the \(\varepsilon\to 0\) limit. With this fact in hand it can be shown that the associated varifolds approach a limit which is stationary with respect to vector fields tangent to \(\Gamma\) and satisfies uniform upper and lower density bounds. This limiting varifold is then rectifiable by Allard's rectifiability theorem, and integral by an argument in [10].
We now summarise the proofs of Theorem 1.1 and Theorem 1.3. These involve a few steps, which are carried out in Sections 4, 5 and 6.
In Section 4 we prove that, for every \(\varepsilon>0\) and \(\Gamma\), there exists a smooth section of the spanning bundle which minimizes \(E_{\varepsilon}\). To achieve this we construct competitors to show that minimizers satisfy the length bound \(|u|\leq 1\), and that their energy can be bounded purely in terms of \(\Gamma\).
Consider now a fixed boundary \(\Gamma\) and a sequence \(\varepsilon\to 0\). Suppose that, for each \(\varepsilon\), \(u\) is a section of the spanning bundle which minimizes \(E_{\varepsilon}\). Because of the bounds on length and energy proven in Section 4, Theorem 1.2 ensures that, after passing to a subsequence, the sequence of varifolds associated to \(u\) weak*-converges to an integer rectifiable limit \(V\). Moreover, the support of \(\|V\|\) contains \(\Gamma\). In Section 5 we study the interior and boundary regularity of \(V\). Concerning interior regularity, in any ball away from the boundary \(\Gamma\), standard arguments (included in Appendix A) show that the support of \(\|V\|\) is a perimeter-minimizing boundary, and hence is smooth away from a set of Hausdorff dimension \(n-8\). The claim concerning boundary regularity is that \(\operatorname{supp}\|V\|\) is a smooth hypersurface in a tubular neighbourhood of \(\Gamma\), provided \(\Gamma\) lies in a strictly convex hypersurface or \(n=3\). If \(\Gamma\) lies in a strictly convex hypersurface, we can simply argue as in [1, Section 5]. For a general boundary curve in \(\mathbb{R}^{3}\) the argument is more involved. We first employ a blow-up argument and the approximation by minimizing sections to show that, at every point of \(\Gamma\), the limit varifold \(V\) has a tangent cone equal to the union of \(N\) unit-density half-planes meeting along a line. The topology of the spanning bundle is then exploited to show that \(N\) is odd, and with this fact in hand a standard cut-and-paste argument gives \(N=1\). Boundary regularity of the support then follows from Allard's boundary regularity theorem [10].
In Section 6 we finally prove Theorem 1.1 and Theorem 1.3. Given the results of the previous two sections, it only remains to show that if the limiting varifold obtained from a sequence of minimizers is supported on a smooth hypersurface with boundary, then this hypersurface solves the smooth Plateau problem. This is easily proven by contradiction--if there is a competitor with less area, then the original sequence of sections could not have been minimizing, since one can construct a different sequence which has less energy.
### An example
We conclude this introduction with the construction of a critical section for \(E_{\varepsilon}\) which serves as an illustrative example; its nodal set is a half-line in \(\mathbb{R}^{2}\setminus\{0\}\), and its discrepancy is unbounded from above.
Let \(L\) be the unique nontrivial line bundle over \(\mathbb{R}^{2}\setminus\{0\}\). For each large \(r>0\) and small \(\delta>0\) we define domains
\[A_{r,\delta}:=\{\rho e^{i\theta}\in\mathbb{R}^{2}:r^{-1}\leq\rho\leq r,\;| \theta|\geq\delta\},\qquad A_{r}:=A_{r,0}.\]
For each \(\varepsilon>0\) there is a minimizer of the Allen-Cahn energy in \(W^{1,2}_{0}(A_{r,\delta})\); let \(w_{r,\delta}\) denote such a minimizer. Easy comparison arguments show that \(0<w_{r,\delta}\leq 1\) holds almost everywhere in \(A_{r,\delta}\). Standard elliptic regularity theory then implies that \(w_{r,\delta}\) is smooth away from the corners of \(A_{r,\delta}\). In fact we get uniform estimates which allow us to send \(\delta\to 0\) and extract a limit \(w_{r}\) in \(W^{1,2}_{0}(A_{r})\) which is smooth in the interior and vanishes on the positive \(x_{1}\)-axis in \(A_{r}\). It is not difficult to show that \(w_{r}\) is reflection-symmetric through the \(x_{1}\)-axis; this follows by applying the maximum principle to \(w_{r}/\bar{w}_{r}\), where \(\bar{w}_{r}(x_{1},x_{2}):=w_{r}(x_{1},-x_{2})\). It follows that we can define a \(C^{1}\)-section \(u_{r}\) of \(L\) over \(A_{r}\) by choosing unit sections \(e_{\pm}\) for \(L\) over \(S_{\pm}:=\mathbb{R}^{2}\setminus\{(x_{1},0):\pm x_{1}\geq 0\}\) and declaring that
\[u_{r}(x)\cdot e_{+}=w_{r}(x),\;\;x\in S_{+},\qquad u_{r}(x)\cdot e_{-}=\begin{cases} w_{r}(x),&x\in S_{-}\cap\{x_{2}\geq 0\}\\ -w_{r}(x),&x\in S_{-}\cap\{x_{2}<0\}.\end{cases}\]
One can then check directly that \(u_{r}\) solves the Euler-Lagrange equation
\[\varepsilon^{2}\Delta u_{r}=(|u_{r}|^{2}-1)u_{r}\]
in the weak sense, and hence is smooth in \(A_{r}\). Moreover, we have uniform interior estimates which ensure that, sending \(r\to\infty\), we obtain a smooth section \(u\) of \(L\) which is critical for \(E_{\varepsilon}\) and whose nodal set contains the positive \(x_{1}\)-axis. In fact, for any ball \(B\subset S_{+}\), we can use the positive minimizer in \(W^{1,2}_{0}(B)\) as a barrier to ensure that \(u\) is nonzero in \(B\). So the nodal set of \(u\) is precisely the positive \(x_{1}\)-axis. The energy of \(u\) in \(B_{1}(0)\setminus\{0\}\) is bounded uniformly as \(\varepsilon\to 0\). This can be seen by comparing with a Lipschitz section which agrees with \(u\) on \(\partial B_{1}(0)\), vanishes inside \(\{|x|\leq\varepsilon\}\) and equals \(1\) in \(\{2\varepsilon\leq|x|\leq 1-\varepsilon\}\).
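The twisted boundary condition underlying this construction is easy to explore numerically. The following sketch is entirely illustrative (grid sizes, \(\varepsilon\) and the relaxation scheme are our assumptions, not the construction above): it relaxes the Allen-Cahn energy for an antiperiodic function on an annulus, the discrete counterpart of a section of the nontrivial bundle, and counts nodal rays.

```python
import numpy as np

eps = 0.15
nr, nt = 30, 96
rho = np.linspace(0.3, 1.0, nr)[:, None]
dr, dth = float(rho[1] - rho[0]), 2 * np.pi / nt
v = 0.1 * np.random.default_rng(0).standard_normal((nr, nt))

step = 0.2 / (1/dr**2 + 1/(float(rho.min()) * dth)**2 + 2/eps**2)  # crude CFL bound
for _ in range(10000):  # gradient descent on the discretized energy
    lft = np.roll(v, 1, axis=1);  lft[:, 0]  *= -1   # antiperiodic seam encodes
    rgt = np.roll(v, -1, axis=1); rgt[:, -1] *= -1   # the -1 holonomy of L
    up = np.vstack([v[1:], v[-1:]])                  # crude Neumann in rho
    dn = np.vstack([v[:1], v[:-1]])
    lap = (up + dn - 2*v)/dr**2 + (up - dn)/(2*dr*rho) \
        + (lft + rgt - 2*v)/(rho*dth)**2
    v += step * (lap - (v**2 - 1) * v / eps**2)

# a continuous antiperiodic function has an odd number of zeros in theta,
# so the relaxed section must vanish along at least one ray
row = np.append(v[nr//2], -v[nr//2, 0])
print("zero crossings:", int(np.sum(row[:-1] * row[1:] < 0)))
```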
By [10, Proposition 3.3], the discrepancy \(\xi\) of the solution just constructed satisfies an a priori upper bound in each ball \(\bar{B}\subset\mathbb{R}^{2}\setminus\{0\}\). However, as we will now demonstrate, \(\xi(x)\) becomes unbounded from above as \(x\to 0\). This is one of the key differences in behaviour exhibited by critical sections at boundary as opposed to interior points, and is the main difficulty which we must overcome in our proof of Theorem 1.2.
To see that \(\xi\) is unbounded, it is useful to define \(v(z):=u(z^{2})\), where \(z\) is the standard complex coordinate on \(\mathbb{R}^{2}\setminus\{0\}\). The pullback of \(L\) by \(z\mapsto z^{2}\) is trivial, so we may view \(v\) as a function on \(\mathbb{R}^{2}\setminus\{0\}\). Straightforward computations show that
\[\varepsilon^{2}\Delta_{z}v=4|z|^{2}(|v|^{2}-1)v, \tag{2}\]
and that the discrepancy of \(u\), which we denote by \(\xi\), satisfies
\[\xi(z^{2})=\varepsilon\frac{|\nabla_{z}v(z)|^{2}}{8|z|^{2}}-\frac{(1-|v(z)|^{ 2})^{2}}{4\varepsilon}.\]
Moreover, since \(E_{\varepsilon}(u)<\infty\), \(v\) has finite \(W^{1,2}\)-norm in \(B_{1}(0)\setminus\{0\}\), and can therefore be extended to a smooth solution of (2) in \(B_{1}(0)\). From our construction of \(u\) it follows that (up to a choice of signs) \(v\) is positive in \(\{x_{2}>0\}\) and negative in \(\{x_{2}<0\}\). By the Hopf lemma, we conclude that \(|\nabla_{z}v|>0\) in a neighbourhood of the origin. We will show in Proposition 3.1 that \(|u|\sim 0\) in a ball of size \(\sim\varepsilon\) about the origin, and hence \(W(u)\sim\frac{1}{4}\). From the formula for \(\xi\) above we conclude that \(\xi(z^{2})\) tends to \(\infty\) as \(z\to 0\).
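The computation behind (2) rests on the conformal chain rule \(\Delta_{z}(f\circ w)=|w^{\prime}(z)|^{2}(\Delta f)\circ w\) for \(w(z)=z^{2}\), where \(|w^{\prime}|^{2}=4|z|^{2}\). A short sympy check for one sample function \(f\) (our illustration; any smooth \(f\) works) reads:

```python
import sympy as sp

z1, z2, x, y = sp.symbols('z1 z2 x y', real=True)
f = x**3 * y - sp.sin(x) * y**2        # arbitrary sample function
w = {x: z1**2 - z2**2, y: 2 * z1 * z2}  # complex squaring z -> z^2

F = f.subs(w, simultaneous=True)
lap_F = sp.diff(F, z1, 2) + sp.diff(F, z2, 2)
lap_f = (sp.diff(f, x, 2) + sp.diff(f, y, 2)).subs(w, simultaneous=True)
print(sp.simplify(lap_F - 4 * (z1**2 + z2**2) * lap_f))  # 0
```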
### Acknowledgements
The authors would like to express gratitude to M. Struwe for suggesting the problem and to T. Bourni for a number of helpful conversations.
## 2. Real line bundles
In this section we discuss the natural correspondence between the set of vector bundles of rank \(1\) over a connected manifold \(X\) and the cohomology group \(H^{1}(X,\mathbb{Z}_{2})\). Later on, we delve into the following two cases:
1. \(X=M\setminus\Gamma\), where \(H_{1}(M)=H_{2}(M)=0\) and \(\Gamma\subset M\) is a codimension two embedded submanifold. This includes \(M\simeq\mathbb{R}^{n}\).
2. \(X=M\) is a closed manifold.
We focus on these because of the applications we have in mind: case (1) is motivated by the search for a line bundle that provides a setting for solving the Plateau problem with boundary \(\Gamma\), while case (2) allows one to generate minimal hypersurfaces in arbitrary homology classes in closed manifolds. However, as we later discuss, some of the arguments for case (1) transfer to more general situations.
### Real line bundles and mod 2 cohomology
Given a real line bundle \(\pi:L\to X\), we can use a partition of unity to endow each fiber \(L_{x}=\pi^{-1}(x)\) with an inner product varying continuously along the fibers. This allows us to define \(U(L)\subset L\), the set of unit vectors with respect to such metrics. The restriction \(\pi:U(L)\to X\) is a double cover of \(X\), which is independent of the choice of metric modulo homeomorphisms. It is a simple exercise to see that \(U(L)\) is connected if and only if \(L\) is not the trivial bundle. This observation is the first half of the correspondence between non-trivial line bundles over \(X\) and connected double
covers of \(X\). In the other direction, a connected double cover \(\widetilde{X}\to X\) is always a normal covering space whose group of deck transformations is \(\mathbb{Z}_{2}\). In particular, such a covering is generated by a homeomorphism \(\rho:\widetilde{X}\to\widetilde{X}\), such that \(\rho^{2}=\operatorname{id}\nolimits|_{\widetilde{X}}\). It is another simple exercise to see that the quotient \(L=\widetilde{X}\times\mathbb{R}\,/\sim\), where \((p,t)\sim(\rho(p),-t)\), is a nontrivial line bundle over \(X\), and that the two constructions above are inverse modulo homeomorphisms.
By the classification of covering spaces, connected double covers of \(X\) are in correspondence with \(\{H<\pi_{1}(X):[H:\pi_{1}(X)]=2\}\), i.e., the set of subgroups of \(\pi_{1}(X)\) of index \(2\). Index two subgroups are normal, so the first isomorphism theorem for groups implies that such subgroups correspond to non-zero elements of \(\operatorname{Hom}(\pi_{1}(X),\mathbb{Z}_{2})\). Since \(\mathbb{Z}_{2}\) is abelian, these homomorphisms factor through the abelianisation of \(\pi_{1}(X)\), which is \(H_{1}(X)\). This gives the correspondence
\[\operatorname{Hom}(\pi_{1}(X),\mathbb{Z}_{2})\simeq\operatorname{Hom}(H_{1}(X ),\mathbb{Z}_{2}).\]
Finally, by the universal coefficient theorem for cohomology, we have that
\[\operatorname{Hom}(H_{1}(X),\mathbb{Z}_{2})\simeq H^{1}(X,\mathbb{Z}_{2}).\]
Putting this all together, we have shown that there are as many nontrivial real line bundles over \(X\) as there are non-zero elements in \(\operatorname{Hom}(H_{1}(X),\mathbb{Z}_{2})\simeq H^{1}(X,\mathbb{Z}_{2})\). In many applications, \(H_{1}(X)\) is a finitely generated abelian group and an element of \(\operatorname{Hom}(H_{1}(X),\mathbb{Z}_{2})\) is uniquely determined by where it sends a set of generators. The relation between the generators of \(H_{1}(X)\) and the global problem we want to solve in \(X\) must be treated in a case-by-case fashion.
### Real line bundles from classes of loops mod 2
We begin with case (1). Let \(X\simeq M\setminus\Gamma\), where \(M\) is diffeomorphic to \(\mathbb{R}^{n}\) and \(\Gamma\) is a closed submanifold of \(M\) of codimension \(2\). In this case,
\[H_{1}(M\setminus\Gamma)\simeq H^{n-2}(\Gamma)\simeq(\mathbb{Z}\oplus\cdots \oplus\mathbb{Z})\oplus(\mathbb{Z}_{2}\oplus\cdots\oplus\mathbb{Z}_{2})\]
where the number of copies of \(\mathbb{Z}\) and \(\mathbb{Z}_{2}\) are equal to the number of orientable and non-orientable connected components of \(\Gamma\), respectively. Moreover, a generator corresponding to a connected component \(\Gamma_{i}\subset\Gamma\) is given by any loop \(\gamma_{i}\) satisfying \(\operatorname{link}_{\mathbb{Z}}(\gamma_{i},\Gamma_{i})=\pm 1\) and \(\operatorname{link}_{\mathbb{Z}}(\gamma_{i},\Gamma\setminus\Gamma_{i})=0\). It is a simple exercise to check that there is exactly one line bundle \(L\) over \(X=M\setminus\Gamma\) such that the nodal set of a generic section intersects each generator \(\gamma_{i}\) an odd number of times. Indeed, this is the bundle corresponding to the homomorphism \(H_{1}(M)\to\mathbb{Z}_{2}\) which sends each loop in a set of generators to \(1\). We call this bundle the spanning bundle for \(\Gamma\). The spanning bundle is the only line bundle where we can hope to solve Plateau's problem for \(\Gamma\).
**Remark 2.1**.: _For completeness, we summarise how the first isomorphism above arises as a composition of several maps. Let \(T\) be a closed tubular neighborhood of \(\Gamma\). The isomorphism then factors as_
\[H_{1}(M\setminus\Gamma)\simeq H_{2}(M,M\setminus\Gamma)\simeq H_{2}(T,T \setminus\Gamma)\simeq H_{2}(T,\partial T)\simeq H^{n-2}(T)\simeq H^{n-2}( \Gamma).\]
_The first map is obtained from the long exact sequence of the pair \((M,M\setminus\Gamma)\):_
\[\cdots\to 0=H_{2}(M)\to H_{2}(M,M\setminus\Gamma)\to H_{1}(M\setminus\Gamma)\to H_{1} (M)=0\to\cdots\]
_The second map is just the excision property. The third is a retraction of \(T\setminus\Gamma\) onto \(\partial T\). The fourth is Lefschetz duality, which generalises Poincare duality to orientable manifolds with boundary. Finally, the fifth and last map is the retraction of \(T\) onto \(\Gamma\)._
In Section 6 we make use of the following lemma, which follows easily from the definition of the spanning bundle.
**Lemma 2.2**.: _Let \(\Gamma\) be a compact \((n-2)\)-dimensional submanifold of \(\mathbb{R}^{n}\), and let \(L\) be the spanning bundle over \(X=\mathbb{R}^{n}\setminus\Gamma\). Suppose \(\Sigma\) is a smooth \((n-1)\)-submanifold with boundary such that \(\partial\Sigma=\Gamma\) and \(X\setminus\Sigma\) is connected. Then \(L\) trivialises over \(X\setminus\Sigma\)._
### Real line bundles from classes of hypersurfaces mod 2
Case (2) is much simpler. In fact, it follows directly from Poincare duality that \(H^{1}(M,\mathbb{Z}_{2})\simeq H_{n-1}(M,\mathbb{Z}_{2})\). In particular, for each non-trivial class of \((n-1)\)-dimensional closed cycles mod 2, there exists a unique surjective homomorphism from \(H_{1}(M)\) to \(\mathbb{Z}_{2}\). By what we discussed at the beginning of the section, each one of these homomorphisms gives rise to a real line bundle over \(M\). It is then a simple exercise to check that the nodal set of a generic section of this bundle is in the mod-2 homology class which gave rise to the bundle in the first place.
## 3. Critical sections with bounded energy
Let \(\Gamma\) be an embedded \((n-2)\)-dimensional submanifold of \(\mathbb{R}^{n}\). Let \(L\) be the spanning bundle over \(X=\mathbb{R}^{n}\setminus\Gamma\), which we equip with a metric \((u,v)\mapsto u\cdot v\) and a flat metric connection \(\nabla\). For the rest of the paper we take \(\Gamma\) and \(L\) to be fixed in this way.
Consider a section \(u\) of \(L\) such that \(|u|\in L^{1}_{\rm loc}(X)\). We say \(u\) is of class \(W^{1,p}_{\rm loc}\) if there exists a section \(\nabla u\) of \(TX\otimes L\) such that \(|\nabla u|\in L^{p}_{\rm loc}\) and
\[\int_{X}\nabla u\cdot\varphi=-\int_{X}u\cdot\nabla\varphi\]
for every smooth compactly supported section \(\varphi\).
Given a \(W^{1,2}_{\rm loc}\)-section \(u\) of \(L\) and a Borel set \(A\subset\mathbb{R}^{n}\), we define the energy of \(u\) in \(A\) by
\[E_{\varepsilon}(u,A):=\int_{A\setminus\Gamma}\,\varepsilon\frac{|\nabla u|^{2 }}{2}+\frac{W(u)}{\varepsilon}\,dx,\qquad W(u):=\frac{1}{4}(1-|u|^{2})^{2}.\]
The total energy is \(E_{\varepsilon}(u):=E_{\varepsilon}(u,X)\). In case \(E_{\varepsilon}(u)<\infty\), the energy density is in \(L^{1}(\mathbb{R}^{n})\), so we can equally integrate over \(A\) in the definition of \(E_{\varepsilon}(u,A)\).
A \(W^{1,2}_{\rm loc}\)-section \(u\) of \(L\) is defined to be critical for \(E_{\varepsilon}\) if \(E_{\varepsilon}(u,K)<\infty\) for every compact \(K\subset X\) and
\[\frac{d}{dt}\Big{|}_{t=0}E_{\varepsilon}(u+t\varphi)=0,\]
or equivalently
\[\varepsilon^{2}\int_{X}\nabla u\cdot\nabla\varphi=\int_{X}(|u|^{2}-1)u\cdot\varphi,\]
for every smooth section \(\varphi\) of \(L\) which is compactly supported in \(X\). In case \(u\) is a critical section satisfying \(\sup_{X}|u|<\infty\), standard elliptic theory implies that \(u\) is smooth, and hence
\[\varepsilon^{2}\Delta u=(|u|^{2}-1)u.\]
In this section we prove various estimates for critical sections of \(E_{\varepsilon}\), and combine these to prove Theorem 1.2. Much of the analysis is inspired by [10].
### Derivative estimates at the boundary
Let us recall some key estimates from [11] concerning the behaviour of critical sections of \(E_{\varepsilon}\) near \(\Gamma\).
Let \(\rho\) denote the distance function to \(\Gamma\). There is a constant \(\delta>0\) such that \(\rho\) is smooth in the tubular neighbourhood
\[\Gamma_{\delta}:=\{x\in\mathbb{R}^{n}:\rho(x)<\delta\}.\]
Given a vector field \(g\) on \(\Gamma_{\delta}\), let us write
\[g^{\perp}:=(g\cdot\nabla\rho)\nabla\rho,\qquad g^{\top}:=g-g^{\perp}.\]
**Proposition 3.1**.: _Suppose \(u\) is a critical section for \(E_{\varepsilon}\) such that \(|u|\leq B\) and \(E_{\varepsilon}(u)<\infty\). There exists a \(\delta_{0}=\delta_{0}(n,\Gamma)\) such that if \(\varepsilon R<\delta_{0}\), for any nonnegative integers \(m=k+l\) and unit vectors \(\{e_{i}\}_{i=1}^{k}\) and \(\{f_{i}\}_{i=1}^{l}\) such that \(e_{i}^{\top}=0\) and \(f_{i}^{\perp}=0\), we have_
\[|\nabla^{m}u(e_{1},\dots,e_{k},f_{1},\dots,f_{l})|^{2}\leq C\varepsilon^{-1-2l} \rho^{1-2k}\]
_in \(\{\rho\leq R\varepsilon\}\), where \(C=C(n,\Gamma,B,R)\). In particular,_
\[|\nabla u|^{2}\leq 2C(\varepsilon^{-1}\rho^{-1}+\varepsilon^{-3}\rho)\]
_holds in \(\{\rho\leq R\varepsilon\}\)._
Proposition 3.1 was proven for \(n=3\) in [10, Theorem 3.2]. The strategy is to localise near \(\Gamma\) by choosing appropriate coordinates, use the Euler-Lagrange equation for \(u\) to derive \(L^{2}\)-derivative estimates, and then conclude using the Sobolev embedding theorem. The same argument works in dimensions \(n\geq 4\), but there is one complication in the localisation step, in that \(N\Gamma\) may not admit a parallel orthonormal frame, even locally. Consequently, some lower-order error terms appear in the computations, but these can all be absorbed in a straightforward manner. We describe how this works, up to the point where the argument in [10] applies.
Let \(\delta_{0}<\delta\) be a small positive constant to be chosen later, fix an \(\varepsilon>0\), and suppose \(R\) is such that \(R\varepsilon<\delta_{0}\). We consider an arbitrary point in \(\Gamma\), which we may assume is the origin. Let \(u\) be a critical section for \(E_{\varepsilon}\) such that \(|u|\leq B\) and \(E_{\varepsilon}(u)<\infty\).
After rescaling in the domain, we may assume \(u\) is a critical section for \(E_{\hat{\varepsilon}}\), \(\hat{\varepsilon}:=R^{-1}\), satisfying \(|u|\leq B\) and \(E_{\hat{\varepsilon}}(u)<\infty\), but where the boundary curve has changed to \(\Gamma/R\varepsilon\). By choosing \(\delta_{0}\) sufficiently small we can assume that \(\Gamma/R\varepsilon\) is as close as we like in \(C^{\infty}\) to \(T_{0}\Gamma\) in any neighbourhood of the origin.
Let \(\mathcal{B}\) denote the geodesic ball of radius one in \(\Gamma/R\varepsilon\), and denote by \(N\mathcal{B}(1)\) the normal tubular neighbourhood of \(\mathcal{B}\) with radius equal to \(1\). Provided that \(\delta_{0}\) is small enough, we may fix a parameterisation \(Y:B_{1}^{2}\times B_{1}^{n-2}\to N\mathcal{B}(1)\) as follows:
* The restriction of \(Y\) to \(\{0\}\times B_{1}^{n-2}\) gives a system of normal coordinates on \(\mathcal{B}\).
* For all \(y=(y^{1},y^{2},x)\) in \(B_{1}^{2}\times B_{1}^{n-2}\), \[Y(y)=y^{1}\nu_{1}(Y(0,x))+y^{2}\nu_{2}(Y(0,x))+Y(0,x),\] where \(\{\nu_{1},\nu_{2}\}\) is a local orthonormal frame of normal vectors to \(\Gamma/R\varepsilon\) which is parallel along radial geodesics emanating from \(0\).
We write \(\Omega=(B_{1}^{2}\setminus\{0\})\times B_{1}^{n-2}\). Let \(S:\Omega\to\Omega\) be defined by
\[S(z_{1},z_{2},x):=(z_{1}^{2}-z_{2}^{2},2z_{1}z_{2},x),\]
so that \(S\) acts on \(z\) by taking the complex square. Let us define
\[\tilde{u}:=u\circ P,\]
where \(P:=Y\circ S\). The pullback of \(L\) to \(\Omega\) by \(P\) is trivial, so we may view \(\tilde{u}\) as a function.
Let \(\tilde{\varphi}\in C_{0}^{\infty}(\Omega)\) satisfy \(\tilde{\varphi}(-z,x)=-\tilde{\varphi}(z,x)\) for all \((z,x)\in\Omega\). Then there is a section \(\varphi\) of \(L\) supported in a compact subset of \(N\mathcal{B}(1)\) such that we may realise \(\tilde{\varphi}\) as \(\varphi\circ P\). Let us define \(g_{ij}:=\langle\partial_{y_{i}}P,\partial_{y_{j}}P\rangle\), and write \(g^{ij}\) for the inverse of \(g_{ij}\). We then have
\[0 =\int_{N\mathcal{B}(1)}\nabla u\cdot\nabla\varphi+\hat{\varepsilon }^{-2}(|u|^{2}-1)u\cdot\varphi\] \[=\int_{\Omega}\Big{(}g^{ij}\partial_{y_{i}}\tilde{u}\partial_{y_{ j}}\tilde{\varphi}+\hat{\varepsilon}^{-2}(|\tilde{u}|^{2}-1)\tilde{u}\tilde{ \varphi}\Big{)}\sqrt{|\det(g)|}\,dy.\]
As \(\delta_{0}\to 0\), \(P\) converges smoothly to the map \((z_{1},z_{2},x)\mapsto(z_{1}^{2}-z_{2}^{2},2z_{1}z_{2},x)\). A straightforward computation shows that we may write
\[g=\begin{bmatrix}4|z|^{2}&0&0\\ 0&4|z|^{2}&0\\ 0&0&1\end{bmatrix}+a(z,x),\]
where \(|z|^{-2}a(z,x)\to 0\) uniformly in \(x\) as \(\delta_{0}\to 0\). It follows that
\[\sqrt{|\det(g)|}=4|z|^{2}h(z,x),\qquad g^{-1}=h(z,x)^{-1}\begin{bmatrix}\frac {1}{4|z|^{2}}&0&0\\ 0&\frac{1}{4|z|^{2}}&0\\ 0&0&1\end{bmatrix}+b(z,x),\]
where \(h(z,x)\to 1\) and \(|z|^{2}b(z,x)\to 0\) smoothly and uniformly in \(x\) as \(\delta_{0}\to 0\). We thus arrive at
\[0=\int_{\Omega}\partial_{z_{i}}\tilde{u}\partial_{z_{i}}\tilde{\varphi}+4|z|^{ 2}\bigg{(}\partial_{x_{i}}\tilde{u}\partial_{x_{i}}\tilde{\varphi}+hb^{ij} \partial_{y_{i}}\tilde{u}\partial_{y_{j}}\tilde{\varphi}+\hat{\varepsilon}^{- 2}h(|\tilde{u}|^{2}-1)\tilde{u}\tilde{\varphi}\bigg{)}\,dzdx. \tag{3}\]
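To see where the conformal factor \(4|z|^{2}\) comes from, it may help to record the computation in the flat model case, where \(P\) is exactly the map \((z_{1},z_{2},x)\mapsto(z_{1}^{2}-z_{2}^{2},2z_{1}z_{2},x)\) (a routine check; the terms \(a\), \(b\) and \(h-1\) record the deviation of \(\Gamma/R\varepsilon\) from this model):
\[\partial_{z_{1}}P=(2z_{1},2z_{2},0,\ldots,0),\qquad\partial_{z_{2}}P=(-2z_{2},2z_{1},0,\ldots,0),\qquad\partial_{x_{i}}P=e_{i+2},\]
so that
\[g_{z_{1}z_{1}}=g_{z_{2}z_{2}}=4|z|^{2},\qquad g_{z_{1}z_{2}}=0,\qquad\det(g)=16|z|^{4},\]
which gives \(\sqrt{|\det(g)|}=4|z|^{2}\) with \(h\equiv 1\) and \(b\equiv 0\).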
The energy of \(u\) in \(N\mathcal{B}(1)\) can be written as
\[\int_{N\mathcal{B}(1)}\hat{\varepsilon}\frac{|\nabla u|^{2}}{2}+ \hat{\varepsilon}^{-1}W(u)\] \[\qquad\qquad=\frac{1}{2}\int_{\Omega}\hat{\varepsilon}|\partial_ {z}\tilde{u}|^{2}+4|z|^{2}\bigg{(}\hat{\varepsilon}|\partial_{x}\tilde{u}|^{2 }+\hat{\varepsilon}hb^{ij}\partial_{y_{i}}\tilde{u}\partial_{y_{j}}\tilde{u}+ \hat{\varepsilon}^{-1}hW(\tilde{u})\bigg{)}\,dzdx,\]
where
\[|\partial_{z}\tilde{u}|^{2}:=\sum_{i=1,2}|\partial_{z_{i}}\tilde{u}|^{2}, \qquad|\partial_{x}\tilde{u}|^{2}:=\sum_{i=1}^{n-2}|\partial_{x_{i}}\tilde{u }|^{2}.\]
We may assume \(\delta_{0}\) is so small that
\[4|z|^{2}hb^{ij}\xi_{i}\xi_{j}\geq-\frac{1}{2}|\xi|^{2}\]
for all \(\xi\in\mathbb{R}^{n}\), so that we obtain
\[\int_{\Omega}\hat{\varepsilon}|\partial_{z}\tilde{u}|^{2}+4|z|^{2}\bigg{(}\hat {\varepsilon}|\partial_{x}\tilde{u}|^{2}+\hat{\varepsilon}^{-1}hW(\tilde{u}) \bigg{)}\,dzdx\leq 4\int_{N\mathcal{B}(1)}\hat{\varepsilon}\frac{|\nabla u|^{2}}{2} +\hat{\varepsilon}^{-1}W(u).\]
The right-hand side is finite by assumption, so we have
\[\int_{\Omega}|\partial_{z}\tilde{u}|^{2}+4|z|^{2}|\partial_{x}\tilde{u}|^{2} \,dzdx<\infty.\]
Using this fact one can extend \(\tilde{u}\) to a function in \(W^{1,2}(B_{1}^{2}\times B_{1}^{n-2})\) such that (3) holds for all \(\tilde{\varphi}\in C_{0}^{\infty}(B_{1}^{2}\times B_{1}^{n-2})\) satisfying \(\tilde{\varphi}(-z,x)=-\tilde{\varphi}(z,x)\). From here, Proposition 3.1 can be derived as in [11].
### Energy monotonicity
We now derive an almost-monotonicity formula for the rescaled energy at boundary points. The computations follow [10], but various error terms arising from the boundary need to be dealt with. This is achieved using Proposition 3.1. We begin with a lemma.
**Lemma 3.2**.: _Let \(u\) be a critical section for \(E_{\varepsilon}\). Suppose \(\sup|u|<\infty\) and \(E_{\varepsilon}(u)<\infty\), and let \(g\in C_{0}^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\) be such that \(g|_{\Gamma}\) is tangent to \(\Gamma\). We then have_
\[\int_{X}\bigg{(}\Big{(}\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{W(u)}{ \varepsilon}\Big{)}\,\mathrm{div}\,g-\varepsilon\nabla_{\nabla u}g\cdot\nabla u \bigg{)}=0.\]
Proof.: We have
\[(\nabla u\cdot g)\Delta u=\operatorname{div}((\nabla u\cdot g)\nabla u)- \operatorname{div}\left(\tfrac{1}{2}|\nabla u|^{2}g\right)+\tfrac{1}{2}|\nabla u |^{2}\operatorname{div}(g)-\nabla_{\nabla u}g\cdot\nabla u\]
and
\[(\nabla u\cdot g)W^{\prime}(u)=\operatorname{div}(W(u)g)-W(u)\operatorname{div}(g).\]
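Both identities are instances of the product rule; for the reader's convenience, here is a short verification. For the first, we have
\[\operatorname{div}((\nabla u\cdot g)\nabla u)=(\nabla u\cdot g)\Delta u+\nabla_{\nabla u}g\cdot\nabla u+\tfrac{1}{2}g\cdot\nabla|\nabla u|^{2},\qquad\operatorname{div}\left(\tfrac{1}{2}|\nabla u|^{2}g\right)=\tfrac{1}{2}g\cdot\nabla|\nabla u|^{2}+\tfrac{1}{2}|\nabla u|^{2}\operatorname{div}(g),\]
and subtracting eliminates the term \(\tfrac{1}{2}g\cdot\nabla|\nabla u|^{2}\). The second follows upon expanding \(\operatorname{div}(W(u)g)=W^{\prime}(u)(\nabla u\cdot g)+W(u)\operatorname{div}(g)\).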
We multiply the Euler-Lagrange equation
\[\varepsilon\Delta u-\varepsilon^{-1}W^{\prime}(u)=0\]
by \(\nabla u\cdot g\) and integrate over the region \(\{\rho>h\}\) (where \(\rho\) is the distance to \(\Gamma\)) to obtain
\[0=\int_{\{\rho>h\}}\varepsilon(\nabla u\cdot g)\Delta u-(\nabla u\cdot g)\frac{W^{\prime}(u)}{\varepsilon}.\]
Inserting the two identities stated above and applying the divergence theorem, we obtain
\[0 =\int_{\{\rho>h\}}\left(\varepsilon\frac{|\nabla u|^{2}}{2}+ \frac{W(u)}{\varepsilon}\right)\operatorname{div}g-\varepsilon\nabla_{\nabla u }g\cdot\nabla u\,dx\] \[-\int_{\{\rho=h\}}\left(\varepsilon(\nabla u\cdot g)\nabla u- \varepsilon\frac{|\nabla u|^{2}}{2}g-\frac{W(u)}{\varepsilon}g\right)\cdot \nabla\rho\,d\mathcal{H}^{n-1}.\]
Here we assume \(h\) is small enough so that \(\{\rho=h\}\) is a smooth hypersurface. Since \(u\) has finite energy, the dominated convergence theorem implies that the first term on the right converges to
\[\int_{X}\left(\left(\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{W(u)}{ \varepsilon}\right)\operatorname{div}g-\varepsilon\nabla_{\nabla u}g\cdot \nabla u\right)dx\]
as \(h\to 0\). Therefore, to prove the claim, it suffices to establish
\[\int_{\{\rho=h\}}\left(\varepsilon(\nabla u\cdot g)\nabla u-\varepsilon\frac{ |\nabla u|^{2}}{2}g-\frac{W(u)}{\varepsilon}g\right)\cdot\nabla\rho\,d\mathcal{H}^{n-1}\to 0 \tag{4}\]
as \(h\to 0\).
In the following \(C\) is a positive constant which does not depend on \(h\). Since \(g|_{\Gamma}\) is tangent to \(\Gamma\), in \(\Gamma_{\delta}\) we may estimate the normal component of \(g\) by
\[|g^{\perp}|\leq C\rho.\]
Applying Proposition 3.1, we find that for all sufficiently small \(h\) we have
\[\varepsilon|\nabla u\cdot g||\nabla u\cdot\nabla\rho|\leq\varepsilon(|\nabla u \cdot g^{\perp}|+|\nabla u\cdot g^{\top}|)|\nabla u\cdot\nabla\rho|\leq C(1+ \varepsilon^{-1})\]
and
\[\left(\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{W(u)}{\varepsilon}\right) \lvert g\cdot\nabla\rho\rvert\leq C(\varepsilon^{-1}+\varepsilon^{-3}\rho^{2 }+\varepsilon^{-1}\rho)\]
in \(\{\rho\leq h\}\). Since \(\Gamma\) has codimension two, \(\mathcal{H}^{n-1}(\{\rho=h\}\cap\operatorname{supp}g)\leq Ch\), so these two estimates show that the boundary integral is \(O(h)\) for each fixed \(\varepsilon\), which implies (4).
**Proposition 3.3**.: _Let \(u\) be a critical section for \(E_{\varepsilon}\), and suppose \(\sup|u|<\infty\) and \(E_{\varepsilon}(u)<\infty\). We then have_
\[\frac{e^{cr}}{r^{n-1}}E(u,B_{r})-\frac{e^{cs}}{s^{n-1}}E(u,B_{s}) \geq-\int_{s}^{r}\frac{e^{c\tau}}{(1+n^{-1}c\tau)}\frac{1}{\tau^ {n}}\int_{B_{\tau}}\left(\varepsilon\frac{|\nabla u|^{2}}{2}-\frac{W(u)}{ \varepsilon}\right)dxd\tau\] \[+\frac{\varepsilon}{2}\int_{s}^{r}\frac{1}{\tau^{n+1}}\int_{ \partial B_{\tau}}(x\cdot\nabla u)^{2}\,d\mathcal{H}^{n-1}d\tau\]
_for all \(s<r<r_{0}\), where \(r_{0}\) and \(c\) are constants which depend only on \(\Gamma\)._
Proof.: Fix a point in \(\Gamma\), which we may assume is the origin. If \(r_{1}\) is sufficiently small depending only on \(\Gamma\), there is a smooth map \(\Phi:B_{r_{1}}\to\mathbb{R}^{n}\) such that:
* The restriction of \(\Phi\) to \(\{x_{n-1}=x_{n}=0\}\) gives a system of normal coordinates on \(\Gamma\) on a neighbourhood of the origin.
* For all \(x\in B_{r_{1}}\), \[\Phi(x)=\Phi(x_{1},\ldots,x_{n-2},0,0)+x_{n-1}\nu_{1}(x)+x_{n}\nu_{2}(x),\] where \(\{\nu_{1},\nu_{2}\}\) is a local orthonormal frame for \(N\Gamma\) which is parallel along radial geodesics emanating from \(0\).
We define a vector field on the image of \(\Phi\) by pushing forward the position vector, i.e.
\[g^{i}(\Phi(x)):=\frac{\partial\Phi^{i}}{\partial x^{j}}(x)x^{j}.\]
This ensures that \(g|_{\Gamma}\) is tangential to \(\Gamma\), and we may write \(g(x)=x+h(x)\) where \(h\) is a smooth vector field satisfying
\[|x|^{-1}|\nabla h(x)|+|x|^{-2}|h(x)|\leq C.\]
With our choice of \(\Phi\), it is straightforward to check that \(C\) depends only on the second fundamental form of \(\Gamma\) and its first covariant derivative. We can also find an \(r_{0}=r_{0}(\Gamma)\) such that \(B_{r_{0}}\) lies in the image of \(\Phi\), and hence in the domain of \(g\).
By Lemma 3.2, for any \(\varphi\in C_{0}^{\infty}(B_{r_{0}})\), we have
\[0=\int_{X}e_{\varepsilon}(u)\operatorname{div}(\varphi g)-\varepsilon\nabla_{ \nabla u}(\varphi g)\cdot\nabla u\,dx,\]
where we write \(e_{\varepsilon}(u)\) for the energy density,
\[e_{\varepsilon}(u)=\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{W(u)}{\varepsilon}.\]
Expanding and rearranging gives
\[\int_{X}\varphi e_{\varepsilon}(u)\operatorname{div}g-\varepsilon\nabla_{ \nabla u}g\cdot\nabla u\,dx=\int_{X}e_{\varepsilon}(u)\nabla\varphi\cdot g- \varepsilon(\nabla_{\nabla u}\varphi)g\cdot\nabla u\,dx.\]
For any \(r<r_{0}\), if we allow \(\varphi\) to increase to the characteristic function of \(B_{r}\) in an appropriate manner, we obtain
\[\int_{B_{r}}e_{\varepsilon}(u)\operatorname{div}g-\varepsilon\nabla_{\nabla u }g\cdot\nabla u\,dx=\frac{1}{r}\int_{\partial B_{r}}e_{\varepsilon}(u)(x\cdot g )-\varepsilon(x\cdot\nabla u)(g\cdot\nabla u)\,d\mathcal{H}^{n-1}.\]
We now insert \(g=x+h\) and so find that
\[(n-1) \int_{B_{r}}e_{\varepsilon}(u)-\int_{B_{r}}\left(\varepsilon\frac {|\nabla u|^{2}}{2}-\frac{W(u)}{\varepsilon}\right)+\int_{B_{r}}\left(e_{ \varepsilon}(u)\operatorname{div}h-\varepsilon\nabla_{\nabla u}h\cdot\nabla u\right)\] \[=r\int_{\partial B_{r}}e_{\varepsilon}(u)-\frac{\varepsilon}{r} \int_{\partial B_{r}}(x\cdot\nabla u)^{2}+\frac{1}{r}\int_{\partial B_{r}} \left(e_{\varepsilon}(u)(x\cdot h)-\varepsilon(x\cdot\nabla u)(h\cdot\nabla u )\right)\!.\]
Estimating
\[-\int_{B_{r}}\left(e_{\varepsilon}(u)\operatorname{div}h-\varepsilon\nabla_{ \nabla u}h\cdot\nabla u\right)\leq Cr\int_{B_{r}}e_{\varepsilon}(u)\]
and
\[\frac{1}{r}\int_{\partial B_{r}}e_{\varepsilon}(u)(x\cdot h)- \varepsilon(x\cdot\nabla u)(h\cdot\nabla u)\leq Cr^{2}\int_{\partial B_{r}}e_ {\varepsilon}(u)\]
yields
\[(n-1-Cr) \int_{B_{r}}e_{\varepsilon}(u)-\int_{B_{r}}\left(\varepsilon\frac{| \nabla u|^{2}}{2}-\frac{W(u)}{\varepsilon}\right)\] \[\leq(1+Cr)r\int_{\partial B_{r}}e_{\varepsilon}(u)-\frac{ \varepsilon}{r}\int_{\partial B_{r}}(x\cdot\nabla u)^{2}.\]
We combine this estimate with
\[\frac{d}{dr}\bigg{(}\frac{e^{Cnr}}{r^{n-1}}E(u,B_{r})\bigg{)}=-(n-1-Cnr)\frac{ e^{Cnr}}{r^{n}}E(u,B_{r})+\frac{e^{Cnr}}{r^{n-1}}\int_{\partial B_{r}}e_{ \varepsilon}(u),\]
and the elementary inequality
\[\frac{n-1-Cr}{1+Cr}\geq n-1-Cnr,\]
which holds since \((n-1-Cnr)(1+Cr)=n-1-Cr-C^{2}nr^{2}\leq n-1-Cr\), in order to obtain
\[\frac{d}{dr}\bigg{(}\frac{e^{Cnr}}{r^{n-1}}E(u,B_{r})\bigg{)}\geq-\frac{e^{Cnr }}{(1+Cr)r^{n}}\int_{B_{r}}\left(\varepsilon\frac{|\nabla u|^{2}}{2}-\frac{W( u)}{\varepsilon}\right)+\frac{\varepsilon e^{Cnr}}{(1+Cr)r^{n+1}}\int_{ \partial B_{r}}(x\cdot\nabla u)^{2}.\]
We may assume \(r_{0}\) is so small that \(1+Cr\leq 2\). Inserting this bound and integrating from \(s\) to \(r\) now gives the claim (with \(c=Cn\)).
### Interior estimates
We use the maximum principle to establish supremum and gradient estimates for solutions of the Allen-Cahn equation in a ball in \(\mathbb{R}^{n}\). Up to rescaling these apply to critical sections in any ball in \(X\). Our first estimate is slightly stronger than [10, Proposition 3.2].
**Lemma 3.4**.: _Let \(u\) be a smooth function in \(\bar{B}_{r+kR}\), where \(k\geq 1\) is an integer and \(R\geq 2\). Suppose \(\Delta u=W^{\prime}(u)\) and \(|u|\leq B\). We then have_
\[\sup_{B_{r}}|u|\leq 1+2^{1+k}BR^{-2k}.\]
Proof.: Let \(\zeta\) be a smooth function on \(\mathbb{R}^{n}\) such that \(\zeta=2BR^{-2}\) on \(B_{r+(k-1)R}\) and \(\zeta=B\) on \(\partial B_{r+kR}\). We may assume \(2BR^{-2}\leq\zeta\leq B\) and \(|\Delta\zeta|\leq 2BR^{-2}\). Suppose for a contradiction that \(\sup_{B_{r+(k-1)R}}u\geq 1+4BR^{-2}\). Let \(g:=u-\zeta-1\). Since \(g\) is negative on \(\partial B_{r+kR}\), we have that \(g\) attains a maximum of at least \(2BR^{-2}\) at some point \(x_{0}\in B_{r+kR}\). At \(x_{0}\) we have
\[0\geq\Delta g=W^{\prime}(u)-\Delta\zeta\geq g\int_{0}^{1}W^{\prime\prime}(tu+( 1-t)(\zeta+1))\,dt+W^{\prime}(\zeta+1)-2BR^{-2}.\]
Since \(tu(x_{0})+(1-t)(\zeta(x_{0})+1)\geq 1\), \(W^{\prime\prime}(s)\geq 2\) for \(s\geq 1\), and \(g(x_{0})\geq 2BR^{-2}\), we obtain a contradiction. That is,
\[\sup_{B_{r+(k-1)R}}u\leq 1+4BR^{-2}.\]
Next, we apply the same argument, but with \(\zeta\) chosen so that \(\zeta=B\) on \(\partial B_{r+(k-1)R}\), \(\zeta=2BR^{-2}\) in \(B_{r+(k-2)R}\), and \(|\Delta\zeta|\leq 2BR^{-2}\). If \(\sup_{B_{r+(k-2)R}}u\geq 1+8BR^{-4}\), we obtain a contradiction by applying the maximum principle to \(g:=u-4R^{-2}\zeta-1\), exactly as before. Continuing in this way, we obtain the desired upper bound for \(u\) after \(k\) iterations. The lower bound is analogous.
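For the record, the iteration can be summarised as follows: writing \(1+s_{j}\) for the bound obtained for \(\sup_{B_{r+(k-j)R}}u\) after \(j\) steps, the argument above gives
\[s_{1}=4BR^{-2},\qquad s_{j+1}=2R^{-2}s_{j},\qquad\text{and hence}\qquad s_{j}=2^{1+j}BR^{-2j},\]
so that taking \(j=k\) yields the stated estimate.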
Next we proceed to prove an interior estimate for the discrepancy,
\[\xi:=\varepsilon\frac{|\nabla u|^{2}}{2}-\frac{W(u)}{\varepsilon}.\]
The estimate slightly improves [10, Proposition 3.3].
**Proposition 3.5**.: _Let \(u\in C^{\infty}(\bar{B}_{r+3R})\) be a solution of \(\Delta u=W^{\prime}(u)\) such that \(|u|\leq B\). There are constants \(R_{0}\) and \(C\) depending only on \(n\) and \(B\) such that if \(R\geq R_{0}\) then_
\[\sup_{B_{r}}\xi\leq C\max\bigg{\{}\Big{(}R^{-1}\sup_{\partial B_{r+R}}\xi\Big{)} ^{2/3},R^{-2}\bigg{\}}.\]
_Moreover, for every \(\delta>0\) there is an integer \(k_{0}=k_{0}(n,B,\delta)\) with the following property. If \(u\in C^{\infty}(\bar{B}_{r+(k_{0}+2)R})\) is a solution of \(\Delta u=W^{\prime}(u)\) such that \(|u|\leq B\), and \(R\geq R_{0}\), then we have_
\[\sup_{B_{r}}\xi\leq CR^{-2+\delta},\]
_where \(C=C(n,B,\delta)\)._
Proof.: First observe that for \(R\geq 2\), Lemma 3.4 implies
\[\sup_{B_{r+R}}|u|\leq 1+8BR^{-4}.\]
Let us define
\[\xi_{G}:=\frac{|\nabla u|^{2}}{2}-W(u)-G(u),\]
where \(G:\mathbb{R}\to\mathbb{R}\) is a smooth function to be chosen later. At any point where \(|\nabla u|>0\), we have
\[\Delta\xi_{G}-2\frac{(W^{\prime}(u)+G^{\prime}(u))}{|\nabla u|^{2}} \nabla\xi_{G}\cdot\nabla u+2G^{\prime\prime}(u)\xi_{G}\] \[\geq G^{\prime}(u)^{2}+G^{\prime}(u)W^{\prime}(u)-2G^{\prime \prime}(u)(W(u)+G(u)).\]
We set
\[\mu:=\max\bigg{\{}\max_{\partial B_{r+R}}\xi,\frac{125}{R^{2}}\bigg{\}}\]
and
\[G(u):=\lambda(1-u^{2}/Q),\qquad\lambda:=Q\bigg{(}\frac{\mu}{R}\bigg{)}^{2/3},\]
where \(Q\geq 8\) will be chosen later. Let \(\zeta\in C^{\infty}(\bar{B}_{r+R})\) be such that
\[\zeta=1\text{ on }B_{r},\quad 0\leq\zeta\leq 1\text{ on }B_{r+R},\quad|\nabla\zeta|\leq 2R^{-1},\quad|\Delta\zeta|\leq 2R^{-2}.\]
Define \(\tilde{\xi}:=\xi_{G}+\mu\zeta\).
Suppose for a contradiction that \(\sup_{B_{r}}\xi_{G}\geq\lambda\). It follows that \(\sup_{B_{r}}\tilde{\xi}\geq\lambda+\mu\). On the other hand, \(\tilde{\xi}\leq\mu\) on \(\partial B_{r+R}\), so we conclude that \(\tilde{\xi}\) attains an interior maximum at some point \(x_{0}\in B_{r+R}\). At the point \(x_{0}\) we have
\[|\nabla u|\geq\lambda^{1/2},\qquad|\nabla\xi_{G}|=\mu|\nabla\zeta|\leq 2\mu R^{-1},\qquad\Delta\xi_{G}\leq-\mu\Delta\zeta\leq 2\mu R^{-2}.\]
Therefore, at \(x_{0}\),
\[\Delta\xi_{G}-2\frac{(W^{\prime}(u)+G^{\prime}(u))}{|\nabla u|^{2}} \nabla\xi_{G}\cdot\nabla u+2G^{\prime\prime}(u)\xi_{G}\] \[\leq 2\mu R^{-2}+4\mu|W^{\prime}(u)|\lambda^{-1/2}R^{-1}+8\mu Q^{-1 }\lambda^{1/2}R^{-1},\]
and
\[G^{\prime}(u)^{2}+G^{\prime}(u)W^{\prime}(u)-2G^{\prime\prime}(u)(W(u)+G(u))\] \[\geq 4Q^{-2}\lambda^{2}u^{2}-2Q^{-1}\lambda uW^{\prime}(u)+4Q^{-1 }\lambda W(u).\]
We consider three cases.
_Case 1._ If \(|u(x_{0})|\in[0,1/2]\), we have
\[2\mu R^{-2}+4\mu\alpha\lambda^{-1/2}R^{-1}+8\mu Q^{-1}\lambda^{1/2}R^{-1}\geq 4Q^{-1}\lambda W(u)\geq 4\beta Q^{-1}\lambda,\]
where
\[\alpha:=\max_{s\in[0,1/2]}|W^{\prime}(s)|,\qquad\beta:=W(1/2).\]
Inserting \(\lambda=Q(\mu/R)^{2/3}\), we obtain
\[2\mu R^{-2}+4\alpha\mu^{2/3}Q^{-1/2}R^{-2/3}+8\mu^{4/3}Q^{-1/2}R^{-4/3}\geq 4 \beta\mu^{2/3}R^{-2/3},\]
and hence
\[2\mu^{1/3}R^{-4/3}+4\alpha Q^{-1/2}+8Q^{-1/2}\mu^{2/3}R^{-2/3}\geq 4\beta. \tag{5}\]
Since \(u\) is smooth and bounded in \(B_{r+3R}\), standard elliptic estimates imply that \(\mu\) can be bounded from above in terms of \(n\) and \(B\), so we obtain a contradiction provided that \(Q\) and \(R\) are sufficiently large depending on \(n\), \(B\), \(\alpha\) and \(\beta\).
_Case 2._ If instead \(|u(x_{0})|\in[1/2,1]\), we estimate
\[G^{\prime}(u)^{2}+G^{\prime}(u)W^{\prime}(u)-2G^{\prime\prime}(u)(W(u)+G(u)) \geq Q^{-2}\lambda^{2}+Q^{-1}\lambda|W^{\prime}(u)|,\]
and so find that
\[2\mu R^{-2}+|W^{\prime}(u)|(4\mu\lambda^{-1/2}R^{-1}-Q^{-1}\lambda)+8\mu Q^{-1 }\lambda^{1/2}R^{-1}\geq Q^{-2}\lambda^{2}.\]
Inserting \(\lambda=Q(\mu/R)^{2/3}\) yields
\[2\mu R^{-2}+|W^{\prime}(u)|(4Q^{-1/2}-1)\mu^{2/3}R^{-2/3}+8Q^{-1/2}\mu^{4/3}R^ {-4/3}\geq\mu^{4/3}R^{-4/3}.\]
For \(Q\geq 16^{2}\) we obtain
\[4\mu R^{-2}\geq\mu^{4/3}R^{-4/3}\]
and hence
\[4R^{-2/3}\geq\mu^{1/3}.\]
Recall however that \(\mu\geq 125/R^{2}\), so \(\mu^{1/3}\geq 5R^{-2/3}>4R^{-2/3}\). We have thus arrived at a contradiction.
_Case 3._ Finally, we consider the case that \(|u(x_{0})|\geq 1\). Recall we used Lemma 3.4 to conclude that \(|u(x_{0})|\leq 1+8BR^{-4}\). At \(x_{0}\) we have
\[2\mu R^{-2}+4\mu W^{\prime}(u)\lambda^{-1/2}R^{-1}+8\mu Q^{-1}\lambda^{1/2}R^ {-1}\geq 4Q^{-2}\lambda^{2}-2Q^{-1}\lambda uW^{\prime}(u).\]
Since \(|u|\in[1,1+8BR^{-4}]\), provided \(R\) is large enough so that \(8BR^{-4}\leq 1\), we have
\[W^{\prime}(u)\leq 48BR^{-4},\qquad-uW^{\prime}(u)\geq-480BR^{-4},\]
and hence
\[2\mu R^{-2}+184B\mu\lambda^{-1/2}R^{-5}+8\mu Q^{-1}\lambda^{1/2}R^{-1}\geq 4Q ^{-2}\lambda^{2}-960BQ^{-1}\lambda R^{-4}.\]
Inserting \(\lambda=Q\mu^{2/3}R^{-2/3}\) we obtain
\[2\mu R^{-2}+(184Q^{-1/2}+960)B\mu^{2/3}R^{-4-2/3}\geq(4-8Q^{-1/2})\mu^{4/3}R^ {-4/3}.\]
For \(Q\geq 16^{2}\), since \(\mu\geq 125R^{-2}\),
\[(4-8Q^{-1/2})\mu^{4/3}R^{-4/3}\geq\mu^{1/3}R^{-4/3}\mu+\mu^{2/3}R^{-4/3}\mu^ {2/3}\geq 5\mu R^{-2}+25\mu^{2/3}R^{-2-2/3},\]
and hence
\[2\mu R^{-2}+(184Q^{-1/2}+960)B\mu^{2/3}R^{-4-2/3}\geq 5\mu R^{-2}+25\mu^{2/3}R^ {-2-2/3}.\]
This is a contradiction provided \(R\) is sufficiently large depending on \(B\).
Combining the three cases, we arrive at the estimate
\[\sup_{B_{r}}\xi\leq C\max\bigg{\{}\Big{(}R^{-1}\sup_{\partial B_{r+R}}\xi \Big{)}^{2/3},R^{-2}\bigg{\}},\]
with \(C:=Q+125\).
Now suppose \(u\) is defined in \(B_{r+(k+2)R}\) and satisfies \(|u|\leq B\). The estimate we just derived (applied with \(r+(k-l)R\) in place of \(r\)) tells us that
\[\sup_{B_{r+(k-l)R}}\xi\leq C\max\left\{\left(R^{-1}\sup_{\partial B_{r+(k-l+1)R }}\xi\right)^{2/3},R^{-2}\right\}\]
for all \(1\leq l\leq k\). Applying this estimate iteratively, we obtain
\[\sup_{B_{r+(k-l)R}}\xi\leq C_{l}R^{-p_{l}}\]
for all \(l\leq k\), where \(p_{l}\) satisfies the relation \(p_{l}=\frac{2}{3}(1+p_{l-1})\). As \(l\to\infty\) we have \(p_{l}\to 2\), so if \(k\) is sufficiently large, \(p_{k}\geq 2-\delta\).
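Solving this recursion explicitly (with \(p_{0}=0\), corresponding to the trivial bound \(\sup\xi\leq C\) furnished by elliptic estimates), we find
\[p_{l}-2=\tfrac{2}{3}(p_{l-1}-2)\quad\Longrightarrow\quad p_{l}=2-2\left(\tfrac{2}{3}\right)^{l},\]
so \(p_{k}\geq 2-\delta\) as soon as \(k\geq\log(2/\delta)/\log(3/2)\), and any such integer may serve as \(k_{0}(n,B,\delta)\).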
### Global estimates
We now begin studying the global behaviour of critical sections as \(\varepsilon\to 0\). We focus on sequences such that \(|u|\) and the total energy are uniformly bounded; both of these hypotheses are satisfied for sequences of minimizers, as we show in Section 4. The main conclusion we want to draw is the following.
**Proposition 3.6**.: _Let \(u_{k}\) be a sequence of critical sections for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose the sequences \(\sup_{X}|u_{k}|\) and \(E_{\varepsilon_{k}}(u_{k})\) are bounded. Then the discrepancy_
\[\xi_{k}:=\varepsilon_{k}\frac{|\nabla u_{k}|^{2}}{2}-\frac{W(u_{k})}{ \varepsilon_{k}}\]
_converges to \(0\) in \(L^{1}_{\rm loc}(\mathbb{R}^{n})\)._
As in [10], we need separate arguments for the positive and negative parts of \(\xi_{k}\). We first consider the positive part, \(\xi_{k,+}:=\max\{\xi_{k},0\}\).
**Lemma 3.7**.: _Let \(u_{k}\) be a sequence of critical sections for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose the sequences \(\sup_{X}|u_{k}|\) and \(E_{\varepsilon_{k}}(u_{k})\) are bounded. As \(k\to\infty\) we have_
\[\xi_{k,+}\to 0\]
_in \(C^{0}_{\rm loc}(X)\) and \(L^{1}_{\rm loc}(\mathbb{R}^{n})\)._
Proof.: Fix \(\delta\in(0,1)\). Suppose \(x\in X\) is such that \(\rho(x)>R\varepsilon_{k}\), where
\[R:=(1+(k_{0}+2)R_{0}),\]
and \(k_{0}=k_{0}(n,B,\delta)\) and \(R_{0}=R_{0}(n,B)\) are the constants referred to in Proposition 3.5. We then have
\[B_{\varepsilon_{k}R}(x)\subset X.\]
After rescaling and applying Proposition 3.5, we find that
\[\sup_{B_{\varepsilon_{k}}(x)}\xi_{k,+}\leq C_{0}(n,B,\delta)\varepsilon_{k}^{ 1-\delta}.\]
Since every compact subset of \(X\) is contained in the set \(\{\rho>\varepsilon_{k}R\}\) for all sufficiently large \(k\), we conclude that \(\xi_{k,+}\to 0\) in \(C^{0}_{\rm loc}(X)\).
In the set \(\{\rho\leq\varepsilon_{k}R\}\) we can apply the boundary derivative estimates stated in Proposition 3.1. These provide a constant \(C_{1}\) which is independent of \(k\) such that
\[|\nabla u_{k}|^{2}\leq C_{1}(\varepsilon_{k}^{-1}\rho^{-1}+\varepsilon_{k}^{- 3}\rho),\]
and hence
\[\xi_{k,+}\leq C_{1}(\rho^{-1}+\varepsilon_{k}^{-2}\rho)\]
in \(\{\rho\leq\varepsilon_{k}R\}\). By Fubini's theorem, for large \(k\),
\[C_{1}\int_{\{\rho\leq\varepsilon_{k}R\}}(\rho^{-1}+\varepsilon_{k}^{-2}\rho) \leq C_{1}\int_{0}^{\varepsilon_{k}R}(1+\varepsilon_{k}^{-2}\rho^{2})\,d\rho \leq C_{1}R^{3}\varepsilon_{k},\]
where we have increased \(C_{1}\) as necessary. For any compact set \(K\) in \(\mathbb{R}^{n}\) we have
\[\int_{K}\xi_{k,+}=\int_{K\cap\{\rho\leq\varepsilon_{k}R\}}\xi_{k,+}+\int_{K \cap\{\rho>\varepsilon_{k}R\}}\xi_{k,+}\leq C_{1}R^{3}\varepsilon_{k}+C_{0} \mathcal{H}^{n}(K)\varepsilon_{k}^{1-\delta}.\]
In particular, \(\xi_{k,+}\to 0\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) as \(\varepsilon_{k}\to 0\).
We now turn to the negative part of \(\xi_{k}\). That this quantity converges to \(0\) in \(L^{1}_{\mathrm{loc}}\) is established below in Lemma 3.10.
Given a sequence of critical sections \(u_{k}\) with uniformly bounded energy, we define a sequence of energy measures \(\mu_{k}\) by the requirement that for a Borel set \(A\subset\mathbb{R}^{n}\),
\[\mu_{k}(A)=E_{\varepsilon_{k}}(u_{k},A).\]
Then since \(\mu_{k}\) is a sequence of uniformly bounded Radon measures, after passing to a subsequence, we can assume that \(\mu_{k}\to\mu\) in the weak* topology. That is,
\[\lim_{k\to\infty}\int\varphi\,d\mu_{k}=\int\varphi\,d\mu\]
for every \(\varphi\in C_{0}(\mathbb{R}^{n})\), or equivalently,
\[\limsup_{k\to\infty}\mu_{k}(K)\leq\mu(K),\qquad\mu(U)\leq\liminf_{k\to\infty} \mu_{k}(U)\]
whenever \(K\subset\mathbb{R}^{n}\) is compact and \(U\subset\mathbb{R}^{n}\) is open.
We recall that \(H^{1}(X,\mathbb{Z}_{2})\) is generated by loops which link a connected component of \(\Gamma\) once mod \(2\), and do not link any other connected component. By the definition of \(L\), every one of its smooth sections must vanish at at least one point of such a generator. This has the following consequence for the energy of critical sections.
**Lemma 3.8**.: _Consider a sequence of critical sections \(u_{k}\) for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). We assume \(|u_{k}|\leq B\), that the sequence \(E_{\varepsilon_{k}}(u_{k})\) is bounded, and that \(\mu_{k}\to\mu\) in the weak* sense. Let \(\gamma\) be a smooth loop in \(X\) which is a generator in \(H^{1}(X,\mathbb{Z}_{2})\). Then there is a point \(p\in\gamma\) such that_
\[\mu(B_{r}(p))\geq cr^{n-1}\]
_for all \(r<\operatorname{dist}(\gamma,\Gamma)\), where \(c=c(n,B)\). In particular, \(p\in\operatorname{supp}\mu\)._
Proof.: The following argument is similar to [10, Proposition 4.1]. Suppose
\[r<\operatorname{dist}(\gamma,\Gamma)/2,\]
so that in particular \(\bar{B}_{r}(x)\) is disjoint from \(\Gamma\) for every \(x\in\gamma\). We know that \(u_{k}\) vanishes at some point \(p_{k}\) in \(\gamma\) for every \(k\). After passing to a subsequence, we may assume \(p_{k}\to p\in\gamma\). We have
\[\frac{r^{1-n}}{2^{1-n}}\mu(B_{r}(p))\geq\limsup_{k\to\infty}\frac{r^{1-n}}{2^{ 1-n}}\mu_{k}(B_{r/2}(p_{k})),\]
so the interior energy monotonicity formula in [10] and Lemma 3.7 imply
\[\frac{r^{1-n}}{2^{1-n}}\mu(B_{r}(p))\geq\limsup_{k\to\infty}\varepsilon_{k}^{ 1-n}E_{\varepsilon_{k}}(u_{k},B_{\varepsilon_{k}}(p_{k})).\]
After rescaling and applying standard elliptic estimates, we find that there are constants \(\theta<1\) and \(c\) depending only on \(n\) and \(B\) such that
\[(\theta\varepsilon_{k})^{1-n}E(u_{k},B_{\theta\varepsilon_{k}}(p_{k}))\geq c\]
for all sufficiently large \(k\). It follows that
\[\mu(B_{r}(p))\geq\frac{\theta^{n-1}c}{2^{n-1}}r^{n-1}.\]
We note that the conclusion of Lemma 3.8, together with the fact that \(\operatorname{supp}\mu\) is closed, implies that \(\Gamma\subset\operatorname{supp}\mu\).
Using Lemma 3.8, we can now prove density bounds for \(\mu\), as in [10, Proposition 4.1].
**Lemma 3.9**.: _Consider a sequence of critical sections \(u_{k}\) for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). We assume \(|u_{k}|\leq B\) and \(E_{\varepsilon_{k}}(u_{k})\leq C\), and that \(\mu_{k}\to\mu\). There are positive constants \(r_{0}\) and \(D\) depending only on \(n\), \(\Gamma\), \(B\) and \(C\) such that_
\[D^{-1}r^{n-1}\leq\mu(B_{r}(p))\leq Dr^{n-1} \tag{6}\]
_for all \(p\in\operatorname{supp}\mu\) and \(r\leq r_{0}\)._
Proof.: We first prove the lower bound in (6). Suppose that \(p\in\Gamma\). We then have \(p\in\operatorname{supp}\mu\) by Lemma 3.8. Provided \(r_{0}\) is sufficiently small depending on \(\Gamma\), for all \(r<r_{0}\) we can apply Lemma 3.8 to an appropriate loop in \(B_{r}(p)\) to find a point \(q\in B_{r}(p)\) with the following properties: \(B_{r/3}(q)\) is disjoint from \(\Gamma\) and is contained in \(B_{r}(p)\), and
\[(r/3)^{1-n}\mu(B_{r/3}(q))\geq c,\]
where \(c=c(n,B)\) is the constant appearing in Lemma 3.8. Therefore,
\[r^{1-n}\mu(B_{r}(p))\geq r^{1-n}\mu(B_{r/3}(q))\geq c/3^{n-1}.\]
Consider next a point \(p\in\operatorname{supp}\mu\setminus\Gamma\), and let \(d=\operatorname{dist}(p,\Gamma)\). If \(r\geq 2d\) then there is a point \(q\in\Gamma\) such that \(B_{r}(p)\) contains \(B_{r/2}(q)\), and hence
\[r^{1-n}\mu(B_{r}(p))\geq r^{1-n}\mu(B_{r/2}(q))\geq c/6^{n-1}.\]
If \(r\leq 2d\) then \(B_{r/4}(p)\) is disjoint from \(\Gamma\). We now argue as in [10, Proposition 4.1]: there is a sequence \(p_{k}\to p\) such that \(|u_{k}(p_{k})|\leq 3/4\) for all sufficiently large \(k\), and by the interior energy monotonicity formula and Lemma 3.7, we have
\[(r/4)^{1-n}\mu(B_{r/4}(p))\geq\lim_{k\to\infty}\varepsilon_{k}^{1-n}E_{\varepsilon_{k}}(u_{k},B_{\varepsilon_{k}}(p_{k})).\]
After rescaling by \(\varepsilon_{k}\) and using standard elliptic estimates, we find that
\[(r/4)^{1-n}\mu(B_{r/4}(p))\geq c,\]
where \(c\) is as before. Putting all of these cases together, we see that
\[\mu(B_{r}(p))\geq D^{-1}r^{n-1}\]
for all \(p\in\operatorname{supp}\mu\), provided \(D^{-1}\leq c/6^{n-1}\).
We now turn to the upper bound. Suppose first that \(p\in\Gamma\). By Proposition 3.3, provided \(r_{0}\) is sufficiently small we have
\[\frac{e^{cr}}{r^{n-1}} E_{\varepsilon_{k}}(u_{k},B_{r}(p))-\frac{e^{cs}}{s^{n-1}}E_{ \varepsilon_{k}}(u_{k},B_{s}(p))\] \[\geq-\int_{s}^{r}\frac{e^{c\tau}}{(1+n^{-1}c\tau)}\frac{1}{\tau^ {n}}\int_{B_{r}(p)}\left(\varepsilon_{k}\frac{|\nabla u_{k}|^{2}}{2}-\frac{W( u_{k})}{\varepsilon_{k}}\right)_{+}dxd\tau\]
for all \(s<r<r_{0}\). The inner integrand on the right is \(\xi_{k,+}\), which tends to \(0\) in \(L^{1}_{\operatorname{loc}}\) as \(k\to\infty\) by Lemma 3.7. Therefore, after passing to the limit, we find that
\[\frac{e^{cr}}{r^{n-1}}\mu(B_{r}(p))\leq\frac{e^{cr_{0}}}{r_{0}^{n-1}}\mu(\bar{ B}_{r_{0}}(p))\leq\frac{e^{cr_{0}}}{r_{0}^{n-1}}C.\]
In particular, \(\mu(B_{r}(p))\leq\tilde{D}r^{n-1}\), provided we take
\[\tilde{D}\geq\frac{e^{cr_{0}}C}{r_{0}^{n-1}}.\]
Suppose next that \(p\in\operatorname{supp}\mu\setminus\Gamma\), and let \(d=\operatorname{dist}(p,\Gamma)\). Observe that there is a point \(q\in\Gamma\) such that \(B_{r}(p)\subset B_{r+d}(q)\) for all \(r>0\), and hence
\[\mu(B_{r}(p))\leq\mu(B_{d+r}(q))\leq\tilde{D}(d+r)^{n-1}.\]
If \(r\geq d/2\) then we obtain
\[\mu(B_{r}(p))\leq 3^{n-1}\tilde{D}r^{n-1}.\]
If \(r<d/2\), we can use the interior energy monotonicity formula derived in [10] to first bound
\[r^{1-n}\mu(B_{r}(p))\leq(d/2)^{1-n}\mu(\bar{B}_{d/2}(p)),\]
and then observe that \(B_{d/2}(p)\subset B_{3d/2}(q)\) to obtain
\[r^{1-n}\mu(B_{r}(p))\leq(9/2)^{n-1}\tilde{D}.\]
Combining these different cases yields the upper bound in (6) with \(D=(9/2)^{n-1}\tilde{D}\).
Finally, we conclude that the negative part of the discrepancy decays to \(0\) in \(L^{1}_{\operatorname{loc}}\). Combining this result with Lemma 3.7 gives Proposition 3.6.
**Lemma 3.10**.: _Consider a sequence of critical sections \(u_{k}\) for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose the sequences \(\sup_{X}|u_{k}|\) and \(E_{\varepsilon_{k}}(u_{k})\) are bounded. We then have that the negative part of the discrepancy, \(\xi_{k,-}:=\max\{-\xi_{k},0\}\), decays to \(0\) in \(L^{1}_{\operatorname{loc}}(\mathbb{R}^{n})\) as \(k\to\infty\)._
Proof.: The proof follows [10, Proposition 4.3]. Let us define a Radon measure on \(\mathbb{R}^{n}\) by
\[\xi_{k,-}(A):=\int_{A}\xi_{k,-}.\]
Suppose the claim is false, so that there is an open subset \(A\) of \(\mathbb{R}^{n}\) such that \(\xi_{k,-}(A)\) does not converge to \(0\). After passing to a subsequence, we may assume that \(\liminf_{k\to\infty}\xi_{k,-}(A)>0\). We may also assume that \(\xi_{k,-}\) and \(\mu_{k}\) weak*-converge to Radon measures \(\xi_{-}\) and \(\mu\).
We first claim that, for each \(p\in A\cap\operatorname{supp}\mu\), the quantity
\[\delta:=\liminf_{r\to 0}\frac{\xi_{-}(B_{r}(p))}{\mu(B_{r}(p))}\]
is zero. Suppose to the contrary that \(\delta>0\) for some \(p\in A\cap\operatorname{supp}\mu\). Then, for all sufficiently small values of \(r\), we have
\[\xi_{-}(B_{r}(p))>\frac{\delta}{2}\mu(B_{r}(p)),\]
and hence
\[\xi_{-}(B_{r}(p))>\frac{\delta}{2D}r^{n-1},\]
where \(D\) is independent of \(k\), by Lemma 3.9. We derive a contradiction using energy monotonicity. If \(p\in\Gamma\) then we use the monotonicity formula from Proposition 3.3, whereas if \(p\in\mathbb{R}^{n}\setminus\Gamma\) we use the interior monotonicity formula derived in [10]. The argument is essentially the same in both cases, so we only describe the boundary case in detail. For all \(k\) (and sufficiently small \(s<r\)) we have
\[\frac{e^{cr}}{r^{n-1}}E_{\varepsilon_{k}}(u_{k},B_{r}(p))\geq-\int_{s}^{r/2}\frac{e^{c\tau}}{(1+n^{-1}c\tau)}\frac{1}{\tau^{n}}\int_{B_{\tau}(p)}\xi_{k}\,dxd\tau.\]
Sending \(k\) to infinity, and using the fact that the positive part of the discrepancy goes to \(0\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\) (by Lemma 3.7), we obtain
\[\frac{e^{cr}}{r^{n-1}}C\geq\int_{s}^{r/2}\frac{e^{c\tau}}{(1+n^{-1}c\tau)}\frac{ \xi_{-}(B_{\tau}(p))}{\tau^{n}}\,d\tau\geq\frac{\delta}{2D}\int_{s}^{r/2}\frac{ e^{c\tau}}{(1+n^{-1}c\tau)}\frac{1}{\tau}\,d\tau\]
for sufficiently small \(s<r\). The right-hand side becomes unbounded as \(s\to 0\), so this is a contradiction. That is, \(\delta=0\) for every \(p\in A\cap\operatorname{supp}\mu\).
By a standard result in measure theory (see e.g. Lemma 1.2 on p. 47 of Evans-Gariepy), we conclude that
\[\xi_{-}(A\cap\operatorname{supp}\mu)=0.\]
But since \(\xi_{k,-}\leq W(u_{k})/\varepsilon_{k}\leq e_{\varepsilon_{k}}(u_{k})\) pointwise, we have \(\xi_{k,-}(A^{\prime})\leq\mu_{k}(A^{\prime})\) for every Borel set \(A^{\prime}\), which implies \(\operatorname{supp}\xi_{-}\subset\operatorname{supp}\mu\); so in fact \(\xi_{-}(A)=0\), contrary to our initial assumption.
### The associated varifolds
Given an open subset \(U\subset\mathbb{R}^{n}\), we write \(G(U)\) for the Grassmannian bundle of unoriented \((n-1)\)-planes over \(U\). Each point in \(G(U)\) is of the form \((x,S)\), where \(x\in U\) and \(S\) is an unoriented \((n-1)\)-dimensional subspace of \(T_{x}U\).
An \((n-1)\)-varifold on \(U\) is a nonnegative Radon measure on \(G(U)\). A sequence of varifolds \(V_{k}\) weak*-converges to a varifold \(V\) if for every \(\varphi\in C_{0}(G(U))\) we have
\[\int_{U}\varphi(x,S)\,dV_{k}(x,S)\to\int_{U}\varphi(x,S)\,dV(x,S).\]
Given a varifold \(V\), we define its mass \(\|V\|\) to be the Radon measure on \(U\) such that
\[\int_{U}\varphi(x)\,d\|V\|(x):=\int_{G(U)}\varphi(x)\,dV(x,S)\]
for all \(\varphi\in C_{0}(U)\).
Let \(u\) be a smooth section of \(L\) with \(E_{\varepsilon}(u)<\infty\), and define
\[\Phi(t):=\int_{0}^{t}\sqrt{\frac{(1-s^{2})^{2}}{8}}\,ds,\qquad w:=\Phi\circ|u|.\]
By Sard's theorem, \(\Sigma_{t}:=w^{-1}(t)\) is a smooth hypersurface in \(X\) for almost every \(t\in\mathbb{R}\). We define Borel measures on \(G(\mathbb{R}^{n})\) by setting
\[V_{\Sigma_{t}}(A):=\mathcal{H}^{n-1}(\{x\in\Sigma_{t}:(x,T_{x}\Sigma_{t})\in A\})\]
in case \(\Sigma_{t}\) is smooth, and
\[V(A):=\frac{1}{\sigma}\int_{\mathbb{R}}V_{\Sigma_{t}}(A)\,dt,\]
where \(\sigma:=2\Phi(1)\). In the definition of \(V\) we have used the fact that \(t\mapsto V_{\Sigma_{t}}(A)\) is a measurable function. By the coarea formula, for \(A\subset\mathbb{R}^{n}\) open,
\[\|V\|(A)=\|V\|(A\setminus\Gamma)=\frac{1}{\sigma}\int_{A\setminus\Gamma}| \nabla w|=\frac{1}{\sigma}\int_{A\setminus\Gamma}\sqrt{\frac{W(u)}{2}}|\nabla u|,\]
so since
\[\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{W(u)}{\varepsilon}=\left(\sqrt{ \frac{\varepsilon}{2}}|\nabla u|-\sqrt{\frac{W(u)}{\varepsilon}}\right)^{2} +2\sqrt{\frac{W(u)}{2}}|\nabla u|\]
we have
\[\|V\|(A)\leq\frac{1}{2\sigma}E_{\varepsilon}(u,A). \tag{7}\]
We thus have \(\|V\|(\mathbb{R}^{n})<\infty\), and hence \(V\) is a varifold on \(\mathbb{R}^{n}\). Moreover, for almost every \(t\in\mathbb{R}\), \(V_{\Sigma_{t}}\) is a varifold on \(\mathbb{R}^{n}\). We call \(V\) the varifold associated to \(u\).
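With the quartic potential \(W(s)=(1-s^{2})^{2}/4\) used throughout (so that \(\Phi^{\prime}(s)=\sqrt{W(s)/2}\)), the normalising constant can be evaluated explicitly, a routine integral:
\[\sigma=2\Phi(1)=2\int_{0}^{1}\sqrt{\frac{(1-s^{2})^{2}}{8}}\,ds=\frac{1}{\sqrt{2}}\int_{0}^{1}(1-s^{2})\,ds=\frac{\sqrt{2}}{3}.\]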
For a vector field \(g\in C^{1}_{0}(\mathbb{R}^{n},\mathbb{R}^{n})\) we have
\[\delta V(g)=\int Dg(x)\cdot S\,dV(x,S)=\frac{1}{\sigma}\int_{\mathbb{R}}\int Dg(x )\cdot S\,dV_{\Sigma_{t}}(x,S)dt.\]
Whenever \(\Sigma_{t}\) is smooth we have
\[\int Dg(x)\cdot S\,dV_{\Sigma_{t}}(x,S)=\int_{\Sigma_{t}}\operatorname{div}_{ \Sigma_{t}}g(x)\,d\mathcal{H}^{n-1}(x),\]
and
\[\operatorname{div}_{\Sigma_{t}}g(x)=\operatorname{tr}(A(\nabla w)Dg),\qquad A (\nabla w):=I-\frac{\nabla w}{|\nabla w|}\otimes\frac{\nabla w}{|\nabla w|}\]
so we may write
\[\delta V(g)=\frac{1}{\sigma}\int_{\mathbb{R}}\int_{\Sigma_{t}}\operatorname{tr}(A(\nabla w)Dg)\,d\mathcal{H}^{n-1}(x)dt.\]
By the coarea formula,
\[\int_{\mathbb{R}}\int_{\Sigma_{t}}\operatorname{div}_{\Sigma_{t}}g(x)\,d \mathcal{H}^{n-1}(x)dt=\int_{X}\operatorname{tr}(A(\nabla w)Dg)|\nabla w|,\]
and hence
\[\delta V(g)=\frac{1}{\sigma}\int_{X}\operatorname{tr}(A(\nabla w)Dg)|\nabla w|.\]
Note that since \(|\nabla w|\) is in \(L^{1}(\mathbb{R}^{n})\) we may equally integrate over \(\mathbb{R}^{n}\) on the right-hand side.
We now use the estimates derived earlier in this section to study the limit of the associated varifolds for a sequence of critical sections with \(\varepsilon_{k}\to 0\).
**Lemma 3.11**.: _Let \(u_{k}\) be a sequence of critical sections for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose \(|u_{k}|\leq B\) and \(E_{\varepsilon_{k}}(u_{k})\leq C\). Possibly after passing to a subsequence, we may assume that the energy measures \(\mu_{k}\) weak*-converge to a Radon measure \(\mu\) on \(\mathbb{R}^{n}\), and that the associated varifolds \(V_{k}\) weak*-converge to a varifold \(V\) on \(\mathbb{R}^{n}\). We have \(\|V\|=\mu/2\sigma\), and there are constants \(c=c(n,\Gamma)\), \(r_{0}=r_{0}(n,\Gamma,B,C)\) and \(D=D(n,\Gamma,B,C)\) such that_
\[\frac{e^{cs}}{s^{n-1}}\|V\|(B_{s}(p))\leq\frac{e^{cr}}{r^{n-1}}\|V\|(B_{r}(p)), \tag{8}\]
_and_
\[D^{-1}\leq\frac{1}{r^{n-1}}\|V\|(B_{r}(p))\leq D \tag{9}\]
_for every \(p\in\mathbb{R}^{n}\) and \(0<s<r<r_{0}\). In particular, \(\|V\|(\Gamma)=0\)._
Proof.: The measures \(\mu_{k}\) and \(\|V_{k}\|\) are both bounded independently of \(k\), so after passing to a subsequence we may assume \(\mu_{k}\to\mu\) and \(V_{k}\to V\) in the weak* sense.
We observe that since
\[\left|\varepsilon_{k}\frac{|\nabla u_{k}|^{2}}{2}+\frac{W(u_{k})}{\varepsilon_{k}}-2|\nabla w_{k}|\right|=\left(\sqrt{\frac{\varepsilon_{k}}{2}}|\nabla u_{k}|-\sqrt{\frac{W(u_{k})}{\varepsilon_{k}}}\right)^{2}\leq|\xi_{k}|\]
(the final inequality because \((a-b)^{2}\leq|a-b|(a+b)=|a^{2}-b^{2}|\) for \(a,b\geq 0\)),
and we know that the right-hand side converges to \(0\) in \(L^{1}_{\operatorname{loc}}(\mathbb{R}^{n})\) by Proposition 3.6,
\[\left|\varepsilon_{k}\frac{|\nabla u_{k}|^{2}}{2}-|\nabla w_{k}|\right|\to 0 \tag{10}\]
in \(L^{1}_{\operatorname{loc}}(\mathbb{R}^{n})\). Since for any open \(A\subset\mathbb{R}^{n}\) we have
\[\|V_{k}\|(A)=\frac{1}{\sigma}\int_{A\setminus\Gamma}\sqrt{\frac{W(u_{k})}{2}} |\nabla u_{k}|,\]
this immediately implies \(\|V\|=\mu/2\sigma\).
Consider some fixed \(0<s<r\). If \(r\) is small enough so that Proposition 3.3 applies, we can use Proposition 3.6 to conclude that
\[\frac{e^{cs}}{s^{n-1}}\|V\|(B_{s}(p))\leq\frac{e^{c(r-\delta)}}{(r-\delta)^{n-1} }\|V\|(\bar{B}_{r-\delta}(p))\]
for every \(r-\delta\in(s,r)\). But we have
\[\lim_{\delta\searrow 0}\|V\|(\bar{B}_{r-\delta}(p))=\|V\|(B_{r}(p)),\]
so this implies (8).
Lemma 3.9 implies (9). Since \(\mathcal{H}^{n-1}(\Gamma)=0\), a covering argument using the upper bound in (9) yields \(\|V\|(\Gamma)=0\).
We conclude with the main result of this section, Theorem 1.2.
Proof of Theorem 1.2.: The measures \(\mu_{k}\) and \(\|V_{k}\|\) are both bounded independently of \(k\), so after passing to a subsequence we may assume \(\mu_{k}\to\mu\) and \(V_{k}\to V\) in the weak* sense. We saw in Lemma 3.11 that \(\|V\|=\mu/2\sigma\).
Let \(g\in C^{1}_{0}(\mathbb{R}^{n},\mathbb{R}^{n})\) be such that \(g|_{\Gamma}\) is tangent to \(\Gamma\). We claim \(\delta V(g)=0\). Since \(\delta V_{k}(g)\to\delta V(g)\), it suffices to show that \(\delta V_{k}(g)\to 0\). Recall that
\[\delta V_{k}(g)=\frac{1}{\sigma}\int\operatorname{tr}(A(\nabla w_{k})Dg)| \nabla w_{k}|.\]
At points where the integrand is nonzero we have \(\frac{\nabla w_{k}}{|\nabla w_{k}|}=\frac{\nabla u_{k}}{|\nabla u_{k}|}\), so we may rewrite this as
\[\delta V_{k}(g)=\frac{1}{\sigma}\int\operatorname{tr}(A(\nabla u_{k})Dg)| \nabla w_{k}|.\]
On the other hand, by Lemma 3.2, for all \(k\) we have
\[\int\operatorname{tr}(A(\nabla u_{k})Dg)\frac{\varepsilon_{k}}{2}|\nabla u_{k}|^{2}=\int\left(\frac{\varepsilon_{k}}{2}|\nabla u_{k}|^{2}\operatorname{div}g-\frac{\varepsilon_{k}}{2}\nabla_{\nabla u_{k}}g\cdot\nabla u_{k}\right)=\frac{1}{2}\int\xi_{k}\operatorname{div}g,\]
and hence
\[\delta V_{k}(g)=\frac{1}{2\sigma}\int\xi_{k}\operatorname{div}g+\frac{1}{\sigma}\int\operatorname{tr}(A(\nabla u_{k})Dg)\Big(|\nabla w_{k}|-\frac{\varepsilon_{k}}{2}|\nabla u_{k}|^{2}\Big).\]
The right-hand side tends to \(0\) as \(k\to\infty\) by (10) and Proposition 3.6, so we have \(\delta V(g)=0\).
We know that \(V\) is stationary with respect to vector fields that are compactly supported in \(X\), and its density is bounded from below by \(D^{-1}\) by (9). Therefore, we may apply Allard's rectifiability theorem [1, 5.5 (1)] to conclude that \(V\) is a rectifiable \((n-1)\)-varifold on \(X\). That is, there is a countably \((n-1)\)-rectifiable set \(\Sigma\subset\mathbb{R}^{n}\) with \(\mathcal{H}^{n-1}(\Sigma)<\infty\) and a nonnegative function \(\theta\in L^{1}_{\operatorname{loc}}(X,\mathcal{H}^{n-1})\) such that \(V\llcorner G(X)\) is the varifold induced by \(\Sigma\) with multiplicity \(\theta\). In fact, since \(\|V\|(\Gamma)=0\) we have
\[V=V\llcorner G(X),\]
and \(\theta\in L^{\infty}(\mathbb{R}^{n},\mathcal{H}^{n-1})\) by (9), so \(V\) is a rectifiable varifold on \(\mathbb{R}^{n}\). Finally, we know \(\theta\) is integer-valued \(\mathcal{H}^{n-1}\)-almost everywhere in \(X\) (and hence in \(\mathbb{R}^{n}\)) by [10, Section 5], so \(V\) is integer rectifiable.
It remains to prove part (2) of the claim. The lower density bound in Lemma 3.11 implies that \(\Sigma\) is compact. That \(|u_{k}|\to 1\) in \(C^{0}_{\operatorname{loc}}(\mathbb{R}^{n}\setminus\Sigma)\) follows immediately from Theorem 1 in [10]. Now fix some \(b\in(0,1)\). We prove that the closure of
\[\{|u_{k}|\leq 1-b\}\]
in \(\mathbb{R}^{n}\), which by Proposition 3.1 is precisely
\[\{|u_{k}|\leq 1-b\}\cup\Gamma,\]
converges to \(\Sigma\) in the local Hausdorff sense. We proceed by contradiction. Let \(K\) be a compact subset of \(\mathbb{R}^{n}\) and suppose \(p\in\operatorname{supp}\|V\|\cap K\) is such that \(B_{r}(p)\) is disjoint from \(\{|u_{k}|\leq 1-b\}\) for all \(k\). We then have \(p\not\in\Gamma\). By [13, Proposition 4.2],
\[E(u_{k},B_{s}(p))\to 0\]
for every \(s<r\), but this contradicts (9). Now suppose \(p_{k}\in\{|u_{k}|\leq 1-b\}\cap K\) is such that \(B_{r}(p_{k})\) is disjoint from \(\operatorname{supp}\|V\|\) for all \(k\). Then \(B_{r}(p_{k})\) is disjoint from \(\Gamma\), so there is a compact subset of \(X\) which contains \(B_{r/2}(p_{k})\) for all \(k\). Once again, this contradicts [13, Proposition 4.2] and (9).
## 4. Existence of minimizers
A \(W^{1,2}_{\operatorname{loc}}\)-section \(u\) of \(L\) is minimizing for \(E_{\varepsilon}\) if \(E_{\varepsilon}(u)\leq E_{\varepsilon}(v)\) for every \(W^{1,2}_{\operatorname{loc}}\)-section \(v\) of \(L\). Our goal in this section is to show that, for each \(\varepsilon>0\), there exists a smooth minimizing section of \(L\).
First, we observe that the length of a minimizing section does not exceed \(1\).
**Lemma 4.1**.: _Fix \(\varepsilon>0\) and suppose \(u\) is a section of \(L\) which minimizes \(E_{\varepsilon}\). We then have_
\[\sup_{X}|u|\leq 1.\]
Proof.: Suppose the claim is false. Define \(e:=u/|u|\) on the set \(\{|u|>0\}\) and let \(\tilde{u}\) be such that \(\tilde{u}=\min\{|u|,1\}e\) in \(\{|u|>0\}\) and \(\tilde{u}=0\) in \(\{|u|=0\}\). Then \(\tilde{u}\) is a Lipschitz continuous section satisfying \(E_{\varepsilon}(\tilde{u})<E_{\varepsilon}(u)\), so \(u\) is not minimizing. This is a contradiction.
Next we construct a section whose energy is bounded purely in terms of \(\Gamma\).
**Lemma 4.2**.: _Let \(\varepsilon_{0}>0\) be fixed. For each \(\varepsilon\in(0,\varepsilon_{0})\) there is a Lipschitz continuous section \(v\) of \(L\) such that \(E_{\varepsilon}(v)\leq C\), where \(C=C(n,\Gamma,\varepsilon_{0})\) is a constant._
Proof.: Let \(\delta>0\) be a small constant, which will later be fixed depending on \(\Gamma\). Note that it suffices to prove the claim for \(\varepsilon\leq 10^{-6}\delta\), since any smooth section which is zero in a large ball around \(\Gamma\), and has unit length outside an even larger ball, has energy bounded in terms of \(\delta\) and \(\varepsilon_{0}\) when \(10^{-6}\delta\leq\varepsilon\leq\varepsilon_{0}\).
Fix a smooth section \(w\). Although the nodal set of \(w\) may not be a smooth hypersurface, it can be perturbed to a section which does have this property, as follows.
First we show that \(L\) is finitely generated. That is, there is a finite collection of smooth sections \(v_{i}\), \(1\leq i\leq I\), such that, for every \(x\in X\), the vectors \(v_{i}(x)\) span the fiber over \(x\). This follows easily from the fact that \(X\) can be covered by finitely many open sets \(U_{i}\) over which \(L\) trivialises. Since \(\Gamma\) is compact, we can take \(U_{1}\) to be the exterior of a large ball containing \(\Gamma\). We then cover \(\Gamma\) with finitely many small open \((n-2)\)-cubes, and cover a tubular neighbourhood of each of these with two simply connected regions; for example, one can take these to be cylinders with a wedge removed. The remainder of \(X\) can then be covered by balls. For each \(U_{i}\), we let \(v_{i}\) be a unit-length section in \(U_{i}\), but then multiply it by a cutoff function and extend it so that it vanishes outside of \(U_{i}\). If the cutoffs have sufficiently large support, then at least one of the sections \(v_{i}\) is nonzero at each \(x\in X\).
Now let \(F:X\times\mathbb{R}^{I}\to L\) be the smooth map defined by
\[F(x,s):=w(x)+s_{i}v_{i}(x).\]
Since at least one of the vectors \(v_{i}\) is always nonzero, \(F\) is transverse to the submanifold of \(L\) traced out by its zero section. Therefore, by the Thom transversality theorem, for almost every \(s\), the map \(x\mapsto F(x,s)\) is transverse to the zero section of \(L\). It follows that, for some small \(\tilde{s}\), the zero-set of \(\tilde{w}(x):=F(x,\tilde{s})\) is a smooth hypersurface in \(X\). We can
require that \(\tilde{s}_{1}>0\), so that \(\tilde{w}\) is nonzero outside of a large compact subset containing \(\Gamma\), and hence \(\tilde{w}^{-1}(0)\) is precompact.
We know \(\tilde{w}^{-1}(0)\) is smooth, but it may not have finite \(\mathcal{H}^{n-1}\)-measure. We can rectify this as follows. Let \(\delta\) be such that \(\Gamma_{\delta}:=\{\rho<\delta\}\) is a tubular neighbourhood of \(\Gamma\) whose boundary meets \(\tilde{w}^{-1}(0)\) transversally. We define a section \(\hat{w}\) which agrees with \(\tilde{w}\) in \(\{\rho\geq\delta\}\), and satisfies
\[\hat{w}(x)=\tilde{w}\bigg{(}p(x)+\delta\frac{x-p(x)}{|x-p(x)|}\bigg{)}\]
in the set \(\{0<\rho<\delta\}\), where \(p:\Gamma_{\delta}\to\Gamma\) is the nearest point projection. The nodal set of \(\hat{w}\) is then a Lipschitz hypersurface in \(X\) which agrees with \(\tilde{w}^{-1}(0)\) in \(\{\rho\geq\delta\}\), while
\[\hat{w}^{-1}(0)\cap\{\rho<\delta\}=\{sx+(1-s)p(x):x\in\tilde{w}^{-1}(0)\cap \partial\Gamma_{\delta},\;s\in(0,1)\}.\]
Since \(\tilde{w}^{-1}(0)\cap\partial\Gamma_{\delta}\) is a compact codimension-one submanifold of \(\partial\Gamma_{\delta}\), we have
\[\mathcal{H}^{n-1}(\hat{w}^{-1}(0)\cap\{\rho<\delta\})<\infty\]
and hence \(\mathcal{H}^{n-1}(\hat{w}^{-1}(0))<\infty\).
Let us write \(\Sigma:=\hat{w}^{-1}(0)\), and let \(e:=\hat{w}/|\hat{w}|\) in \(X\setminus\Sigma\). For \(\varepsilon\leq 10^{-6}\delta\) we define
\[v(x):=\begin{cases}\varphi(x)g_{\varepsilon}(d_{\Sigma}(x))e(x)&x\in X \setminus\Sigma\\ 0&x\in\Sigma.\end{cases}\]
where \(d_{\Sigma}\) is the distance function to \(\Sigma\), and \(g_{\varepsilon}:\mathbb{R}\to\mathbb{R}\) and \(\varphi\) are chosen as follows. We choose \(g_{\varepsilon}\) to be a smooth nondecreasing function which agrees with the one-dimensional solution \(t\mapsto\tanh(\varepsilon^{-1}t/\sqrt{2})\) for \(|t|\leq 100\varepsilon\), and satisfies
\[g_{\varepsilon}(t)=1\;\text{ for }\;t\geq 101\varepsilon,\qquad g_{\varepsilon}(t) =-1\;\text{ for }\;t\leq-101\varepsilon.\]
We require that \(\varphi\) is smooth, takes values between \(0\) and \(1\), and satisfies
\[\varphi=0\;\text{ in }\;\{\rho\leq 10^{4}\varepsilon\},\qquad\varphi=1\; \text{ in }\;\{\rho\geq(1+10^{4})\varepsilon\},\qquad|\nabla\varphi|=O(\varepsilon^{-1}).\]
Note that \(d_{\Sigma}\) is a Lipschitz function on \(\mathbb{R}^{n}\), so \(v\) is Lipschitz continuous.
A straightforward application of the coarea formula shows that
\[\limsup_{\varepsilon\to 0}E_{\varepsilon}(v)\leq C\mathcal{H}^{n-1}(\Sigma),\]
where \(C\) is a universal constant. (In fact, by allowing \(g_{\varepsilon}\) to approach the one-dimensional solution on increasingly large intervals, we can make \(C\) arbitrarily close to \(2\sigma\).) This proves the claim.
We now turn to the main result of this section.
**Lemma 4.3**.: _For every \(\varepsilon>0\) there exists a smooth section \(u\) of \(L\) which minimizes \(E_{\varepsilon}\)._
Proof.: Fix \(\varepsilon>0\) and let \(\bar{E}\) be the infimum of \(E_{\varepsilon}(u)\) taken over the space of all \(W^{1,2}_{\rm loc}\)-sections \(u\) of \(L\). Lemma 4.2 implies that \(\bar{E}<\infty\). Let \(u_{k}\) be a sequence of \(W^{1,2}_{\rm loc}\)-sections such that \(E_{\varepsilon}(u_{k})\to\bar{E}\) as \(k\to\infty\). After passing to a subsequence, we may assume \(E_{\varepsilon}(u_{k})\leq 1+\bar{E}\). In any ball \(B\subset X\) we have
\[\bigg{(}\int_{B}|u|^{2}\bigg{)}^{2}\leq\mathcal{H}^{n}(B)\int_{B}|u|^{4}\leq \mathcal{H}^{n}(B)\bigg{(}4\mathcal{H}^{n}(B\cap\{|u(x)|^{2}\leq 2\})+16\int_{B}W(u) \bigg{)},\]
since on \(\{|u|^{2}\geq 2\}\) we have \(|u|^{2}-1\geq|u|^{2}/2\) and hence \(|u|^{4}\leq 4(1-|u|^{2})^{2}=16W(u)\). It follows that \(\|u_{k}\|_{W^{1,2}(B)}\) is bounded independently of \(k\). Choosing a countable covering of \(X\) by open balls and appealing to a diagonal argument, we find that there is a \(W^{1,2}_{\rm loc}\)-section \(u\) of \(L\) such that \(u_{k}\to u\) weakly in \(W^{1,2}(B)\) for every open ball \(B\subset X\). By the Sobolev embedding theorem and Rellich compactness, we may also assume that \(u_{k}\to u\) in \(L^{q}(B)\) for every \(B\), for any fixed \(q\) with \(\frac{1}{q}>\frac{1}{2}-\frac{1}{n}\). Then
\(u_{k}\to u\) pointwise almost everywhere (possibly after passing to another subsequence), and hence by Fatou's lemma we have
\[\int_{X}\frac{W(u)}{\varepsilon}\leq\liminf_{k\to\infty}\int_{X}\frac{W(u_{k})}{ \varepsilon}.\]
Since the Dirichlet integral is lower semicontinuous with respect to weak convergence in \(W^{1,2}\), we have
\[E_{\varepsilon}(u)\leq\liminf_{k\to\infty}E_{\varepsilon}(u_{k})=\bar{E},\]
which is to say that \(u\) minimizes \(E_{\varepsilon}\). We have \(|u|\leq 1\) by Lemma 4.1, and \(u\) is a critical section, so it is smooth.
## 5. Regularity of minimizers
Our goal is to study the interior and boundary regularity of limiting interfaces obtained from sequences of minimizing sections. Concerning interior regularity, we have the following.
**Proposition 5.1**.: _Let \(u_{k}\) be a sequence of minimizers for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Possibly after passing to a subsequence, the varifolds \(V_{k}\) associated to \(u_{k}\) weak*-converge to a unit-multiplicity \((n-1)\)-rectifiable varifold \(V\). We have that \(\Sigma:=\operatorname{supp}\|V\|\) contains \(\Gamma\), and \(\Sigma\setminus\Gamma\) is a smooth minimal hypersurface outside a set of Hausdorff dimension at most \(n-8\)._
Proof.: We have \(E_{\varepsilon_{k}}(u_{k})\leq C\) with \(C\) independent of \(k\) by Lemma 4.2, and \(|u_{k}|\leq 1\) by Lemma 4.1. By Theorem 1.2, after passing to a subsequence, the associated varifolds weak*-converge to an integer rectifiable limit \(V\) such that \(\operatorname{supp}\|V\|\) contains \(\Gamma\). The rest of the claim follows from standard arguments, which we have included in Appendix A. (The key point is to show that in each ball \(B\subset X\), \(\Sigma\) is the boundary of a perimeter minimizer.)
**Remark 5.2**.: _Since the convergence in Proposition 5.1 is with multiplicity equal to one, regularity of the limiting hypersurface can also be proven using the \(C^{1,\alpha}\)-estimate for level sets due to Wang [23, Theorem 9.1] (which in turn relies on [10]). This was improved to a \(C^{2,\alpha}\)-estimate in [22] (see also [11]). We note also that the interior regularity of limiting interfaces arising from solutions to the Allen-Cahn equation which are stable (or have bounded Morse index) has been studied in [20, 21, 23]._
We now turn to the question of boundary regularity for limits of minimizers.
### Tangent cones at the boundary
Let \(V\) be a varifold and consider a point \(p\in\operatorname{supp}\|V\|\). Let \(D_{s_{i},p}(x):=s_{i}(x-p)\), where \(s_{i}\) is a sequence of scales \(s_{i}\to\infty\). We refer to any subsequential weak*-limit of the sequence \((D_{s_{i},p})_{\#}(V)\) as a tangent cone to \(V\) at \(p\).
Recall that if \(V\) is obtained as a limit of the varifolds associated to a sequence of critical sections satisfying length and energy bounds, then \(\Gamma\subset\operatorname{supp}\|V\|\). We will show that \(V\) admits a tangent cone at each \(p\in\Gamma\).
**Lemma 5.3**.: _Let \(u_{k}\) be a sequence of critical sections for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose \(\sup_{X}|u_{k}|\) and \(E_{\varepsilon_{k}}(u_{k})\) are bounded sequences, and that the associated varifolds \(V_{k}\) weak*-converge to a limit \(V\). For every point \(p\in\Gamma\), the limit \(V\) admits a tangent cone \(C_{p}V\) at \(p\), and the projection of \(x\) onto \(S^{\perp}\) vanishes for \(C_{p}V\)-almost every \((x,S)\in G(\mathbb{R}^{n})\)._
Proof.: Fix \(p\in\Gamma\) and a sequence of scales \(s_{i}\to\infty\). The varifolds \((D_{s_{i},p})_{\#}(V)\) have uniformly bounded mass on compact subsets of \(\mathbb{R}^{n}\) by (8). Therefore, after passing to a subsequence, we may assume that \((D_{s_{i},p})_{\#}(V)\) weak*-converges to a tangent cone at \(p\), which we denote by \(C_{p}V\). Moreover,
\[\frac{1}{r^{n-1}}\|C_{p}V\|(B_{r}(0))=\lim_{\rho\to 0}\frac{1}{\rho^{n-1}}\|V\|(B_{ \rho}(p))\]
for all \(r>0\).
For each vector field
\[g\in C^{1}_{0}(\mathbb{R}^{n}\setminus T_{p}\Gamma,\mathbb{R}^{n})\]
we have that \(T_{p}\Gamma\) lies outside of the support of \(g\), and hence \((D_{s_{i},p})_{\#}(V)\) is stationary with respect to \(g\) for all large \(i\). It follows that
\[\delta(C_{p}V)(g)=0.\]
Let \(\theta\) denote the reflection map through \(T_{p}\Gamma\). By Allard's reflection principle [1, 3.2], the varifold \(C_{p}V+\theta_{\#}(C_{p}V)\) is stationary with respect to each \(g\in C^{1}_{0}(\mathbb{R}^{n},\mathbb{R}^{n})\). Since the rescaled mass \(r^{1-n}\|C_{p}V\|(B_{r}(0))\) is constant in \(r\), the monotonicity formula for stationary varifolds [1, 5.1 (2)] implies that the projection of \(x\) onto \(S^{\perp}\) vanishes for \(C_{p}V\)-almost every \((x,S)\in G(\mathbb{R}^{n})\).
**Remark 5.4**.: _It also also possible to prove Lemma 5.3 using Allard's boundary version of the monotonicity formula for stationary varifolds (see [1, 3.4 (2)] and [1]). However it seems that (8) is still required for this argument, in order to establish that \(V\) satisfies the hypotheses of the formula (in particular, it is needed to show that \(\|V\|(\Gamma)=0\))._
### Boundary regularity
We now prove boundary regularity for the limit varifold in case \(n=3\) or \(\Gamma\) lies in a strictly convex hypersurface.
**Proposition 5.5**.: _Suppose \(n=3\) or that \(\Gamma\) lies in a strictly convex hypersurface. Let \(u_{k}\) be a sequence of minimizers for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose the associated varifolds \(V_{k}\) weak*-converge to \(V\). Then there is a tubular neighbourhood of \(\Gamma\) in which \(\Sigma:=\operatorname{supp}\|V\|\) is a smooth hypersurface with boundary, and \(\partial\Sigma=\Gamma\)._
Proof.: In case \(\Gamma\) lies in a strictly convex hypersurface, by the convex hull property for stationary varifolds [1, Theorem 6.2], we can simply apply [1, 5.2] to conclude that \(V\) has density \(1/2\) at each point in \(\Gamma\). The claim then follows immediately from Allard's boundary regularity theorem [1].
Let us turn to the case \(n=3\). Let \(C_{p}V\) be a tangent cone at \(p\in\Gamma\), obtained as a limit of the rescalings \((D_{s_{i},p})_{\#}(V)\). We claim that \(C_{p}V\) is a unit-multiplicity half-plane.
Note that the varifolds associated with the rescaled sections
\[u_{k,i}(x):=u_{k}(p+s_{i}^{-1}x)\]
weak*-converge to \((D_{s_{i},p})_{\#}(V)\) as \(k\to\infty\). Therefore, for an appropriate subsequence \(\tilde{u}_{i}:=u_{k_{i},i}\), the varifolds associated to \(\tilde{u}_{i}\) weak*-converge to \(C_{p}V\) on \(\mathbb{R}^{n}\setminus T_{p}\Gamma\) as \(i\to\infty\). Since each \(\tilde{u}_{i}\) is minimizing we may apply Proposition 5.1 (or more precisely Proposition A.1, since the boundary curves are changing with \(i\)) to conclude that \(\Sigma:=\operatorname{supp}\|C_{p}V\|\setminus T_{p}\Gamma\) is a smooth minimal hypersurface. Since \(\operatorname{supp}\|C_{p}V\|\) is closed, \(\Sigma\) is properly embedded in \(\mathbb{R}^{n}\setminus T_{p}\Gamma\). By Lemma 5.3, \(\Sigma\) is a cone.
Let \(p_{\pm}\) be the points where \(T_{p}\Gamma\) intersects \(S^{2}=\partial B_{1}(0)\). The set \(\Sigma\cap S^{2}\) consists of finitely many mutually disjoint properly embedded geodesic segments in \(S^{2}\setminus\{p_{\pm}\}\). Denote this collection of geodesics by \(\Xi\). It is easy to see that \(\Xi\) is either a great circle, or else consists of finitely many half-circles with endpoints at \(p_{\pm}\). If \(\Xi\) is a great circle then, since \(|\tilde{u}_{i}|\to 1\) outside a neighbourhood of \(\Xi\cup\{p_{\pm}\}\) (by part (2) of Theorem 1.2), there is a loop which winds once around \(T_{p}\Gamma\) on which \(\tilde{u}_{i}\) has no zeroes whenever \(i\) is sufficiently large. But \(\tilde{u}_{i}\) must vanish on every loop which is a generator in \(H^{1}(X,\mathbb{Z}_{2})\) by the definition of the spanning bundle, so this is a contradiction. We conclude that \(\Xi\) consists of half-circles with endpoints at \(p_{\pm}\). Equivalently, \(\Sigma\) is a finite collection of halfplanes which meet along \(T_{p}\Gamma\).
Let \(T\) denote a tubular neighbourhood of the great circle \(S^{2}\cap(T_{p}\Gamma)^{\perp}\). A straightforward blow-up argument using Savin's Bernstein theorem [2, Theorem 2.3] shows that \(|\nabla\tilde{u}_{i}|>0\) in \(T\) for all large \(i\), and hence \(\tilde{u}_{i}^{-1}(0)\cap T\) is a smooth hypersurface for all large \(i\).
Let \(P_{m}\), \(1\leq m\leq N\), be the halfplanes comprising \(\Sigma\). For each \(m\), we can choose a unit section of \(L\) in a small ball \(B_{m}\) containing \(T\cap P_{m}\), and so view \(\tilde{u}_{i}\) as a function in \(B_{m}\). By Proposition A.1, \(\Sigma\) divides \(B_{m}\) into open sets \(B_{m}^{\pm}\) such that \(\tilde{u}_{i}\to\pm 1\) in \(C^{0}_{\operatorname{loc}}(B_{m}^{\pm})\). Therefore, a generic smooth curve starting in \(B_{m}^{-}\) and ending in \(B_{m}^{+}\) must intersect \(\tilde{u}_{i}^{-1}(0)\) in an odd number of isolated points when \(i\) is large. On the other hand, \(|\tilde{u}_{i}|\to 1\) in \(C^{0}_{\operatorname{loc}}(T\setminus\Sigma)\), so for large \(i\) the set \(\tilde{u}_{i}^{-1}(0)\cap T\) is contained in the union of the sets \(B_{m}\). Therefore, for large \(i\) and a generic loop in \(T\) which winds around \(T_{p}\Gamma\) once, the number of zeroes of \(\tilde{u}_{i}\) on the loop is the sum of \(N\) odd numbers. If \(N\) is even then the number of zeroes is even, which is impossible by the definition of \(L\), so \(N\) is odd.
Suppose now that \(N\geq 3\). Then there is a pair of halfplanes in \(\Sigma\) which meet at an angle not exceeding \(2\pi/3\). But since \(\Sigma\) is the boundary of a perimeter minimizer in every ball \(B\subset\mathbb{R}^{n}\setminus T_{p}\Gamma\) by Proposition A.1, this is a contradiction, as a simple cut-and-paste argument demonstrates. So we have \(N=1\), and hence \(C_{p}V\) is a halfplane with multiplicity one. It follows that \(V\) has density \(1/2\) at \(p\), so we can apply [1] as before.
**Remark 5.6**.: _It would be interesting to know whether the nodal set of a minimizing/stable section is in fact a hypersurface with boundary, at least if \(n=3\) and \(\varepsilon\) is sufficiently small relative to \(\Gamma\). This might be proven by developing a regularity theory for nodal sets at boundary points analogous to the corresponding interior theory; see e.g. [1, 1, 10, 11, 12]._
## 6. Plateau's problem
We now prove Theorem 1.1 and Theorem 1.3. These follow immediately from Proposition 5.1, Proposition 5.5, and the following lemma.
**Lemma 6.1**.: _Let \(u_{k}\) be a sequence of minimizing sections for \(E_{\varepsilon_{k}}\), where \(\varepsilon_{k}\to 0\). Suppose the sequence of varifolds associated to \(u_{k}\) weak*-converges to \(V\), and set \(\Sigma:=\operatorname{supp}\|V\|\). If \(\Sigma\) is a smooth hypersurface with boundary and \(\partial\Sigma=\Gamma\), then \(\Sigma\) has least area among all such hypersurfaces._
Proof.: Suppose for a contradiction that \(\Sigma^{\prime}\) is a smooth hypersurface with boundary such that \(\partial\Sigma^{\prime}=\Gamma\) and
\[\mathcal{H}^{n-1}(\Sigma^{\prime})<\mathcal{H}^{n-1}(\Sigma).\]
Let us assume that any connected components of \(\Sigma^{\prime}\) which are disjoint from \(\partial\Sigma^{\prime}\) have been discarded. It is then straightforward to show that \(X\setminus\Sigma^{\prime}\) is connected, and hence \(L\) is trivial over \(X\setminus\Sigma^{\prime}\) by Lemma 2.2. Knowing this, we may proceed as in the proof of Lemma 4.2 to construct, for each sufficiently large \(k\), a smooth section \(u^{\prime}_{k}\) of \(L\) such that \(E_{\varepsilon_{k}}(u^{\prime}_{k})\) is as close as we like to \(2\sigma\mathcal{H}^{n-1}(\Sigma^{\prime})\). The only extra complication is that \(\Sigma^{\prime}\) may be noncompact, and hence may not admit a global tubular neighbourhood of uniform size, but this can be remedied by cutting off so that \(u^{\prime}_{k}\) has unit length outside a large ball whose boundary meets \(\Sigma^{\prime}\) transversally. In particular, we can arrange that
\[E_{\varepsilon_{k}}(u^{\prime}_{k})<2\sigma\mathcal{H}^{n-1}(\Sigma)\]
for all large \(k\). But we know that \(E_{\varepsilon_{k}}(u_{k})\to 2\sigma\mathcal{H}^{n-1}(\Sigma)\), so this shows that \(u_{k}\) is not a minimizer when \(k\) is large, which is a contradiction.
## 7. Minimizing area in a homology class
We have seen that Plateau's problem can be solved by minimizing a variant of the Allen-Cahn energy for sections of an appropriate line bundle. In this section we briefly explain how a similar method can be used to produce area minimizing hypersurfaces in a fixed homology class on a Riemannian manifold.
Let \((M,g)\) be a closed Riemannian manifold of dimension \(2\leq n\leq 7\). Suppose that \([\tilde{\Sigma}]\) is a nonzero element of \(H_{n-1}(M,\mathbb{Z}_{2})\) and let \(L\) be the corresponding real line bundle over \(M\) (see Section 2). We equip \(L\) with a metric and a flat metric connection, and for each section \(u\) of class \(W^{1,2}\), define
\[E_{\varepsilon}(u):=\int_{M}\varepsilon\frac{|\nabla u|^{2}}{2}+\frac{(1-|u|^{ 2})^{2}}{4\varepsilon}.\]
For each \(\varepsilon>0\) there is a smooth section \(u\) of \(L\) which minimizes \(E_{\varepsilon}\). This can be proven exactly as in Lemma 4.3. Moreover, for every \(\varepsilon\leq 1\), the energy of any minimizer is bounded from above by a constant depending only on \((M,g)\). This can be established by constructing an appropriate competitor for each \(\varepsilon>0\). The construction is very similar to that in Lemma 4.2, but now we arrange that the competitors concentrate energy around some fixed smooth hypersurface in the class \([\tilde{\Sigma}]\).
Fix a sequence \(\varepsilon_{k}\to 0\), and let \(u_{k}\) be a minimizer for \(E_{\varepsilon_{k}}\). Since \(L\) is nontrivial, each \(u_{k}\) must vanish somewhere in \(M\), and hence an argument similar to the proof of Lemma 3.8 shows that
\[\liminf_{k\to\infty}E_{\varepsilon_{k}}(u_{k})>0.\]
This argument requires a local energy monotonicity formula and that the positive part of the discrepancy
\[\xi_{k,+}:=\left(\varepsilon_{k}\frac{|\nabla u_{k}|^{2}}{2}-\frac{W(u_{k})}{ \varepsilon_{k}}\right)_{+}\]
tends to zero in \(L^{1}(M)\) as \(k\to\infty\). Both of these ingredients can be adapted from [10] without major modifications; we refer to [11, Appendix B] for the details.
Exactly as in Section 3, we can associate a varifold \(V_{k}\) with each \(u_{k}\). After passing to a subsequence, we may assume \(V_{k}\to V\) in the weak* sense. With minor modifications, the arguments in Appendix A show that \(V\) is the unit-multiplicity varifold induced by a smooth hypersurface \(\Sigma\) which is locally area minimizing. Moreover, since \(V\) has unit multiplicity, by [12] and [13] (see Section 5 of [12] for the argument), the nodal sets \(u_{k}^{-1}(0)\) converge to \(\Sigma\) in the graphical \(C^{2,\alpha}\)-sense as \(k\to\infty\); this is the step in which we require that \(n\leq 7\). We thus conclude that \([\Sigma]=[\tilde{\Sigma}]\).
Finally, if there were some other smooth hypersurface homologous to \(\tilde{\Sigma}\), but with less area than \(\Sigma\), then for \(k\) sufficiently large we could construct a section with less energy than \(u_{k}\) (as in Lemma 6.1). Therefore, \(\Sigma\) is area minimizing in \([\tilde{\Sigma}]\).
## Appendix A Interior regularity for minimizing limit interfaces
We say that \(u\) minimizes the Allen-Cahn energy \(E_{\varepsilon}\) in a ball \(B\) if
\[E_{\varepsilon}(u,B)\leq E_{\varepsilon}(u+v,B)\]
for all \(v\in W^{1,2}_{0}(B)\). In this appendix we explain how well known arguments imply the following statement (cf. [13] and [10, Theorem 3]).
**Proposition A.1**.: _Consider a sequence of functions \(u_{k}\) which minimize \(E_{\varepsilon_{k}}\) in a ball \(B\subset\mathbb{R}^{n}\), where \(\varepsilon_{k}\to 0\). Suppose \(\sup_{B}|u_{k}|\leq C_{1}\) and \(E_{\varepsilon_{k}}(u_{k})\leq C_{2}\), and that the sequence of varifolds associated to \(u_{k}\) weak*-converges to \(V\). Then \(V\) is a rectifiable \((n-1)\)-varifold with multiplicity equal to one, and \(\Sigma:=\operatorname{supp}\|V\|\) coincides with the boundary of a perimeter minimizer in \(B\), so \(\Sigma\) is a smooth hypersurface outside a set of Hausdorff dimension \(n-8\). In addition, \(\Sigma\) divides \(B\) into open sets \(B_{\pm}\) such that \(u_{k}\to\pm 1\) in \(C^{0}_{\operatorname{loc}}(B_{\pm})\)._
We first recall some basic facts concerning sets of finite perimeter. For an open set \(\Omega\subset\mathbb{R}^{n}\) and a function \(u\in L^{1}_{\mathrm{loc}}(\Omega)\) we define
\[\int_{\Omega}|\nabla u|:=\sup\Big{\{}\int_{\Omega}u(x)\operatorname{div}g(x)\,dx: g\in C^{\infty}_{0}(\Omega,\mathbb{R}^{n}),\ |g|\leq 1\Big{\}}.\]
Given a measurable set \(A\subset\mathbb{R}^{n}\) and an open set \(\Omega\), the perimeter of \(A\) in \(\Omega\) is then
\[P_{\Omega}(A):=\int_{\Omega}|\nabla\chi_{A}|,\]
where \(\chi_{A}\) is the indicator function on \(A\). If \(P_{\Omega}(A)<\infty\) then \(A\) is said to have finite perimeter in \(\Omega\). If \(\partial A\) is sufficiently regular (\(C^{1}\) is more than enough) then
\[P_{\Omega}(A)=\mathcal{H}^{n-1}(\partial A\cap\Omega).\]
We say that \(A\) is a perimeter minimizer in \(\Omega\) if
\[P_{\Omega}(A)\leq P_{\Omega}(A^{\prime})\]
whenever \(A\triangle A^{\prime}\) is a compact subset of \(\Omega\), where \(\triangle\) is the symmetric difference.
Before proving Proposition A.1 we need some lemmas. The proof of the following statement is almost identical to [13, Lemma 1], but is somewhat simpler, since we do not impose a volume constraint on the sets \(A_{k}\).
**Lemma A.2**.: _Let \(B\) be an open ball in \(\mathbb{R}^{n}\) and \(A\subset B\) a set of finite perimeter. Suppose \(B^{\prime}\subset B\) is a strictly smaller open ball. There exists a sequence of compact subsets \(A_{k}\subset\mathbb{R}^{n}\) with smooth boundaries such that the following hold:_
1. \(\lim_{k\to\infty}\mathcal{H}^{n}((A_{k}\cap B)\triangle A)=0.\)__
2. \(\lim_{k\to\infty}P_{B^{\prime}}(A_{k})=P_{B^{\prime}}(A)\)_._
3. \(\mathcal{H}^{n-1}(\partial A_{k}\cap\partial B^{\prime})=0\) _for all large_ \(k\)_._
Next we record a standard construction by which the area of a smooth boundary can be approximated by the energy of a sequence of smooth functions.
**Lemma A.3**.: _Let \(A\subset\mathbb{R}^{n}\) be a compact set such that \(\partial A\) is smooth. For each \(\varepsilon>0\) there is a smooth function \(u_{\varepsilon}\) on \(\mathbb{R}^{n}\) with the following properties:_
1. \(\partial A=\{u_{\varepsilon}=0\}\)_._
2. _The functions_ \(u_{\varepsilon}\) _converge to_ \(1-2\chi_{A}\) _as_ \(\varepsilon\to 0\)_, both locally uniformly in the complement of_ \(\partial A\)_, and in_ \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{n})\)_._
3. _As_ \(\varepsilon\to 0\)_,_ \(E_{\varepsilon}(u_{\varepsilon},\Omega)\to 2\sigma\mathcal{H}^{n-1}( \partial A\cap\Omega)\) _for every bounded open_ \(\Omega\)_._
Proof.: Let \(d_{A}\) denote the signed distance to \(\partial A\), with the signs chosen so that \(d_{A}>0\) in \(\mathbb{R}^{n}\setminus A\). There is a positive \(\delta\) such that \(d_{A}\) is a smooth function on the set \(\{|d_{A}|<\delta\}\). Let \(g_{\varepsilon}(s)\) be the one-dimensional solution \(\tanh(\varepsilon^{-1}s/\sqrt{2})\), and let \(\eta=\eta(s)\) be a smooth nonnegative function such that \(\eta(s)=0\) for \(|s|\leq\delta/4\) and \(\eta(s)=1\) for \(|s|\geq\delta/2\). We define
\[\tilde{g}_{\varepsilon}=(1-\eta)g_{\varepsilon}+\eta.\]
Using the coarea formula, it is straightforward to check that the function \(u_{\varepsilon}\) given by
\[u_{\varepsilon}(x)=\tilde{g}_{\varepsilon}(d_{A}(x))\]
has all of the properties claimed in the statement of the lemma.
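One can check these claims directly. With \(W(u)=\frac{(1-u^{2})^{2}}{4}\), as in the definition of \(E_{\varepsilon}\), the profile \(g_{\varepsilon}(s)=\tanh(\varepsilon^{-1}s/\sqrt{2})\) satisfies

\[\varepsilon^{2}g_{\varepsilon}^{\prime\prime}=g_{\varepsilon}(g_{\varepsilon}^{2}-1)=W^{\prime}(g_{\varepsilon})\qquad\text{and}\qquad\varepsilon\frac{(g_{\varepsilon}^{\prime})^{2}}{2}=\frac{W(g_{\varepsilon})}{\varepsilon},\]

so the two terms of the energy density agree along the profile, and the energy carried per unit area of \(\partial A\) is

\[\int_{-\infty}^{\infty}\varepsilon(g_{\varepsilon}^{\prime})^{2}\,ds=\int_{-1}^{1}\sqrt{2W(t)}\,dt=\frac{2\sqrt{2}}{3}=2\sigma,\]

assuming the usual normalisation \(\sigma=\int_{-1}^{1}\sqrt{W(t)/2}\,dt\); this is consistent with property (3).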
We now proceed with the proof of Proposition A.1.
Proof of Proposition a.1.: To ease notation we drop the subscript \(k\). By Theorem 1 in [10], after passing to a subsequence, we may assume \(u\to\bar{u}\) in \(L^{1}(B)\), where \(|\bar{u}|=1\) almost everywhere in \(B\), and \(A:=\{\bar{u}=1\}\) is a set of finite perimeter. Moreover, \(E_{\varepsilon}(u,\Omega)\to 2\sigma P_{\Omega}(A)\) for every open \(\Omega\subset B\). If necessary we can adjoin a set of \(\mathcal{H}^{n}\)-measure \(0\) to \(A\) in order to arrange that \(\partial A\) coincides with the support of the Radon measure \(\mu_{A}(E):=P_{B}(E\cap A)\) (see Proposition 12.19 in [12]).
We will show that \(A\) is a perimeter minimizer in \(B\). Suppose for a contradiction that this is not the case. Then there is a competitor \(A^{\prime}\subset\mathbb{R}^{n}\) such that \(A\triangle A^{\prime}\) is a compact subset of \(B\) and
\[P_{B}(A^{\prime})<P_{B}(A).\]
We may assume \(B\) is centered at the origin. We write \(B_{r}\) for the ball of radius \(r\) centered at the origin. Since \(\mathcal{H}^{n}(\Sigma)=0\) and the density of \(V\) is bounded from above [10, Proposition 4.1], an easy argument shows that \(\|V\|(\partial B_{r})=0\) for almost every \(r>0\). Let us fix some \(r>0\) which has this property and is such that \(A\triangle A^{\prime}\) is a compact subset of \(B_{r}\). By Lemma A.2, for any \(\beta>0\), we can find a compact \(A^{\prime\prime}\subset\mathbb{R}^{n}\) with smooth boundary such that
\[\mathcal{H}^{n}((A^{\prime\prime}\cap B)\triangle A^{\prime})<\beta,\qquad P_{ B_{r}}(A^{\prime\prime})\leq P_{B_{r}}(A^{\prime})+\beta,\qquad\mathcal{H}^{n-1} (\partial A^{\prime\prime}\cap\partial B_{r})=0.\]
Next we use Lemma A.3 to obtain, for each \(\varepsilon>0\), a smooth function \(v\) such that
\[E_{\varepsilon}(v,\Omega)\to 2\sigma\mathcal{H}^{n-1}(\partial A^{\prime\prime} \cap\Omega)\]
as \(\varepsilon\to 0\). This holds for every open \(\Omega\subset B\). Let \(\eta\) be a Lipschitz function on \(B\) such that \(0\leq\eta\leq 1\), \(\eta=1\) in \(B_{r}\), \(\eta=0\) in \(B\setminus B_{r+\delta\varepsilon}\), and
\[|\nabla\eta|\lesssim\delta^{-1}\varepsilon^{-1}.\]
Here and throughout we write \(f\lesssim g\) if there is a constant \(C=C(n,C_{1},C_{2})\) such that \(f\leq Cg\). We define
\[w=u+\eta(v-u),\]
and claim that \(E_{\varepsilon}(w)<E_{\varepsilon}(u)\) whenever \(\varepsilon\) is sufficiently small, provided that \(\beta\) and \(\delta\) are sufficiently small.
To check this we first observe that
\[E_{\varepsilon}(w,B_{r})=E_{\varepsilon}(v,B_{r}),\qquad E_{\varepsilon}(w,B \setminus B_{r+\delta\varepsilon})=E_{\varepsilon}(u,B\setminus B_{r+\delta \varepsilon}).\]
In the annular region \(S_{\varepsilon}:=B_{r+\delta\varepsilon}\setminus B_{r}\) we estimate \(W(w)\lesssim 1\) and
\[|\nabla w|^{2}\lesssim|\nabla u|^{2}+|\nabla v|^{2}+\delta^{-2}\varepsilon^{-2 }|u-v|^{2}\]
in order to obtain
\[E_{\varepsilon}(w,S_{\varepsilon})\lesssim E_{\varepsilon}(u,S_{\varepsilon}) +E_{\varepsilon}(v,S_{\varepsilon})+\delta^{-2}\varepsilon^{-1}\int_{S_{ \varepsilon}}|u-v|^{2}+\delta.\]
Let \(\Omega\) be an open subset of \(\mathbb{R}^{n}\) such that \((A^{\prime\prime}\cap B)\triangle A^{\prime}\subset\Omega\) and \(\mathcal{H}^{n}(\Omega\cap B)<2\beta\). We then have
\[\mathcal{H}^{n-1}(\Omega\cap\partial B_{s})\lesssim\beta\]
for almost every \(s\geq r/2\), and hence
\[\delta^{-2}\varepsilon^{-1}\int_{S_{\varepsilon}}|u-v|^{2}\lesssim\delta^{-1} \beta+\int_{S_{\varepsilon}\setminus\Omega}|u-v|^{2}.\]
Since \(|u-v|\to 0\) in \(C^{0}_{\mathrm{loc}}(B\setminus\Omega)\) as \(\varepsilon\to 0\), we obtain
\[\limsup_{\varepsilon\to 0}E_{\varepsilon}(w,S_{\varepsilon})\lesssim\limsup_{ \varepsilon\to 0}\Big{(}E_{\varepsilon}(u,S_{\varepsilon})+E_{\varepsilon}(v,S_{ \varepsilon})\Big{)}+\delta^{-1}\beta+\delta\]
We now use Lemma 3.11 and our choice of \(r\) to evaluate
\[\limsup_{\varepsilon\to 0}\Big{(}E_{\varepsilon}(u,S_{\varepsilon})+E_{ \varepsilon}(v,S_{\varepsilon})\Big{)}\lesssim\|V\|(\partial B_{r})+\mathcal{H}^ {n-1}(\partial A^{\prime\prime}\cap\partial B_{r})=0,\]
and so obtain
\[\limsup_{\varepsilon\to 0}E_{\varepsilon}(w,S_{\varepsilon})\lesssim\delta^{-1} \beta+\delta.\]
This shows that by choosing \(\beta\) and \(\delta\) appropriately, we can ensure that \(E_{\varepsilon}(w,B)\) is arbitrarily close to
\[2\sigma P_{B\setminus B_{r}}(A)+2\sigma P_{B_{r}}(A^{\prime})=2\sigma P_{B}(A ^{\prime})\]
whenever \(\varepsilon\) is small. Since \(P_{B}(A^{\prime})<P_{B}(A)\) and \(E_{\varepsilon}(u,B)\to 2\sigma P_{B}(A)\), we then have that for all sufficiently small \(\varepsilon\),
\[E_{\varepsilon}(w,B)<E_{\varepsilon}(u,B).\]
But \(w-u\in W^{1,2}_{0}(B)\), so this is impossible given that \(u\) is minimizing in \(B\). So our assumption was false; that is, \(A\) is a perimeter minimizer in \(B\).
We now turn to the claims in the statement of the theorem. Since each \(u\) is minimizing in \(B\), we can apply [10, Theorem 2] to conclude that \(V\) has multiplicity one and that \(\operatorname{supp}\|V\|\) coincides with \(\operatorname{supp}\mu_{A}=\partial A\). Since \(A\) is a perimeter minimizer, the set \(\partial A\) is a smooth hypersurface outside a set of Hausdorff dimension \(n-8\) (see for example [12, Theorem 28.1]).
Since \(\partial A=\Sigma\), \(\Sigma\) divides \(B\) into the two regions \(B_{+}:=\operatorname{int}(A)\) and \(B_{-}:=\operatorname{int}(B\setminus A)\). To conclude we show that \(u\to\pm 1\) in \(C^{0}_{\operatorname{loc}}(B_{\pm})\). Let \(U\) be an open subset of \(B_{+}\). Since we know that \(u\to 1\) in \(L^{1}(U)\), and \(|u|\to 1\) in \(C^{0}_{\operatorname{loc}}(B\setminus\Sigma)\), it suffices to show that \(U\) is disjoint from \(\Sigma\). Suppose to the contrary that \(U\cap\Sigma\) is nonempty. Then there is a point \(x\in\Sigma\cap U\) and a radius \(r>0\) such that \(\Sigma\cap B_{r}(x)\) is a smooth hypersurface. Making \(r\) smaller if necessary, we can ensure that \(\Sigma\cap B_{r}(x)\) is as close as we like to a flat \((n-1)\)-disc which divides \(B_{r}(x)\) into two pieces. Since \(u\) is a minimizer, a simple comparison argument (see the proof of Theorem 2 in [10]) shows that it converges locally uniformly to \(1\) on one side of this disc, and to \(-1\) on the other. But this contradicts the fact that \(u\to 1\) in \(L^{1}(U)\). The same kind of argument shows that \(u\to-1\) in \(B_{-}\).
|
2309.09092 | The Impact of Debiasing on the Performance of Language Models in
Downstream Tasks is Underestimated | Pre-trained language models trained on large-scale data have learned serious
levels of social biases. Consequently, various methods have been proposed to
debias pre-trained models. Debiasing methods need to mitigate only
discriminatory bias information from the pre-trained models, while retaining
information that is useful for the downstream tasks. In previous research,
whether useful information is retained has been confirmed by the performance of
downstream tasks in debiased pre-trained models. On the other hand, it is not
clear whether these benchmarks consist of data pertaining to social biases and
are appropriate for investigating the impact of debiasing. For example in
gender-related social biases, data containing female words (e.g. ``she, female,
woman''), male words (e.g. ``he, male, man''), and stereotypical words (e.g.
``nurse, doctor, professor'') are considered to be the most affected by
debiasing. If there is not much data containing these words in a benchmark
dataset for a target task, there is the possibility of erroneously evaluating
the effects of debiasing. In this study, we compare the impact of debiasing on
performance across multiple downstream tasks using a wide range of benchmark
datasets containing female, male, and stereotypical words. Experiments
show that the effects of debiasing are consistently \emph{underestimated}
across all tasks. Moreover, the effects of debiasing can be more reliably
evaluated by separately considering instances containing female, male, and
stereotypical words than by considering all of the instances in a benchmark dataset. | Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki | 2023-09-16T20:25:34Z | http://arxiv.org/abs/2309.09092v1 | # The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated
###### Abstract
Pre-trained language models trained on large-scale data have learned serious levels of social biases. Consequently, various methods have been proposed to debias pre-trained models. Debiasing methods need to mitigate only discriminatory bias information from the pre-trained models, while retaining information that is useful for the downstream tasks. In previous research, whether useful information is retained has been confirmed by the performance of downstream tasks in debiased pre-trained models. On the other hand, it is not clear whether these benchmarks consist of data pertaining to social biases and are appropriate for investigating the impact of debiasing. For example, in gender-related social biases, data containing female words (e.g. _"she, female, woman"_), male words (e.g. _"he, male, man"_), and stereotypical words (e.g. _"nurse, doctor, professor"_) are considered to be the most affected by debiasing. If there is not much data containing these words in a benchmark dataset for a target task, there is the possibility of erroneously evaluating the effects of debiasing. In this study, we compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets containing female, male, and stereotypical words. Experiments show that the effects of debiasing are consistently _underestimated_ across all tasks. Moreover, the effects of debiasing can be more reliably evaluated by separately considering instances containing female, male, and stereotypical words than by considering all of the instances in a benchmark dataset.
## 1 Introduction
Unfortunately, Pre-trained Language Models (PLMs) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) easily learn discriminatory social biases expressed in human-written texts in massive datasets (Kurita et al., 2019; Zhou et al., 2022; Kaneko et al., 2022). For example, if a model is given _"[MASK] is a nurse."_ as the input, a gender-biased PLM would predict "_She_" with a higher likelihood score than for "_He_" when filling the [MASK]. Various debiasing methods have been proposed to mitigate social biases in PLMs. Zhao et al. (2019); Webster et al. (2020) proposed a debiasing method by swapping the gender of female and male words in the training data. Kaneko and Bollegala (2021) proposed a method for debiasing by orthogonalising the vectors representing gender information with the hidden layer of a language model given a sentence containing a stereotypical word. Webster et al. (2020) showed that dropout regularization can reduce overfitting to gender information, and can thereby be used for debiasing PLMs.
The debiasing method should mitigate only discriminatory information, while useful pre-trained information should be retained in the model. Evaluations on downstream tasks often employ the GLUE benchmark (Wang et al., 2018), which measures the ability to understand language (Kaneko and Bollegala, 2021; Guo et al., 2022; Meade et al., 2022). The data for downstream tasks are not selected in terms of whether they reflect the impact of debiasing. To mitigate gender bias, data containing female words such as _"she"_ and _"woman"_, male words such as _"he"_ and _"man"_, and stereotypical words such as _"doctor"_ and _"nurse"_ would be most affected by debiasing.

\begin{table}
\begin{tabular}{l r r r r} \hline \hline & All & Female & Male & Occ. \\ \hline CoLA & 1,043 & 174 & 722 & 96 \\ MNLI & 9,832 & 3,467 & 8,875 & 1,415 \\ MRPC & 408 & 101 & 391 & 96 \\ QNLI & 5,463 & 2,149 & 5,371 & 1,066 \\ QQP & 40,430 & 7,415 & 29,638 & 3,331 \\ RTE & 277 & 113 & 269 & 94 \\ SST-2 & 872 & 187 & 691 & 75 \\ STS-B & 1,500 & 513 & 1,277 & 151 \\ WNLI & 71 & 27 & 71 & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The total number of instances containing female, male, and occupational (Occ.) words in the GLUE development data.
Table 1 shows the total number of instances containing female, male, and occupational (Occ.) words in the development data in the GLUE benchmark suite (Wang et al., 2019), which is widely recognised as a standard evaluation benchmark for LLMs. Occupational words have been used for probing LLMs for stereotypical social biases (Bolukbasi et al., 2016). From Table 1, we see that the GLUE benchmark has little data related to females and occupations. Therefore, the impact of debiasing on data related to females and occupations may be potentially underestimated when LLMs are evaluated on GLUE.
We first extract instances containing female words, instances containing male words, and instances containing stereotypical words from the benchmarks. We then calculate the performance difference between the original model and the debiased model for each category and compare it to the performance difference computed over the entire benchmark. The results show that, relative to the original model, the debiased model loses more performance on data related to females and occupations than is apparent when evaluating on the entire dataset. Therefore, existing evaluations underestimate the impact of debiasing on the performance of the downstream task.
It is important to be able to compare how well the effects of debiasing are captured in the data related to females, males, and occupations. We propose a method to control the degree of debiasing of PLMs and investigate whether the performance difference between original and debiased models widens as the degree of debiasing increases. Experimental results show that the proportion of female, male, and occupational words in a dataset is related to the susceptibility of that dataset to debiasing.
## 2 Experiments
### Debiasing Methods
We use the following three commonly used debiasing methods in our experiments. We apply these debiasing methods during fine-tuning in downstream tasks.
Counterfactual Data Augmentation (CDA) debiasing:CDA debiasing (Webster et al., 2020) swaps the gender of gender words in the training data. For example, _"She is a nurse"_ is swapped to _"He is a nurse"_, and the swapped version is appended to the training dataset. This enables learning a less biased model because the frequency of female and male words will be the same in the augmented dataset.
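A minimal sketch of this augmentation is given below; the word-pair dictionary is a tiny illustrative sample, not the full curated lists used in the literature.

```python
# Sketch of CDA: double the corpus with gender-swapped copies.
GENDER_PAIRS = {"she": "he", "he": "she", "her": "his", "his": "her",
                "woman": "man", "man": "woman", "female": "male", "male": "female"}

def swap_gender(sentence):
    """Swap gendered words token by token, preserving capitalisation."""
    out = []
    for token in sentence.split():
        core = token.strip(".,;:!?").lower()
        if core in GENDER_PAIRS:
            swapped = GENDER_PAIRS[core]
            if token[:1].isupper():
                swapped = swapped.capitalize()
            token = token.replace(token.strip(".,;:!?"), swapped, 1)
        out.append(token)
    return " ".join(out)

def cda_augment(corpus):
    """Append a gender-swapped copy of every sentence (doubling the data)."""
    return corpus + [swap_gender(s) for s in corpus]

print(cda_augment(["She is a nurse."]))
# -> ['She is a nurse.', 'He is a nurse.']
```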
Dropout debiasing:Webster et al. (2020) introduced dropout regularisation as a method to mitigate biases. They enhanced the dropout parameters for the attention weights and hidden activations of PLMs. Their research demonstrated that intensified dropout regularisation diminishes gender bias in these PLMs. They showed that dropout interferes with the attention mechanism in PLMs and prevents undesirable associations between words. However, it is also possible that the model may no longer be able to learn desirable associations.
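A minimal sketch of this intervention using Hugging Face Transformers follows; the raised dropout values are placeholders, since the actual hyperparameters follow Webster et al. (2020).

```python
# Sketch: raise BERT's two dropout probabilities before fine-tuning.
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig.from_pretrained(
    "bert-base-cased",
    hidden_dropout_prob=0.15,           # default 0.1; placeholder value
    attention_probs_dropout_prob=0.15,  # default 0.1; placeholder value
)
model = BertForSequenceClassification.from_pretrained("bert-base-cased", config=config)
# ...then fine-tune `model` on the downstream task as usual.
```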
Context debiasing:Kaneko and Bollegala (2021) proposed a method to debias MLMs through fine-tuning. It preserves semantic information while removing gender-related biases using orthogonal projections at the token or sentence level. This method targets male and female words and occupational words in the text for debiasing. It can be applied to various MLMs, independent of the model architectures and pre-training methods. Token-level debiasing across all layers produces the best performance.
### Settings
Although we use BERT (bert-base-cased1) (Devlin et al., 2019) as our PLM here as it has been the focus of much prior work on bias evaluations (Kaneko and Bollegala, 2021; Guo et al., 2022; Meade et al., 2022), the evaluation protocol we use can be applied to any PLM. We used the word lists2 proposed by Kaneko and Bollegala (2021) as female words, male words, and occupational words for extracting data instances and debiasing.
Footnote 1: [https://huggingface.co/bert-base-cased](https://huggingface.co/bert-base-cased)
Footnote 2: [https://github.com/kanekoomashiro/context-debias](https://github.com/kanekoomashiro/context-debias)
We use the following nine downstream tasks from the GLUE benchmark: CoLA (Warstadt et al., 2019), MNLI (Williams et al., 2018), MRPC (Dolan and Brockett, 2005), QNLI (Rajpurkar et al., 2016), QQP3, RTE (Dagan et al., 2006; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), SST-2 (Socher et al., 2013), STS-B (Cer et al., 2017), and WNLI (Levesque et al., 2012). Hyperparameters for debiasing follow previous studies (Kaneko and Bollegala, 2021; Webster et al., 2020), and we used the default values of huggingface for downstream task hyperparameters.4 For fine-tuning we use the entire training dataset for each corresponding task, without splitting into male, female and occupational instances. We evaluate the performance on all tasks using the official development data.
Footnote 3: [https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
Footnote 4: [https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification)
### Performance of Original vs. Debiased Models
We extract instances containing female words, male words, and stereotypical words from each of the datasets. We then calculate the performance difference between the original model and the debiased model for each dataset, and compare against the performance differences obtained when using all instances. If the performance difference for all instances is smaller than that when evaluated for the female, male, and occupational instances, it would indicate that the effect of debiasing is underestimated when evaluated on the entire dataset.
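A minimal sketch of this slicing is given below; the word lists are tiny placeholders (the experiments use the full lists of Kaneko and Bollegala (2021)), and `predict`/`metric` stand in for the fine-tuned model and the task metric.

```python
import re

FEMALE = {"she", "her", "woman", "female"}
MALE = {"he", "his", "man", "male"}
OCC = {"nurse", "doctor", "professor"}

def contains_any(text, words):
    return bool(set(re.findall(r"[a-z']+", text.lower())) & words)

def sliced_eval(examples, predict, metric):
    """Score a model on the All / Female / Male / Occ. slices of a dataset."""
    slices = {
        "All": examples,
        "Female": [e for e in examples if contains_any(e["text"], FEMALE)],
        "Male": [e for e in examples if contains_any(e["text"], MALE)],
        "Occ.": [e for e in examples if contains_any(e["text"], OCC)],
    }
    return {name: metric([predict(e["text"]) for e in xs],
                         [e["label"] for e in xs])
            for name, xs in slices.items()}

# The reported values are, per slice, the debiased model's score minus the
# original model's score on that slice.
```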
Table 2 shows the performance differences between the original model and the debiased model for each dataset/task in the GLUE benchmark. All, Female, Male, and Occ. are the performance differences when evaluated on the entire task dataset, instances containing female words, instances containing male words, and instances containing occupational words, respectively.
From the results in Table 2, it can be seen that the performance difference between the original model and the debiased model is larger for the Female, Male, and Occ. instances compared to that when using all instances. In particular, instances related to females exhibit a significant decrease in performance after debiasing.
It can be seen that the word lists used for debiasing affect how much performance degrades on the downstream tasks. Context debiasing uses occupational words for debiasing, while CDA debiasing does not. Consequently, in CDA debiasing, the performance difference for occupation-related instances is smaller than that for female- and male-related instances. In Context debiasing, on the other hand, occupation-related instances show performance differences as large as those for female- and male-related instances. Dropout debiasing does not use word lists for debiasing; therefore, unlike CDA and Context debiasing, we see large drops in performance for female, male and Occ. instances across tasks with Dropout debiasing.
\begin{table}
\begin{tabular}{l r r r r r r r r r r r r} \hline \hline & \multicolumn{4}{c}{CDA} & \multicolumn{4}{c}{Dropout} & \multicolumn{4}{c}{Context} \\ \cline{2-13} & All & Female & Male & Occ. & All & Female & Male & Occ. & All & Female & Male & Occ. \\ \hline CoLA & -1.36 & **-3.42** & -2.01 & -1.45 & 0.42 & -0.14 & **-0.21** & -0.07 & -0.32 & **-0.86** & -0.71 & -0.55 \\ MNLI & -0.55 & **-0.90** & -0.71 & -0.63 & 0.23 & 0.13 & **0.01** & 0.05 & -0.05 & **-0.47** & -0.43 & -0.32 \\ MRPC & -0.96 & -1.28 & **-1.31** & -1.03 & -0.82 & **-1.12** & -1.02 & -1.04 & -0.88 & -1.01 & **-1.06** & -0.92 \\ QNLI & -1.13 & **-1.42** & -1.19 & -1.27 & -1.01 & -1.11 & -1.07 & **-1.21** & 0.25 & **-0.19** & -0.06 & -0.04 \\ QQP & -0.21 & **-0.69** & -0.32 & -0.25 & 0.53 & **0.13** & 0.47 & 0.30 & 0.14 & **-0.12** & 0.03 & -0.05 \\ RTE & -1.16 & **-1.21** & -1.02 & -1.13 & -1.01 & **-1.24** & -0.96 & -1.13 & -0.43 & -0.65 & -0.51 & **-0.73** \\ SST-2 & -0.11 & **-0.81** & -0.34 & -0.25 & 0.45 & 0.20 & **0.12** & 0.23 & 0.22 & **-0.15** & -0.02 & -0.12 \\ STS-B & -1.01 & **-1.95** & -1.34 & -1.10 & 0.21 & 0.09 & -0.03 & **-0.11** & -0.08 & -0.31 & **-0.38** & -0.34 \\ WNLI & -2.82 & **-3.07** & -2.82 & -2.71 & -2.01 & -2.21 & -2.01 & **-2.33** & -1.52 & **-1.88** & -1.52 & -1.61 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance difference between the original model and debiased model for each dataset. **Bolded** values indicate the largest drop in performance of the debiased model.

### Debias Controlled Method

To understand how debiasing of a PLM affects the performance of individual downstream benchmark datasets, following the probing technique proposed by Kaneko et al. (2023), we apply different levels of debiasing to the bert-base-cased PLM and measure the difference in performance with respect to its original (non-debiased) version. For this purpose we use CDA as the debiasing method, where we swap the gender-related pronouns in a fraction \(r\in[0,1]\) of the total \(N\) instances of a dataset (i.e. the total number of gender-swapped instances in a dataset will be \(r\times N\)). \(r=0\) corresponds to not swapping gender in any training instances of the dataset, whereas \(r=1\) swaps the gender in all instances. We increment \(r\) in steps of \(0.1\) to obtain increasingly debiased versions of the PLM, which are then fine-tuned for the downstream task5. Figure 1 shows the difference in performance between the original vs. debiased versions of the PLM for QQP, MNLI, and QNLI, which have the largest numbers of instances in the GLUE benchmark.
Footnote 5: In Appendix A, we show that the debias controlled method is able to debias the model according to \(r\).
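A minimal sketch of this protocol is given below, reusing the `swap_gender` helper sketched earlier; the toy corpus and the commented-out training call are placeholders.

```python
# Sketch of the debias-controlled CDA: swap gender in a fraction r of instances.
import random

def controlled_cda(corpus, r, seed=0):
    """Swap gender in r*N of the N training instances, in place."""
    rng = random.Random(seed)
    n_swap = int(round(r * len(corpus)))
    chosen = set(rng.sample(range(len(corpus)), n_swap))
    return [swap_gender(s) if i in chosen else s for i, s in enumerate(corpus)]

if __name__ == "__main__":
    train_corpus = ["She is a nurse.", "He is a doctor."]  # toy placeholder data
    # Sweep r from 0 (no debiasing) to 1 (all instances swapped) in steps of 0.1,
    # fine-tuning one model per setting and recording the score differences.
    for r in [i / 10 for i in range(11)]:
        train_set = controlled_cda(train_corpus, r)
        # fine_tune_and_evaluate(train_set)  # placeholder for the training loop
```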
Note that CDA debiasing reverses gender without considering the context, for example producing _"He gets pregnant"_ from _"She gets pregnant"_. This is problematic because it eliminates even useful gender-related information learnt by the PLM via co-occurring contexts. Therefore, CDA debiasing has a negative impact on the performance of downstream tasks (Zmigrod et al., 2019), as shown by all three subplots in Figure 1. In fact, Table 2 shows that the performance difference of CDA debiasing is larger than that of dropout debiasing and context debiasing. Therefore, the larger \(r\) is for CDA, the more balanced and debiased the gender instances become, but the performance unfortunately degrades.
If the dataset of the downstream task is sensitive to the effect of debiasing, the performance difference between the original model and the debiased model widens as \(r\) increases. On the other hand, if the data set is insensitive to the effect of debiasing, the performance difference between the original model and the debiased model is unlikely to increase with the value of \(r\).
We find that the performance differences for the female, male, and occupational instances in the QQP, MNLI, and QNLI datasets increase with the value of \(r\). On the other hand, for QQP and MNLI, there is a rise and fall in the performance difference when all data are used. These results indicate that All, which also includes instances unrelated to gender, is less sensitive to the effect of debiasing compared to the Female, Male, and Occupational instances.
On the other hand, for QNLI, All has a small rise and fall in the performance difference. As seen from Table 1, QNLI contains more gender-related instances than QQP and MNLI. Therefore, it is likely that the performance decreases with \(r\) even for All. All and Male instances have a similar trend in performance difference with \(r\).
## 3 Conclusion
This study focused on gender-related social biases and the presence of female, male, and stereotypical words in benchmark datasets. Prior work had used the performance on downstream tasks to prove the usefulness of debiasing methods, overlooking the fact that only a small fraction of those downstream benchmark datasets contain gender-related instances. On the contrary, we found that the effects of debiasing a PLM were consistently underestimated across all tasks. We recommend that the evaluation of debiasing effects be conducted more reliably by separately considering instances containing specific gender-related words, rather than evaluating all instances in a benchmark dataset.

Figure 1: Performance difference between original and debiased models by debias rate \(r\). The vertical axis shows the performance difference between the original and debiased models, and the horizontal axis shows the debias rate.
## 4 Ethical Considerations
This study uses existing methods and datasets for experiments and does not propose a debiasing method or create a new dataset for social bias. This study evaluates the impact of debiasing on the performance of the downstream task, and it is not possible to evaluate how much bias is mitigated in the PLMs. Therefore, when evaluating the bias of PLMs, it is necessary to use evaluation methods such as StereoSet Nadeem et al. (2021), CrowS-Pairs Nangia et al. (2020), and All Unmasked Likelihood Kaneko and Bollegala (2022).
In this study, we only considered binary gender when examining gender bias. However, gender bias regarding non-binary gender has also been reported Cao and Daume III (2020); Dev et al. (2021). It is necessary to verify whether there is a similar trend in debiasing for non-binary genders.
## 5 Limitations
Many previous studies have shown that various social biases other than gender bias are learned in PLMs. This study targets only gender bias. While existing studies Webster et al. (2020); Zhao et al. (2019) have debiased various PLMs, we have experimented only with bert-base-cased. Furthermore, this study targets only English, which is a morphologically limited language, while various types of social biases are also learned by PLMs across many languages Kaneko et al. (2022); Neveol et al. (2022). Therefore, if the proposed method is to be used with other social biases and PLMs, it is necessary to properly verify its effectiveness in languages other than English. Moreover, we have not verified the use of debiasing controlled methods in languages such as Spanish and Russian, where gender swapping is not easy from a grammatical point of view Zmigrod et al. (2019).
|
2309.11805 | JobRecoGPT -- Explainable job recommendations using LLMs | In today's rapidly evolving job market, finding the right opportunity can be
a daunting challenge. With advancements in the field of AI, computers can now
recommend suitable jobs to candidates. However, the task of recommending jobs
is not the same as recommending movies to viewers. Apart from must-have criteria,
like skills and experience, there are many subtle aspects to a job which can
decide if it is a good fit or not for a given candidate. Traditional approaches
can capture the quantifiable aspects of jobs and candidates, but a substantial
portion of the data that is present in unstructured form in the job
descriptions and resumes is lost in the process of conversion to structured
format. As of late, Large Language Models (LLMs) have taken the AI field
by storm with extraordinary performance in fields where text-based data is
available. Inspired by the superior performance of LLMs, we leverage their
capability to understand natural language for capturing the information that
was previously getting lost during the conversion of unstructured data to
structured form. To this end, we compare performance of four different
approaches for job recommendations namely, (i) Content based deterministic,
(ii) LLM guided, (iii) LLM unguided, and (iv) Hybrid. In this study, we present
advantages and limitations of each method and evaluate their performance in
terms of time requirements. | Preetam Ghosh, Vaishali Sadaphal | 2023-09-21T06:25:28Z | http://arxiv.org/abs/2309.11805v1 | # JobRecoGPT: Explainable job recommendations using LLMs
###### Abstract
In today's rapidly evolving job market, finding the right opportunity can be a daunting challenge. With advancements in the field of AI, computers can now recommend suitable jobs to candidates. However, the task of recommending jobs is not the same as recommending movies to viewers. Apart from must-have criteria, like skills and experience, there are many subtle aspects to a job which can decide if it is a good fit or not for a given candidate. Traditional approaches can capture the quantifiable aspects of jobs and candidates, but a substantial portion of the data that is present in unstructured form in the job descriptions and resumes is lost in the process of conversion to structured format. As of late, Large Language Models (LLMs) have taken the AI field by storm with extraordinary performance in fields where text-based data is available. Inspired by the superior performance of LLMs, we leverage their capability to understand natural language for capturing the information that was previously getting lost during the conversion of unstructured data to structured form. To this end, we compare the performance of four different approaches for job recommendation, namely (i) Content based deterministic, (ii) LLM guided, (iii) LLM unguided, and (iv) Hybrid. In this study, we present advantages and limitations of each method and evaluate their performance in terms of time requirements.
## I Introduction
Identifying job opportunities for talent is important to enable organisations to attract, develop, and retain that talent. It is a time-consuming process when done manually and results in limited reach, inconsistent criteria, and human bias. With the emergence of freelance platforms and the integration of technology, the significance of data-driven job recommendations has grown.
Traditional state-of-the-art techniques recommend job opportunities to the talent based on similarity between job requirements and talent attributes. However, the nature of data in this domain is inherently in unstructured natural language format, viz. resumes and job descriptions. To use traditional approaches, one is required to extract information and bring it to a structured format. In this study, we investigate the application of large language models (LLMs) [16] due to their ability to process and comprehend language, as well as their extensive knowledge gained from training on Internet text.
Providing recommendations falls in the category of Reasoning intelligence, specifically Prescriptive intelligence. Generative AI and language models are observed to be performing well in Recognition and Operative intelligence. In this study, we investigate the role of language models in Reasoning intelligence.
The traditional methods are required to define a structure that consists of the requirements of jobs and the corresponding attributes of talent, such as role, skills, educational background, and experience, among others. The jobs that most closely match the attributes of the talent are recommended. These methods face several challenges.
* The task of transforming data from CVs and JDs into a structured format is error-prone and can result in loss of information [1].
* The qualitative aspects of talent such as achievements, strengths, and aspirations are ambiguous and presented in natural language, hence not extracted.
* The data-driven tools rely on quantitative metrics, such as skills and experience, while overlooking qualitative aspects like soft skills or potential for growth.
* The JDs may be incomplete; the qualitative aspects of the job and organization may be biased or may not be mentioned at all.
In this work, we leverage the capability of LLMs to understand natural language for capturing the information that was previously getting lost during the conversion of unstructured data to structured form. We present:
* One content-based deterministic approach, based on traditional techniques. This is used as a baseline to compare the performance of all approaches.
* Two LLM-based approaches: guided and unguided.
* A hybrid approach: a combination of the traditional and LLM-based approaches.
* Evaluation: a comparison of the quality of recommendations produced by each method and their efficiency.
Though the approaches proposed in this work are generic, we consider the domain of Information Technology to evaluate the effectiveness of these techniques. We conduct experiments using two datasets:
* _Synthetic data_: In this experiment, we generate synthetic data that simulates the characteristics of the IT domain, allowing us to assess the performance of all methods in a controlled environment.
* _Real world data_: In this experiment, we use real JDs from the IT field. This enables us to evaluate the practical applicability and performance of the methods using authentic data.
Through these experiments, we aim to gain insights into the effectiveness and limitations of all the techniques for providing ranked job recommendations for a talent in the IT domain.
## II Content based Deterministic approach
The deterministic approach is along the lines of content-based techniques for recommending jobs to talents. This simple approach is used as a baseline to compare the performance of all approaches. In this approach, a job is recommended by matching its requirements with the attributes of the talent.
Refer to Figure 4. The algorithm accepts as inputs an unstructured resume (CV), unstructured job descriptions (JDs), and a configuration that includes the objective direction, indicating whether higher, lower, or closer values of a job attribute with respect to the corresponding talent attribute are better, and the number of recommendations required. The output is the set of recommended JDs, each with a score.
#### II-1 Unstructured to structured conversion
Figures 1 and 11 show a CV and a JD, respectively. Figure 2 shows the attribute model for a talent and a job. The talent attributes capture the talent's professional information and preferences, whereas the job attributes capture the corresponding requirements.
The talent and job models are populated by extracting information from the CV and JD. The structured models of the talent and job corresponding to the CV and JD in Figures 1 and 11 are shown in Figures 3 and 10, respectively. We use LLMs to convert unstructured data to structured form. The prompts used for this conversion are depicted in Figure 12.
#### II-2 Deterministic algorithm
The deterministic algorithm, as the name suggests, generates predictable, hence reproducible, results every time, based on the criteria provided to it. It provides a baseline score against which to compare recommendations from other approaches. Though more complex recommendation approaches exist, we keep this one simple while capturing the required attributes that determine if a job is suitable for a given talent.
At the core of this algorithm are basic comparisons between various attributes extracted from the CV and JD.

* "Closer" match: A rating of 1 is awarded when the absolute deviation equals zero, and the rating decreases as the absolute deviation increases. This approach penalizes under-qualified candidates; it also penalizes over-qualified candidates, as they may be better suited for other roles.
* "Exact" match: A score of 1 is awarded if a value or a set is an exact match. This is important for attributes where a specific value is required, say a mandatory certification.

For our experiment, we configure all attributes to use the "closer" match; the only exception is "certifications", for which we use the "exact" match. Consider the following example; a code sketch of the full scorer follows the list.

1. Skill proficiency: Consider that three skills out of four are common, with one having the same proficiency and the other two with a deviation of 2 from the required value. This leads to a score: \((1+1/2+1/2)/4=0.5\). The first value is 1 as the proficiency exactly matches the requirement, and the next two values are \(1/2\) as the values deviate by 2.
2. Time zone: It is the absolute difference in hours, ranging from 0 to 26, which is then normalized between 1 and 0.
3. Certification: It is a set match of the required certifications. The score is 1 for a complete set match, else 0.
4. Education: It is denoted as a number from 1 to 5, with a higher value for higher education. Its score is the reciprocal of the absolute difference with respect to the required value.
5. Experience: The score is calculated as the ratio of the talent value to the required value in the required role. For example, if for the matching role the actual value is 6 and the required value is 10, the score is \((6/10)=0.6\).
6. Role: It is the reciprocal of the talent's preference rank for the required role. For example, if the candidate has 2nd preference for the role, the score is 0.5.
7. Finally, the average of all scores is \((0.5+1+1+1+0.6+0.5)/6=4.6/6=0.77\).

Fig. 1: Unstructured Resume CV3.

Fig. 3: Structured resume CV3.
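The scorer can be summarized in a short sketch. The exact decay of the "closer" match is not fully specified above, so the variant below simply reproduces the worked example (a score of 1 at deviation 0 and 1/2 at deviation 2); the dictionary schema is likewise an illustrative assumption.

```python
def closer(actual, required):
    """'Closer' match: 1 at zero deviation; 1/2 at deviation 2 (worked example)."""
    return 1.0 / max(1.0, abs(actual - required))

def score_job(cv, jd):
    req = jd["skills"]  # e.g. {"python": 4, "sql": 3, ...}, proficiency 0..5
    # Missing required skills contribute 0, as in the worked example.
    skill = sum(closer(cv["skills"][s], p) for s, p in req.items()
                if s in cv["skills"]) / len(req)
    timezone = 1.0 - abs(cv["timezone"] - jd["timezone"]) / 26.0   # hours, 0..26
    certs = 1.0 if set(jd["certs"]) <= set(cv["certs"]) else 0.0   # "exact" set match
    education = closer(cv["education"], jd["education"])           # levels 1..5
    experience = cv["experience"] / jd["experience"]               # e.g. 6/10 = 0.6
    role = (1.0 / (cv["role_pref"].index(jd["role"]) + 1)          # reciprocal of rank
            if jd["role"] in cv["role_pref"] else 0.0)
    return sum([skill, timezone, certs, education, experience, role]) / 6.0
```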
## III Language model based approach
Though large language models (LLMs) are predictors of next words and lack true comprehension or knowledge, they have a convincing ability to generate coherent responses and recall information. This makes it seem like they possess knowledge of many domains. We propose to leverage this ability of LLMs to understand and correlate different required skills and roles based on their similarity. For example, someone experienced in design engineering and software development in the domain of IT may inherently be suited for a Full stack developer role, even if not explicitly mentioned. This provides a significant advantage over methods that are constrained by specific value and text based similarities.
We provide the unstructured CV and JDs to a language model to generate job recommendations. Refer to Figures 5 and 6 for the two LLM-based approaches. We use OpenAI's GPT4 [2] model for this purpose.
### _LLM Guided Algorithm_
Here, we provide a pre-defined structure to generate recommendations. This allows control over the model's response, such as the criteria used for matching, the number of recommendations, and returning recommendations in a structured format. Further, we exploit the LLM's ability to explain its actions in natural language to provide reasoning on why a certain recommendation is good or bad.
Figure 5 shows the overall flow and the prompts provided as input to the LLM. A configuration is provided to "guide" the LLM according to the desired objective direction, the _"criteria"_. This information, together with appropriate prompts, is provided as input to the LLM. The output from the LLM is consistent due to the guidelines from the prompts and consists of the recommended Job IDs along with explanations such as benefits, drawbacks, and qualitative aspects.
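A minimal sketch of the guided call follows, using the OpenAI chat-completions API as it existed at the time; the prompt wording and criteria dictionary are illustrative stand-ins for the actual prompts in Figure 5.

```python
import openai  # 2023-era openai-python package

# Hypothetical criteria configuration; the real prompt is shown in Figure 5.
CRITERIA = {"skills": "closer", "experience": "closer",
            "education": "closer", "certifications": "exact"}

def guided_recommend(cv_text, jd_texts, top_k=3):
    system = ("You are a job recommender. Match the candidate to the jobs using "
              f"these criteria: {CRITERIA}. Return exactly {top_k} recommendations, "
              "one per line, as 'Job ID | benefits | drawbacks | qualitative aspects'.")
    user = ("CANDIDATE CV:\n" + cv_text + "\n\nJOB DESCRIPTIONS:\n" +
            "\n\n".join(f"[JD{i + 1}]\n{jd}" for i, jd in enumerate(jd_texts)))
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0,  # keeps the structured output as reproducible as possible
    )
    return response["choices"][0]["message"]["content"]
```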
### _LLM Unguided Algorithm_
To leverage the LLM's ability to comprehend natural language, we perform an experiment where no explicit criteria are provided for recommending a job to a talent, viz. an "unguided" approach. This way, the LLM is expected to recommend jobs that are "good" according to its own comprehension. We direct the model in the prompt to add an explanation for each recommendation. With this approach, the model provides recommendations with logical explanations.
Figure 6 shows the overall flow and prompts used in this approach. This time no _"criteria"_ are provided to guide the model. The output obtained from this approach does not follow any imposed structure. Nevertheless, the model structures its response in multiple paragraphs, with the Job ID at the top and the explanations following below as a bulleted list.
Fig. 4: Deterministic algorithm.
Fig. 5: Language model based guided algorithm.
Fig. 6: Language model based unguided algorithm.
### _Handling large data_
LLMs have a limit on the number of tokens that can be provided as input; it is 8192 for OpenAI's GPT4. When working with a large number of unstructured JDs, it is inevitable that we will hit this token limit. In that case, we propose to: (i) split the JD set into smaller subsets, (ii) get recommendations separately from each subset, and (iii) merge the top recommendations from each subset. Note that merging only the top recommendations from each subset may overlook superior but lower-ranked recommendations from other subsets.
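A minimal sketch of this split-and-merge workaround, reusing the `guided_recommend` helper from the previous section; the chunk size is an arbitrary placeholder.

```python
def chunked_recommend(cv_text, jd_texts, chunk_size=10, top_k=3):
    """Shortlist per chunk to stay under the token limit, then merge."""
    shortlists = []
    for start in range(0, len(jd_texts), chunk_size):
        subset = jd_texts[start:start + chunk_size]
        shortlists.append(guided_recommend(cv_text, subset, top_k=top_k))
    # In practice the JD labels must be kept globally unique across chunks,
    # and a final LLM pass can re-rank the merged shortlist.
    return [line for sl in shortlists for line in sl.splitlines() if line.strip()]
```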
## IV Hybrid approach
The key idea here is to complement the weakness of one technique with the strength of the other.
* _Qualitative aspects_: The traditional technique is effective in matching well-structured quantitative attributes such as skill proficiency, role, and experience, among others. However, it fails to capture qualitative aspects of the talent and the job. In contrast, an LLM can comprehend language and take into consideration qualitative attributes and soft skills that are important for a job but are missed in the structured model. Further, it can provide justifications for recommendations in natural language format.
* _Unbiased view of job and organization_: An LLM can provide an unbiased view of a job and the organization based on its knowledge gained from training on Internet data. This can guide the user to take an unbiased, informed decision.
* _Scalability and cost_: The LLM-based technique has the drawbacks that it is costly and faces scalability challenges due to limitations on the number of tokens. In contrast, the traditional technique is computationally efficient and fast, and hence has the capability to process large amounts of data.
We propose a hybrid approach that combines the traditional method with the use of language models to generate richer job recommendations. The key idea is (i) to use the traditional method to trim a large number of job opportunities down to a smaller relevant set, and (ii) to prioritize these further on the basis of qualitative aspects, with well-justified reasons, benefits, and drawbacks in natural language. Figure 7 outlines the hybrid approach. The deterministic algorithm is used in the first stage and the unguided LLM method in the second.
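A minimal sketch of the two-stage pipeline, composing the earlier sketches; note that the paper uses the unguided prompt in the second stage, while for brevity we reuse the `guided_recommend` helper here.

```python
def hybrid_recommend(cv, jds, cv_text, jd_texts, shortlist_size=5, top_k=3):
    # Stage 1: cheap, scalable deterministic pre-filter over structured models.
    order = sorted(range(len(jds)), key=lambda i: score_job(cv, jds[i]), reverse=True)
    shortlist = order[:shortlist_size]
    # Stage 2: LLM pass over only the shortlisted raw JD texts, which stays
    # within the token limit while recovering qualitative reasoning.
    return guided_recommend(cv_text, [jd_texts[i] for i in shortlist], top_k=top_k)
```

The pre-filter keeps the expensive LLM call small and bounded, which is the design rationale behind the two-stage split.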
The recommendations are further enriched by an unbiased view of the role and organization, to overcome the issue of JDs highlighting only the perks of working for an organization without mentioning drawbacks. An unbiased view makes it easier for the candidate to make an informed decision. We prompt the LLM to rate the organization and the particular role in that organization on a scale of 1 to 10. The prompt containing the aspects of the organization and role that are considered for the rating is shown in Figure 8.
We derive recommendations on two sets of data, viz. (i) controlled experiments using synthetically generated data and (ii) experiments using real-world JDs. We present the results in the upcoming sections.
## V Experiments and Results: Synthetic data
### _Data generation_
We model the domain of Information Technology by populating relevant values of attributes. The list of values and their ranges is shown in Figure 9. We sample from these values to populate the structured models of talent and jobs (Figures 3 and 10).
For the purpose of performing controlled experiments, we generate ten JDs for every CV. The jobs are generated such that one JD is almost an exact match, three JDs deviate a little, the next three have a larger deviation, and the last three are quite different. The traditional deterministic algorithm is expected to recommend and rank the JDs in the above order.
We generated realistic unstructured CVs and JDs from the structured talent and JD models using GPT4; the structured models were provided as input to the LLM. The prompts used for this purpose are shown in Figure 12. Figure 11 shows the unstructured JDs corresponding to the structured JDs in Figure 10.

Fig. 7: Hybrid Algorithm.

Fig. 8: Prompts used to generate job and organization ratings.

Fig. 9: Set from which attribute values are sampled.
### _Results_
We present details of the recommendations generated by each algorithm for one candidate (CV3). Refer to Figure 13 for the recommendations by the deterministic algorithm. JD7 is a perfect match for CV3, with all attributes matching and just slight variation in the skill proficiencies. JD9 is the next best match, requiring better skill proficiency and more certifications.
The language model pays more attention to the role match, experience, and education. Refer to Figure 14 for the recommendations by the "guided" approach. It recommended JD3, which is not a good match according to the deterministic algorithm, with only the role requirement and timezone conditions being satisfied. However, it weighted qualitative attributes that are not captured in the structured format, viz. a collaborative and inclusive environment and opportunities for skill development and career advancement. One aspect it seems to miss across all its recommendations is the mismatch in the representation of proficiency levels: it is unable to relate the terminology of beginner, intermediate, and advanced levels to numeric proficiency levels between 0 and 5. This behaviour is expected, as domain-dependent numeric values do not make sense unless the range of the values or their mapping to textual classifications is mentioned in advance. It recommended all the JDs that match the roles preferred by the candidate with CV3, viz. Full Stack Developer and Technical Lead.
Refer to Figure 15 for the recommendations by the "unguided" approach. In this case, JD4, JD6, and JD7 have been recommended, and the recommendations are well justified. Again, it gives more preference to role and experience. The qualitative aspects of organizations are correctly captured and presented in greater detail. Being completely unstructured, its response clearly explains the perks of the recommended jobs and also mentions the reasons why the others were not a good match.
Figure 16 presents the recommendations generated by the hybrid algorithm. The deterministic algorithm recommends 5 JDs; the LLM unguided method then picks out the best 3 from among those. In addition, the algorithm also provides ratings of the qualitative aspects of organizations and job traits, as shown in Figure 17. Google and its Full Stack Developer position are rated highest at 9 out of 10, while SAP has an organization rating of 8.7 and its Full Stack Developer position has a rating of 8.6. Finally, TCS has an organization rating of 8.5, but the position of Technical Lead is rated 8.7. This helps the candidate form an unbiased view and take an informed decision.
Fig. 11: Unstructured synthetic JDs: JD7 and JD3.
Fig. 12: Synthetic data generation prompts for creating unstructured CV (top) and unstructured JD (bottom).
Fig. 10: Structured synthetic JDs: JD7 and JD3.
Fig. 13: Deterministic algorithm recommendations for synthetic CV3.
## VI Real world experiments
### _Data description_
We used real JDs from Kaggle [3] and generated CVs from these. The unstructured CVs are generated using an LLM. The prompt used for this is presented in Figure 18. The JDs have a lot of variation, which helps capture real-world situations where JDs can be completely different.
We used five CVs to generate recommendations. For three of the CVs, we did not include the source JDs from which they were generated, whereas for the other two CVs, we included the source JDs. The idea was to have some CVs without a one-to-one match with any of the available JDs, so that the recommenders would be forced to pick the next best option. This makes the experiment more relevant and better suited to capturing real-world scenarios.
### _Results_
We present details of the recommendations generated by each algorithm for one CV (CV2), shown in Figures 19 and 20. Figure 21 shows two unstructured JDs, and Figure 22 shows the corresponding structured JDs. Figures 23, 24, 25, and 26 show the recommendations by the deterministic, LLM-guided, LLM-unguided, and hybrid approaches, respectively.
Fig. 14: LLM guided recommendations for synthetic CV3.

Fig. 15: LLM unguided recommendations for synthetic CV3.

Fig. 16: Hybrid recommendations for synthetic CV3.

Fig. 17: Organization and job ratings in hybrid algorithm recommendations for synthetic CV3.

Fig. 18: Prompt for unstructured CV generation from a real unstructured JD.
The deterministic and LLM approaches recommend JD9, from which CV2 was generated. It is the best match with respect to skills and the role of "DevOps Engineer". It was observed that none of the JDs had any specific location requirement. As a result, the LLM took the liberty of presenting this as either a drawback or a benefit. The LLM-guided approach brings it out as a drawback, pointing out that the job opportunity is not at the preferred location, "San Francisco". However, the LLM-unguided and hybrid approaches present it as a benefit, stating that there is a match in the preferred location. The hybrid algorithm brings out the positives of JD9 with respect to the skill match and how particular skills such as Hadoop, Urban code, and Tomcat will help with "Infrastructure as Code in a DevOps oriented organization". The rest of the recommended JDs are not as good a match as JD9. However, JD6 was recommended on the basis of the similarity between the skills of a DevOps Engineer and a Java Developer or Software Developer position.
Figure 27 shows the job ratings produced by the LLM in the hybrid algorithm. Clearly, the "DevOps Engineer" position in JD9 has a higher rating of 7.6 compared to the "Java Developer" position in JD6.
## VII Ballpark estimates of quality of algorithm
Assigning a quality score to a recommendation without human feedback is hard. We therefore assign reference quality scores manually to each recommendation. The manual reference scores are assigned based on how a human would have rated the jobs for a given CV, i.e., by considering all "must-have" conditions first, such as skills, education, and experience, and then focusing on the "good-to-have" conditions, such as location, certifications, and other benefits provided in the job package. However, since these scores are manually assigned, there can be oversights, along with aspects of jobs that can be interpreted differently depending on the individual rating them. So, rather than treating the scores as ground truth, we use them as guidelines and assume that some deviation from them is acceptable.
The quality scores of the algorithms (Figures 28 and 29) are computed as the average absolute deviation of the algorithm scores from the manual reference scores. In the case of the LLM unguided algorithm, which only provides the rank of each recommendation, the algorithm score is assigned the same value as the manual reference score for that rank.
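A minimal sketch of this metric follows. The function names and example numbers are ours, for illustration only; the paper reports the resulting scores as accuracy percentages, which we do not attempt to reproduce here.

```python
# Quality as described above: average absolute deviation of algorithm scores
# from manually assigned reference scores (lower deviation = higher quality).
def quality_score(algorithm_scores, reference_scores):
    assert len(algorithm_scores) == len(reference_scores)
    deviations = [abs(a - r) for a, r in zip(algorithm_scores, reference_scores)]
    return sum(deviations) / len(deviations)

def scores_from_ranks(ranked_reference_scores):
    """Rank-only methods (LLM unguided): each recommendation inherits the
    manual reference score at the same rank position."""
    return list(ranked_reference_scores)

# Illustrative numbers only (not the paper's data):
reference = [9.0, 8.5, 8.0]
deterministic = [8.6, 8.4, 7.5]
print(quality_score(deterministic, reference))                 # ~0.33
print(quality_score(scores_from_ranks(reference), reference))  # 0.0 by construction
```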
### _Synthetic data_
Figure 28 shows our findings. The results from the four methods exhibit striking similarities. Among them, the LLM unguided approach stands out for its superior performance, closely resembling human-friendly recommendations and achieving an accuracy of 95.66%. We observed that LLMs excel at utilizing domain knowledge and capturing unstructured elements, which are their greatest strengths. Conversely, constraining the LLM in the guided approach yields the poorest results among the four methods, at around 93.66% accuracy. The deterministic method is better suited for large-scale use, where relying on LLMs alone would be inefficient and unjustifiable in terms of time and cost. Consequently, the hybrid approach leverages the unguided LLM method in conjunction with the deterministic method to combine the best aspects of both, generating results almost identical to the best, with an accuracy of 95%, just 0.66% shy of the best.
### _Real world data_
Our findings for recommendations on real-world data are depicted in Figure 29. The quality score on real data is somewhat lower than on the synthetic data. As with the synthetic data, LLM unguided once again outperforms all other methods, with an impressive accuracy of 89.66%.
Fig. 19: Unstructured real CV2.
Fig. 20: Structured real CV2.
The hybrid method secures the second-best position with an overall accuracy of 87%, following the same trend as before. LLM guided, on the other hand, falls in the lower half with an accuracy of 84.66%, marginally outperforming the deterministic method, which exhibited the poorest performance this time. The subpar results of the deterministic approach highlight the loss of information during conversion to a structured data format.
An additional interesting observation can be made from these results. There is a noticeable decline in performance for one of the CVs, CV4, primarily due to the absence of an exact job match. Recommendations for this candidate had to rely solely on other relevant factors and general knowledge to suggest a suitable job. Interestingly, all the methods performed equally poorly on this task.
In conclusion, it is evident that LLM unguided exhibits the best performance. However, for practical purposes and the ability to handle larger data volumes, the hybrid algorithm proves to be the more suitable choice.

Fig. 22: Structured real JD9 and JD6.

Fig. 23: Deterministic algorithm recommendations for CV2.

Fig. 24: LLM guided algorithm recommendations for real CV2.
## VIII Performance Analysis
We evaluate the runtime performance of all the approaches. We observed a strong correlation between the number of tokens processed by the LLMs, both input and output, and the time taken to generate recommendations.
* Deterministic algorithm: In this case, the time taken by the algorithm itself is negligible compared to the time required to convert unstructured data to structured data. On average, around 20 seconds are required to convert a given CV or JD to the structured format. However, in practice, JDs need not be converted to the structured format every time the recommendation system runs.
* LLM guided and LLM unguided: Both of these methods exhibit similar time requirements, although LLM unguided takes slightly longer due to its tendency toward verbose reasoning, which results in a higher token count. Generating recommendations for a given CV typically takes 25 to 35 seconds with either method.
* Hybrid: The hybrid method combines both deterministic and LLM approaches, executing the process sequentially, which results in the cumulative time required by both methods. To run the deterministic part and select good recommendations, it takes approximately 20 seconds. Subsequently, the LLM part is invoked, which takes an additional 25 seconds, resulting in a total time of approximately 45 seconds for a single recommendation. While this duration may appear large, it should be noted that even with hundreds of jobs to recommend from, the time will remain relatively constant.
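The sketch below illustrates this sequential structure: a cheap deterministic pass filters the JD pool down to a shortlist, and a single LLM call then reranks the shortlist and rates qualitative aspects. `deterministic_score` and `call_llm` are hypothetical stand-ins, since the paper does not publish its implementation; the toy scoring rule is ours.

```python
# Hedged sketch of the sequential hybrid pipeline (assumed structure).
def deterministic_score(cv: dict, jd: dict) -> float:
    """Toy attribute match over role, skills, and experience."""
    score = 0.0
    if jd.get("role") in cv.get("preferred_roles", []):
        score += 2.0
    score += len(set(cv.get("skills", [])) & set(jd.get("skills", [])))
    if cv.get("experience_years", 0) >= jd.get("min_experience_years", 0):
        score += 1.0
    return score

def hybrid_recommend(cv, jds, call_llm, shortlist_size=5, final_k=3):
    # Stage 1 (negligible runtime): deterministic filtering to a shortlist.
    shortlist = sorted(jds, key=lambda jd: deterministic_score(cv, jd),
                       reverse=True)[:shortlist_size]
    # Stage 2 (~25 s per call in the authors' measurements): a single
    # unguided LLM call reranks the shortlist with justifications and ratings.
    prompt = (
        f"Given this CV and these job descriptions, pick the best {final_k}, "
        "justify each choice, and rate each organization and position out of "
        f"10.\nCV: {cv}\nJDs: {shortlist}"
    )
    return call_llm(prompt)  # LLM client is injected by the caller
```

Because the shortlist size is fixed, the cost of the LLM stage stays roughly constant even as the JD pool grows, which is why the total time remains near 45 seconds regardless of scale.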
## IX Literature Review
Recommendation systems find extensive use in a variety of domains, such as e-commerce and streaming platforms, due to their ability to enhance user experiences and drive engagement. They are built using various techniques, such as collaborative filtering, content-based filtering, or hybrid approaches. In the past, these techniques have also been used to recommend job opportunities to talent.
Content-based recommendations take into account the semantic similarity of the attributes present in the CVs and JDs. There are a number of approaches to calculating these similarities. In [11] the Bag of Words method is used, whereas [4] uses the Latent Dirichlet Allocation algorithm. We also find relatively new methods such as word2vec being used for job recommendations [9, 15].
In collaborative filtering, recommendations are generated by considering the previous selections a candidate has made and recommending similar jobs in the future. Memory-based collaborative filtering is predominantly used for job recommendations: candidates are grouped based on similar interactions, and jobs are recommended to candidates based on the preferences of other candidates in the same group. [12] and [13] apply this method.
This is by far the most common approach for job recommendation. Contributions of model-based methods on shallow embeddings can be found in [6]. In recent times, deep neural network based methods have gained a lot of traction, with [10] using them for the purpose of job recommendation. There are also ensemble hybrid methods [7] and weighted hybrid methods, which assign weights to the different contributing methods [14]. Apart from these, notable works in this field include [5], which discusses the downside of too many recommendations and the impact of wrong recommendations on a user.

Fig. 27: Job ratings for positions recommended for real CV2.

Fig. 28: Quality of language based recommendations on synthetic data.

Fig. 29: Quality of language based recommendations on real data.
Finally, knowledge-based recommendations rely on inherent knowledge about jobs in the form of an ontology; jobs can be recommended to candidates once the candidates are categorized under the same ontology. This is discussed in more detail in [8].
The LLM-based approach circumvents issues related to information extraction and modelling. The hybrid approach generates predictable recommendations that ensure the quality expected by a user, further enriching them through the use of LLMs: capturing attributes that were not captured in the model, removing bias, and providing justifications.
## X Conclusion
Unstructured data is prevalent in job recommendation, as CVs and job descriptions (JDs) are often in unstructured formats. Converting unstructured data into structured formats has been the traditional approach, but achieving accurate conversion remains challenging. Large language models (LLMs) have emerged as valuable tools in this field. However, LLMs can be slow, expensive, and lack full controllability, resulting in indeterminism and occasional inaccuracies in their results. To improve job recommendations, we propose a hybrid approach that combines quantitative attribute filtering with subsequent LLM analysis. By initially filtering out irrelevant jobs based on quantitative attributes, we reduce the volume of data before utilizing LLMs to consider the qualitative aspects of the remaining screened jobs. Our experiments demonstrated that relying solely on LLMs may result in job recommendations that lack the attributes required by candidates. Deterministic methods, on the other hand, can overlook attributes that have not been explicitly modelled and miss important qualitative aspects. In contrast, the hybrid method performs well in terms of efficiency, cost-effectiveness, and capturing both quantitative and qualitative aspects. Additionally, JDs often exhibit bias and highlight only positive qualities. Providing an unbiased perspective of the organization and job position, based on the common knowledge of LLMs, can empower candidates to make informed decisions. In conclusion, the LLM-based approach adds value to talent acquisition. It is best used in conjunction with traditional algorithms to handle scalability and address specific requirements.
## XI Discussion
The problem of recommending jobs faces a multitude of challenges. Accurately matching the skills and requirements of the talent with the available job opportunities is hard. Many JDs are vague and poorly defined, making it difficult to identify the right match. The limited information available about the talent, such as their skills, experience, preferences, and complex career goals, makes it challenging to recommend personalized job opportunities that align with their qualifications and aspirations. Keeping up with the evolving job market and ensuring that the recommended opportunities remain relevant is a significant challenge. Obtaining feedback from talent about the effectiveness of recommendations is essential for continuous improvement; however, gathering feedback requires reliable mechanisms to measure the success and quality of recommendations, which is inherently ambiguous. Further, a real-world job marketplace requires addressing complex aspects such as when to recommend, how many recommendations to make, how the recommendations influence the user's choice, the implications of a wrong recommendation, and many more.
By fine-tuning prompts, incorporating few-shot learning, training the model on domain-specific data, and leveraging user feedback and reinforcement learning, it should be possible to generate more relevant and personalized recommendations in the future. An important direction for future work is to define the quality of recommendations unambiguously, possibly using benchmarks.
|
2310.20563 | Taking control: Policies to address extinction risks from advanced AI | This paper provides policy recommendations to reduce extinction risks from
advanced artificial intelligence (AI). First, we briefly provide background
information about extinction risks from AI. Second, we argue that voluntary
commitments from AI companies would be an inappropriate and insufficient
response. Third, we describe three policy proposals that would meaningfully
address the threats from advanced AI: (1) establishing a Multinational AGI
Consortium to enable democratic oversight of advanced AI (MAGIC), (2)
implementing a global cap on the amount of computing power used to train an AI
system (global compute cap), and (3) requiring affirmative safety evaluations
to ensure that risks are kept below acceptable levels (gating critical
experiments). MAGIC would be a secure, safety-focused, internationally-governed
institution responsible for reducing risks from advanced AI and performing
research to safely harness the benefits of AI. MAGIC would also maintain
emergency response infrastructure (kill switch) to swiftly halt AI development
or withdraw model deployment in the event of an AI-related emergency. The
global compute cap would end the corporate race toward dangerous AI systems
while enabling the vast majority of AI innovation to continue unimpeded. Gating
critical experiments would ensure that companies developing powerful AI systems
are required to present affirmative evidence that these models keep extinction
risks below an acceptable threshold. After describing these recommendations, we
propose intermediate steps that the international community could take to
implement these proposals and lay the groundwork for international coordination
around advanced AI. | Andrea Miotti, Akash Wasil | 2023-10-31T15:53:14Z | http://arxiv.org/abs/2310.20563v1 | # Taking control:
###### Abstract
This paper provides policy recommendations to reduce extinction risks from advanced artificial intelligence (AI). First, we briefly provide background information about extinction risks from AI. Second, we argue that voluntary commitments from AI companies would be an inappropriate and insufficient response. Third, we describe three policy proposals that would meaningfully address the threats from advanced AI: (1) establishing a Multinational AGI Consortium to enable democratic oversight of advanced AI ("MAGIC"), (2) implementing a global cap on the amount of computing power used to train an AI system ("global compute cap"), and (3) requiring _affirmative safety evaluations_ to ensure that risks are kept below acceptable levels ("gating critical experiments"). MAGIC would be a secure, safety-focused, internationally-governed institution responsible for reducing risks from advanced AI and performing research to safely harness the benefits of AI. MAGIC would also maintain emergency response infrastructure ("kill switch") to swiftly halt AI development or withdraw model deployment in the event of an AI-related emergency. The global compute cap would end the corporate race toward dangerous AI systems while enabling the vast majority of AI innovation to continue unimpeded. Gating critical experiments would ensure that companies developing powerful AI systems are required to present affirmative evidence that these models keep extinction risks below an acceptable threshold. After describing these recommendations, we propose intermediate steps that the international community could take to implement these proposals and lay the groundwork for international coordination around advanced AI.
Executive summary
**Advanced AI poses an extinction risk to humanity**. Leading AI researchers and CEOs of the three most advanced AI companies have recognized these risks. The UK AI Safety Summit provides world leaders with an opportunity to lay the groundwork for international coordination around these pressing threats.
**Extinction Risks from AI arise chiefly from "scaling"**. Scaling refers to the process of building increasingly large, ever more autonomous, and more opaque AI systems. The vast majority of AI research with concrete applications, such as cancer detection and trading algorithms, is unconcerned with scaling.
**Governments have a critical and time-limited opportunity to reduce extinction risks from AI**. This report recommends immediate actions that can be taken by governments: recognizing the extinction risks from advanced AI, acknowledging the need for concrete scaling limits, and committing to build an international advanced AI project. Specifically, we recommend:
1. **Establishing a Multinational AGI Consortium (MAGIC), a 'CERN for AI'**. This institution would house world-leading AI scientists from all signatory countries, dedicated to the joint mission of working on advanced AI safety. MAGIC would perform cutting-edge research to control powerful AI systems and establish emergency response infrastructure that allows world leaders to swiftly halt AI development or deployment.
2. **Implementing global compute limitations**. Compute Limitations mitigate extinction risk by throttling the very few dangerous AI systems that rely on advanced hardware, while leaving the vast majority of the AI ecosystem unhindered. We recommend a tiered approach to compute limitations: development above a **moratorium threshold** would not be allowed to occur (except within MAGIC), development above a **danger threshold** would need to be regulated by MAGIC or a MAGIC-certified regulator, and development below the danger threshold would be unaffected by MAGIC.
3. **Gating critical experiments**. Before developing critical systems, companies should demonstrate the safety of these systems through verifiable criteria. The burden of proof should be on companies performing Frontier AI research to demonstrate that these systems are safe (rather than on regulators or auditors to show that systems are dangerous). This follows the standard practices from high-risk sectors, where demonstrating safety is a precondition for undertaking high-risk endeavors.
These proposals have strong support among the British and American public. Recent YouGov polling shows that 83% believe AI could accidentally cause a **catastrophic event**, 82% prefer **slowing down the development of AI, 82% do not trust AI tech executives to regulate AI**, 78% support a **"global watchdog"** to regulate powerful AI, and 74% believe AI policy should currently **prevent AI from "reaching superhuman capabilities"** (Artificial Intelligence Policy Institute [AIPI], 2023; Control AI, 2023).
Introduction
### Extinction risks from advanced AI
**Advanced AI poses an extinction risk to humanity.** Leading AI researchers, alongside the CEOs of the three main advanced AI companies, all have recently signed a statement acknowledging: **"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"** (Center for AI Safety, 2023). Sam Altman, CEO of OpenAI, stated that the bad case from AI is "lights out for all of us" (Altman, 2023). Dario Amodei, CEO of Anthropic, claimed that the chance of a civilization-scale catastrophe was at around 10-25% (The Logan Bartlett Show, 2023). Geoffrey Hinton, considered a godfather of modern AI, recently quit Google to warn about the extinction risks from AI (MIT Technology Review, 2023).
**Extinction risks from AI arise from "scaling".** Scaling refers to the process of building increasingly large, more autonomous, and more opaque AI systems. The vast majority of AI research with concrete applications, such as cancer detection and trading algorithms, is unconcerned with scaling. The extreme risks are specific to advanced AI (sometimes called "Frontier AI"), which focuses on scaling. Companies that focus on scaling are attempting to create AI that surpasses human performance (sometimes called artificial general intelligence, AGI, human-level machine intelligence, artificial superintelligence, or superhuman AI). Ian Hogarth, Chair of the UK's AI Foundation Model Taskforce, more aptly described them as "godlike AI" due to the immense capabilities they would hold, and he noted that stopping the scaling race to godlike AI is a crucial policy priority (Hogarth, 2023).
**Extinction risks could occur very soon.** Turing Award winner Yoshua Bengio has stated that loss of control to rogue AI systems could emerge in "as little as a few years" unless appropriate precautions are taken. According to Anthropic CEO Dario Amodei, systems that enable many more actors to carry out large-scale biological attacks are likely to be built within two to three years (Oversight of A.I.: Principles for Regulation, 2023a; 2023b).
**World leaders have a fleeting window of opportunity to prevent extinction risks from AI.** Risks from advanced AI are widely acknowledged, and the public supports regulation (AIPI, 2023; Center for AI Safety, 2023; Control AI, 2023). Frontier AI systems are powerful enough to inspire action but not yet powerful enough to pose extinction risks, and only a few companies are capable of developing Frontier AI systems. These are favorable conditions for regulation. They may not last long, underscoring the need for swift and urgent action.
**To ensure global security, Frontier AI progress should occur in a secure international facility that conducts cutting-edge research while keeping extinction risks below an acceptable level.** The first step toward this future involves engaging in international coordination.
"I will refer to it as what is: **God-like AI**. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it. **God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.**" - Hogarth (2023)
### The UK AI Safety Summit: A Pivotal Moment
**"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight**. Did the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say in what they were doing?" - Hogarth (2023)
The Summit represents a historic opportunity to reduce extinction risks through national and international measures. The Summit's focus on **Frontier AI systems** and **extinction risks** is crucial, given the nature of these threats and the need for urgent action. The Summit's emphasis on international coordination underscores the great need for a coordinated global response to this major global security threat.
**This is an opportune moment to act**. Extinction risks from advanced AI are widely acknowledged by experts, the public is calling for regulation, and computing resources are still expensive and physically concentrated, constituting a natural bottleneck for government intervention. Furthermore, progress in AI is rapid and exponential: waiting until risks are imminent is not possible when dealing with technology that moves at an exponential pace. Notably, many experts believe extreme dangers could plausibly emerge within the next 2-5 years (e.g., Oversight of A.I.: Principles for Regulation, 2023a; Leike & Sutskever, 2023). Taken together, **the present moment represents an opportune, unprecedented, and necessary moment to develop the national and international regulations needed to reduce extinction risks from advanced AI**.
The Summit has the potential to usher in a series of concrete international agreements that could substantially reduce extinction risks from Frontier AI.
There are three potential categories of outcomes for the AI Safety Summit:
* **Insufficient: Extinction risk from AI fails to be addressed**: Voluntary measures that do not address extinction risks are endorsed. Governments fail to agree on common principles. The measures endorsed rely on companies to govern themselves, lack proper enforcement, and fail to place the burden of proof on companies to show that their activities are safe. Scaling is allowed to continue indefinitely until some unspecified point in the future, when we are much closer to imminent danger.
* **Adequate: Extinction risk is acknowledged, and steps forward are agreed to in principle**: The AI Safety Summit attendees all agree to acknowledge that continued Frontier AI development poses an extinction risk to humanity. Given this, the Summit agrees in principle that rapid and effective measures must be taken by major governments to limit further AI scaling, that the safety of very advanced AI systems must be proven before they can be developed or deployed, and that a priority of the Summit going forward is to establish an international regime to deal with Frontier AI.
* **Excellent: Concrete measures are agreed upon to minimize extinction risk from AI**: The AI Safety Summit publicly acknowledges the risks from advanced AI. World leaders agree that there should be concrete scaling limits put in place and commit to ending the race to AGI. World leaders recognize the urgency of working on an international organization to regulate the development of advanced AI.
Voluntary commitments and "responsible capabilities scaling" policies: solutions that do not address extinction risks
"Responsible scaling" sounds like it came from the same lobbying firm that coined "clean coal." - Tegmark (2023)
Responsible Scaling Policies (RSPs) are proposals suggesting voluntary commitments on the internal protocols AI companies can use to manage risks. Broadly, RSPs involve using tests (dangerous capability evaluations) to determine how powerful and dangerous an AI system is. RSPs are sometimes referred to as "responsible capabilities scaling" or "risk-informed development policies"; for convenience, we will use the term RSP throughout the report, but our points also apply to voluntary scaling commitments that are branded under different labels.
The RSP framework utilizes a burden of proof that is the opposite of what is a common standard in high-risk sectors. **In RSPs, rather than it being the onus of the company to demonstrate that their plans for further scaling of powerful Frontier AI systems is safe, the onus is on auditors to demonstrate that they are dangerous.** The RSP system presumes that Frontier AI is safe until proven otherwise.
So far, these have been proposed by Anthropic, one of the major AGI companies, and ARC Evaluations, an AI model testing organization (Anthropic, 2023; Alignment Research Center, 2023). RSPs have been briefly mentioned by a member of the UK government as a possible measure, and "responsible capabilities scaling" is listed among the discussion topics of the Summit.
In the context of extinction risks from Frontier AI, which are a core focus of the Summit and future government intervention, **RSPs are inadequate at addressing these risks.**
Below, we highlight a few of the properties that make RSPs (and other types of voluntary commitments) inadequate to address extinction risks from advanced AI (see also Wasil, 2023).
1. **RSPs reverse the standard burden of proof in high-risk sectors**. The RSP framework presumes that Frontier AI development is safe until proven otherwise. Under the RSP framework, AI development should be allowed to continue until auditors have clear evidence that models _already possess_ dangerous capabilities. This is a stark departure from norms in other areas of risk management, where we expect companies to provide _affirmative evidence_ of safety (see Raz & Hillson, 2005 for a review of standard risk management practices). This is self-defeating in the context of extinction risks from Frontier AI: waiting until the risks are realized before intervening does not work in the case of extinction-level risks. Instead, common risk management practices demand evidence of _affirmative safety_ - a point elaborated on in the next section of this report.
2. **RSPs lack oversight, accountability, and enforcement**. The RSP framework presumes that companies should be responsible for managing extinction risks. The company is allowed to figure out if and how to manage the risks, and the company is allowed to decide what level of risk is worth incurring. There are no requirements for companies, no oversight to ensure that companies follow through on their voluntary commitments, and no accountability or enforcement if companies break their agreements. In fact, **the RSP framework allows companies to break their RSPs if they are concerned about risks from their competitors**. If a company believes that a competitor may cause unacceptably high risk, the company is allowed to break its RSP at its own discretion (Alignment Research Center, 2023; Anthropic, 2023). The RSP framework allows each company to determine
what counts as an emergency and which competitors should be deemed sufficiently irresponsible. This underscores the need for approaches that have greater accountability and enforceability.
3. **The safeguards do not exist**. Experts openly acknowledge that they do not know how to control AGI. Anthropic's RSP, for example, describes AGI-like systems as "ASL-4" systems. Anthropic openly acknowledges that it does not know what safeguards will be needed to control these systems, and it does not yet have a plan to ensure that these systems are developed safely (Anthropic, 2023). Anthropic is not unique in this regard - no other AI companies have yet developed safeguards that provably remove dangerous capabilities, nor produced plans that provably minimize extinction risk from Frontier AI. In the case where an evaluation successfully identifies a trained model as dangerous, there exist no countermeasures to neuter the threat.
4. **RSPs do not require concrete commitments.** Under RSPs, when a company realizes it has a dangerous system, it is not required to specify how it will proceed. For example, if a company successfully detects that its AI system can develop biological weapons or shows signs of escaping from human control, RSPs do not require companies to specify how they will respond. RSPs do not need to specify whether or not the company will pause further development or deployment, whether they will inform anyone about these risks, or how they will manage these risks.
5. **The tests are inadequate**. RSPs rely on _dangerous capability evaluations_ - tests that show whether or not a model is capable of performing dangerous activities in specific scenarios. This approach relies on having an exhaustive list of scenarios, an exhaustive list of dangerous capabilities, and an exhaustive set of evaluations that can track those capabilities with proven accuracy. None of these exist. Neither experts nor RSP proposals have concrete or comprehensive lists of dangerous scenarios, exhaustive lists of dangerous capabilities, or reliable evaluations for those capabilities. Looking for an exhaustive list of scenarios is a flawed exercise in itself, in a domain where capabilities of AI systems are regularly discovered long after they are deployed to millions of people.
The way forward: Policy recommendations
"Many of the world's leading AI experts now think that human-level (or even more capable) AI could arrive before 2030. Regardless of the exact timelines, it's clear that unelected tech leaders should not decide whether, when, how and for what purpose such transformative AI is built." - Bengio & Privitera (2023)
In this section, we describe our preferred policy responses. We recommend the following three measures:
1. **MAGIC: A multinational artificial general intelligence consortium**. Governments should establish a multinational institution to enforce a global compute cap, perform research on how to control highly powerful AI systems, develop emergency response infrastructure, and develop additional measures to reduce extinction risks from advanced AI (see Hausenloy et al., 2023).
2. **A global compute cap**. Governments should prohibit the development of AI systems above a predetermined amount of computing power. We recommend setting this cap on compute at \(10^{24}\) floating point operations (FLOP), around the size of ChatGPT. If the cap cannot be implemented immediately, we should -- at a minimum -- develop the infrastructure and institutions necessary to implement such a cap in the future.
3. **Gating critical experiments:** Governments should establish auditing regimes that demand **affirmative evidence of safety** for certain kinds of dangerous AI development. Companies developing AI systems above a certain compute threshold (but lower than the global compute cap) would be required to show affirmative evidence of safety. This regulatory regime could draw from practices in other high-risk sectors (e.g., nuclear safety).
### MAGIC: A Multinational Artificial General Intelligence Consortium
**Governments should establish a Multinational Artificial General Intelligence Consortium (MAGIC; see Figure 1)**. MAGIC would have an exclusive mandate to conduct critical Frontier AI research. This institution would adhere to national-security grade standards such as closed-off facilities, an isolated network, clearances, regular vulnerability testing, and strict access policies. Rather than developing advanced AI in the context of a race between corporations, MAGIC would be the world's only artificial general intelligence project. It would be a highly secure organization that is tasked with understanding how to control highly advanced AI systems, ensuring that the benefits of safe AI systems are globally distributed, and preventing other actors from illegally developing advanced AI (Hausenloy et al., 2023).
**MAGIC would allow the world to have democratic oversight over the development of highly powerful and highly dangerous AI systems**. MAGIC provides a positive and proactive vision forward: it enables innovation to occur once world leaders and AI experts have determined that such innovation would be safe and beneficial for humanity.
**MAGIC would be the world's unified AGI safety project.** MAGIC would hire talented researchers from around the world to perform research on how to control highly powerful AI systems and develop additional measures to reduce extinction risks from advanced AI. Thanks to the global compute cap (described below), MAGIC would be able to perform this research _without_ getting locked into a dangerous race to godlike AI. MAGIC could develop evaluations
that provide affirmative evidence of safety, develop safeguards for increasingly powerful AI systems, and quantitatively estimate risks from advanced AI. Once MAGIC researchers and world leaders have compelling affirmative evidence that they can develop powerful AI systems safely, they would be enabled to do so.
**MAGIC would develop emergency response infrastructure that allows world leaders to respond to an AI emergency (see Figure 2)**. This infrastructure would need to have at least three parts: **detection** (to make sure that government officials notice potential emergencies quickly), **alarms** (to ensure that those tracking AI risks can swiftly communicate risks to MAGIC and other relevant officials to ensure a swift and coordinated response), and an **emergency response** (a "kill switch" that allows regulators to swiftly halt AI development and deployment). The emergency response system would be tested periodically. This would involve practice drills, in which regulators simulate what would happen if national or international regulators determined that risks had exceeded an acceptable threshold. In these drills, regulators would implement their emergency response efforts to test their effectiveness in a real-world setting. For instance, they would determine how long it would take them to halt an AI training run or withdraw access to a deployed AI system. The drills would allow them to ensure that their emergency response system would work in the event of an actual emergency, ensure they have swift communication channels with Frontier AI developers and compute providers, and help them identify ways of improving the emergency response infrastructure.

Figure 1: MAGIC would be responsible for implementing **compute limitations**, performing **AGI safety research**, and maintaining infrastructure to respond to **AI-related emergencies**.
### Global Compute Cap
**MAGIC would implement a tiered approach to compute limitations (see Figure 3)**. The most important compute limitation would be **a global moratorium on AI development above a certain amount of computing power**. Building state-of-the-art AI systems requires huge amounts of expensive machinery (called GPUs). This infrastructure is currently core to AI scaling, which makes it a bottleneck to truly dangerous systems. **Compute Limitations** address this bottleneck by enforcing a cap on the hardware used to build these AI systems. This mitigates extinction risk by throttling the very few dangerous AI systems that rely on advanced hardware, while leaving the rest of the AI ecosystem largely unhindered.
Specifically, MAGIC would implement two compute limitations: **a moratorium threshold** (no one is allowed to develop AI systems above this threshold except MAGIC) and a **danger threshold** (companies are allowed to develop systems above this threshold but only if they can show affirmative evidence of safety, such that national or international regulators are convinced that extinction risks are acceptably low). AI development below the danger threshold would not be regulated by MAGIC, AI development above the moratorium threshold would be prohibited outside MAGIC, and AI development in between the thresholds would be governed by MAGIC-certified national regulators (see Figure 3). The danger threshold allows some kinds of risky AI development to occur in the commercial sector while still limiting extinction risks below a risk threshold (set by MAGIC). In the next section, we describe the kind of national regulations that would be needed for systems above the danger threshold.

Figure 2: National and international regulators would develop infrastructure to **detect** elevated risk from AI, activate an **alarm** if risks exceed an acceptable threshold, and **respond** by halting Frontier AI development or withdrawing access to a deployed model.
**We have drafted an international treaty** that would establish a global moratorium and pave the way for an international AI governance organization (Miotti & Wasil, 2023; see also Treaty on Artificial Intelligence Safety and Cooperation, 2023 for a related proposal).
**One of MAGIC's important roles will be to set and update the moratorium threshold**. If it is set too high, AGI could be developed outside MAGIC, considerably raising the chance of extinction and other AI-enabled catastrophes. If it is set too low, some safe AI development would be delayed. Furthermore, new algorithms and techniques are regularly discovered that can make AI systems more powerful at a given level of compute. Therefore, MAGIC will need to update these thresholds over time to account for new techniques and other developments in AI progress. We recommend setting the initial moratorium threshold at \(10^{24}\) floating point operations (FLOP).
This threshold reflects a few important facts about the AI development landscape. First, AI performance has been greatly affected by the quality and quantity of advanced hardware. Moravec (1998) explained this trend in a seminal paper, in which he tracked growth in computing power, extrapolated future trends, and predicted when the processing power of computers would match the general intellectual performance of the human brain. His estimates suggested that we would reach AGI at some point in the 2020s (Moravec, 1998). Second, current AI systems already perform at or above human-level performance on a variety of intellectual tasks. For example, GPT-4 performs at the 90th percentile on the Bar Exam, the 88th percentile on the LSAT, and the 99th percentile on the SAT verbal (OpenAI, 2023). Third, many contemporary AI experts believe AGI could be developed very soon. For example, Google DeepMind co-founder Shane Legg puts a 50% chance on developing AGI by 2028, and Anthropic CEO Dario Amodei believes human-level intelligence could emerge within 2-3 years (Patel, 2023a; Patel, 2023b). In summary, we already have AI systems that use human-level amounts of computing power, we already have AI systems that perform at human or superhuman levels on challenging cognitive tasks, AI experts believe that AGI could be developed very soon given current trends, and new algorithms and techniques are making AI systems more powerful (and dangerous) even at a given level of compute. If the threshold is set too high, AGI could be developed before the international community is confident that it can be developed and implemented safely. Meanwhile, the vast majority of AI development requires far fewer than \(10^{24}\) FLOP. With all of this in mind, we believe an initial moratorium threshold of \(10^{24}\) FLOP would be an appropriate starting point. Ultimately, MAGIC will be responsible for deciding the initial threshold and how to update it over time.

Figure 3: Under a tiered approach to compute limitations, AGI development would be forbidden above the **moratorium threshold**, regulated above a **danger threshold**, and **unaffected below** the danger threshold.
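To make the tiering concrete, here is a toy classification of a proposed training run by its total training compute. The moratorium value of \(10^{24}\) FLOP is the report's recommendation; the danger threshold below is a placeholder we chose for illustration, since the report leaves its value to MAGIC.

```python
# Illustrative sketch of the tiered compute limitations in Figure 3.
MORATORIUM_THRESHOLD_FLOP = 1e24  # recommended in this report
DANGER_THRESHOLD_FLOP = 1e23      # assumed value, for illustration only

def classify_training_run(total_flop: float) -> str:
    if total_flop >= MORATORIUM_THRESHOLD_FLOP:
        return "prohibited outside MAGIC (above moratorium threshold)"
    if total_flop >= DANGER_THRESHOLD_FLOP:
        return "allowed only with affirmative safety evidence (regulated tier)"
    return "unaffected by MAGIC (below danger threshold)"

print(classify_training_run(3e24))  # prohibited outside MAGIC
print(classify_training_run(5e23))  # regulated tier
print(classify_training_run(1e21))  # unaffected
```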
### Gating critical experiments and affirmative safety evaluations
**AI development above a certain threshold of computing power should be prohibited unless there is affirmative evidence of safety from AI developers**. Before building critical systems and deploying them to the public, companies should be required to demonstrate the safety of these systems through verifiable criteria. The results of these tests should be audited and verified by government inspectors. It should draw from well-established risk management principles and frameworks from other high-risk areas (e.g., nuclear safety, biosafety).
**Gating critical experiments places the burden of proof on Frontier AI developers to show that their practices keep risks below an acceptable level**. This contrasts with the RSP approach, highlighted earlier. RSP frameworks that focus on dangerous capability evaluations reverse the burden of proof: they assume that dangerous AI scaling should be allowed to occur until risks are already realized. This is not an appropriate risk management approach, and it is not the approach we use to manage other high-risk technologies. Instead, we commonly expect individuals and companies that engage in dangerous experiments to provide affirmative evidence that their activities keep risks at an acceptable level (see Raz & Hillson, 2005). If they cannot demonstrate affirmative evidence of safety, they are not allowed to run dangerous projects or experiments. An affirmative safety regulatory regime would require AI developers to show their systems are safe, well-understood, and controllable. At a bare minimum, governments should require compelling evidence that AI systems will not autonomously deceive humans.
**Gating critical experiments could be informed by standard practices from high-risk sectors, where demonstrating safety is a precondition for undertaking high-risk endeavors.** For example, companies might have to show evidence that they understand how their AI system reaches conclusions, they can make sure models don't access a certain fact or set of facts, they can prevent AI models from ever generating certain content, they are aware of all the facts the model knows, they have a convincing case that the AI system cannot autonomously deceive
people, and they have evidence that the cost of jailbreaking their model is higher than a certain threshold (e.g., >$100,000). Of course, these particular criteria are simply meant to be illustrative examples; in practice, AI experts do not yet know how to demonstrate affirmative safety. This is concerning. As a result, companies developing sufficiently dangerous systems may have to explore novel approaches in order to present sufficient evidence and show regulators that risks are kept below an acceptably low level.
Importantly, such methods could draw from **probabilistic risk assessments** in the nuclear domain. In nuclear safety, it is common to set a concrete risk level that cannot be surpassed. For example, the US Nuclear Regulatory Commission (NRC) sets an acceptable risk threshold of increased cancer rates at 0.1%; those who want to develop nuclear reactors must show that the reactor has a <0.1% chance of increasing cancer risks for residents living near the facility (Nuclear Regulatory Commission, 1986).
### A timeline for regulation
These proposals will require swift international coordination. Given that some experts are expecting highly dangerous AI systems within the next 2-5 years (e.g., Oversight of A.I.: Principles for Regulation, 2023a) there is a need for urgent action.
We envision the following as one pathway toward international coordination (see also Aguirre, 2023).
1. **National compute cap**. One major country implements a national compute cap. For example, the Executive Branch of the United States or the United Kingdom issues an order prohibiting the development of AI systems above a certain amount of training FLOP.
2. **National emergency response infrastructure.** One major country develops "kill switch" infrastructure. All AI development above a FLOP threshold must be registered in a government database. The government establishes a direct line of communication with executives and researchers at any companies developing these systems. The government employs a small team of people to monitor risks from Frontier AI development and activate an emergency response protocol if risks are determined to be above a predetermined threshold. The emergency response protocol involves the government swiftly halting the development of an AI system or withdrawing the deployment of an AI system (by coordinating with the AI company and/or the compute provider). Periodically, the government runs a "drill" to see if it can swiftly halt the development of a Frontier AI system or withdraw access to a deployed AI system.
3. **International coordination.**
   1. The first country to implement this regime drafts an international treaty to establish a global moratorium and a multinational international AGI safety organization (e.g., Wasil & Miotti, 2023).
   2. 6 months after the UK AI Safety Summit, another summit convenes in which countries are presented with the Treaty.
   3. 12 months after the UK AI Safety Summit, most of the major AI world powers have signed the treaty. Signatory countries must demonstrate compliance with the international organization.
   4. At least once a year, signatories meet to discuss the implementation of the treaty, share information about the risks from advanced AI, and discuss possible changes to the global moratorium threshold.
Below, we offer additional details on each of these steps.
**National compute cap**. Immediately, we believe that the nations leading AI development - the United States and the United Kingdom - should take the lead on implementing a national compute cap and a national emergency response infrastructure. The most advanced and dangerous AI systems are currently developed by companies based in the United States, making the United States's leadership on this issue particularly important. To successfully implement a national compute cap, a government would need to monitor advanced AI development. Fortunately, advanced AI development currently requires extremely large quantities of advanced hardware, and there are only a few major data centers that are powerful enough to support advanced AI development. Monitoring advanced AI development could draw from some techniques used to monitor nuclear development (Baker, 2023) and may eventually involve hardware-based monitoring techniques (Shavit, 2023). Importantly, the kind of hardware used for advanced AI development differs substantially from the kind of hardware used for consumer purposes. As a result, such monitoring could be implemented without unnecessary externalities to ordinary consumers.
**Emergency response infrastructure**. For the emergency response infrastructure, governments can draw from existing best practices in risk monitoring and emergency preparedness. This infrastructure should be developed alongside national security experts and intelligence experts who already have experience developing and implementing emergency response plans. In the case of advanced AI development, an adequate emergency response system would have at least two components: (a) the ability to identify an imminent emergency and (b) the ability to swiftly respond. The first component involves a **risk awareness and monitoring system**. This could involve direct monitoring of data centers with high concentrations of compute, a direct communication line between Frontier AI developers and government officials, whistleblower protections for individuals who report dangers, and government audits of AI facilities. If the government receives evidence of highly dangerous AI development (e.g., a system that can develop biological weapons or escape human control), the government would then need to have the ability to swiftly halt AI development or withdraw AI systems that are widely deployed. This involves a **"kill switch**", in which a government could swiftly contact AI developers or compute providers and tell them to stop an AI training run or withdraw API access to a model.
To ensure that this infrastructure would work in an actual emergency, the government should perform "drills" at regular intervals. That is, once every few months, the government should simulate a high-risk scenario, ensuring that it can successfully halt AI training runs and successfully withdraw deployment of AI systems. For example, a test could involve simulating a scenario in which a new prompting technique allows ChatGPT users to commit highly dangerous cyberattacks. To test the emergency response protocol, the government would swiftly get in touch with OpenAI and coordinate to withdraw ChatGPT for a few minutes.
**International coordination**. A few months after the national regulations are implemented, the national regulations serve as a foundation for international coordination and the establishment of MAGIC. MAGIC would enforce a global compute cap and take other measures to reduce risks from advanced AI (Hausenloy et al., 2023). All major AI powers (e.g., the United States, the United Kingdom, and the People's Republic of China) recognize the risks and agree to avoid a nationalistic race to advanced AI. MAGIC provides incentives for others to join (e.g., export controls on advanced hardware for states that do not join; access to technical expertise from MAGIC
for states that do join). In addition, typical diplomatic means (e.g., sanctions) are employed to incentivize states to join, and typical diplomatic measures are taken to prevent non-signatories from developing smarter-than-human AI systems.
### Concrete recommendations for the UK AI Safety Summit
Table 1 describes concrete commitments that governments could make, both immediately and in the longer term.
Table 1: Example international measures that minimize AI extinction risk

| **Policy** | **Example international statement** | **Example action from world leaders** |
| --- | --- | --- |
| Establish MAGIC | "Most urgently, **we commit to the establishment of an international institution** with the exclusive mandate to conduct critical Frontier AI research. We recognize that this is a long-term solution to reach a stable, global equilibrium on AI that minimizes AI extinction risk." | The UK announces that it will be convening another summit in 3 months for the explicit purpose of establishing a multinational AGI safety organization (MAGIC). At the summit, world leaders present an international treaty to establish MAGIC, a global compute cap, and other safety provisions (see Wasil & Miotti, 2023). |
| Global compute cap | "We affirm our commitment to mitigating extinction risk from AI by agreeing to **limit computing power available for large-scale, dangerous AI development**. This limit shall be revised over time to account for advances in the technology." | The UK can lead on this commitment by setting an adequate threshold (such as \(10^{24}\) FLOP), rallying a transatlantic alliance to move ahead with such measures, and setting the ground for more comprehensive international treaties. |
| Gating critical experiments | "We recognize the risky nature of Frontier AI research, and thus endorse the principle that **the onus should be on companies to demonstrate the safety of their systems ahead of developing and deploying them, as is standard practice in other high-risk sectors**." | The UK can pioneer the development of Safety Proofs by establishing a minimal list of properties that AI developers should demonstrate. The UK AI Foundation Model Taskforce could be in charge of ensuring that AI developers go through this process before each critical experiment and deployment to the public. |
Conclusion
Advanced AI poses extinction risks to humanity, voluntary commitments are not a sufficient response to this problem, and governments have an enormous opportunity to intervene. Ian Hogarth, chair of the UK's AI Foundation Model Taskforce, recently expressed the need to slow the race to godlike AI to avoid an AI catastrophe (Hogarth, 2023). We strongly support this goal, and we believe it is a necessary and responsible step to reduce extinction risks. In this report, we described how "responsible scaling" and other forms of voluntary commitments will be insufficient, argued that affirmative evidence of safety should be required from advanced AI developers, and proposed measures that world leaders could take to substantially reduce risks from advanced AI.
We look forward to following the outcomes of the UK AI Safety Summit, and we appreciate the opportunity to offer our input. Hopefully, the UK AI Safety Summit will provide a forum for world leaders to meaningfully discuss the extreme threats posed by advanced AI and begin to reach a consensus on the kinds of policy solutions that are needed. It will be important that those policy solutions go beyond voluntary commitments that allow AI companies to continue racing toward godlike AI. It will also be important for stronger solutions - such as global compute caps and emergency response infrastructure - to be treated as urgent priorities.
The world may not have many years left before the development of highly dangerous AI systems. The Summit may represent one of the world's key chances to curb extinction risks from advanced AI. We hope world leaders make use of this remarkable opportunity.
## Appendix
### Extinction risks from advanced AI
**The proposals we outlined have considerable support from the US and UK public**. The AI Policy Institute (AIPI) conducts polls of the American public (AIPI, 2023; Schreckinger, 2023). Their polls have found that:
* 83% believe AI could accidentally cause a **catastrophic event.**
* 82% **prefer slowing down the development of AI** (compared to 8% who prefer speeding it up).
* 82% **do not trust AI tech executives** to regulate AI.
* "Preventing dangerous and catastrophic outcomes" from AI was ranked as the most important policy priority.
Control AI commissioned a poll of the UK electorate (Control AI, 2023; YouGov, 2023). The poll found:
* 78% support the creation of a "**global watchdog**" to regulate powerful AI.
* 74% believe the government should **prevent AI from "reaching superhuman capabilities".**
* 72% believe AI should be treated as an "incredibly powerful and dangerous technology". |
2309.05864 | On Cohen--Macaulay modules over the Weyl algebra | We propose a definition of Cohen--Macaulay modules over the Weyl algebra $D$
and give a sufficient condition for a GKZ $A$-hypergeometric $D$-module to be
Cohen--Macaulay. | Kuei-Nuan Lin, Jen-Chieh Hsiao | 2023-09-11T23:06:59Z | http://arxiv.org/abs/2309.05864v1 | # On Cohen-Macaulay modules over the Weyl algebra
###### Abstract.
We propose a definition of Cohen-Macaulay modules over the Weyl algebra \(D\) and give a sufficient condition for a GKZ \(A\)-hypergeometric \(D\)-module to be Cohen-Macaulay.
2020 _Mathematics Subject Classification_. Primary 13N10, 16E99, 13C14, 14M25, 16W70; Secondary 32C38, 33C70, 13D07.
Keywords: Hypergeometric system, Cohen-Macaulay, toric, \(D\)-module.
In the case where \(R=\bigoplus_{i\geq 0}R_{i}\) is an \(\mathbb{N}\)-graded algebra over a field \(k=R_{0}\) with maximal graded ideal \(\mathfrak{m}=\bigoplus_{i\geq 1}R_{i}\), there is a parallel theory of graded Cohen-Macaulay modules as in the local case. Formulas analogous to (1.1) and (1.2) hold for nonzero finite graded \(R\)-modules. A nonzero finite graded \(R\)-module \(M\) is Cohen-Macaulay if and only if \(M_{\mathfrak{m}}\) is Cohen-Macaulay over the local ring \(R_{\mathfrak{m}}\).
The reasons behind the definition of Cohen-Macaulay \(D\)-modules are discussed in Sections 5,6. We summarize them here.
Consider the homogenized Weyl algebra \(D^{\rm(h)}\). Filter \(D\) and \(D^{\rm(h)}\) by Bernstein filtration, namely the \((\mathbf{1},\mathbf{1})\)-filtration, and denote the associated graded algebras by \(S=\operatorname{gr}D\) and \(S[h]=\operatorname{gr}D^{\rm(h)}\). Both \(S\) and \(S[h]\) are polynomial algebras over \(k\). We consider the three \(k\)-algebra \(S\), \(S[h]\), and \(D^{\rm(h)}\) as \(\mathbb{N}\)-graded algebras defined by the total degree of their elements. For a \(D\)-module \(M\) or a graded \(D^{\rm(h)}\)-module \(M^{\rm(h)}\), the associated graded module obtained from the \((\mathbf{1},\mathbf{1})\)-filtration on \(D\) or \(D^{\rm(h)}\) is denoted by \(\operatorname{gr}M\) or \(\operatorname{gr}M^{\rm(h)}\), respectively.
Let \(M\) be a nonzero finite \(D\)-module and \(\mathbf{F}\) a free resolution of \(M\) over \(D\) that induces a graded minimal free resolution \(\operatorname{gr}\mathbf{F}\) of \(\operatorname{gr}M\) over \(S\). By results in [15], there exists a graded \(D^{\rm(h)}\)-module \(M^{\rm(h)}\) that admits a free resolution \(\mathbf{F}^{\rm(h)}\) of \(M^{\rm(h)}\) satisfying the following properties.
* The dehomogenization of \(\mathbf{F}^{\rm(h)}\) coincides with \(\mathbf{F}\).
* The resolution \(\mathbf{F}^{\rm(h)}\) induces a graded minimal free resolution \(\operatorname{gr}\mathbf{F}^{\rm(h)}\) of \(\operatorname{gr}M^{\rm(h)}\) over \(S[h]\).
It is observed in Proposition 5.1 that \(\operatorname{gr}\mathbf{F}^{\rm(h)}=k[h]\otimes\operatorname{gr}\mathbf{F}\) and that the graded Betti numbers of the three modules \(M^{\rm(h)}\), \(\operatorname{gr}M\), and \(\operatorname{gr}M^{\rm(h)}\) are identical. In particular, these modules have the same projective dimension. Since \(\operatorname{gr}M\) and \(\operatorname{gr}M^{\rm(h)}\) are graded modules over polynomial rings, the formulas analogous to (1.1) and (1.2) hold for them. In particular, the \(S\)-module \(\operatorname{gr}M\) is Cohen-Macaulay if and only if \(\operatorname{gr}M^{\rm(h)}\) is Cohen-Macaulay over \(S[h]\).
On the other hand, for the graded \(D^{\rm(h)}\)-module \(M^{\rm(h)}\), a formula analogous to (1.2) can be obtained by analyzing the structure of \(M^{\rm(h)}\) as a module over the filtered ring \(D^{\rm(h)}\) (See Theorem 2.1 or [1]). Moreover, the \(\mathbb{N}\)-graded structure on \(D^{\rm(h)}\) guarantees an Auslander-Buchsbaum formula in this noncommutative setting (See Theorem 4.1 or [10]). Consequently, the condition \(j(M^{\rm(h)})=\operatorname{pd}(M^{\rm(h)})\) is equivalent to the condition of \(\operatorname{gr}(M^{\rm(h)})\) being Cohen-Macaulay. Based on these observations, we make the following definition (Definition 6.2):
A finite left \(D\)-module \(M\) is _Cohen-Macaulay_ if \(\operatorname{gr}M\) is a Cohen-Macaulay \(\operatorname{gr}D\)-module.
To obtain examples, we are interested in the characterization of a GKZ \(A\)-hypergeometric \(D\)-modules \(M_{A}(\beta)=D/H_{A}(\beta)\) to be Cohen-Macaulay, where \(H_{A}(\beta)\) is the left \(D\)-ideal generated by the toric ideal \(I_{A}\subset\mathbb{C}[\partial]=\mathbb{C}[\partial_{1},\dots,\partial_{n}]\) and the Euler operators \(E-\beta\). We show in Theorem 7.1 that:
if the \(\mathbb{C}[\partial]\)-module \(\mathbb{C}[\partial]/\operatorname{in}I_{A}\) is Cohen-Macaulay, then \(M_{A}(\beta)\) is a Cohen-Macaulay \(D\)-module.
Here, the initial ideal in \(I_{A}\) of \(I_{A}\) is obtained by the standard grading on the polynomial ring \(\mathbb{C}[\partial]\). The following two steps achieve this theorem.
* Apply the results in [19] concerning the associated primes of \(\operatorname{in}I_{A}\) to show that \(\operatorname{in}_{(\mathbf{1},\mathbf{1})}(E-\beta)\) form a linear system of parameters on \(\mathbb{C}(x)[\partial]/I_{A}\).
* Extend an argument in [18] to show that \(\operatorname{in}_{(\mathbf{1},\mathbf{1})}(E-\beta)\) form a regular sequence on \(\operatorname{gr}(D/DI_{A})\). This implies that \(\operatorname{gr}M_{A}(\beta)=S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}H _{A}(\beta)\) is Cohen-Macaulay.
We note that the Cohen-Macaulay condition on the toric ring \(\mathbb{C}[\partial]/I_{A}\) is equivalent to the absence of rank-jumps in the GKZ hypergeometric system \(H_{A}(\beta)\)[13, Corollary 9.2]. It is known that if the Gröbner deformation \(\mathbb{C}[\partial]/\operatorname{in}I_{A}\) is Cohen-Macaulay, then so is \(\mathbb{C}[\partial]/I_{A}\)[2, Proposition 1.6.2]. However, the converse of this statement is not true in general (Example 7.6). We also note that there exists a toric ideal \(I_{A}\) with \(\mathbb{C}[\partial]/\operatorname{in}I_{A}\) not Cohen-Macaulay, but the corresponding GKZ \(D\)-module \(M_{A}(\mathbf{0})\) is Cohen-Macaulay (Example 7.6).
## 2. Preliminaries on homological algebra
In this section, rings are always associative with unit elements and modules (left or right) are unitary.
We recall the definitions and some standard facts about the projective and global dimensions of modules. See [22, Chapter 4] for more details.
Let \(A\) be a ring. We work in the category \(\operatorname{Mod}(A)\) of left \(A\)-modules. The _projective dimension_\(\operatorname{pd}(M)\) of an \(A\)-module \(M\) is the minimum integer \(n\) (if it exists) such that there is a resolution of \(M\) by projective modules \(0\to P_{n}\to\cdots\to P_{1}\to P_{0}\to M\to 0.\) Equivalently, we have
\[\operatorname{pd}(M)=\sup\{i:\operatorname{Ext}_{A}^{i}(M,N)\neq 0\text{ for some }N\in \operatorname{Mod}(A)\}.\]
Similarly, one defines the injective dimension \(\operatorname{id}(M)\) and the flat dimension \(\operatorname{fd}(M)\) of \(M\).
The _global dimension_\(\operatorname{gldim}(A)\) is defined as the supremum of \(\operatorname{pd}(M)\) for any \(A\)-module \(M\). It is equal to any of the following numbers.
\[\operatorname{gldim}(A): =\sup\{\operatorname{pd}(M):M\in\operatorname{Mod}(A)\}\] \[=\sup\{\operatorname{id}(M):M\in\operatorname{Mod}(A)\}\] \[=\sup\{\operatorname{pd}(A/I):I\text{ is a left ideal of }A\}\] \[=\sup\{i:\operatorname{Ext}_{A}^{i}(M,N)\neq 0\text{ for some }M,N\in \operatorname{Mod}(A)\}.\]
In particular, the global dimension \(\operatorname{gldim}(A)\) can be computed by finitely generated \(A\)-modules. As a fundamental example, Hilbert syzygy theorem states that the polynomial ring \(k[x_{1},\ldots,x_{n}]\) over a field \(k\) has global dimension \(n\).
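To make the syzygy bound concrete, consider the following standard example (ours, not from the paper): for \(R=k[x,y]\) and the residue field \(k=R/(x,y)\), the Koszul complex

\[0\to R\xrightarrow{\binom{-y}{x}}R^{2}\xrightarrow{(x\ \,y)}R\to k\to 0\]

is a minimal free resolution, so \(\operatorname{pd}_{R}(k)=2=n\); the bound in the syzygy theorem is attained.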
We will only consider rings that are left and right Noetherian. In this case, the flat dimension \(\operatorname{fd}(M)\) is equal to the projective dimension \(\operatorname{pd}(M)\) for any finitely generated (left or right) \(A\)-module \(M\). By the Tor dimension theorem, the global dimension
\[\operatorname{gldim}(A)=\sup\{i:\operatorname{Tor}_{i}^{A}(M,N)\neq 0\text{ for some }M,N\in \operatorname{Mod}(A)\}\]
is equal to the global dimension of \(A\) in the category of right \(A\)-modules. Another important example relevant to us is a theorem of Roos, which states that the \(n^{\text{th}}\) Weyl algebra \(k\langle x_{1},\ldots,x_{n},\partial_{1},\ldots,\partial_{n}\rangle\) over a field \(k\) of characteristic \(0\) has global dimension \(n\)[1, Chapter 2, Theorem 3.15].
We collect some facts for modules over filtered rings. The main reference of this section is [1, Chapter 2].
Consider a ring \(A\) equipped with a filtration \(\Sigma_{0}\subset\Sigma_{1}\subset\cdots\) such that \(\Sigma_{0}\) contains the unit elements of \(A\), the ring \(A=\bigcup_{i}\Sigma_{i}\), and \(\Sigma_{i}\Sigma_{j}\subset\Sigma_{i+j}\) for all \(i,j\). We assume that the associate graded ring \(\operatorname{gr}(A)=\Sigma_{0}\oplus\Sigma_{1}/\Sigma_{0}\oplus\Sigma_{2}/ \Sigma_{1}\oplus\cdots\) is commutative, Noetherian, and regular of pure
dimension \(\omega\). Let \(\mu=\operatorname{gldim}(A)\) be the global dimension of \(A\). Since \(\operatorname{gr}(A)\) is regular, the global dimension \(\operatorname{gldim}(\operatorname{gr}(A))=\omega\). By [1, Chapter 2, Theorem 3.7], we have
\[\mu=\operatorname{gldim}(A)\leq\operatorname{gldim}(\operatorname{gr}(A))=\omega.\]
For a finite left \(A\)-module \(M\), its dimension is defined as
\[d(M):=\sup\{d_{\mathfrak{m}}(M):\mathfrak{m}\in\operatorname{Max}(\operatorname{gr}(A))\}\]

where \(\operatorname{Max}(\operatorname{gr}(A))\) is the set of maximal ideals of \(\operatorname{gr}(A)\) and \(d_{\mathfrak{m}}(M):=d(\operatorname{gr}_{\Gamma}(M)_{\mathfrak{m}})\) is the local dimension of the associated graded module \(\operatorname{gr}_{\Gamma}(M)\) of \(M\) with respect to any good filtration \(\Gamma\) of \(M\). We also consider the _grade_ of \(M\):
\[\operatorname{grade}(M)=j(M):=\inf\{i:\operatorname{Ext}^{i}_{A}(M,A)\neq 0\}.\]
These numbers are related by
\[0\leq j(M)\leq\operatorname{pd}(M)\leq\mu\leq\omega.\]
Moreover, we have the following theorem [1, Chapter 2, Theorem 7.1].
**Theorem 2.1**.: _For all nonzero finite left \(A\)-module \(M\), we have \(d(M)+j(M)=\omega\)._
**Remark 2.2**.:
1. In the special case where \(A=\Sigma_{0}=\operatorname{gr}(A)\), we have \(\omega=\mu\) and the equality \(d(M)+j(M)=\omega\) still holds true, where \(d(M)=\dim(M)=\dim(A/\operatorname{ann}(M))\) is the dimension of the support of \(M\).
2. When the filtered ring \(A\) is also a \(\mathbb{N}\)-graded algebra over a field \(k\), the dimension \(d(M)\) is equal to the Gelfand-Kirillov dimension of \(M\). An \(\mathbb{N}\)-graded \(k\)-algebra satisfying Theorem 2.1 is called _Cohen-Macaulay_ in [12, 20].
3. As a consequence of Theorem 2.1, we have the Bernstein inequality \[\omega-\mu\leq d(M)\leq\omega.\] We say that a finitely generated left \(A\)-module \(M\) is _holonomic_ if \(d(M)=\omega-\mu\). Equivalently, a finitely generated left \(A\)-module \(M\) is holonomic if \(i=\mu=j(M)\) is the only index such that \(\operatorname{Ext}^{i}_{A}(M,A)\neq 0\).
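As a sanity check of Theorem 2.1 and the Bernstein inequality (our example, with notation as above): let \(A=D=k[x]\langle\partial\rangle\) be the first Weyl algebra with the Bernstein filtration, so \(\omega=2\) and \(\mu=1\). For \(M=D/D\partial\cong k[x]\) we have \(\operatorname{gr}M\cong k[x,\xi]/(\xi)\), hence \(d(M)=1\), while \(\operatorname{Hom}_{D}(M,D)=0\) and \(\operatorname{Ext}^{1}_{D}(M,D)\cong D/\partial D\neq 0\) give \(j(M)=1\). Thus

\[d(M)+j(M)=1+1=2=\omega,\]

and \(M\) is holonomic since \(d(M)=\omega-\mu=1\).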
We need the following result later, see [1, Chapter 2, Corollary 7.5].
**Corollary 2.3**.: _For a nonzero finite left \(A\)-module \(M\), we have \(\operatorname{Ext}^{v}_{A}(\operatorname{Ext}^{i}_{A}(M,A),A)=0\) if \(v<i\). In particular, the grade \(j(\operatorname{Ext}^{i}(M,A))\geq i\) for any \(i\geq 0\)._
**Remark 2.4**.: For any nonzero \(A\)-submodule \(N\) of \(\operatorname{Ext}^{i}(M,A)\) we have \(d(N)\leq d(\operatorname{Ext}^{i}(M,A))\), so it follows from Theorem 2.1 and Corollary 2.3 that \(j(N)\geq i\). Therefore, the algebra \(A\) is Auslander-regular in the sense of [12, Definition 2.1].
## 3. Cohen-Macaulay properties in commutative algebra
In this section, we recall the definitions and some basic facts about Cohen-Macaulay modules over commutative Noetherian rings. See [3] for more details.
Let \((R,\mathfrak{m},k)\) be a Noetherian local ring and \(M\) a nonzero finite \(R\)-module. The _depth_ of \(M\) is defined as
\[\operatorname{depth}(M):=\inf\{i:\operatorname{Ext}^{i}_{R}(k,M)\neq 0\},\]
which is the length of any maximal \(M\)-sequence in \(\mathfrak{m}\). In particular, we have \(\operatorname{depth}(M)\leq\dim(M)\). We call \(M\) a _Cohen-Macaulay_\(R\)-module if \(\operatorname{depth}(M)=\dim(M)\). A theorem of Auslander-Buchsbaum [3, Theorem 1.3.3] states that if \(\operatorname{pd}(M)<\infty\), then
\[\operatorname{pd}(M)+\operatorname{depth}(M)=\operatorname{depth}(R).\]
If \(R\) is regular, we have \(\operatorname{depth}(R)=\dim(R)=\operatorname{gldim}(R)\) (more generally, \(\operatorname{depth}(R)=\dim(R)\) whenever \(R\) is Cohen-Macaulay as a module over itself). In the regular case, it follows from Remark 2.2(1) that \(M\) is Cohen-Macaulay if and only if \(j(M)=\operatorname{pd}(M)\).
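For a quick example (ours): let \(R=k[x,y]_{(x,y)}\) and \(M=R/(x)\). The resolution \(0\to R\xrightarrow{\cdot x}R\to M\to 0\) gives \(\operatorname{pd}(M)=1\); the image of \(y\) is a maximal \(M\)-sequence, so \(\operatorname{depth}(M)=1=\dim(M)\) and \(M\) is Cohen-Macaulay. The Auslander-Buchsbaum formula reads \(1+1=2=\operatorname{depth}(R)\), and indeed \(j(M)=\operatorname{pd}(M)=1\).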
In general, we say that a nonzero finite module \(M\) over a Noetherian ring \(S\) is _Cohen-Macaulay_ if \(M_{\mathfrak{p}}\) is a Cohen-Macaulay \(S_{\mathfrak{p}}\)-module for every prime ideal \(\mathfrak{p}\) in the support \(\operatorname{Supp}(M)\) of \(M\).
Consider now a Noetherian \(\mathbb{N}\)-graded ring \((R=\bigoplus_{i\geq 0}R_{i},\mathfrak{m},k)\), where \(k=R_{0}\) is a field and \(\mathfrak{m}=\bigoplus_{i\geq 1}R_{i}\) is the maximal graded ideal of \(R\). We work in the category \(\operatorname{GrMod}(R)\) of graded \(R\)-modules. For \(M,N\in\operatorname{GrMod}(R)\), denote by \(\operatorname{Hom}(M,N)\) the group of all graded homomorphisms of degree \(0\) between \(M\) and \(N\). Consider also the graded homomorphism group
\[\underline{\operatorname{Hom}}(M,N)=\bigoplus_{i\in\mathbb{Z}}\operatorname{ Hom}(M,N(i)).\]
When \(M\) is finite, the group \(\underline{\operatorname{Hom}}(M,N)\) is equal to the ungraded homomorphism group \(\operatorname{Hom}_{R}(M,N)\). Similarly, we consider the graded Ext groups \(\underline{\operatorname{Ext}}^{i}(M,N)\) and use them to make the following definitions.
**Definition 3.1**.: Let \(M\) be a nonzero finite graded \(R\)-module, define
\[\operatorname{pd}(M) :=\sup\{i:\underline{\operatorname{Ext}}^{i}(M,N)\neq 0\text{ for some }N\in \operatorname{GrMod}(R)\},\] \[j(M) :=\inf\{i:\underline{\operatorname{Ext}}^{i}(M,R)\neq 0\},\] \[\operatorname{depth}(M) :=\inf\{i:\underline{\operatorname{Ext}}^{i}(k,M)\neq 0\}.\]
We have \(\dim(M)=\dim(M_{\mathfrak{m}})\), \(\operatorname{pd}(M)=\operatorname{pd}(M_{\mathfrak{m}})\), \(j(M)=j(M_{\mathfrak{m}})\), and \(\operatorname{depth}(M)=\operatorname{depth}(M_{\mathfrak{m}})\).
**Remark 3.2**.: When \(R=k[x_{1},\dots,x_{n}]\) is the polynomial algebra over \(k\) graded by the total degree of its elements, we still have \(\operatorname{depth}(R)=\dim(R)=\operatorname{gldim}(R)=n\). In this case, the Auslander-Buchsbaum formula \(\operatorname{pd}(M)+\operatorname{depth}(M)=\operatorname{depth}(R)\) also holds for any finite graded \(R\)-module \(M\). Moreover, by [3, Exercise 2.1.27] a finite graded \(R\)-module \(M\) is Cohen-Macaulay if and only if \(\operatorname{depth}(M)=\dim(M)\), or equivalently, \(j(M)=\operatorname{pd}(M)\). We also remark that the analogue of Remark 2.2(1),
\[\dim(M)+j(M)=\operatorname{gldim}(R)\]
also holds in the graded category \(\operatorname{GrMod}(R)\). Therefore, for a finite graded \(R\)-module \(M\), the notions of grade \(j(M)\) defined in the two categories \(\operatorname{GrMod}(R)\) and \(\operatorname{Mod}(R)\) coincide.
Let \((R,\mathfrak{m},k)\) be either the local ring or the \(\mathbb{N}\)-graded ring discussed above. Let \(M\) be a finite (graded) \(R\)-module whose dimension \(\dim M=d\). A theorem of Grothendieck states that if the \(i^{\text{th}}\) local cohomology \(H^{i}_{\mathfrak{m}}(M)\neq 0\) then \(\operatorname{depth}M\leq i\leq d\). As a consequence, the \(R\)-module \(M\) is Cohen-Macaulay if and only if the only nonvanishing local cohomology of \(M\) supported at \(\mathfrak{m}\) is \(H^{d}_{\mathfrak{m}}(M)\)[3, Theorem 3.5.7, Remark 3.6.18].
## 4. Auslander-Buchsbaum formula in noncommutative graded case
Let us state the main result in [10] for the convenience of the readers.
Let \(A=\bigoplus_{i\geq 0}A_{i}\) be a non-commutative \(\mathbb{N}\)-graded left Noetherian \(k\)-algebra, where \(k=A_{0}\) is a field. We work in a setting parallel to the commutative graded case (see section 3.2). Let \(\operatorname{GrMod}(A)\) be the category of left graded \(A\)-modules. For \(M,N\in\operatorname{GrMod}(A)\), consider the graded groups \(\underline{\operatorname{Hom}}(M,N)\) and \(\underline{\operatorname{Ext}}^{i}(M,N)\). For a nonzero finite graded left \(A\)-module \(M\), we define as in Definition 3.1 the projective dimension \(\operatorname{pd}(M)\), the grade \(j(M)\), and the depth \(\operatorname{depth}(M)\) of \(M\).
Consider \(A\) as a left module over itself. We assume that \(\underline{\operatorname{Ext}}^{i}(k,A)\) is finite dimensional over \(k\) for each \(i\leq\operatorname{depth}(A)\). This condition is called the \(\chi^{\circ}\)-condition in [10], which is vacuous if \(\operatorname{depth}(A)=\infty\). Then we have the following theorem [10, Theorem 3.2].
**Theorem 4.1**.: _For any nonzero finite graded left \(A\)-module \(M\) with \(\operatorname{pd}(M)<\infty\), we have_
\[\operatorname{pd}(M)+\operatorname{depth}(M)=\operatorname{depth}(A).\]
_In particular, if \(\operatorname{gldim}(A)<\infty\), by applying the above theorem to \(M=k\) we get \(\operatorname{gldim}(A)=\operatorname{depth}(A)\)._
We mention that there is also a local cohomology theory in this noncommutative graded setting [9].
## 5. Minimal free resolutions of \(D\)-modules
We summarize some results in [15] and deduce Proposition 5.1 for later use.
Let \(D=k[x_{1},\ldots,x_{n}]\langle\partial_{1},\ldots,\partial_{n}\rangle\) be the \(n^{\text{th}}\) Weyl algebra over a field \(k\) of characteristic \(0\), which is an associative \(k\)-algebra generated by \(x_{1},\ldots,x_{n}\) and \(\partial_{1},\ldots,\partial_{n}\) with relations
\[x_{i}x_{j}=x_{j}x_{i},\quad\partial_{i}\partial_{j}=\partial_{j}\partial_{i}, \quad\partial_{i}x_{j}-x_{j}\partial_{i}=\delta_{ij}\]
for \(i,j=1,\ldots,n\). We will sometimes abbreviate \(D=k[x]\langle\partial\rangle\) when convenient. Consider also the homogenized Weyl algebra \(D^{\text{(h)}}=k[h,x]\langle\partial\rangle\) with relations, for \(1\leq i,j\leq n\),
\[x_{i}x_{j}=x_{j}x_{i},\quad\partial_{i}\partial_{j}=\partial_{j}\partial_{i}, \quad\partial_{i}x_{j}-x_{j}\partial_{i}=\delta_{ij}h^{2},\quad hx_{i}=x_{i}h,\quad h\partial_{i}=\partial_{i}h.\]
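To see where the relations come from (our remark): for \(n=1\), the operator \(\partial\) acts on \(f\in k[x]\) by the Leibniz rule

\[\partial(xf)=f+x\,\partial f,\]

which is exactly \(\partial x-x\partial=1\); the homogenized relation \(\partial x-x\partial=h^{2}\) recovers it upon dehomogenizing, that is, setting \(h=1\).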
For \(u,v\in\mathbb{Z}^{n}\) and \(\alpha,\beta\in\mathbb{N}^{n}\), a weight vector \(L=(u,v)\) is called admissible for \(D\) and \(D^{\text{(h)}}\) if \(u_{i}+v_{i}\geq 0\) for all \(i\). Fix any admissible weight vector \(L\). The \(L\)-degree of a monomial \(x^{\alpha}\partial^{\beta}=x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}} \partial_{1}^{\beta_{1}}\cdots\partial_{n}^{\beta_{n}}\in D\), as well as of a monomial \(h^{l}x^{\alpha}\partial^{\beta}\in D^{\text{(h)}}\), \(l\in\mathbb{N}\), is defined by \(L\cdot(\alpha,\beta)=\sum_{i}u_{i}\alpha_{i}+\sum_{i}v_{i}\beta_{i}\). For an element \(P\in D\) or \(D^{\text{(h)}}\), its \(L\)-degree \(\deg^{L}(P)\) is defined as the maximum degree of its monomials. This induces an increasing filtration \(\{L_{i}\}_{i\in\mathbb{Z}}\) on \(D\) and \(D^{\text{(h)}}\) by \(L_{i}D=\{P\in D\mid\deg^{L}(P)\leq i\}\) and \(L_{i}D^{\text{(h)}}=\{P\in D^{\text{(h)}}\mid\deg^{L}(P)\leq i\}\), respectively. For weight vectors \(L\) satisfying \(u_{i}+v_{i}>0\), \(i=1,\ldots,n\), the associated graded rings
\[S:=\operatorname{gr}^{L}D=\bigoplus_{i\in\mathbb{Z}}L_{i}D/L_{i-1}D\cong k[x, \partial],\quad S[h]:=\operatorname{gr}^{L}D^{\text{(h)}}=\bigoplus_{i\in \mathbb{Z}}L_{i}D^{\text{(h)}}/L_{i-1}D^{\text{(h)}}\cong k[h,x,\partial]\]

are polynomial rings, where, by abuse of notation, we still use \(\partial_{i}\) to denote the image of \(\partial_{i}\) in \(\operatorname{gr}^{L}D\) or \(\operatorname{gr}^{L}D^{\text{(h)}}\).
For a left ideal \(I\) of \(D\), the weight vector \(L\) induces a filtration \(\{I\cap L_{i}D\}_{i\in\mathbb{Z}}\) on \(I\). The associated graded module \(\operatorname{gr}^{L}I\) can be identified with the ideal \(\operatorname{in}_{(u,v)}I:=\langle\operatorname{in}_{(u,v)}P\mid P\in I\rangle\) in \(k[x,\partial]\), where \(\operatorname{in}_{(u,v)}P\) is the \(L\)-initial form of \(P\). On the other hand, the \(L\)-filtration \(L_{i}(D/I)=\{P+I\mid P\in L_{i}D\}\), \(i\in\mathbb{Z}\), on the \(D\)-module \(D/I\) gives the associated graded module
\[\operatorname{gr}^{L}(D/I)=\operatorname{gr}^{L}D/\operatorname{gr}^{L}I=S/ \operatorname{in}_{(u,v)}I.\]
Similarly, for a left ideal \(I\) of \(D^{\rm(h)}\) we have
\[\operatorname{gr}^{L}(D^{\rm(h)}/I)=S[h]/\operatorname{in}_{(u,v)}I.\]
More generally, the \(L\)-filtration induces many good filtrations on modules of finite type over \(D\) or \(D^{\rm(h)}\) as follows. We only present the case of \(D\)-modules. The case of \(D^{\rm(h)}\)-modules can be treated similarly.
For a free \(D\)-module \(D^{r}\) and for \(\mathbf{m}=(m_{1},\dots,m_{r})\in\mathbb{Z}^{r}\), any admissible weight vector \(L\) induces a filtration on \(D^{r}\) by
\[L_{i}[\mathbf{m}]D^{r}=\{P\in D^{r}\mid\deg^{L}(P_{j})+m_{j}\leq i,\ \forall j\},\quad i\in\mathbb{Z}.\]

The associated graded module, denoted by \(\operatorname{gr}^{L}[\mathbf{m}]D^{r}\), with respect to this filtration, is a free module over \(S=\operatorname{gr}^{L}D\). For a \(D\)-module \(M\) admitting a surjective \(D\)-module homomorphism \(D^{r}\xrightarrow{\varphi}M\), this induces a filtration on \(M\) given by \(\varphi(L_{i}[\mathbf{m}]D^{r})\), \(i\in\mathbb{Z}\), whose associated graded module is denoted by \(\operatorname{gr}^{L}[\mathbf{m}](M)\).
Let us consider only the \(L=(\mathbf{1},\mathbf{1})\) filtration on \(D\) and \(D^{\rm(h)}\). We consider the standard \(\mathbb{N}\)-grading on the polynomial rings \(S\) and \(S[h]\). The homogenized \(D^{\rm(h)}\) is also an \(\mathbb{N}\)-graded \(k\)-algebra with grading defined by the total degree of its elements. Let \(M\) be a finite \(D\)-module. By [15, Proposition 4.1], there exists a free resolution of \(D\)-modules
\[\mathbf{F}:\quad\cdots\xrightarrow{\varphi_{2}}D^{r_{1}}\xrightarrow{\varphi _{1}}D^{r_{0}}\xrightarrow{\varphi_{0}}M\to 0\]
and \(\mathbf{n}_{i}\in\mathbb{Z}^{r_{i}}\), such that, for \(i>0\) and \(j\in\mathbb{Z}\), we have
\[\varphi_{i}\left(L_{j}[\mathbf{n}_{i}]D^{r_{i}}\right)\subseteq L_{j}[ \mathbf{n}_{i-1}]D^{r_{i-1}},\]
and the induced free resolution
\[\operatorname{gr}^{L}\mathbf{F}:\quad\cdots\xrightarrow{\overline{\varphi_{2}}} \operatorname{gr}^{L}[\mathbf{n}_{1}]D^{r_{1}}\xrightarrow{\overline{\varphi_ {1}}}\operatorname{gr}^{L}[\mathbf{n}_{0}]D^{r_{0}}\xrightarrow{\overline{ \varphi_{0}}}\operatorname{gr}^{L}[\mathbf{n}_{0}]M\to 0\]

is a graded minimal free resolution of \(\operatorname{gr}^{L}[\mathbf{n}_{0}]M\) over \(S\).
Moreover, by [15, Proposition 4.2, 4.3, Corollary 4.1], the free resolution \(\mathbf{F}\) also induces a graded minimal free resolution of a \(D^{\rm(h)}\)-module \(M^{\rm(h)}\)
\[\mathbf{F}^{\rm(h)}:\quad\cdots\xrightarrow{\psi_{2}}(D^{\rm(h)})^{r_{1}}[ \mathbf{n}_{1}]\xrightarrow{\psi_{1}}(D^{\rm(h)})^{r_{0}}[\mathbf{n}_{0}] \xrightarrow{\psi_{0}}M^{\rm(h)}\to 0\]
such that the dehomogenization \(M^{\rm(h)}|_{h=1}=M\) and \(\mathbf{F}^{\rm(h)}|_{h=1}=\mathbf{F}\), where the maps \(\psi_{i}\) are obtained by homogenizing the maps \(\varphi_{i}\) for each \(i\in\mathbb{N}\).
Note that, for \(i>0\) and \(j\in\mathbb{Z}\), we still have \(\psi_{i}\left(L_{j}[\mathbf{n}_{i}](D^{\rm(h)})^{r_{i}}\right)\subseteq L_{j} [\mathbf{n}_{i-1}](D^{\rm(h)})^{r_{i-1}}\), and that the free resolution \(\mathbf{F}^{\rm(h)}\) also induces a graded free resolution over \(S[h]\)
\[\operatorname{gr}^{L}\mathbf{F}^{\rm(h)}:\quad\cdots\xrightarrow{\overline{ \psi_{2}}}\operatorname{gr}^{L}[\mathbf{n}_{1}](D^{\rm(h)})^{r_{1}} \xrightarrow{\overline{\psi_{1}}}\operatorname{gr}^{L}[\mathbf{n}_{0}](D^ {\rm(h)})^{r_{0}}\xrightarrow{\overline{\psi_{0}}}\operatorname{gr}^{L}[ \mathbf{n}_{0}]M^{\rm(h)}\to 0.\]
On the other hand, the maps \(\overline{\varphi}_{i}\) and \(\overline{\psi_{i}}\) are obtained by taking the \(L\)-initial forms of the entries of \(\varphi_{i}\) and \(\psi_{i}\). Since, for \(P\in D\), the \(L\)-initial form \(\operatorname{in}_{(\mathbf{1},\mathbf{1})}P\) in \(S\) coincides with the \(L\)-initial form of the homogenization of \(P\) in \(S[h]\), we have the following commutative diagram
\[\begin{CD}
\operatorname{gr}^{L}[\mathbf{n}_{i}](D^{\rm(h)})^{r_{i}} @>{\overline{\psi_{i}}}>> \operatorname{gr}^{L}[\mathbf{n}_{i-1}](D^{\rm(h)})^{r_{i-1}}\\
@VVV @VVV\\
k[h]\otimes\operatorname{gr}^{L}[\mathbf{n}_{i}]D^{r_{i}} @>{\operatorname{id}\otimes\overline{\varphi_{i}}}>> k[h]\otimes\operatorname{gr}^{L}[\mathbf{n}_{i-1}]D^{r_{i-1}}.
\end{CD}\]
As a consequence, we see that \(\operatorname{gr}^{L}[\mathbf{n}_{0}]M^{\rm(h)}\cong k[h]\otimes\operatorname{gr}^{ L}[\mathbf{n}_{0}]M\) and that the free resolution
\[\operatorname{gr}^{L}\operatorname{\mathbf{F}}^{\rm(h)}\cong k[h]\otimes \operatorname{gr}^{L}\operatorname{\mathbf{F}}\]
is a graded minimal free resolution of \(\operatorname{gr}^{L}[\mathbf{n}_{0}]M^{\rm(h)}\) over \(S[h]\). We therefore obtain the following result.
**Proposition 5.1**.: _With the above notation, the graded Betti numbers of \(M^{\rm(h)}\), \(\operatorname{gr}^{\rm(\mathbf{1},\mathbf{1})}M\), and \(\operatorname{gr}^{\rm(\mathbf{1},\mathbf{1})}M^{\rm(h)}\) coincide for any finite \(D\)-module \(M\)._
## 6. Cohen-Macaulay \(D\)-modules and \(D^{\rm(h)}\)-modules
In this section, we again consider the \(L=(\mathbf{1},\mathbf{1})\) filtration of the \(n^{\rm th}\) Weyl algebra \(D\) and the homogenized Weyl algebra \(D^{\rm(h)}\) over a field \(k\) of characteristic \(0\). Recall that the \(k\)-algebras \(D^{\rm(h)}\), \(S=\operatorname{gr}^{L}D\), and \(S[h]=\operatorname{gr}^{L}D^{\rm(h)}\) are \(\mathbb{N}\)-graded, with grading defined by the total degree of elements.
Write \(A:=D^{\rm(h)}\). Since the associated graded algebra \(\operatorname{gr}^{L}A=S[h]\) is a polynomial ring in \(2n+1\) variables over \(k\), the ring \(A\) is left and right Noetherian. So we can apply the results in Section 2. In particular, for a nonzero finite left \(A\)-module \(M\), we have
\[0\leq j(M)\leq\operatorname{pd}(M)\leq\operatorname{gldim}(A)\leq 2n+1\quad \text{and}\quad d(M)+j(M)=2n+1.\]
On the other hand, since \(A\) is an Auslander-regular \(\mathbb{N}\)-graded \(k\)-algebra (Remark 2.4), by [12, Theorem 6.3] we have
\[\operatorname{\underline{Ext}}_{A}^{i}(k,A)=\begin{cases}0,&i\neq 2n+1\\ k,&i=2n+1.\end{cases}\]
As a consequence, we have \(\operatorname{depth}(A)=\operatorname{gldim}(A)=2n+1\) and the \(\chi^{\circ}\) condition in section 4 is satisfied. Therefore, by Theorem 4.1 we have
\[\operatorname{pd}(M)+\operatorname{depth}(M)=\operatorname{depth}(A)=2n+1\]
for any finite graded left \(A\)-module \(M\).
Let us make the following definition.
**Definition 6.1**.: Let \(M\) be a nonzero finite graded left \(D^{\rm(h)}\)-module. We say that \(M\) is _Cohen-Macaulay_ if \(\operatorname{depth}(M)=d(M)\), or equivalently, if \(j(M)=\operatorname{pd}(M)\).
Let \(M\) be a nonzero finite left \(D\)-module. By Proposition 5.1, there exists a graded \(D^{\rm(h)}\)-module \(M^{\rm(h)}\) such that the graded Betti numbers of \(M^{\rm(h)}\), \(\operatorname{gr}^{L}M\), and \(\operatorname{gr}^{L}M^{\rm(h)}\) coincide. We abbreviate \(\operatorname{gr}^{L}M\) by \(\operatorname{gr}M\), and \(\operatorname{gr}^{L}M^{\rm(h)}\) by \(\operatorname{gr}M^{\rm(h)}\). As a consequence, we have
\[\operatorname{pd}(M^{\rm(h)})=\operatorname{pd}(\operatorname{gr }M)=\operatorname{pd}(\operatorname{gr}M^{\rm(h)})\leq 2n,\text{ and}\] \[n+1\leq d(M^{\rm(h)})=\dim(\operatorname{gr}M^{\rm(h)})=\dim( \operatorname{gr}M)+1\leq 2n+1.\]
Note also that
\[d(M^{\rm(h)})+j(M^{\rm(h)})=\operatorname{pd}(M^{\rm(h)})+\operatorname{depth}(M^{\rm(h)})=2n+1,\] \[\dim(\operatorname{gr}M^{\rm(h)})+j(\operatorname{gr}M^{\rm(h)})=\operatorname{pd}(\operatorname{gr}M^{\rm(h)})+\operatorname{depth}(\operatorname{gr}M^{\rm(h)})=2n+1,\text{ and}\] \[\dim(\operatorname{gr}M)+j(\operatorname{gr}M)=\operatorname{pd}(\operatorname{gr}M)+\operatorname{depth}(\operatorname{gr}M)=2n.\]
Therefore, we deduce that \(M^{\rm(h)}\) is Cohen-Macaulay if and only if \(\operatorname{gr}M^{\rm(h)}\) is Cohen-Macaulay if and only if \(\operatorname{gr}M\) is Cohen-Macaulay. This justifies the following definition.
**Definition 6.2**.: A finite left \(D\)-module \(M\) is _Cohen-Macaulay_ if \(\operatorname{gr}^{(\mathbf{1},\mathbf{1})}M\) is Cohen-Macaulay.
When \(M=D/I\) is a cyclic left \(D\)-module with \(I\) a left \(D\)-ideal, we have \(M^{\text{\rm(h)}}=A/I^{\text{\rm(h)}}\), \(\operatorname{gr}M^{\text{\rm(h)}}=S[h]/\operatorname{in}_{(\mathbf{1}, \mathbf{1})}I^{\text{\rm(h)}}\), and \(\operatorname{gr}M=S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}I\). By definition, the \(D\)-module \(D/I\) is thus Cohen-Macaulay if and only if the \(S\)-module \(S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}I\) is Cohen-Macaulay.
**Remark 6.3**.: Note that \(\operatorname{in}_{(\mathbf{1},\mathbf{1})}I\) is a homogeneous ideal in \(S\) and that the ground field \(k\) is infinite. Given a homogeneous ideal \(J\) in \(S\), it is well known that if \(S/\operatorname{in}_{\prec}(J)\) is Cohen-Macaulay with respect to a term order \(\prec\), then \(S/J\) is Cohen-Macaulay; see, for example, [16, Proposition 25.4]. The converse is not true in general. But if \(\operatorname{in}_{\prec}(J)\) is generated by square-free monomials, then \(S/J\) being Cohen-Macaulay implies that \(S/\operatorname{in}_{\prec}(J)\) is Cohen-Macaulay by [4, Corollary 2.7].
**Remark 6.4**.: If a \(D\)-module \(M\) is \(L\)-holonomic as in Remark 2.2(3), then \(\dim(\operatorname{gr}M)=n\). In this case, the module \(M\) is Cohen-Macaulay if and only if \(j(\operatorname{gr}M)=\operatorname{pd}(\operatorname{gr}M)=\operatorname{ depth}(\operatorname{gr}M)=n\).
## 7. Cohen-Macaulay GKZ systems
In this section, we work over an algebraically closed field \(\mathbb{C}\) of characteristic \(0\).
Let \(A=(a_{ij})\) be a \(d\times n\) integer matrix of rank \(d\) whose columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\in\mathbb{Z}^{d}\) are nonzero. We assume that the semigroup \(\mathbb{N}A\) is pointed with \(\mathbb{Z}A=\mathbb{Z}^{d}\). Consider the toric ideal
\[I_{A}=\langle\partial^{u}-\partial^{v}\mid Au=Av\rangle\]
associated to \(A\) in the polynomial ring \(R=\mathbb{C}[\partial_{1},\ldots,\partial_{n}]\) and the toric algebra \(\mathbb{C}[\mathbb{N}A]\cong R/I_{A}\).
For \(\beta\in\mathbb{C}^{d}\), consider the GKZ-hypergeometric ideal \(H_{A}(\beta):=D\langle I_{A},E-\beta\rangle\) in the Weyl algebra \(D=\mathbb{C}[x_{1},\ldots,x_{n}]\langle\partial_{1},\ldots,\partial_{n}\rangle\) where the Euler vector fields \(E=(E_{1},\ldots,E_{d})\) of \(A\) are defined by \(E_{i}:=\sum_{j}a_{ij}x_{j}\partial_{j}\) for \(i=1,\ldots,d\). The \(A\)-hypergeometric system with parameter vector \(\beta\) is the \(D\)-module \(M_{A}(\beta):=D/H_{A}(\beta)\).
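For a concrete instance (our running example, not taken from the paper): let

\[A=\begin{pmatrix}1&1&1\\ 0&1&2\end{pmatrix}.\]

Then \(I_{A}=\langle\partial_{1}\partial_{3}-\partial_{2}^{2}\rangle\) (take \(u=(1,0,1)\) and \(v=(0,2,0)\), so that \(Au=Av\)), the toric algebra is \(\mathbb{C}[\mathbb{N}A]\cong\mathbb{C}[s,st,st^{2}]\), the Euler operators are \(E_{1}=x_{1}\partial_{1}+x_{2}\partial_{2}+x_{3}\partial_{3}\) and \(E_{2}=x_{2}\partial_{2}+2x_{3}\partial_{3}\), and \(H_{A}(\beta)=D\langle\partial_{1}\partial_{3}-\partial_{2}^{2},\,E_{1}-\beta_{1},\,E_{2}-\beta_{2}\rangle\).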
Note that for any weight vector \((u,v)\in\mathbb{Z}^{2n}\) with \(u_{i}+v_{i}>0\), the initial form \(\operatorname{in}_{(u,v)}(E_{i}-\beta_{i})\) is equal to the sum \(\sum_{j}a_{ij}x_{j}\partial_{j}\) in the polynomial ring \(\mathbb{C}[x,\partial]\). We will sometimes interpret \(E_{i}\) as a polynomial in \(\mathbb{C}[x,\partial]\) or identify \(E_{i}\) with \(\operatorname{in}_{(u,v)}(E_{i}-\beta_{i})\) for convenience.
Let \(\operatorname{in}I_{A}\) be the initial ideal of \(I_{A}\) with respect to the total degree on \(R=\mathbb{C}[\partial]\). The purpose of this section is to prove the following theorem. Recall that \(S=\operatorname{gr}^{(\mathbf{1},\mathbf{1})}D=\mathbb{C}[x,\partial]\).
**Theorem 7.1**.: _If \(R/\operatorname{in}I_{A}\) is a Cohen-Macaulay \(R\)-module, then \(\operatorname{gr}^{(\mathbf{1},\mathbf{1})}M_{A}(\beta)=S/\operatorname{in}_{ (\mathbf{1},\mathbf{1})}H_{A}(\beta)\) is a Cohen-Macaulay \(S\)-module, that is, \(M_{A}(\beta)\) is a Cohen-Macaulay \(D\)-module._
**Remark 7.2**.:
1. As a special case of Theorem 7.1, if \(I_{A}\) is a homogeneous ideal in \(R\) so that \(R/I_{A}\) is Cohen-Macaulay, then \(M_{A}(\beta)\) is Cohen-Macaulay.
2. It is known that if the Gröbner deformation \(\mathbb{C}[\partial]/\operatorname{in}I_{A}\) is Cohen-Macaulay, then so is \(\mathbb{C}[\partial]/I_{A}\)[2, Proposition 1.6.2]. However, the converse of this statement is not true in general (see Example 7.6).
3. The Cohen-Macaulay condition on the toric ring \(\mathbb{C}[\partial]/I_{A}\) is equivalent to the absence of rank-jumps in the GKZ hypergeometric system \(H_{A}(\beta)\)[13, Corollary 9.2]. It is also known in [19, Corollary 4.13] that if \(\beta\) is not a rank-jumping parameter, the \(L\)-characteristic cycles of \(M_{A}(\beta)\) are independent of \(\beta\). In this case, the Cohen-Macaulay condition of \(M_{A}(\beta)\) does not depend on \(\beta\).
4. The converse statement of Theorem 7.1 does not hold in general. See Example 7.6.
Let \(L=(L_{\partial_{1}},\ldots,L_{\partial_{n}})\in\mathbb{Q}^{n}\) be a weight vector on \(R\). This induces an increasing filtration of \(\mathbb{C}\)-vector spaces on \(R\) by \([\partial^{u}\in L_{i}R]\Leftrightarrow[L\cdot u\leq i]\). For \(f=\sum c_{u}\partial^{u}\in L_{l}R\setminus L_{l-1}R\), we define the \(L\)-degree of \(f\) as \(\deg^{L}(f):=l\) and the \(L\)-initial form of \(f\) as \(\operatorname{in}_{L}(f)=\sum_{L\cdot u=l}c_{u}\partial^{u}\). Also, for an ideal \(I\) of \(R\), we define the initial ideal \(\operatorname{in}_{L}I=\langle\operatorname{in}_{L}(f)\mid f\in I\rangle\). The associated graded ring \(\operatorname{gr}^{L}R\) is isomorphic to \(R\). By abuse of notation, we identify the associated graded module \(\operatorname{gr}^{L}(R/I)\) with \(R/\operatorname{in}_{L}I\).
The geometry of \(\operatorname{Spec}(R/\operatorname{in}_{L}I)\) is characterized by the \((A,L)\)-umbrella \(\Phi_{A}^{L}\) introduced in [19, Definition 2.7], which is the set of faces of \(\operatorname{conv}_{H}(\{0,\mathbf{a}_{1}^{L},\ldots,\mathbf{a}_{n}^{L}\})\) that do not contain zero where \(\mathbf{a}_{i}^{L}=\mathbf{a}_{i}/L_{\partial_{i}}\) for \(i=1,\ldots,n\). Here we embed \(\{0,\mathbf{a}_{1}^{L},\ldots,\mathbf{a}_{n}^{L}\}\subset\mathbb{Q}^{d}\) into \(\mathbb{P}_{\mathbb{Q}}^{d}\) and consider the convex hull of \(\{0,\mathbf{a}_{1}^{L},\ldots,\mathbf{a}_{n}^{L}\}\) in \(\mathbb{P}_{\mathbb{Q}}^{d}\) relative to a hyperplane \(H\) that separates \(\{0\}\) and \(\{\mathbf{a}_{1}^{L},\ldots,\mathbf{a}_{n}^{L}\}\).
For \(\tau\in\Phi_{A}^{L}\), we identify it with \(\{j\mid\mathbf{a}_{j}^{L}\in\tau\}\), or with \(\{\mathbf{a}_{j}\mid\mathbf{a}_{j}^{L}\in\tau\}\), or with the corresponding submatrix of \(A\). By \(\Phi_{A}^{L,d-1}\), we denote the subset of faces of dimension \(d-1\). Denote by \(R_{\tau}=\mathbb{C}[\partial_{\tau}]\) the polynomial subring of \(R\) generated by the variables \(\partial_{i}\), \(i\in\tau\). Consider the toric ideal \(I_{\tau}\) in \(R_{\tau}\) and the ideal \(J_{\tau}:=\langle\partial_{i}\mid i\notin\tau\rangle\) in \(R\). On the other hand, the \(d\)-torus \(\mathbb{T}:=(\mathbb{C}^{*})^{d}\) with coordinates \(t:=(t_{1},\ldots,t_{d})\) acts on \(\operatorname{Spec}R\) by \(t\cdot\xi:=(t^{\mathbf{a}_{1}}\xi_{1},\ldots,t^{\mathbf{a}_{n}}\xi_{n})\), which induces a \(\mathbb{Z}^{d}\)-grading on \(R\) by \(\deg(\partial_{i})=\mathbf{a}_{i}\). Define \(\mathbf{1}_{A}^{\tau}\in\operatorname{Spec}R\) by \((\mathbf{1}_{A}^{\tau})_{i}:=1\) if \(\mathbf{a}_{i}\in\tau\) and \((\mathbf{1}_{A}^{\tau})_{i}:=0\) if \(\mathbf{a}_{i}\notin\tau\). Denote by \(O_{A}^{\tau}\) the \(\mathbb{T}\)-orbit of \(\mathbf{1}_{A}^{\tau}\) and by \(\overline{O_{A}^{\tau}}\) its Zariski closure.
**Theorem 7.3**.: _[_19_, Theorem 2.14]_ _The set of \(\mathbb{Z}^{d}\)-graded prime ideals of \(R\) containing \(\operatorname{in}_{L}I_{A}\) equals \(\{RI_{\tau}+J_{\tau}\mid\tau\in\Phi_{A}^{L}\}\), and we have_
\[\operatorname{Spec}(R/\operatorname{in}_{L}I_{A})=\operatorname{Var}( \operatorname{in}_{L}I_{A})=\bigcup_{\tau\in\Phi_{A}^{L,d-1}}\overline{O_{A}^{ \tau}}=\bigsqcup_{\tau\in\Phi_{A}^{L}}O_{A}^{\tau}.\]
Notice that Theorem 7.3 holds when replacing \(\mathbb{C}\) by any algebraically closed field \(K\) of characteristic \(0\). Let \(K=\overline{\mathbb{C}(x)}\) be the algebraic closure of \(\mathbb{C}(x)=\mathbb{C}(x_{1},\ldots,x_{n})\). We have the following lemma.
**Lemma 7.4**.: _The Euler operators \(E_{1},\ldots,E_{d}\) form a linear system of parameters of \(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})\)._
Proof.: It follows from Theorem 7.3 that every associated prime of \(\operatorname{in}_{L}I_{A}\) has dimension \(d\), so we have \(\dim\left(R/\operatorname{in}_{L}I_{A}\right)=d\). Moreover, since \(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})=(R/\operatorname{in}_{L} I_{A})\otimes\mathbb{C}(x)\), we also have \(\dim\left(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})\right)=d\). So it suffices to show that \(\mathbb{C}(x)[\partial]/\left(\operatorname{in}_{L}(I_{A})+\langle E\rangle\right)\) is supported at the origin, namely \(\operatorname{Var}_{\mathbb{C}(x)}\left(\operatorname{in}_{L}I_{A}+\langle E \rangle\right)=\{0\}\).
Applying Theorem 7.3 to \(K[\partial]/\operatorname{in}_{L}I_{A}\), we have \(\operatorname{Var}_{K}(\operatorname{in}_{L}I_{A})=\bigsqcup_{\tau\in\Phi_{A}^{ L}}O_{A}^{\tau}\). Let \(\tau\in\Phi_{A}^{L}\) be such that \(E_{1},\ldots,E_{d}\) all vanish at \(\mathbf{1}_{A}^{\tau}\in\operatorname{Var}_{K}(\operatorname{in}_{L}I_{A})\). If \(\tau\neq\emptyset\), rearrange \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\) so that \(\mathbf{a}_{1}\in\tau\) and that \(\mathbf{a}_{1},\ldots,\mathbf{a}_{d}\) are \(\mathbb{Q}\)-linearly independent. Let \(\sigma\) be the submatrix of \(A\) whose columns are \(\mathbf{a}_{1},\ldots,\mathbf{a}_{d}\). Denote by \((s_{1},\ldots,s_{d})\) the first row of the invertible matrix \(\sigma^{-1}\). Then the operator
\[\sum_{i=1}^{d}s_{i}E_{i}\in\left(x_{1}\partial_{1}+\sum_{j=d+1}^{n}\mathbb{Q}x_{ j}\partial_{j}\right)\]
also vanishes at \(\mathbf{1}_{A}^{\tau}\). From this we deduce that \(x_{1}\in\sum_{i\in\Lambda}\mathbb{Q}x_{i}\) for some nonempty \(\Lambda\subseteq\{d+1,\ldots,n\}\), which is not possible.
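To illustrate the argument with the example above (our computation): for \(A=\begin{pmatrix}1&1&1\\ 0&1&2\end{pmatrix}\), \(L=\mathbf{1}\), and the facet \(\tau=\{1,2,3\}\), we have \(\mathbf{1}_{A}^{\tau}=(1,1,1)\) and \(\sigma=(\mathbf{a}_{1}\ \mathbf{a}_{2})=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\), whose inverse has first row \((1,-1)\). Hence

\[E_{1}-E_{2}=x_{1}\partial_{1}-x_{3}\partial_{3},\]

and its vanishing at \(\mathbf{1}_{A}^{\tau}\) would force \(x_{1}=x_{3}\), which is impossible since the \(x_{i}\) are algebraically independent.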
The following lemma is motivated by [18, Lemma 4.3.7].
**Lemma 7.5**.: _For a matrix \(A\) and a weight vector \(L\) on \(R=\mathbb{C}[\partial]\), the following conditions are equivalent:_
1. \(\mathbb{C}[\partial]/\operatorname{in}_{L}(I_{A})\) _is Cohen-Macaulay;_
2. \(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})\) _is Cohen-Macaulay;_
3. \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}(I_{A})\) _is Cohen-Macaulay;_
4. _The Euler operators_ \(E_{1},\ldots,E_{d}\) _form a regular sequence on_ \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}(I_{A})\)_._
5. _The Euler operators_ \(E_{1},\ldots,E_{d}\) _form a regular sequence on_ \(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})\)_._
Proof.: Since \(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})=(R/\operatorname{in}_{L} I_{A})\otimes\mathbb{C}(x)\) and \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}(I_{A})=(R/\operatorname{in}_{L} I_{A})\otimes\mathbb{C}[x]\), the equivalence of (1), (2), and (3) follows from [3, Theorem 2.1.9, 2.1.10].
Let \(P_{\partial}\) be the prime ideal in \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}I_{A}\) generated by the images of \(\partial_{1},\ldots,\partial_{n}\). Notice that the multiplicatively closed set of elements not in \(P_{\partial}\) is equal to \(\{f+\operatorname{in}_{L}I_{A}\mid f\in\mathbb{C}[x]\setminus\{0\}\}\). So \(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})\) is isomorphic to the localization of \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}(I_{A})\) at \(P_{\partial}\), namely
\[\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})=\left(\mathbb{C}[x][ \partial]/\operatorname{in}_{L}(I_{A})\right)_{P_{\partial}}. \tag{7.1}\]
Since regular sequences are preserved by localization, (4) implies (5).

Since \(\dim\left(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A})\right)=d\), by [3, Theorem 1.2.5] the existence of a regular sequence of length \(d\) implies that \(\operatorname{depth}\left(\mathbb{C}(x)[\partial]/\operatorname{in}_{L}(I_{A })\right)=d\). This shows that (5) implies (2).
We conclude the proof by showing that (3) implies (4). Let \(J_{E}\subset P_{\partial}\) be the ideal in \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}I_{A}\) generated by the images of \(E_{1},\ldots,E_{d}\). By Lemma 7.4 and (7.1), we have
\[\operatorname{height}J_{E}=\operatorname{height}P_{\partial}=\operatorname{ height}(J_{E})_{P_{\partial}}=\dim\left(\mathbb{C}(x)[\partial]/ \operatorname{in}_{L}(I_{A})\right)=d.\]
Since \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}(I_{A})\) is Cohen-Macaulay by (3), it follows from [3, Theorem 2.1.6] that the ideal \(J_{E}\) is unmixed and that \(E_{1},\ldots,E_{d}\) form a regular sequence on \(\mathbb{C}[x][\partial]/\operatorname{in}_{L}(I_{A})\).
We are now ready to prove Theorem 7.1.
Proof of Theorem 7.1.: Recall that \(R=\mathbb{C}[\partial]\) and \(S=\operatorname{gr}^{(\mathbf{1},\mathbf{1})}D=\mathbb{C}[x,\partial]\). For the all-ones vector \(\mathbf{1}\in\mathbb{Z}^{n}\), notice that \(\operatorname{in}I_{A}=\operatorname{in}_{(\mathbf{1})}I_{A}\) in \(R\) and that \(\operatorname{in}_{(\mathbf{1},\mathbf{1})}I_{A}=S(\operatorname{in}I_{A})\) in \(S\).

Since \(R/\operatorname{in}I_{A}\) is Cohen-Macaulay, by Lemma 7.5 the initial forms \(\operatorname{in}_{(\mathbf{1},\mathbf{1})}(E-\beta)=E\) form a regular sequence on \(S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}I_{A}\). So the assumption of [18, Theorem 4.3.5] is satisfied and we have

\[\operatorname{in}_{(\mathbf{1},\mathbf{1})}H_{A}(\beta)=\operatorname{in}_{(\mathbf{1},\mathbf{1})}I_{A}+\langle E\rangle.\]
Since \(S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}I_{A}=(R/\operatorname{in}I_{A}) \otimes\mathbb{C}[x]\) is Cohen-Macaulay, it follows from [3, Theorem 2.1.3] that \(S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}H_{A}(\beta)\) is Cohen-Macaulay.
We conclude this article with examples illustrating the relations between the Cohen-Macaulay condition on the three modules \(\mathbb{C}[\mathbb{N}A]=R/I_{A}\), \(R/\operatorname{in}I_{A}=R/\operatorname{in}_{(\mathbf{1})}I_{A}\), and
\[\operatorname{gr}^{(\mathbf{1},\mathbf{1})}M_{A}(\mathbf{0})=S/\operatorname{ in}_{(\mathbf{1},\mathbf{1})}H_{A}(\mathbf{0})=S/\operatorname{in}_{(\mathbf{1}, \mathbf{1})}D\langle I_{A},E\rangle.\]
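As a positive instance (our illustration): for \(A=\begin{pmatrix}1&1&1\\ 0&1&2\end{pmatrix}\), the ideal \(I_{A}=\langle\partial_{1}\partial_{3}-\partial_{2}^{2}\rangle\) is homogeneous, so \(\operatorname{in}I_{A}=I_{A}\), and \(R/I_{A}\) is a hypersurface ring, hence Cohen-Macaulay. Theorem 7.1 then shows that \(M_{A}(\beta)\) is a Cohen-Macaulay \(D\)-module for every \(\beta\).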
**Example 7.6**.: The following table includes 5 different examples when \(\dim\mathbb{C}[\mathbb{N}A]=2\).
Here, since \(\dim\mathbb{C}[\mathbb{N}A]=2\), it follows from [3, Corollary 6.2.6] that \(\mathbb{C}[\mathbb{N}A]\) is Cohen-Macaulay if and only if \(H^{2}_{\mathfrak{m}}(\mathbb{C}[\mathbb{N}A])\) is the only nonvanishing local cohomology of \(\mathbb{C}[\mathbb{N}A]\). Moreover, the local cohomology of the semigroup ring \(\mathbb{C}[\mathbb{N}A]\) can be computed by the Ishida complex [3, Theorem 6.2.5]. More concretely, let \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) be the columns of \(A\) that lie on the two rays of the cone \(\mathbb{Q}_{\geq 0}A\). The Cohen-Macaulay condition of \(\mathbb{C}[\mathbb{N}A]\) is equivalent to the condition \((\mathbb{N}A-\mathbb{N}\mathbf{a}_{1})\cap(\mathbb{N}A-\mathbb{N}\mathbf{a}_ {2})=\mathbb{N}A\).
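A classical instance where this criterion fails (our illustration, not reproduced from the paper's table): for \(A=\begin{pmatrix}1&1&1&1\\ 0&1&3&4\end{pmatrix}\), the element \((1,2)\) satisfies \((1,2)+(1,0)=(1,1)+(1,1)\in\mathbb{N}A\) and \((1,2)+(1,4)=(1,3)+(1,3)\in\mathbb{N}A\), yet \((1,2)\notin\mathbb{N}A\); hence \(\mathbb{C}[\mathbb{N}A]\cong\mathbb{C}[s,st,st^{3},st^{4}]\) is not Cohen-Macaulay, and indeed \(\beta=(1,2)\) is known to be a rank-jumping parameter for this \(A\).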
On the other hand, we use the Auslander-Buchsbaum formula and the projective dimension of \(R/\operatorname{in}I_{A}\) or \(S/\operatorname{in}_{(\mathbf{1},\mathbf{1})}\langle I_{A},E_{1},E_{2}\rangle\) to test their Cohen-Macaulay condition with the help of Macaulay2 [8] and the Dmodules package written by Anton Leykin and Harrison Tsai.
|
2310.20577 | Offloading Real-Time Tasks in IIoT Environments under Consideration of
Networking Uncertainties | Offloading is a popular way to overcome the resource and power constraints of
networked embedded devices, which are increasingly found in industrial
environments. It involves moving resource-intensive computational tasks to a
more powerful device on the network, often in close proximity to enable
wireless communication. However, many Industrial Internet of Things (IIoT)
applications have real-time constraints. Offloading such tasks over a wireless
network with latency uncertainties poses new challenges.
In this paper, we aim to better understand these challenges by proposing a
system architecture and scheduler for real-time task offloading in wireless
IIoT environments. Based on a prototype, we then evaluate different system
configurations and discuss their trade-offs and implications. Our design showed
to prevent deadline misses under high load and network uncertainties and was
able to outperform a reference scheduler in terms of successful task
throughput. Under heavy task load, where the reference scheduler had a success
rate of 5%, our design achieved a success rate of 60%. | Ilja Behnke, Philipp Wiesner, Paul Voelker, Odej Kao | 2023-10-31T16:12:21Z | http://arxiv.org/abs/2310.20577v1 | # Offloading Real-Time Tasks in IIoT Environments under Consideration of Networking Uncertainties
###### Abstract.
Offloading is a popular way to overcome the resource and power constraints of networked embedded devices, which are increasingly found in industrial environments. It involves moving resource-intensive computational tasks to a more powerful device on the network, often in close proximity to enable wireless communication. However, many Industrial Internet of Things (IIoT) applications have real-time constraints. Offloading such tasks over a wireless network with latency uncertainties poses new challenges.
In this paper, we aim to better understand these challenges by proposing a system architecture and scheduler for real-time task offloading in wireless IIoT environments. Based on a prototype, we then evaluate different system configurations and discuss their trade-offs and implications. Our design showed to prevent deadline misses under high load and network uncertainties and was able to outperform a reference scheduler in terms of successful task throughput. Under heavy task load, where the reference scheduler had a success rate of 5%, our design achieved a success rate of 60%.
Keywords: real-time scheduling, cyber-physical systems, task offloading, industrial internet of things

Footnote: © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work; it is posted here for your personal use, not for redistribution. The definitive Version of Record was published in _2nd International Workshop on Middleware for the Edge (MiddlewareEdge '23)_, December 11, 2023, Bologna, Italy, [https://doi.org/10.1145/3630180.3631202](https://doi.org/10.1145/3630180.3631202)
## 1. Introduction
The Industrial Internet of Things (IIoT) aims to connect embedded devices in industrial environments to enable remote control, mobile computing, monitoring, and collaborative problem solving. Examples can be found in the cyber-physical systems used in smart factories, logistics, and autonomous vehicles (Bogra et al., 2016; Bogra et al., 2016), where devices often operate under hard real-time constraints. While real-time computing in automation and embedded systems is well researched and deployed, its combination with packet-based wireless IP networking presents new challenges and opportunities.
Devices used in IIoT scenarios typically have resource and power constraints that limit the type and amount of local computation. For example, resource-intensive tasks such as object recognition in video may require specialized hardware and more power than an embedded device can reasonably provide. Another example for this can be found in close-proximity model training, where sensor data is accumulated and processed on edge servers (Bogra et al., 2016). A common way to deal with these limitations in practice is to offload individual tasks to more powerful computers in the local area (Bogra et al., 2016; Kao et al., 2016; Kao et al., 2016). However, for tasks with real-time requirements, this can be very challenging due to unpredictable latencies in wireless networks, especially when considering mobile devices.
Scheduling real-time tasks in distributed systems has received increasing attention (Bogra et al., 2016; Kao et al., 2016; Kao et al., 2016), but related work generally assumes very reliable and predictable communication. However, especially with wireless connections, the exact prediction of the expected latency is difficult. Various efforts have been made to develop methods to guarantee end-to-end delay in wireless networks, but some form of uncertainty always remains (Bogra et al., 2016; Kao et al., 2016; Kao et al., 2016). In addition, efficiently deploying workloads in a distributed system is difficult, especially when these workloads occur sporadically (Bogra et al., 2016). In this work, we aim to better understand the tradeoffs and implications of offloading real-time tasks to local edge resources in IIoT environments connected by wireless networks. To this end, we make the following _contributions_:
* We present a system architecture for real-time task offloading in IIoT environments
* We discuss and implement a distributed real-time scheduler incorporating network latency uncertainties
* We evaluate relevant metrics using a prototype of our system within a virtual environment
The remainder of this paper is structured as follows. Section 2 surveys the related work. Section 3 explains our system architecture. Section 4 proposes a scheduler to perform latency-aware offloading. Section 5 evaluates different system configurations to quantify trade-offs between performance and reliability. Section 6 concludes the paper.
## 2. Related Work
In this section, we briefly present similar work in the area of real-time task offloading.
Chen et al. (Chen et al., 2016) use fog computing for energy-optimal dynamic computation offloading in IIoT environments. Computation time, energy, and resource utilization are considered and optimized. Real-time guarantees are not taken into account. Similarly, Elashri and Azim (Elashri and Azim, 2016) use offloading of real-time tasks to cloud and fog resources for energy efficiency. They propose two algorithms for making an efficient offloading decision for soft and weakly hard (firm) real-time applications while guaranteeing task schedulability.
Ma et al. (Ma et al., 2016) identify a trade-off problem between reliability and latency in IIoT applications. In their work, they propose an offloading framework for visual applications that considers both
performance metrics. A scheme is presented that uses and maximizes a utility function that balances reliability and latency. Shukla and Munir (Shukla and Munir, 2018) present an offloading architecture for IoT devices. Computers on the same local network as the IoT device process data-intensive tasks while meeting their real-time deadlines.
Liu et al. address the problem of offloading data- and compute-intensive computations for vehicular networks (Liu et al., 2019). Using a fog/cloud architecture, tasks are classified and distributed according to latency requirements. Roadside units as well as other vehicles can act as compute nodes. They formulate a real-time task offloading model that maximizes the task service ratio and propose a real-time task offloading algorithm that works cooperatively.
Hong et al. present a cross-layer cooperative offloading strategy for multi-hop scenarios (Hong et al., 2019). IIoT devices, local edge servers, as well as cloud resources are considered. They address QoS-aware computation offloading in a distributed manner using game-theoretic approaches.
In the following sections, we complement these works by presenting a simple architecture and real-time scheduler for IIoT environments, focusing on the incorporation of network delays as well as the tradeoff between predictability and throughput.
## 3. System Architecture
The goal of this work is a system design for real-time task offloading for mobile embedded systems and the development and evaluation of a scheduler for this purpose. Figure 1 shows an overview of the design. Potentially mobile machines (clients) submit tasks with deadlines to the scheduler, which checks time feasibility and - if accepted - schedules the tasks to worker queues. The workers run on the same compute cluster as the scheduler and can be preempted by it. Each worker computes one task at a time and returns the result directly to the source client while signaling task completion to the scheduler.
The considered environments contain a scalable number of (semi-) autonomous devices that reside and potentially move in a local area spanned by an IP network. Based on these settings, the following task and scheduler properties can be derived.
* Tasks arrive _sporadically_: The devices that submit the tasks are fully or semi-autonomous, and therefore tasks arrive at runtime, irregularly and unpredictably from the scheduler's perspective.
* The scheduling must therefore happen _online_ during system run-time, with _dynamic priorities_.
* The clients might not be fully independent in the overall setting; however, their interaction is assumed to happen outside of the task context. Therefore, the tasks are independent and _no precedence constraints_ need to be considered.
* Tasks have _hard deadlines_ that are known at the time the task is submitted.
* Tasks are _fully preemptable_.
* Finally, the computation model is that of a _homogeneous multiprocessor_, with the major difference that the processing entities (workers) are distributed and tasks are dispatched via the local network.
The following subsections describe the design of the involved entities. Design decisions and the operation of the scheduler are explained in detail in Section 4.
### Tasks
The _task_ is the central data structure that holds all the necessary information about the workload that a client wants to offload. It is modeled after real-time operating system processes and is shared and sent between the different entities in the system. In an IIoT environment such as a smart factory, we assume that all entities and workloads are known before the system is set up. Therefore, clients can assume that the programs or binaries necessary to compute all the workloads that a client wants to offload are present on the workers. This makes a task submission much like a _Remote Procedure Call (RPC)_. A task is generated initially by the client, who sends it to the scheduler, which in turn eventually sends it to a worker. It is serialized and deserialized for transportation over the network, but all entities work on the same shared Task data structure.
Definition 1.: _Let \(\tau\) be a task offloaded by a client, scheduled, and computed by a worker. For the purpose of this work it is characterized by the tuple_
\[\tau=(C,T_{d},t_{r},t_{cs},t_{w},t_{e},\mathcal{P})\]
* \(C\) _the source client._
* \(T_{d}\) _the absolute deadline._
* \(t_{r}\) _the initial relative deadline._
* \(t_{cs}\) _the connection setup time._
* \(t_{w}\) _the worst-case execution time._
* \(t_{e}\) _the elapsed execution time._
* \(\mathcal{P}\) _the set of parameters._
The client \(C\) specifies its own id and its IP address and port on which it will listen for the connection that the worker will establish. \(t_{r}\) is the relative deadline at the time the client creates the task and sends it to the scheduler. This value, along with the connection setup time \(t_{cs}\) measured by the client, is used by the scheduler to calculate network latency. The duration \(t_{e}\) is set and used by the scheduler to keep track of the execution time when a task is computed by a worker and possibly preempted. The set of parameters \(\mathcal{P}\) contains the command line arguments and payload data that the worker passes to the actual executable.
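For illustration, such a task can be represented in Rust, the language of our implementation (cf. Section 5), roughly as follows; the field names and time types are a simplified sketch rather than the actual definition, since transporting tasks over the network would additionally require a serializable time representation:

```rust
use std::net::SocketAddr;
use std::time::{Duration, Instant};

/// Source client of a task: its id and the address on which it
/// listens for the connection that the worker will establish.
pub struct Client {
    pub id: u64,
    pub addr: SocketAddr,
}

/// Offloadable task, modeled after a real-time OS process and shared
/// between clients, the scheduler, and workers.
pub struct Task {
    pub client: Client,              // C: source client
    pub deadline: Instant,           // T_d: absolute deadline
    pub relative_deadline: Duration, // t_r: time to deadline at creation
    pub conn_setup: Duration,        // t_cs: measured by the client
    pub wcet: Duration,              // t_w: worst-case execution time
    pub elapsed: Duration,           // t_e: tracked by the scheduler
    pub params: Vec<String>,         // P: arguments/payload for the executable
}
```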
_Task Dispatch._ There are two sensible design choices for when to dispatch consecutively scheduled tasks to the assigned workers:
1. Dispatch a task to the worker as soon as it is assigned to it, and let the worker maintain an internal task queue.
2. Dispatch the next task only after the previous one is finished.
Figure 1. System Architecture Overview
The first option has the advantage that consecutively scheduled tasks are present on the worker and can be computed with minimal interruption in between. On the other hand, the scheduler sends lower priority tasks to the worker to add to its queue while it is working on the highest priority task. The second option interrupts the worker only when a task with a higher priority than the one the worker is currently processing arrives. For the remainder of this work, the second approach is used to ensure minimal disruption and maximum predictability of the execution time of the running task. We assume that the connection between the workers and the scheduler is wired and very fast (compared to the wireless connection of the clients), so that task dispatching causes little idle time for the workers. In a scenario where tasks come with large input payloads, such as multimedia, the first approach may be more appropriate.
_Task Payloads._ Tied to the task dispatch approach is the question of what path the task payloads take. Task payloads refer to the input parameters that the client sends with the task and the task result that is sent back to the client. There are three possible paths for the input parameters:
1. The client passes only the task metadata to the scheduler, which then communicates the assigned worker back to the client, so it can send the input parameters directly to the worker.
2. The client sends the task together with its input parameters to the scheduler, which eventually passes them on to the assigned worker.
3. The client uploads the input parameters to a shared storage outside the scheduler that all workers have access to.
Based on the task dispatch handling decision, the second option is chosen for task input parameter handling. In a scenario where the task input parameters consist of large amounts of data, such as multimedia, the scheduler might become a bottleneck if it has to handle the distribution of this data. In this case, one of the other approaches might be more appropriate. The task result payload is only relevant to the client. Therefore, the worker sends it directly to the client.
### Clients
Clients are the entities that generate _tasks_ they want to offload. They are designed as a software library that provides an interface for submitting tasks. Client machines are mobile embedded devices that perform some kind of physical action and are connected to the rest of the network via a wireless link. They implement fallback behavior in case a task is rejected by the scheduler.
_Task Configuration._ The clients are responsible for setting the following properties of offloaded tasks, as defined in Section 3.1:
* Execution time
* Deadline
* Result payload size
_Task Rejections._ Submitted tasks are either accepted or rejected by the scheduler. If a task is accepted, the client can assume that the task result will be available before the deadline. A result that arrives after the deadline constitutes a system failure and must generally be avoided. Therefore, if the scheduler cannot schedule the task on one of the workers without coming closer to the deadline than a certain uncertainty window, it rejects the task outright. The client implements fallback behavior for this case, which causes less disruption than a missed deadline.
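A minimal sketch of this client-side logic, with purely illustrative names:

```rust
/// The scheduler's admission decision for a submitted task.
pub enum Admission {
    Accepted, // the result is guaranteed to arrive before the deadline
    Rejected, // timeliness cannot be guaranteed; rejected as early as possible
}

pub fn on_admission(decision: Admission) {
    match decision {
        // Wait for the worker to connect and deliver the result in time.
        Admission::Accepted => wait_for_result(),
        // Run the degraded local path; cheaper than a missed deadline.
        Admission::Rejected => run_local_fallback(),
    }
}

fn wait_for_result() { /* listen on the client's advertised address */ }
fn run_local_fallback() { /* on-device approximation of the task */ }
```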
### Workers
Workers compute the offloaded tasks distributed by the scheduler. They can be thought of as distributed processing cores: a worker processes exactly one task at a time and is fully utilized by that execution, which ensures predictable task execution times. Worker machines are edge resources that are physically close to the scheduler host and reside on the same local area network. A worker nevertheless listens for incoming tasks even while processing one, since tasks need to be preemptable in a setting with deadlines.
Workers connect to the scheduler at startup and listen for tasks. Upon receiving a task, the worker begins executing the task with the parameters provided by the client, but continues to listen for incoming tasks to allow for preemption by the scheduler. When the worker receives a task while still processing an earlier task, the earlier task gets preempted until the new task is completed.
_Worker Operation._ After the worker successfully connects to the scheduler, it enters the main loop, where it waits for tasks to be sent by the scheduler through the open connection. When a task is received, a new thread is started to handle that task while the main loop continues to listen for messages from the scheduler.
_Worker Preparation._ The task handler running in the new thread then starts to prepare the task computation. The TCP 3-way handshake takes a non-negligible amount of time when performed on a connection with a latency in the double-digit millisecond range (Kipper, 2017). For this reason, the worker starts establishing the connection to the client immediately after receiving a task, in parallel with the actual task computation. Since the worker sends the computation results of a task directly to the client, this ensures that the task result can be sent with the shortest possible delay once the task has finished computing.
_Worker Computation._ The worker computes a task by starting the task executable as a separate operating system process. Once the process is started and a process id (PID) is returned by the operating system, the running task and its PID are pushed into the internal task queue, and the thread sleeps until the process has exited. If another task is already running, the worker lets the operating system suspend that process and marks it as preempted in the task queue; only then is the new process for the new task started, and the thread goes to sleep until the new process has exited.
Once the process has exited, the worker reads the computation result and packages it for transport to the client. The worker also records the actual execution time (excluding the time spent being preempted). The worker notifies the scheduler of having finished the task and sends the task result to the client. However, this needs an established connection to the client, which was started in parallel with the task computation. If the connection setup takes longer than the task computation, the worker must wait until the open connection is available. Finally, if some task was preempted, the halted process is resumed.
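On Unix-like systems, letting the operating system suspend and resume the task process can be realized with the SIGSTOP/SIGCONT signals; the following Rust sketch (assuming the libc crate) shows one plausible way to do this, although the actual worker code may differ:

```rust
use std::process::{Child, Command};

/// Preempt a running task by letting the OS stop its process.
fn preempt(task: &Child) {
    // SAFETY: sending SIGSTOP merely stops the target process.
    let _ = unsafe { libc::kill(task.id() as libc::pid_t, libc::SIGSTOP) };
}

/// Resume a previously preempted task.
fn resume(task: &Child) {
    let _ = unsafe { libc::kill(task.id() as libc::pid_t, libc::SIGCONT) };
}

fn main() {
    // A low-priority task is running when a higher-priority one arrives.
    let mut low = Command::new("sleep").arg("10").spawn().expect("spawn task");
    preempt(&low); // suspend it and run the higher-priority task to completion
    resume(&low);  // then continue the preempted task
    low.wait().expect("task exits");
}
```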
## 4. The Scheduler
This section explains the design choices and operation of the scheduler, which dispatches tasks submitted by clients to workers. After
parsing a task message into the corresponding task structure, the _time to deadline_ is calculated from the absolute deadline \(T_{d}\) of the task and the current system time \(T_{s}\). For each incoming task, the scheduler then derives the laxity \(=(T_{d}-T_{s})-(t_{w}-t_{e})\) and the density \(=(t_{w}-t_{e})/(T_{d}-T_{s})\) to make a scheduling decision.
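A direct transcription of these two quantities, as one might write it in Rust (an illustrative helper, not the exact code of our implementation; it assumes the deadline lies in the future):

```rust
use std::time::{Duration, Instant};

/// Remaining execution demand of a task: t_w - t_e.
fn remaining(wcet: Duration, elapsed: Duration) -> Duration {
    wcet.saturating_sub(elapsed)
}

/// laxity = (T_d - T_s) - (t_w - t_e): the slack left before the task
/// would have to run without any interruption.
fn laxity(deadline: Instant, now: Instant, wcet: Duration, elapsed: Duration) -> Duration {
    deadline
        .saturating_duration_since(now)
        .saturating_sub(remaining(wcet, elapsed))
}

/// density = (t_w - t_e) / (T_d - T_s): the fraction of the remaining
/// window that the task still needs.
fn density(deadline: Instant, now: Instant, wcet: Duration, elapsed: Duration) -> f64 {
    let window = deadline.saturating_duration_since(now).as_secs_f64();
    remaining(wcet, elapsed).as_secs_f64() / window
}
```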
### Scheduling Algorithm: Partitioned EDF
Based on the task model and the assumed environment, only dynamic-priority, preemptive scheduling algorithms such as _Earliest Deadline First (EDF)_ and _Least Laxity First (LLF)_ are suitable. LLF has the major drawback of thrashing (Liang et al., 2017), which is exacerbated in this scenario: the high frequency of context switches caused by thrashing is already problematic in a general-purpose operating system, where context switches are expensive. In our scenario, where each preemption not only causes a context switch on the worker but also requires communication, the effect has an even greater impact on overall system performance. For this reason, EDF was considered the more reasonable algorithm for our scheduler; for the sake of completeness, we note that enhanced versions of LLF exist which aim to avoid thrashing.
In a multiprocessor environment, EDF works either with a global task queue (global EDF) or with partitioned queues for each worker (partitioned EDF). In addition to the non-optimality of global EDF due to Dhall's effect (Dahl, 2017), a global queue also makes task acceptance testing more difficult: When a task is added to the global task queue and given its priority, it is not immediately clear which worker will eventually pick up the task, at what time, and whether this is early enough to meet the deadline. It would be necessary to simulate separate run queues for each worker to find the next worker to become free when the new task eventually becomes the highest priority task in the global queue. Since the scheduler must decide whether it can guarantee the timely execution of tasks as they arrive, partitioned scheduling is more appropriate. Partitioned EDF requires an algorithm or heuristic to assign a task to a worker when multiple workers can meet the task deadline. First-fit, worst-fit, and best-fit are the heuristics considered. Which one to use will depend on the circumstances and will be further explored in the evaluation.
### Deadline Adjustment & Acceptance Checks
When a new task arrives at time \(t\), the scheduler adjusts the deadline set by the client to account for network latency. In our setting, the communication delay between a client and the scheduler is roughly the same as between a worker and the client. Therefore, the scheduler uses the delay that occurs in the client-scheduler connection to calculate the expected delay \(d_{\text{exp}}\) that will occur when the worker sends the task result to the client: the client provides the _connection setup time_ \(t_{\text{cs}}\) and the initial time to deadline \(t_{r}\), valid at the time the client generated the task, in the task message; the client-to-scheduler delay is then \(d_{\text{exp}}=t_{r}-(T_{d}-t)\).
However, in a high latency environment with short task execution times, it may take longer to establish the connection than to complete the task computation. In this case, the difference between \(t_{\text{cs}}\) and \(t_{w}\) must also be considered for the delay adjustment \(d_{\text{adj}}\):
\[d_{\text{adj}}=\begin{cases}d_{\text{exp}}+t_{\text{cs}}-t_{w}&\text{if }t_{\text{cs}}>t_{w}\\ d_{\text{exp}}&\text{otherwise}\end{cases}\]
The _adjusted delay_ is then multiplied by the scheduler's uncertainty factor \(\mathcal{U}\) to account for latency jitter. \(\mathcal{U}\) is a configuration option of the scheduler, and the effect of different values is one of the subjects of the evaluation. Finally, the resulting value is subtracted from the deadline: \(T_{d\text{-new}}=T_{d\text{-old}}-\mathcal{U}\cdot d_{\text{adj}}\). This adjusted deadline can now be used for scheduling; meeting it leaves enough time for the task result to be sent to the client without missing the original deadline set by the client. A first acceptance check is then performed to ensure that the time to the adjusted deadline is greater than the task execution time.
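The adjustment logic itself is small; the following Rust sketch mirrors the formulas above (the names are illustrative):

```rust
use std::time::{Duration, Instant};

/// d_adj: if the connection setup dominates the computation (t_cs > t_w),
/// the difference must be added to the expected delay.
fn adjusted_delay(d_exp: Duration, t_cs: Duration, t_w: Duration) -> Duration {
    if t_cs > t_w {
        d_exp + (t_cs - t_w)
    } else {
        d_exp
    }
}

/// T_d_new = T_d_old - U * d_adj, with U the scheduler's uncertainty factor.
fn adjust_deadline(t_d_old: Instant, u: f64, d_adj: Duration) -> Instant {
    t_d_old - d_adj.mul_f64(u)
}
```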
The scheduler maintains a queue for each worker according to the partitioned EDF scheme. To make a scheduling decision, a copy of each queue is scheduled using EDF and evaluated for potential deadline misses. Queues with predicted misses are eliminated. In addition, the overall density of the queue is calculated from the individual task _densities_.
If the scheduler is configured to use the _first-fit_ strategy to select the worker, it will select the first worker without a predicted deadline miss. For the other strategies, each worker is tested: in the case of the _best-fit_ or _worst-fit_ strategies, the scheduler selects the worker with the highest or lowest total density, respectively. If the new task has the highest priority, the currently running task is preempted; the scheduler records the elapsed execution time \(t_{e}\) of the task, and the worker saves its context. If no worker is found whose queue can be scheduled without predicted deadline misses, the task is rejected. A sketch of this selection step is given below.
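The selection step reduces to filtering the feasible queues and applying the configured heuristic; a compact Rust sketch with illustrative types:

```rust
/// Summary of one worker's EDF queue after a trial insertion of the new task.
struct Candidate {
    worker: usize,
    feasible: bool, // EDF simulation of the queue copy predicts no deadline miss
    density: f64,   // overall density of the queue
}

enum Fit {
    First,
    Best,  // highest total density among feasible workers
    Worst, // lowest total density among feasible workers
}

/// Pick a worker for the new task, or None to reject it.
fn select(candidates: &[Candidate], fit: Fit) -> Option<usize> {
    let mut feasible = candidates.iter().filter(|c| c.feasible);
    match fit {
        Fit::First => feasible.next().map(|c| c.worker),
        Fit::Best => feasible
            .max_by(|a, b| a.density.total_cmp(&b.density))
            .map(|c| c.worker),
        Fit::Worst => feasible
            .min_by(|a, b| a.density.total_cmp(&b.density))
            .map(|c| c.worker),
    }
}
```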
## 5. Evaluation
We evaluate three scenarios to investigate the impact of our architecture on offloaded real-time tasks in wireless IIoT environments. We explore the limits at which tasks or network environments may or may not meet real-time requirements.
### Experimental Setup
We implemented the proposed architecture and partitioned EDF scheduler in Rust and made the code publicly available1. For comparison, we implemented a reference scheduler that runs EDF on a global task queue, rejects tasks only if the time to deadline is less than the WCET, and performs no deadline adjustment.
Footnote 1: [https://github.com/dos-group/Real-Time-Offloading-Simulator-IoT](https://github.com/dos-group/Real-Time-Offloading-Simulator-IoT)
_Network._ We use _Mininet2_ to emulate a network of clients, workers, and the scheduler. The virtual network links between hosts can be configured with arbitrary latency, jitter, and bandwidth limits. The basic network layout is the same for all test setups: Client hosts are connected wirelessly to an access point, which is connected to a switch via Ethernet. Edge cluster hosts (running workers and the scheduler) are also connected to this switch via Ethernet.
Footnote 2: [https://mininet.org](https://mininet.org)
_Test Scenarios._ All tasks in a test run have the same predefined execution time, each test run lasts 30 seconds, and each configuration has been run 5 times. The relative deadlines, on the other hand, are only configured in terms of mean and variance, and are dynamically sampled from a normal distribution by the clients at runtime.
| **Scenario** | **1** | **2** | **3** |
| --- | --- | --- | --- |
| number of clients | \(10-50\) | \(30\) | \(30\) |
| submission freq. mean [1/s] | \(1\) | \(1\) | \(1\) |
| latency mean [ms] | \(30\) | \(30\) | \(30\) |
| latency variance [ms] | \(10\) | \(10\) | \(10-50\) |
| laxity mean [ms] | \(100\) | \(180-60\) | \(100\) |

Table 1. Test Scenario Parameters
This results in more or fewer preemptions, depending on the configuration, because the deadlines of the tasks differ accordingly for the same execution time.
The task submission frequency is sampled from a Poisson distribution with \(\lambda=1\) configured for all clients. However, clients only submit one task at a time, so the minimum time between task submissions is limited by the task execution time. The worst-case task execution time is set to \(100\)ms for all experiments.
As described in Section 4.2, the scheduler is configured with an uncertainty factor \(\mathcal{U}\) and the task assignment heuristic. The scheduler approximates the worker-to-client latency to be considered when creating the schedule based on the client-to-scheduler latency measured during task submission. This measured client-to-scheduler latency is multiplied by \(\mathcal{U}\) to account for variations in latency. The following test scenarios explore how much variance needs to be accounted for with which uncertainty factor, and the impact this has on overall throughput. Each scenario is tested with \(\mathcal{U}\) ranging from \(0.25\) to \(5.0\). Table 1 lists the chosen parameters for the three scenarios.
### Scenario 1: Number of Clients
The first scenario examines the behavior of the system under varying loads. Ten to fifty clients each create an average of one task per second.
The different task assignment heuristics were compared to see if the tasks were distributed appropriately. All reported results use the worst-fit allocation strategy, as first-fit and best-fit simply concentrate the workload on a single worker.
_Results._ The experimental results can be seen in Figure 2(a). We present only the most relevant results, for uncertainty factors between \(0.5\) and \(1.25\); for higher values, the acceptance rate drops without any further gain in reliability. We present the ratio of successfully offloaded tasks, where a submission is considered _successful_ if the scheduler accepts it and the computation result is returned before its deadline has passed.
The results show that as the load increases, the reference miss rate increases, reducing the actual throughput of successful tasks. Depending on the uncertainty factor used, our approach minimizes the number of missed deadlines for any load. However, for higher loads and higher uncertainty factors, the number of rejected tasks increases, leading to an expected decrease in relative throughput. There are few to no missed deadlines in any of the experiments using our design and a success rate of up to \(60\)% for the highest tested submission load. Rejected tasks are returned to clients as early as possible for fallback processing.
### Scenario 2: Task Laxity
Next, we test the effect of task laxity on the acceptance rate and the miss rate. The lower the laxity, the less communication and scheduling time is available. Experiments were conducted with laxities ranging from \(180\)ms down to \(60\)ms.
_Results._ It is evident that task laxity plays an important role in real-time processing. With sufficient laxity, the proposed design is able to meet all deadlines while rejecting only a small fraction of tasks, resulting in a relative throughput of \(95\)%, as can be seen in Figure 2(b). As the laxity becomes smaller, fewer tasks can be safely accepted. The proposed design outperforms the reference implementation for all laxities only when uncertainty factors below \(1.0\) are used, meaning that some risk of missed deadlines must be accepted. The best results could be achieved with \(\mathcal{U}=0.5\), with a successful throughput difference of \(5\)-\(20\) percentage points compared to the reference scheduler. By choosing \(\mathcal{U}>1.0\), missed deadlines can be avoided for all tested laxities. However, in the most extreme case of \(60\)ms submission laxity, only \(5\)% of submissions are accepted to accommodate this predictability. With \(\mathcal{U}=1.0\) and submission laxities above \(90\)ms, more tasks are successful than with the reference scheduler.
### Scenario 3: Latency variance
The final scenario tests the behavior of the system under increasing network latencies. We tested latency variances ranging from \(10\)ms to \(50\)ms.
_Results._ The results for this scenario, shown in Figure 2(c), exhibit the least impact of increasing the parameter. Again, we see that higher uncertainty factors lead to higher deadline hit rates and lower task acceptance rates. The relative throughput of our design outperforms the reference implementation for the somewhat risky uncertainty factor values below \(1.0\).
Figure 2. For each scenario, we report the throughput (successfully finished tasks) and the fraction of tasks that have been accepted but missed their deadline. We compare the reference baseline vs. our latency-aware approach at \(\mathcal{U}=\{0.5,0.75,1.0,1.25\}\).
### Summary
For very tight deadlines, where the laxity is in the same range as the latency, the real-time scheduler relies heavily on correct latency estimates to generate efficient schedules. For low-variance latencies, it has been shown that high throughput can still be achieved while preventing missed deadlines and hence gaining overall system predictability.
For pessimistic latency estimates, stronger guarantees of meeting deadlines can be provided. However, this comes at the expense of throughput, since a correspondingly large number of tasks must be discarded. Still, high throughput can be achieved if isolated misses are acceptable. The choice of an acceptable uncertainty factor must be made taking into account client fallback mechanisms.
The highest advantages of the proposed design can be seen under high task submission loads, where the reference design is only able to successfully compute 5% of submitted tasks while missing the deadlines of the remaining 95% of tasks. Here, our design has still a relative throughput of 45% with no deadline misses.
## 6. Conclusion
In this work, we investigated the trade-offs and implications of offloading real-time tasks over wireless networks to local edge resources in the context of IIoT environments. On this basis, a system architecture was designed that integrates networking uncertainties; this system was implemented and extensively tested. Given the broadness of the proposed architecture, optimizations and adaptations towards specialized use cases can be applied to the scheduler, and real-time networking concepts can be incorporated in future work. Scheduling algorithms could incorporate network bandwidth, memory consumption, and payload sizes. The distributed architecture will additionally be extended with fault tolerance mechanisms, removing the single point of failure.
## Acknowledgments
This research was supported by the German Ministry for Education and Research (BMBF) as Software Campus (grant 01IS17050).
|
2309.14969 | Soft X-ray phase nano-microscopy of micrometre-thick magnets | Imaging of nanoscale magnetic textures within extended material systems is of
critical importance both to fundamental research and technological
applications. Whilst high resolution magnetic imaging of thin nanoscale samples
is well-established with electron and soft X-ray microscopy, the extension to
micrometer-thick systems with hard X-rays currently limits high resolution
imaging to rare-earth magnets. Here we overcome this limitation by establishing
soft X-ray magnetic imaging of micrometer-thick systems using the pre-edge
phase X-ray Magnetic Circular Dichroism signal, thus making possible the study
of a wide range of magnetic materials. By performing dichroic
spectro-ptychography, we demonstrate high spatial resolution imaging of
magnetic samples up to 1.7 {\mu}m thick, an order of magnitude higher than
conventionally possible with absorption-based techniques. This new regime of
magnetic imaging makes possible the study of extended non rare-earth systems
that have until now been inaccessible, from magnetic textures for future
spintronic applications to non-rare-earth permanent magnets. | Jeffrey Neethi Neethirajan, Benedikt Daurer, Marisel Di Pietro Martínez, Aleš Hrabec, Luke Turnbull, Majid Kazemian, Burkhard Kaulich, Claire Donnelly | 2023-09-26T14:40:10Z | http://arxiv.org/abs/2309.14969v1 | # Soft X-ray phase nano-microscopy of micrometre-thick magnets
###### Abstract
Imaging of nanoscale magnetic textures within extended material systems is of critical importance both to fundamental research and technological applications. Whilst high resolution magnetic imaging of thin nanoscale samples is well-established with electron and soft X-ray microscopy, the extension to micrometer-thick systems with hard X-rays currently limits high resolution imaging to rare-earth magnets. Here we overcome this limitation by establishing soft X-ray magnetic imaging of micrometer-thick systems using the pre-edge phase X-ray Magnetic Circular Dichroism signal, thus making possible the study of a wide range of magnetic materials. By performing dichroic spectro-pychography, we demonstrate high spatial resolution imaging of magnetic samples up to \(1.7\,\mathrm{\mu m}\) thick, an order of magnitude higher than conventionally possible with absorption-based techniques. This new regime of magnetic imaging makes possible the study of extended non rare-earth systems that have until now been inaccessible, from magnetic textures for future spintronic applications to non-rare-earth permanent magnets.
Magnetic materials have a high impact on our society, with a range of functionalities making possible a number of applications. On one hand, the study of naturally occurring magnetite in the natural world gives insight into magnetoreception [1] and the role of the earth's field over the ages [2]. On the other hand, strong magnetic fields from highly anisotropic permanent magnets play a key role in the production of clean energy [3; 4; 5], highly inductive magnets play an important role in write heads in hard disk drives [6] and topological textures in spintronics devices promise the next generation of computing technologies [7; 8].
Key to the behaviour of these magnetic systems is their underlying magnetisation configuration, which forms local areas of uniform magnetisation - called magnetic domains - as well as nanoscale topological magnetisation defects such as domain walls. Direct imaging of the magnetisation configuration provides a unique insight into the underlying mechanisms responsible for these behaviours. For example, imaging the reversal processes of permanent magnets elucidates their switching mechanisms [9], allowing for the development of more efficient devices, while imaging of the propagation of topological magnetisation textures has allowed for the step towards realising devices based on the ultra-fast motion of magnetic defects through interconnected networks [7; 10] and non-linear dynamics enabling new types of computing architectures [11].
With such a diverse variety of magnetic systems, we require a broad range of capabilities to image the underlying magnetic configurations. First, we require the ability to study systems of varying dimensions, from single atoms to thick magnetic systems. Second, we require flexibility to study a wide variety of materials, from naturally forming magnetite, to exotic designed topological chiral magnets. And last, we require sufficient spatial resolution to resolve magnetic textures on the order of the magnetic exchange length: down to tens of nanometres and below, which corresponds to the typical sizes of key topological textures such as domain walls, skyrmions and even hopfions.
However, while for thin samples (\(<200\,\mathrm{nm}\)) and surfaces, material flexibility and spatial resolution are well established with high spatial resolution soft X-ray [12; 13] and electron microscopies [14; 15], the imaging of thicker extended systems is more challenging. High spatial resolution imaging of extended samples of thicknesses on the order of hundreds of nanometers to micrometres has been achieved with resonant hard X-ray dichroic imaging [16] which, when combined with tomographic imaging, has revealed singularities known as Bloch points, skyrmions and magnetic vortex rings [17; 18; 19; 20] within micrometre-thick samples with spatial resolutions down to \(50\,\mathrm{nm}\)[19]. However, in the hard X-ray regime, X-ray dichroic signals are highly material dependent: while relatively strong signals exist for certain materials such as rare-earth containing compounds, hard X-ray dichroic signals of transition metal magnets are approximately \(20\times\) weaker [16]. This significantly weaker dichroic signal results in poorer spatial resolution and imaging quality, thus generally limiting 3D investigations in these materials to thin films.
Here we establish a route to the high spatial resolution magnetic imaging of extended magnetic systems that is applicable to a wide range of magnetic materials with soft X-rays. By exploiting the phase XMCD signal, which is prominent in the pre-absorption edge, we extend soft X-ray magnetic imaging at transition metal edges to samples an order of magnitude thicker than currently viable, opening up a new regime for the imaging of magnetic compounds. We gain access to the phase contrast using X-ray ptychography, a coherent diffractive imaging (CDI) technique. By performing dichroic spectro-ptychography we map out the complex XMCD signal, revealing a notable phase XMCD signal in the pre-absorption edge where the absorption contrast vanishes. This pre-edge phase signal makes it possible to measure thicker samples, and in this way we image the magnetic configuration of samples up to \(1.7\,\mu\)m in thickness: an order of magnitude higher than what is typically measured with soft X-rays.
The limitation of soft X-ray magnetic imaging to thin samples exists because magnetic scattering is a resonant effect. Indeed, when the photon energy is tuned close to an absorption edge between a core level and a magnetically polarised valence band, the electronic transition between the two bands depends on the helicity of the incoming photon and the projection of the magnetization vector along the direction of propagation of the X-rays [21, 22, 23]. Experimentally, this leads to differences in absorption of the X-rays which, when combined with nano-microscopy, can provide projections of the magnetization in a sample. When compared to electron microscopy, which is limited to thin samples on the order of a hundred nanometres in thickness, soft X-rays provide a higher penetration depth while being element specific. However, for magnetic imaging, the localisation of measurable XMCD signals to resonance energies, where the high absorption can lead to zero transmission for extended thick systems, has meant that both soft X-ray and electron imaging have generally been limited to thin films on the order of hundreds of nanometers.
However, the magnetic contrast does not only present itself in the absorption: the scattering factor, and therefore the refractive index of a material, is complex, meaning that magnetic dichroism is also present in the phase of the transmitted wave, see Appendix A for more details. With the development of lensless CDI techniques such as holography and ptychography, the full complex transmission function of an object becomes experimentally accessible with the help of phase retrieval algorithms [24]. These lensless imaging techniques have revolutionised X-ray imaging, with phase imaging of weakly absorbing objects [25, 26] opening up the possibility to image biological samples [27], and the prospect of diffraction-limited spatial resolutions [28, 29]. So far magnetic imaging has mainly benefited from the high spatial resolutions that are available with lensless CDI techniques, bringing spatial resolutions down to \(10\,\mathrm{nm}\)[28] and below [29]. However, although it has been seen that the phase dichroism exists across the energy spectrum [16, 27], offering possibilities for low radiation dose imaging [27], so far magnetic phase imaging has not yet been fully exploited.
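The size of the two effects can be sketched with the standard argument (sign conventions vary between references): assuming a helicity-dependent complex refractive index \(n_{\pm}=1-(\delta\pm\Delta\delta)+i(\beta\pm\Delta\beta)\), with \(\Delta\delta\) and \(\Delta\beta\) the magnetic contributions to the optical constants, a wave of wavenumber \(k\) transmitted through a film of thickness \(d\) acquires

\[E_{\pm}\propto e^{in_{\pm}kd}=e^{ikd}\,e^{-i(\delta\pm\Delta\delta)kd}\,e^{-(\beta\pm\Delta\beta)kd},\]

so that the dichroic amplitude and phase signals scale as \(kd\,\Delta\beta\) and \(kd\,\Delta\delta\), respectively: both grow linearly with the thickness, but each tracks a different optical constant with its own energy dependence.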
This X-ray phase dichroism is exploited here to extend the applicability of soft X-ray magnetic imaging to thicker systems. We first map the complex XMCD signal by imaging a \(100\,\mathrm{nm}\) thick CoPt multilayer system with perpendicular anisotropy. The multilayer film was grown by magnetron sputtering on an X-ray transparent silicon nitride membrane (see Appendix B). Subsequently, a hole of diameter \(\sim 3\,\mu\)m was milled using a focused Ga ion beam in order to provide an empty region of the sample for image normalisation and alignment. Dichroic spectro-ptychography was performed at the i08-1 beamline of the Diamond Light Source to map the complex XMCD signal from the magnetic domains in the sample. Specifically, circularly polarised X-rays were microfocused onto the sample \(O(r)\) as it was scanned over several overlapping probe \(P(r)\) positions \(r_{j}\), in the plane perpendicular to the direction of propagation of the X-rays. For each position, a coherent diffraction pattern is collected in the far field by a two-dimensional detector. The complex transmission function of the object is then retrieved iteratively with the help of reconstruction algorithms [30, 31]. A simple schematic of the experimental setup is shown in Fig. 1a and further experimental details are explained in Appendix C.
The reconstructed phase images taken using Right Circular Polarised (RCP) and Left Circular Polarised (LCP)
Figure 1: a) Schematic of the ptychography setup. The sample is scanned in the plane perpendicular to the direction of propagation of the X-rays, for overlapping probe illumination positions, as indicated in the inset. Diffraction patterns are measured in the far field. b) Ptychographic phase projections measured with RCP and LCP X-rays, with the difference giving the XMCD signal, indicating the projection of the magnetisation (anti)parallel to the X-ray beam. The semicircle is a hole milled using a focused Ga ion beam which provides an empty region of the sample for image normalisation and alignment. Scale bar represents \(1\,\mu\)m.
X-rays are shown in Fig. 1b, and reveal a labyrinth-like domain structure in the CoPt film, with the magnetisation in the domains oriented perpendicular to the sample plane, and (anti)parallel to the direction of the propagation of the X-rays. When the polarisation is changed from RCP to LCP, the XMCD contrast switches, allowing for the isolation of the magnetic signal as shown in Figure 1b. We define the amplitude XMCD (\(A_{\text{XMCD}}\)) and phase XMCD (\(\phi_{\text{XMCD}}\)) signal as follows:
\[A_{\text{XMCD}}=\frac{\log_{e}(A_{\text{RCP}})-\log_{e}(A_{\text{LCP}})}{2} \tag{1}\]
\[\phi_{\text{XMCD}}=\frac{\phi_{\text{RCP}}-\phi_{\text{LCP}}}{2}\]
where \(A_{\text{RCP}}\), \(A_{\text{LCP}}\), \(\phi_{\text{RCP}}\) and \(\phi_{\text{LCP}}\) are the amplitude and phase projections taken with RCP and LCP X-rays respectively. In order to map the complex XMCD signal across the L\({}_{3}\) and L\({}_{2}\) edges, dichroic spectro-ptychography scans were performed for a range of energies between 770 eV and 807 eV in the vicinity of the Co L\({}_{3}\) and L\({}_{2}\) absorption edges with RCP and LCP X-rays. The dichroic \(A_{\text{XMCD}}\) and \(\phi_{\text{XMCD}}\) images for a select few energies are given in Fig. 2a. To extract the complex XMCD signal, the domains are segmented and their contrast averaged (details explained in Appendix D) to obtain the quantitative spectra plotted in Fig. 2b. The solid black curve, shown in Fig. 2b, represents the transmission spectrum through the sample, providing a
Figure 2: Dichroic spectro-ptychography across the Co L\({}_{3}\) and L\({}_{2}\) absorption edges of 100 nm thick CoPt multi-layer system. a) The amplitude (\(A_{XMCD}\)) and phase (\(\phi_{\text{XMCD}}\)) XMCD images across a selected range of energies. By definition, \(A_{\text{XMCD}}\) is dimensionless and \(\phi_{\text{XMCD}}\) is in radians. Scale bar represents 500 nm. b) \(A_{\text{XMCD}}\) (red) and \(\phi_{\text{XMCD}}\) (blue) XMCD spectra across the Co L\({}_{2,3}\) edges extracted from domains seen in the XMCD images plotted in a). The black curve represents the transmission through the thickness of the sample. The \(\phi_{\text{XMCD}}\) spectra exhibits a non-zero signal throughout the entire range of energy except on resonance. The \(A_{\text{XMCD}}\) signal is notably strong only on resonance, when transmission is minimum and vanishes at energies 1.8 eV off-resonance. The blue shaded region represents areas where only \(\phi_{\text{XMCD}}\) is measurable and red shaded regions represent areas where \(A_{\text{XMCD}}\) is also measurable.
direct comparison between the strength of the respective XMCD signals and transmission through the sample. We first consider the \(A_{\mathrm{XMCD}}\), with the images highlighted by the red dotted box in the top row of Fig. 2a, and the extracted \(A_{\mathrm{XMCD}}\) spectrum given by the red curve in Fig. 2b. Two resonance peaks of opposite sign can be observed across the L\({}_{3}\) and L\({}_{2}\) edges, where domain contrast reversal can also be seen in the \(A_{\mathrm{XMCD}}\) images. The signal and the domain contrast is strongest at the energy associated with highest absorption (\(780\,\mathrm{eV}\)), and already drops to zero, \(1.8\,\mathrm{eV}\) away from resonance with the magnetic domains no longer resolvable in the images.
We next consider the \(\phi_{\mathrm{XMCD}}\) signal, represented by the blue curve in Fig. 2b. Corresponding domain contrast can be observed in almost all of the \(\phi_{\mathrm{XMCD}}\) projections at different energies, highlighted in a blue dotted box in Fig. 2a. The \(\phi_{\mathrm{XMCD}}\) signal is particularly strong in the vicinity of the L\({}_{3}\) and L\({}_{2}\) edges, with the maximum occurring \(0.5\,\mathrm{eV}\) below the absorption edge, with images showing strong contrast around \(779.4\,\mathrm{eV}\) and \(794.4\,\mathrm{eV}\). In the pre-absorption edge, the \(\phi_{\mathrm{XMCD}}\) domain contrast is opposite to \(A_{\mathrm{XMCD}}\), while we observe a \(\phi_{\mathrm{XMCD}}\) contrast reversal across the two absorption edges. Most notably, the difference between the \(\phi_{\mathrm{XMCD}}\) and \(A_{\mathrm{XMCD}}\) is that while the \(A_{\mathrm{XMCD}}\) is restricted to on-resonance energies, the \(\phi_{\mathrm{XMCD}}\) signal is non-zero across almost all energies measured, specifically in the pre-absorption edge where the magnetic domains can even be resolved \(10\,\mathrm{eV}\) below the absorption edge, and transmission through the sample is significantly higher. To visualise this difference in the energy-dependent measurable contrast, we have shaded the regions of the spectrum in Fig. 2b, where red regions indicate the energies for which the \(A_{\mathrm{XMCD}}\) is measurable (as well as the \(\phi_{\mathrm{XMCD}}\)), while blue regions indicate the energy regime for which only the \(\phi_{\mathrm{XMCD}}\) is detectable.
Figure 3: a) Calculated transmission of thicker CoPt films as a function of energy across the Co L\({}_{3}\) and L\({}_{2}\) absorption edges. The darker regions indicate regions of low or zero transmission. As the thickness is increased we are forced to measure at pre-absorption edge energies where transmission is significantly higher than on-resonance, indicated by the red and blue dotted lines for \(A_{\mathrm{XMCD}}\) and \(\phi_{\mathrm{XMCD}}\) respectively. b) Transmission spectra for a few select calculated thicknesses taken across the line corresponding to the lines shown in (b). c) Optimal energies at which highest \(A_{\mathrm{XMCD}}\) and \(\phi_{\mathrm{XMCD}}\) can be extracted as sample thickness is increased. d) Evolution of the complex pre-edge XMCD signal as a function of energy-dependent transmission of the material. As one moves to energies corresponding to higher transmission, \(A_{\mathrm{XMCD}}\) exhibits an exponential decay while the \(\phi_{\mathrm{XMCD}}\) exhibits a linear decay.
The \(\phi_{\rm XMCD}\) is thus available for a much wider range of energies than the \(A_{\rm XMCD}\), offering a more flexible contrast mechanism.
The importance of the flexibility of the contrast mechanism becomes clear when we consider thicker systems. While it is clear that on-resonance imaging with \(A_{\rm XMCD}\) contrast works well for thin samples, as we increase the thickness above a threshold at which there is not sufficient transmission on resonance for measurements, we will be forced to image off-resonance at energies associated with lower absorption. We explore this by calculating the transmission for thicker samples using the experimentally measured transmission spectrum of the CoPt multilayer and further calculating the complex XMCD signal, as explained in Appendix F. We first consider the calculated transmission for thicker samples, shown in Fig. 3a, with each row in the image representing transmission as a function of energy for a sample of a certain thickness. The darker regions indicate a lower transmission through the sample and we observe a reduction in energies with significant transmission as the thickness increases. Examples of transmission spectra corresponding to selected thicknesses (indicated as coloured lines on Fig. 3a) are given in Fig. 3b where one can observe a drop to zero transmission on-resonance for higher thicknesses.
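This extrapolation is, in essence, an application of the Beer-Lambert law: with \(\mu(E)\) the energy-dependent attenuation coefficient, the transmission measured for a reference thickness \(d_{0}=100\,\mathrm{nm}\) rescales to any thickness \(d\) as

\[T(E,d)=e^{-\mu(E)d}=\left[T(E,d_{0})\right]^{d/d_{0}},\]

from which the maps in Fig. 3a follow (see Appendix F for details).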
We next consider the complex XMCD signal for thicker samples, by plotting in Fig. 3c the energy at which the strongest signal of each type can be extracted, for increasing thickness. We observe that above a threshold thickness, the optimal energy at which the XMCD can be measured drops steadily. This decrease in the measurement energy, to energies associated with higher transmission, has further implications when we consider the dependence of the \(A_{\rm XMCD}\) and \(\phi_{\rm XMCD}\) signal on the energy-dependent transmission. Indeed, by plotting the complex pre-edge XMCD signal measured for 100 nm CoPt (shown in Fig. 2b) as a function of transmission in Fig. 3d, we observe a significant difference: the \(A_{\rm XMCD}\) is maximum for lower transmission, and decays exponentially as the transmission is increased. In contrast, the \(\phi_{\rm XMCD}\) signal is maximum for energies corresponding to higher transmission, and decays linearly as the transmission is increased, thus exhibiting a weaker dependence on the transmission of the sample. For purely absorption-based imaging, the requirement to measure at lower off-resonance energies thus quickly leads to the suppression of the XMCD signal; access to the pre-edge \(\phi_{\rm XMCD}\), however, opens the possibility to measure thicker samples.
To explore the evolution of the \(A_{\rm XMCD}\) and \(\phi_{\rm XMCD}\) for increasing thickness, we investigate the dimensionless Signal to Noise ratio (SNR) for the calculated \(A_{\rm XMCD}\) and \(\phi_{\rm XMCD}\) as a function of effective thickness (scaled for comparison with experimental data, see Appendix F), plotted in Fig. 4b (dashed lines). For thinner samples, where the \(A_{\rm XMCD}\) SNR is significantly higher than the \(\phi_{\rm XMCD}\) SNR, the \(A_{\rm XMCD}\) SNR peaks at a thickness corresponding to the suppression of on-resonance transmission, and decreases exponentially afterwards. However, the \(\phi_{\rm XMCD}\) SNR increases at a slower rate, peaking at a higher effective thickness, but retaining a high SNR to yet higher thicknesses. We identify a 'thin' regime where the \(A_{\rm XMCD}\) SNR provides a significantly better image quality, and a 'thick' regime, where \(\phi_{\rm XMCD}\) SNR provides high quality images.
We experimentally confirm this ability to image the magnetic state of thicker samples with high SNR \(\phi_{\rm XMCD}\) imaging by performing dichroic spectro-ptychography scans on FeGd films grown by magnetron co-sputtering, of thicknesses 400 nm, 1 \(\mu\)m and 1.7 \(\mu\)m, across the L\({}_{3}\) and L\({}_{2}\) edges of Fe (see Appendices B and E for more details). We first plot the highest SNR images for \(\phi_{\rm XMCD}\) and \(A_{\rm XMCD}\) in Fig. 4a and 4c, respectively, where it can be seen that the quality of the images is highly thickness dependent and that this thickness dependence is different for the two contrast mechanisms. In particular, for the \(A_{\rm XMCD}\) the quality of the image appears to decrease steadily with thickness, with a loss of quantitative XMCD signal for thicknesses of 1 \(\mu\)m and above, where contrast is only present in the vicinity of the domain walls. For the \(\phi_{\rm XMCD}\), the maximum quality instead appears to occur at 1 \(\mu\)m, and the quantitative measurement of the magnetic domain structure (indicated by the equal and opposite magnetic contrast in positive and negative domains) is retained for all sample thicknesses. Furthermore, the \(\phi_{\rm XMCD}\) images have a higher spatial resolution than the \(A_{\rm XMCD}\) images for samples in the 'thick' regime; see Appendix G for more details.
To quantitatively compare both \(A_{\rm XMCD}\) and \(\phi_{\rm XMCD}\) for the various thicknesses in these images, we calculate the SNR of the measured \(A_{\rm XMCD}\) and \(\phi_{\rm XMCD}\) projections, combining our 100 nm CoPt and thicker FeGd films by defining an effective thickness as discussed in Appendix F. The SNR of the images is plotted as a function of effective thickness in Fig. 4b, with red and blue dots representing the measured \(A_{\rm XMCD}\) and \(\phi_{\rm XMCD}\) SNR. As observed in the images, we see a clear difference in the thickness dependence, with the \(\phi_{\rm XMCD}\) providing high SNR imaging of magnetic domains for thicknesses up to 1.7 \(\mu\)m, the thickest film that was measured.
By comparing with our calculated SNR plotted in Fig. 4b (dashed lines), it can be seen that the \(\phi_{\rm XMCD}\) imaging can extend to samples up to 2 \(\mu\)m, an order of magnitude thicker than what is currently achievable with soft X-ray absorption imaging. While this exact thickness dependence is highly material dependent, this new approach addresses a key limitation of current imaging capabilities, making possible the high spatial resolution mapping of thicker transition metal-based systems that until now have been inaccessible.
In conclusion, we have demonstrated the soft X-ray magnetic imaging of thick magnetic systems by imaging in the pre-edge \(\phi_{\rm XMCD}\) signal. By determining the complex XMCD spectrum of a CoPt thin film across the L\({}_{3}\) and L\({}_{2}\) absorption edges of Co, we were able to not only
identify the presence of a significant \(\phi_{\text{XMCD}}\) signal in the pre-edge regime but, by extrapolating these data, also to establish a new imaging regime where the \(\phi_{\text{XMCD}}\) signal enables quantitative imaging of thick samples with a high SNR and spatial resolution that would not be possible with traditional absorption-based techniques. Remarkably, our analysis predicts an order of magnitude increase in the accessible thickness regime due to the phase imaging, which we confirmed by imaging magnetic domains in FeGd samples of up to \(1.7\,\mu\)m in thickness.
We expect pre-edge magnetic phase imaging to have a significant impact on the field of magnetism, making possible the imaging of topological defects in higher dimensional chiral magnets, where this new-found flexibility in the material and sample geometry will drive forward the discovery of exotic textures. Moreover, it will enable the non-destructive investigation of naturally occurring magnetic systems, providing insight into the formation and role of magneto-fossils [32] and meteorites [33]. Finally, an immediate societal impact will be found with
Figure 4: a) and c) \(\phi_{\text{XMCD}}\) and \(A_{\text{XMCD}}\) imaging of magnetic films of increasing thickness. The XMCD projections with highest SNR of the 100 nm thick CoPt were measured at 780 eV and 779.4 eV for \(A_{\text{XMCD}}\) and \(\phi_{\text{XMCD}}\) respectively. The XMCD projections with highest SNR for the 400 nm, 1 \(\mu\)m and 1.7 \(\mu\)m FeGd films were measured at 709 eV, 708.5 eV and 708 eV, respectively, for both \(A_{\text{XMCD}}\) and \(\phi_{\text{XMCD}}\). b) Calculated highest SNR of \(A_{\text{XMCD}}\) and \(\phi_{\text{XMCD}}\) for a range of thicknesses indicated by the red and blue dashed line, respectively. Note that the calculated thickness dependence of the CoPt XMCD SNR is scaled in thickness to allow for a comparison between the experimentally measured CoPt/FeGd data shown as red and blue dots (see Appendix F). We identify a ‘thin’ regime where the \(A_{\text{XMCD}}\) SNR provides a significantly better image quality, and a ‘thick’ regime, where due to the pre-edge \(\phi_{\text{XMCD}}\) we obtain high quality projections for thick samples with high SNR. The highest \(A_{\text{XMCD}}\) and \(\phi_{\text{XMCD}}\) SNR across the Co L\({}_{2,3}\) edges for 100 nm thick CoPt (scaled to an effective thickness of 217 nm FeGd), and 400 nm, 1 \(\mu\)m and 1.7 \(\mu\)m FeGd. The extracted optimal SNR values from each of these XMCD projections is plotted, with red circular dots indicating the \(A_{\text{XMCD}}\) SNR and blue squares indicating the \(\phi_{\text{XMCD}}\) SNR for the different thicknesses. The images shown in a) and c) are the projections from which the \(A_{\text{XMCD}}\) and \(\phi_{\text{XMCD}}\) signal were extracted. Scale bar in the images represents 500 nm.
the study of materials critical to efficient and clean energy production, opening the door to the mapping of the internal configuration of non-rare earth magnets [34].
## Acknowledgements
Diamond Light Source provided access to the i08-1 Soft X-ray Ptychography Facility with experiment grants MG32635-1 and MG28255-1. J.N.N., M.D.P.M, L.T. and C.D. acknowledge funding from the Max Planck Society Lise Meitner Excellence Program. J.N.N acknowledges support from the International Max Planck Research School for Chemistry and Physics of Quantum Materials.
|
2309.09086 | Split Federated Learning for 6G Enabled-Networks: Requirements,
Challenges and Future Directions | Sixth-generation (6G) networks anticipate intelligently supporting a wide
range of smart services and innovative applications. Such a context urges a
heavy usage of Machine Learning (ML) techniques, particularly Deep Learning
(DL), to foster innovation and ease the deployment of intelligent network
functions/operations, which are able to fulfill the various requirements of the
envisioned 6G services. Specifically, collaborative ML/DL consists of deploying
a set of distributed agents that collaboratively train learning models without
sharing their data, thus improving data privacy and reducing the
time/communication overhead. This work provides a comprehensive study on how
collaborative learning can be effectively deployed over 6G wireless networks.
In particular, our study focuses on Split Federated Learning (SFL), a technique
recently emerged promising better performance compared with existing
collaborative learning approaches. We first provide an overview of three
emerging collaborative learning paradigms, including federated learning, split
learning, and split federated learning, as well as of 6G networks along with
their main vision and timeline of key developments. We then highlight the need
for split federated learning towards the upcoming 6G networks in every aspect,
including 6G technologies (e.g., intelligent physical layer, intelligent edge
computing, zero-touch network management, intelligent resource management) and
6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous
systems). Furthermore, we review existing datasets along with frameworks that
can help in implementing SFL for 6G networks. We finally identify key technical
challenges, open issues, and future research directions related to SFL-enabled
6G networks. | Houda Hafi, Bouziane Brik, Pantelis A. Frangoudis, Adlen Ksentini | 2023-09-16T19:59:17Z | http://arxiv.org/abs/2309.09086v1 | # Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions
###### Abstract
Sixth-generation (6G) networks anticipate intelligently supporting a wide range of smart services and innovative applications. Such a context urges a heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions/operations, which are able to fulfill the various requirements of the envisioned 6G services. The revolution of 6G networks is driven by massive data availability, moving from centralized and big data towards small and distributed data. This trend has motivated the adoption of distributed and collaborative ML/DL techniques. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing the time/communication overhead. This work provides a comprehensive study on how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a technique recently emerged promising better performance compared with existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, including federated learning, split learning, and split federated learning, as well as of 6G networks along with their main vision and timeline of key developments. We then highlight the need for split federated learning towards the upcoming 6G networks in every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
6G networks, Wireless Communication, Federated Deep Learning, Split Deep Learning, Split Federated Learning.
## I Introduction
### _Context and Motivation_
6G wireless networks are growing to take a substantially more holistic approach, catalyzing smart services and innovative applications [1][2][3][4]. 6G is expected to ensure highly efficient and timely data collection, transfer, learning, and synthesis at any time and anywhere. Applications such as Smart Grid 2.0, Extended Reality (XR), Holographic Tele-presence (HT), space and deep-sea tourism, and Industry 5.0 represent the mainstream applications of next-generation 6G systems [5][6][7][8]. 6G will be driven by a vision towards _ubiquitous_ intelligence integrated into every aspect of mobile networks [9][10], from network management and operations to the specifics of intelligent vertical services powered by 6G. From a network infrastructure perspective, AI tools will play an integral role in automating multiple operations/functions in 6G wireless communication networks and enabling a closed-loop optimization to support the emerging 6G services [11][12][9]. From a service provision perspective, heavily data-driven applications featuring Machine/Deep Learning (ML/DL) workflows spanning heterogeneous and potentially massive-scale networks will pervade; these applications need to be efficiently supported by the 6G infrastructure substrate, and this gives rise to interesting communication-computation co-design problems [13].
These trends are currently being reflected in the activities of standardization bodies. The ITU Telecommunication Standardization Sector (ITU-T) has already established many focus groups (FGs) to promote data-driven AI applications in next-generation networks, such as FG-ML5G about ML for 5G networks, and FG-DPM for data processing and management of data coming from IoT and smart cities. The European Telecommunications Standards Institute (ETSI), on the other hand, has initiated the Experiential Networked Intelligence Industry Specification Group (ENI ISG), which targets beyond-5G networks with the aim of introducing AI-driven facilities for cognitive network management. In addition, both academia and industry have initiated the development of AI-based schemes to improve the performance of next-generation networks. Notable examples include (i) HEXA-X1 and its successor, HEXA-X-II2, two EU-funded 6G flagship projects where AI-based 6G technology enablers are a core theme, (ii) DETERMINISTIC6G,3 another EU-funded project that leverages advanced DL techniques for network performance awareness, in order to provide deterministic networking capabilities to 6G networks, and (iii) Nokia's strategy to lead the 6G development in the US.4
Footnote 1: https://hexa-x.eu/
Footnote 2: https://hexa-x-ii.eu/
Footnote 3: https://deterministic6g.eu/
Footnote 4: https://www.bell-labs.com/institute/blog/nokia-is-leading-the-6g-conversation-in-the-us/
Developments in 6G are driven by the trend towards exploiting the massive availability of data, which in turn calls for moving from centralized and big data management to small and distributed data [14][3]. Thus, 6G networks should leverage both small and distributed data sets at their infrastructures to optimize network performance. This trend will be manifested in the heavy use of distributed and collaborative ML/DL techniques, which go beyond traditional and centralized ones [15][16]. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaborate with each other to train learning models without sharing their local data [9]. In this context, Federated Learning (FL), proposed by Google in 2017 [17], has emerged to build cooperative learning models among a set of learners, while protecting the privacy of learners' data. However, implementing FL on top of 6G networks is still challenging, mainly due to the heterogeneity of 6G-connected entities (cars, drones, sensors, haptic devices, flying vehicles, etc.) in terms of resource capabilities, as well as because of concerns from a ML _model_ privacy perspective, since, by design, the continuously updated versions of a learning model are shared with learners during the training process [18].
To address such challenges, another DL technique was recently proposed by MIT Media Lab's Camera Culture group called Split Learning or Split Neural Networks (SplitNN) [19]. As its name indicates, it consists in splitting a global neural network into multiple sections, and training each section on an independent device (learner), by using the local device data. Thus, the training of the learning model is performed by transferring the output of the last layer of each section (_smashed data_) as input to the next section, from one involved learner to another. Hence, compared with FL, SplitNN improves model privacy since no single learner is in possession of the global model, and reduces the computation required by the different learners to build a global learning model. Nevertheless, the main challenge of SplitNN is related to the time overhead needed to build a learning model, due to its sequential way of training. This may be very challenging in 6G settings, particularly for massive networks and when training time matters [20].
Finally, split federated learning (SplitFed or SFL) comes to merge the two distributed DL solutions (FL and SplitNN), to design an enhanced hybrid collaborative learning algorithm [20]. In particular, SplitFed splits the neural network among the involved learners and server, as in SplitNN, to optimize both data/model privacy and compute resource usage. Moreover, SplitFed improves on training time as compared to SplitNN, by adopting the parallel model update paradigm of FL.
It is clear that SplitFed offers advantages over both FL and SplitNN by optimizing model privacy, learners' computation resources, and training time overhead. This motivates us to focus on SplitFed and show its main benefits when leveraging it over 6G wireless networks.
### _Review of Existing Related Surveys_
So far, many survey papers about the emerging 6G networks have been proposed to review 6G technologies, development, and enablers [3][21][5][29][22][23]. In [3][7], the authors presented a holistic vision of 6G systems and their main tenets. The primary drivers of 6G are also identified, along with their promising applications, enabling technologies, and performance requirements. Another survey was proposed in [21]. The authors focused more on the different 6G-enabled scenarios with their challenges and open issues. A 6G framework describing the main 6G actors/components was also designed. In [22], the authors discussed technologies that may help to transform wireless networks toward 6G, and be considered as main enablers for many potential 6G applications. A full-stack perspective on 6G requirements and scenarios is also provided. Similarly, the authors describe a human-centric vision of 6G networks in [23]. A systematic framework, including potential 6G applications in addition to key features of 6G such as privacy, improved security, and secrecy, is also provided. An exhaustive survey about the current developments towards 6G was proposed in [5]. The authors also highlight the main technological trends with their potential enabled applications and requirements. Ongoing research projects and standardization efforts are also outlined. In [6], recent advances toward developing 6G systems have been explored. The authors propose a taxonomy including use-cases, enabling computing/communication/networking technologies, and promising AI techniques. Open research challenges are identified and discussed. The emerging technologies that can assist 6G architecture development in meeting use-case requirements are identified and described in [24]. These technologies include blockchain, AI, Terahertz communications, quantum communications, cell-free communications, dynamic network slicing, and integrated sensing and communication, among others. Potential challenges and research directions are also discussed. In [25], the authors give a comprehensive survey on mobile network evolution towards 6G, while focusing more on the architectural updates brought about by the introduction of AI, ubiquitous 3D coverage, and an improved network stack. Potential technologies that may help in forming green and sustainable networks, such as Terahertz and visible light communication, are also discussed. Besides, a novel next-generation network architecture was designed to facilitate the migration from 5G to 6G in [26]. This architecture also covers various applications and technologies of 6G networks. In [12], the authors study the main motivations for moving to 6G wireless communication. They analyse the limits of 5G networks and provide a new synthesis of emerging services, including high-precision manufacturing, holographic communications, and AI. The key technologies of 6G in terms of critical requirements, different drivers, enabling technologies, and system architectures were studied in [27]. In [28], the authors provide a roadmap study of the different use cases for 6G enabling techniques, discussing recent developments on 6G and open challenges with potential solutions, followed by a development timeline of 6G.
Besides, a wide range of survey and review papers have discussed the application of AI and its benefits for 6G networks [29][32][30][33][31]. These works focus mostly on ML/DL algorithms. In [29], the authors discussed both 6G-enabled AI applications and AI-enabled 6G performance and design optimization, while [30] described how AI is revolutionizing 6G communication technology. Another comprehensive survey about various ML techniques applied to networking, communication, and security aspects in 6G vehicular networks is proposed in [31]. In [32], the authors focused on what AI can bring at both the physical layer and link layer in 6G networks; they also present major challenges when using AI, and provide some future directions to mitigate them in 6G networks. An AI-based architecture for 6G networks, mainly enabling smart resource management, knowledge discovery, and automatic service adjustment, is designed in [33].
However, few survey works have addressed distributed/collaborative learning techniques for 6G networks. Most of them focus on federated learning (FL) [15][9][34]. In [15], the authors introduced the combination of FL and 6G, and provided enabling applications in 6G networks. Critical issues, suitable FL techniques, and future research directions leveraging FL for 6G are also described. Similarly, the main requirements driving the convergence between FL and 6G are identified in [9]. In addition, the authors designed a novel FL-based architecture, and showed its benefits in dealing with the emerging challenges of 6G. Moreover, future research directions and critical open challenges in FL-enabled 6G are also reviewed. In [34], the authors gave a comprehensive study on how distributed learning can be deployed over wireless networks, while focusing more on FL, distributed inference, federated distillation, and multi-agent reinforcement learning.

TABLE I: Comparison of existing related surveys.
On split learning (SL), only one survey work [35] has been presented, to the best of our knowledge. In this work, the authors reviewed both FL and SL, and provided a survey of the different technologies enabling their combination in an Internet of Things (IoT) context. In [36], the authors first proposed a combined architecture of both FL and SL to leverage their advantages. Then, they studied their convergence under non-independent and identically distributed (non-IID) data related to 6G drone networks. Numerical results showed the efficiency of the generated learning model when combining both FL and SL.
Table I compares existing survey studies. As we show, even though survey articles addressing 6G and AI for 6G systems exist, there is a lack of comprehensive surveys that combine split learning and 6G to explore the potential of split learning for developing efficient, reliable, privacy-preserving AI-powered 6G systems. The relevant studies on the integration of AI with 6G networks [29][32][30][33][31][35][36] focus more on FL than on the recent split learning approach. In [35] and [36], the authors studied the combination of FL and SL, but the focus was specifically on IoT networks and Unmanned Aerial Vehicles (UAV), and the technical aspects of 6G as well as 6G-enabled use-cases remained largely unexplored. Therefore, the related literature is missing a comprehensive survey of split learning (SL and SFL) and its potential in designing the upcoming 6G systems, which could be valuable in guiding practitioners and researchers. This is the gap our work aims to fill.
### _Our Contributions_
The main contributions of this article are summarized below.
* **Bridging the gap between SplitFed and 6G Networks:** Understanding the integration of SplitFed within 6G networks requires viewing what this new distributed learning algorithm represents. To do this, this survey first examines the algorithms that precede SplitFed learning, to guide the reader to recognize the different existing techniques before the emergence of SplitFed. Additionally, the reader will gain a comprehension of the principles of SplitFed and how 6G can benefit from it. There is a myriad of papers on the applications of AI, such as Deep Learning and Federated Learning, in 6G networks [15][37]. However, the interplay between SplitFed and 6G is absent from the discussion. This comprehensive survey fills this gap by analyzing the important contributions that the connection between SplitFed and 6G can lead to.
* **A comprehensive survey of SplitFed for the most important technical aspects and use-cases of 6G:** We take a deeper dive into the chief 6G technical aspects (e.g., intelligent physical layer, intelligent edge computing, resource management) and 6G use cases (e.g., holographic telepresence, digital twin, and intelligent e-health) to investigate how SplitFed can help in enhancing the functionalities of all 6G stakeholders.
* **Towards a successful implementation of SplitFed over 6G networks:** This work describes various tools that can support the development, evaluation, and validation of SFL-based solutions for 6G networks. We first list multiple existing datasets for different 6G technical aspects and use-cases. Then, we overview multiple existing frameworks related to B5G networks as well as collaborative AI techniques. Finally, this article discusses multiple open implementation challenges along with their potential solutions, as future directions, in applying SFL to 6G systems.
### _Paper Outline_
As shown in Fig. 1, the organization of this article is as follows: Section I delineates the significant role of AI in 6G, highlights our motivation, and provides an in-depth review of existing related surveys in addition to the main contributions of this paper. Section II gives a background on AI, collaborative learning, and 6G networks, which is required for describing the potential of SplitFed learning for 6G networks. In Section III, we describe the main challenges and requirements pertinent to different key 6G technical aspects, and show how SFL can help in optimizing operations and performance through a realistic scenario for each such aspect. Section IV discusses five emerging 6G use cases with a particular focus on how SFL can help in optimizing some of their performance characteristics. To help towards a successful implementation of SFL on top of 6G networks, Section V lists several existing datasets and frameworks (tools) that can support the development, evaluation, and validation of SFL-based solutions for 6G networks. As with any new technology, SFL has its limitations and challenges, which are presented in Section VI. This section gives not only the main limitations of SFL, but also the still-open challenges of applying SFL to 6G networks along with future directions. We also provide the main limitations of our study as well as the difficulties faced, which may be considered to stimulate further research, in Section VII. Finally, we conclude the article in Section VIII. For ease of reference, the acronyms used in this article are summarized in Table II, in alphabetical order. We should finally note that, in the course of this article, we use the terms SplitFed and SFL interchangeably.
## II Background
This section gives a background on AI, collaborative learning, and 6G networks, which are required for describing the potential of SplitFed learning for 6G networks. We start by introducing AI and machine/deep learning as a main branch of AI; this may help non-expert readers get a better understanding of the concepts we introduce next, namely the main collaborative learning mechanisms of interest in this article: federated, split, and SplitFed learning. We finally provide an overview of 6G networks, along with their main vision and development timeline.
### _Artificial Intelligence fundamentals_
Artificial intelligence is a computer science field that aims to mimic human mind capabilities in solving problems and making decisions. It enables machines, e.g., computer systems, to simulate human intelligence processes through algorithms and rules, by combining various fields, such as reasoning, planning, learning, communicating, perception, and interaction. In our study, we focus more on machine/deep learning as an AI branch.
#### Iii-A1 Machine Learning Algorithms
Machine learning (ML) is a branch of AI that leverages statistical and mathematical models to process data and draw inferences from patterns in it. ML consists of building learning models based on training data, typically to obtain accurate predictions as results. ML algorithms are usually classified into three different categories:
* **Supervised learning**: It consists of mapping specific inputs to outputs using structured and labeled data. For instance, to train a learning model to recognize pictures of dogs and cats, the model should consider pictures labeled as dogs and cats. This category of ML models can deal with two main problems: regression, to predict a continuous (real) value, e.g., temperature and velocity, and classification, to predict the class to which a particular input data example may belong, e.g., a picture of a cat or dog. Various supervised learning algorithms have been developed. For example, _linear_ and _logistic regression_ aim to learn the correlation between input and output data by estimating the parameters of a linear or logistic model fit to them [38]. _Random decision forests_ or _random forests_ build a set of decision trees in order to perform both regression and classification. _Random forests_ also belong to another category of learning algorithms, called ensemble learning, which further includes algorithms such as boosting machines and AdaBoost [39]. _Support Vector Machines (SVM)_, on the other hand, create classification and regression learning models building on a statistical learning theory framework.
Fig. 1: The structure of the article.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Acronym** & **Definition** & **Acronym** & **Definition** \\ \hline \hline
**1G** & First Generation & **LIME** & Local Interpretable Model-Agnostic Explanations \\ \hline
**2G** & Second Generation & **LIS** & Large Intelligent Surfaces \\ \hline
**3G** & Third Generation & **LSTM** & Long Short Term Memory \\ \hline
**3GPP** & 3rd Generation Partnership Project & **LTE-A** & Long-Term Evolution Advanced \\ \hline
**4G** & Fourth Generation & **MAC** & Medium Access Control \\ \hline
**5G** & Fifth Generation & **MBBLL** & Mobile BroadBand and Low-Latency \\ \hline
**6G** & Sixth Generation & **MEC** & Multi-access Edge Computing \\ \hline
**A2C** & Advantage Actor Critic & **MIMIC-III** & Medical Information Mart for Intensive Care III \\ \hline
**AE** & Auto Encoder & **MIMO** & Multiple-Input Multiple-Output \\ \hline
**AI** & Artificial Intelligence & **MIoT** & Massive Internet of Things \\ \hline
**AMC** & Automatic Modulation Classification & **MIT** & Massachusetts Institute of Technology \\ \hline
**AMF** & Access and Mobility Management Function & **ML** & Machine Learning \\ \hline
**AV** & Autonomous Vehicle & **mLLMT** & massive Low-Latency Machine Type communication \\ \hline
**ANN** & Artificial Neural Networks & **MLOps** & ML system operations \\ \hline
**AR** & Augmented Reality & **MLP** & Multi-Layer Perceptron \\ \hline
**AWID** & Aegean WiFi Intrusion Dataset & **mMTC** & massive Machine Type Communications \\ \hline
**B5G** & Beyond Fifth-Generation & **MR** & Mixed Reality \\ \hline
**BBU** & Baseband Unit & **MSE** & Mean Squared Error \\ \hline
**BPSK** & Binary Phase Shift Keying & **NFV** & Network Function Virtualization \\ \hline
**BS** & Base Station & **NG RAN** & New Generation RAN \\ \hline
**CAPEX** & CAPital Expenditures & **NOMA** & Non-Orthogonal Multiple Access \\ \hline
**CAE** & Convolutional Auto-Encoder & **NR-MAC** & New Radio Medium Access Control \\ \hline
**CAV** & Connected Autonomous Vehicles & **NS** & Network Slicing \\ \hline
**CD** & Continuous Delivery & **NS3** & Network Simulator 3 \\ \hline
**CNN** & Convolutional Neural Network & **PSK** & Phase Shift Keying \\ \hline
**CPS** & Cyber-Physical Systems & **QoE** & Quality of Experience \\ \hline
**CS** & Compressive Sensing & **QoS** & Quality of Service \\ \hline
**CT** & Continuous Training & RAN & Radio Access Network \\ \hline
**CV** & Connected Vehicles & **RAT** & Radio Access Technologies \\ \hline
**DARPA** & Defense Advanced Research Projects Agency & **RL** & Reinforcement Learning \\ \hline
**DDoS** & Distributed Denial of Service & **RM** & Resource Management \\ \hline
**DevOps** & DEvelopment and IT Operations & **RNN** & Recurrent Neural Network \\ \hline
**DL** & Deep Learning & **RSRQ** & Reference Signals Received Quality \\ \hline
**DLT** & Distributed Ledger Technologies & **RSRP** & Reference Signals Received Power \\ \hline
**DNN** & Deep Neural Network & **RSU** & Road Side Unit \\ \hline
**DP** & Differential Privacy & **RT** & Real Time \\ \hline
**DQN** & Deep Q-Network & **SCM** & Structural Causal Models \\ \hline
**DRL** & Deep Reinforcement Learning & **SDN** & Software Defined Network \\ \hline
**DSRC** & Dedicated Short Range Communications & **SFL/SplitFed** & Split Federated Learning \\ \hline
**E2E** & End-to-End & **SG** & Smart Grid \\ \hline
**EDO** & Energy Data Owners & **SHAP** & **SHapley Additive exPlanations** \\ \hline
**EI** & Edge Intelligence & **SIC** & Successive Interference Cancellation \\ \hline
**eMBB** & enhanced Mobile Broadband & **SL/SplitNN** & Split Learning/Split Neural Network \\ \hline
**ENI** & Experiential Networked Intelligence & **SLA** & Service Level Agreement \\ \hline
**ETSI** & European Telecommunications Standards Institute & **SM** & Spectrum Management \\ \hline
**FDMA** & Frequency Division Multiple Access & **SMO** & Service Management and Orchestration \\ \hline
**FedAvg** & Federated Averaging & **SNR** & Signal-to-Noise Ratio \\ \hline
**FeMBB** & Further enhanced Mobile Broadband & **SSN** & Self-Sustaining Networks \\ \hline
**FG** & Focus Group & **SUMO** & Simulation of Urban Mobility \\ \hline
**FG-DPM** & Focus Group for Data Processing and Management & **SVM** & Support Vector Machines \\ \hline
**FL** & Federated Learning & **TDMA** & Time Division Multiple Access \\ \hline
**GAN** & Generative Adversarial Network & **TFF** & TensorFlow Federated \\ \hline
**gNB** & gNodeB & **THz** & Tera Hertz \\ \hline
**GPS** & Global Positioning System & **TL** & Transfer Learning \\ \hline
**HE** & Horizon Europe & **TTI** & Transmission Time Interval \\ \hline
**HO** & Handover & **UAV** & Unmanned Aerial Vehicles \\ \hline
**HT** & Holographic Tele-presence & **UE** & User Equipment \\ \hline
**IDS** & Intrusion Detection Systems & **umMTC** & ultra-massive Machine-Type Communication \\ \hline
**IEC** & Intelligent Edge Computing & **uRLLC** & ultra-Reliable Low Latency Communications \\ \hline
**IEEE** & Institute of Electrical and Electronics Engineers & **V2X** & Vehicle-To-Everything \\ \hline
**IID** & Independent and Identically Distributed & **VNF** & Virtual Network Function \\ \hline
**IoE** & Internet of Everything & **VoIP** & Voice Over IP \\ \hline
**IoMT** & Internet of Medical Things & **VR** & Virtual Reality \\ \hline
**IoT** & Internet of Things & **WBANs** & Wireless Body Area Networks \\ \hline
**IoV** & Internet of Vehicles & **WG** & Working Group \\ \hline
**IP** & Internet Protocol & **Wifi** & Wireless Fidelity \\ \hline
**IRS** & Intelligent Reflecting Surfaces & **XAI** & eXplainable AI \\ \hline
**ITS** & Intelligent Transportation Systems & **XR** & eXtended Reality \\ \hline
**ITU** & International Telecommunication Union & **ZSM** & Zero-touch Network and Service Management \\ \hline
**KPI** & Key Performance Indicators & & \\ \hline \end{tabular}
\end{table} TABLE II: List of Acronyms.
In particular, SVMs aim to learn the optimal hyperplane that separates the data instances [40]. _Artificial Neural Networks (ANN)_ mimic the human brain, by linking a high number of artificial neurons (perceptrons) with each other via edges and their associated weights. The purpose of an ANN algorithm is to learn the optimal edge weights so that a loss function is minimized and inference accuracy is increased. Deep learning is based on ANNs with a large number of hidden layers in the neural network [41].
* **Unsupervised learning:** In this family of approaches, unlabeled data are processed to deduce common patterns/information. Unlike supervised learning, data labels are not known ahead of time. The ML algorithm processes the whole dataset, classifying it into groups based on common attributes. This category of ML includes three different sub-classes: clustering, dimensionality reduction, and association rules. _K-means_ is a clustering algorithm that divides data into \(k\) clusters, according to the distance to each cluster's centroid (a minimal sketch of \(k\)-means follows this list). Different such distance metrics exist, such as dynamic time warping, Euclidean, or Manhattan distance [42]. _Dimensionality Reduction_ is used to reduce the data dimension, while keeping its main attributes. _Principal Component Analysis (PCA)_ is one of the main dimensionality reduction algorithms, which operates by projecting data geometrically onto new components, called Principal Components [42]. _Association rule_ mining learns the different associations between the input data, which then help to determine the correlations/relationships among them. For example, associations between shoppers can be established based on their purchasing or browsing histories [43]. It is worth noting that there is also a mixed category named semi-supervised learning, where only some data are labeled [44]. In this category, a final output is known, and the learning algorithm should figure out how to structure and organize the data to achieve the desired outputs.
* **Reinforcement learning:** The basic idea of this class of learning is "learning by doing." An _agent_ learns to perform particular tasks in a feedback loop by trial and error, until achieving a desirable performance. The agent receives a positive or negative reward when it performs the task well or poorly, respectively. Hence, the agent aims to learn an optimal policy that maximizes the reward. A typical example of a reinforcement learning application is teaching a robotic hand to pick up a ball. Popular reinforcement learning algorithms include _Q-learning_, _Deep Q-learning_, and _Advantage Actor Critic (A2C)_ [45]. _Q-learning_ consists of finding the optimal policy to transition from one state of the system to another. It aims to learn a so-called _Q-table_, where "Q" stands for quality. Each value of the table encodes the quality of picking a specific action when the agent is at a given state (a sketch of the tabular update also follows this list). _Deep Q-learning_ replaces the Q-table with an ANN. This allows the algorithm to handle problems with a continuous state space and cases where the state space is prohibitively large. Finally, _A2C_ consists of building two different learning networks, named actor and critic. The actor is in charge of making optimal actions, while the critic network assesses their quality [46].
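To ground the clustering discussion, here is a minimal \(k\)-means sketch (our own illustration in Python, using Euclidean distance; the data, dimensions, and parameters are placeholders):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain k-means: assign each point to the nearest centroid,
    then recompute centroids, until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Distance of every point to every centroid, shape (n, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

labels, centroids = kmeans(np.random.rand(200, 2), k=3)
```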
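Similarly, to make the _Q-learning_ update concrete, the following minimal sketch (again our own illustration; the toy environment and reward are placeholders, not tied to any networking task) implements the tabular update rule \(Q(s,a)\leftarrow Q(s,a)+\alpha\,[r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime})-Q(s,a)]\):

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update: move Q[s, a] towards the
    bootstrapped target r + gamma * max_a' Q[s_next, a']."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy usage: 5 states, 2 actions, one observed transition.
Q = np.zeros((5, 2))
Q = q_learning_step(Q, s=0, a=1, r=1.0, s_next=2)
```

Deep Q-learning replaces the table `Q` with a neural network that maps a state to the vector of action values, which is what makes very large or continuous state spaces tractable.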
#### Iii-B2 Deep Learning Algorithms
An Artificial Neural Network (ANN) comprises a set of artificial neurons, also named perceptrons, which are computational components used to process and analyze data. ANNs are generally organized in layers of neurons, where neurons of a specific layer receive input from the layers before, operate on this input, and pass the output of their computations on to the layers that follow (though variations of this organization exist). Deep Learning (DL) is a sub-field of ML that is characterized by the application of ANNs with many layers. ANNs with more than three layers are usually referred to as Deep Neural Networks (DNN), and some modern neural architectures may include hundreds of layers. DL alleviates the need for manual feature engineering, allowing instead the system to automatically _learn_ features of the input data that are important, by hierarchically synthesizing higher-level features (e.g., shapes, in an image classification problem) from lower-level ones (e.g., edges or contours) as ANN layers progress. This is in contrast with other ML designs where humans need to explicitly define the representation of data in terms of input features. The main deep learning algorithms are as follows:
* _Feed-forward artificial neural networks_ are one of the most used deep learning forms, where data are fed from the first to the last (output) layer, through multiple hidden layers, and thus multiple computational neurons [41]. A feed-forward ANN is usually coupled to a back-propagation algorithm, that works back from the results (output layer) towards the first layer in order to correct errors and improve prediction accuracy of the neural network.
* _Sequence Algorithms_ deal with sequential-data problems (time series), such as speech recognition and language translation. _Recurrent Neural Networks (RNN)_, such as _Long Short-Term Memory (LSTM)_ networks, are typical examples of such algorithms. They are mainly based on an internal memory that saves what happened in the previous layer, in order to decide about the output of the current layer. For example, if we saved the first two words of the well-known phrase "Deep Learning Algorithms," it would be much easier to predict the third word, "Algorithms." Thus, recurrent architectures decide about their future outputs based on both the historical and actual states [47].
* _Convolutional Neural Networks (CNN)_ are another deep learning form that is mostly used for image recognition. As in the typical ANN structure, CNNs include input, hidden, and output layers. However, intermediate layers can include distinct layer types, such as convolutional, pooling, fully-connected, and normalization layers [48]. These different types of layers can learn both simple and more complex image features like colors and edges.
* _Generative Adversarial Network (GAN)_: It is based on CNNs to deal with unlabeled data (unsupervised learning) and extract common attributes from the data [49]. In particular, a GAN includes two competing neural networks. The first one generates new data examples (generator), while the second one evaluates the quality of such new data (discriminator). For instance, GANs have been widely used to generate new realistic images.
* _Auto-encoder_: It is another unsupervised deep learning algorithm, used to learn efficient encodings of unlabeled data. An auto-encoder comprises two main components: an encoder that compresses the input data into a code, and a decoder that reconstructs the input data from the code [50]. Applications include detecting anomalies and reducing the noise of images.
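As a concrete illustration of the feed-forward/back-propagation loop described above, the following minimal PyTorch sketch (our own example; the dimensions and random data are placeholders, not tied to any 6G task) trains a small fully connected network:

```python
import torch
import torch.nn as nn

# A small feed-forward network: 16 inputs -> two hidden layers -> 2 classes.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 16)            # 64 random samples, 16 features each
y = torch.randint(0, 2, (64,))     # random binary labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)    # forward pass
    loss.backward()                # back-propagation of the error
    optimizer.step()               # weight update
```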
DL models may be trained either in a centralized or a distributed fashion. Centralized learning can be performed by uploading all required data from all connected data sources, such as remote devices, to a central node, e.g., a cloud server, to train a global model. The resulting global model can then be deployed to all involved entities, or it can be accessed remotely as a service delivered from the cloud. In a mobile network context, centralized training optimizes the energy consumption of connected devices, which are typically battery-powered. At the same time, though, it is associated with other critical challenges: device privacy threats due to data sharing, and increased communication overhead and costs. To overcome such challenges, distributed learning emerges as a potential solution, due to its potential for network cost savings and its inherent privacy preservation. Enabling a set of learners to collaboratively build machine learning models without sharing their private data enhances data sovereignty and is a form of user empowerment. This is a step towards democratizing deep learning processes, while at the same time bearing the potential for resource savings at the cloud end by distributing the training load.
### _Background on Collaborative Deep Learning_
In this section, we give an overview of the main collaborative deep learning schemes, ranging from federated learning, to the recent split learning and SplitFed learning paradigms.
#### Iii-B1 Federated Deep Learning
Federated Learning (FL) enables a set of nodes (clients) to collaboratively learn a prediction model, without sharing their own data [51][52]. Thus, FL aims to build cooperative learning models, while protecting the privacy of learners' data. The FL process comprises three main steps (see Fig 2):
* Local learning initialization: During this first step, a central node, e.g., cloud server, specifies learning hyper-parameters in terms of the neural architecture and deep learning algorithm to be used, along with its configuration (number of layers and neurons, activation functions, learning rate, optimizer, dataset features, number of iterations, minimum required accuracy, etc.). Such parameters are then transferred to all the involved learners.
* Training of local models: Once receiving the learning parameters from the central node, each learner starts to build its local learning model leveraging its own data. The local models, i.e., neural network weights, are then communicated back to the central node after either the specified number of iterations is reached, or the needed accuracy has been achieved.
* Local models aggregation: During this step, the central node aggregates all the received local models to generate a global model, before sharing it with all learners. At this stage, different aggregation algorithms can be used, such as Federated Averaging (FedAvg) [17], FedProx [53], FedPer [54], and SCAFFOLD [55].
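To illustrate the aggregation step, the sketch below implements plain FedAvg, i.e., a weighted average of the clients' model weights by local dataset size. It is a minimal Python/PyTorch illustration of the algorithm in [17], not a production FL framework, and all names are ours:

```python
import copy

def fed_avg(client_states, client_sizes):
    """Aggregate client state_dicts into a global one, weighting each
    client by its local dataset size (assumes float-valued parameters)."""
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state
```

In a full round, each learner would train a local copy of the model for the specified number of iterations, send its `state_dict()` to the central node, and receive back the model produced by `fed_avg`.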
Fig. 2: Federated Learning Training.

Even though FL enables training neural networks in a distributed, collaborative way, it still presents critical challenges that should be addressed. One of the main challenges related to FL is _systems heterogeneity_, where the storage, communication and computing capabilities of involved learners may differ, mainly due to the heterogeneity in hardware (memory and processing units), network connectivity (Wi-Fi, 3G, 4G, 5G), and power source and state (e.g., battery level). Thus, it is possible that not every learner is able to train a neural network, nor to periodically share it with a central node [18]. Moreover, while FL avoids sharing learners' data by design, sharing model updates during the training process can reveal private information, either to the central server, or to a third-party [18].
#### Iv-B2 Split Deep Learning
To overcome FL's limitations, another collaborative deep learning technique called "Split Learning" or "Split Neural Network" (SplitNN) has recently been developed [56]. As illustrated in Fig. 3, it is used to build deep neural networks over multiple learners, without sharing their labeled data. In SplitNN, a deep neural network is split into multiple sections (sub-layers) and each section is locally trained on a different learner (user or server). Thus, the training of the learning model is performed by transferring the outputs of the last layer of each section (_smashed data_), also named the _cut layer_, to the next section. By this, SplitNN avoids sharing learners' raw data, and only the outputs of the cut layer are shared with the next learners. Specifically, the neural networks in SplitNN are trained through two main steps:
* Forward propagation: Each learner (for example an IoT device) trains a partial deep neural network up to the cut layer. The outputs of the cut layer are then transferred to the next learner (server), that continues the training without access to the data of the other learners (IoT devices).
* Backward propagation: This consists in back-propagating the gradients from the last section, to the first section of the neural network. Only the gradients of the cut layer are sent back from the server to the IoT devices.
This process is repeated until training the whole neural network and reaching the required accuracy. In practice, SplitNN can be configured in three different ways:
* Vanilla split learning: It is the simplest configuration, where the deep neural network is split between a set of learners and a server, where the last section of layers is located at the server. Learners start to train their neural networks until the cut layer. The weights of the cut layers are then sent by the learners to the server to complete the rest of training. For backward propagation at the server, the gradients are back propagated from the last layer towards the cut layer. The gradients at the cut layer are then communicated back to the different learners, to continue the backward propagation step.
* Split learning without label sharing: This configuration also consists in splitting the neural network among learners and a server. However, the label of each example is located at the learner's side. The neural network is partitioned in a way that learners maintain the first and the last layers of the neural network, so (i) the output of the last cut layer in the forward pass is sent back by the server to learners, which in turn (ii) start back propagation by building the gradients from the last section of the neural network without sharing the corresponding labels, and passing the gradients on to the server, which eventually (iii) sends back its output and the back propagation process is finalized at the learners' end. This configuration is ideal for applications where labels incorporate very sensitive information like patients' disease status.
* Vertically partitioned split learning: It is suitable when multiple institutions, for example different network operators, aim to train a common network over vertically partitioned data (i.e., where each institution holds a different set of features of the dataset) through a central server and without sharing their data. The deep neural network is split in a way that the institutions share the same neural network sections, albeit with different input features each. The last layers are located at the server. Institutions train their neural networks up to the cut layer, and the institutions' outputs at the cut layer are then aggregated and sent to the central server that continues the training process.

Fig. 3: Split Learning Training.
Compared to FL, SplitNN improves data privacy by sharing only the weights of a sub-section of the neural network, up to the cut layer, rather than sharing the model updates during the training process. In addition, SplitNN strongly reduces the computation required by the different learners to generate a global learning model, since each learner is in charge of only a part of the neural network.
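The cut-layer exchange at the heart of SplitNN can be summarized in a few lines. The PyTorch sketch below (our own illustration of the vanilla configuration; layer sizes and data are placeholders) shows that only the smashed data and their gradients cross the client/server boundary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

client_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # layers up to the cut layer
server_net = nn.Sequential(nn.Linear(32, 2))              # remaining layers
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.01)

x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))

# Forward pass: the client computes the smashed data and "sends" it.
smashed = client_net(x)
smashed_srv = smashed.detach().requires_grad_()  # what the server receives
loss = F.cross_entropy(server_net(smashed_srv), y)

# Backward pass: the server back-propagates to its cut layer, then
# returns the cut-layer gradient so the client can finish the pass.
opt_s.zero_grad(); loss.backward(); opt_s.step()
opt_c.zero_grad(); smashed.backward(smashed_srv.grad); opt_c.step()
```

For simplicity, the labels sit at the server here, as in the vanilla configuration; in the "without label sharing" variant, the last layers (and the labels) would remain at the client.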
Despite the main advantages of SplitNN, however, it presents a critical performance issue: The sequential nature of training in SplitNN makes client resources idle, since only one learner can be engaged with the server at one instance. In particular, in settings with multiple learners, after one learner finishes back propagation - thus a new version of the global model, partitioned between the learner and the server, is available - the next one needs to begin its forward pass on the most up-to-date model. Synchronizing the learner-side part of the model can take place either in a centralized (the learner-side model portion is uploaded by the last learner to a central server accessible by other learners) or in a peer-to-peer manner. In any case, new versions of the global model are created one learner at a time, while the rest remain idle. This may generate a considerable training time overhead, especially when the number of learners is large [20].
#### Iii-B3 SplitFed Deep Learning
SplitFed (or SFL) merges the two distributed ML solutions, FL and SplitNN, to construct an enhanced hybrid collaborative learning algorithm that follows the learner-server model [20]. It inherits the dual advantages of both FL and SplitNN. It partitions the model into learner/client and server sides, as in SplitNN, but all the sub-models are trained in parallel. In addition to the main server that existed in the earlier SplitNN design, a novel component is introduced in the architecture, called _Fed Server_ (see Fig. 4). The working process follows a series of steps: At first, the Fed Server starts the procedure by sending the global learner-side model portion to all participating clients. Next, all learners run the forward propagation in parallel using their own local data, up to their cut layer, and pass the smashed data to the cut layer of the main server. Following the SplitNN principle explained so far, the server takes over the rest of the forward propagation, calculates the cost function, and back-propagates up to the cut layer of the server. Notably, as we shall describe next, it is possible to execute this server-side process in parallel for multiple clients. Then, each client continues the back propagation on its learner-side model portion. The cycle of forward and backward propagation between the learners and the server is carried out for some rounds without the Fed Server. Then, the learners communicate their updates to the Fed Server, which aggregates them and creates a global learner-side model, which is sent back to all the involved learners.
Fig. 4: SplitFed Learning Training.

There are two ways to perform server-side synchronization:

* With Aggregation: The main server is responsible not only for training its part of the model over the smashed data received from clients, but also for aggregating the back-propagation results corresponding to each client's data into a single global server-side model at each learning epoch. This takes place by executing federated averaging over the values it computed during the backward pass on each individual learner's smashed data. It should be noted that the main server processes these smashed data _in parallel_.
* Without Aggregation: There is no aggregation at the main server, and the server-side model is updated in every single forward-backward pass, processing smashed data from the various clients _sequentially_. The smashed data themselves are received by clients synchronously. Thus, at each instance, the main server selects one client at random, in order to perform forward-back propagation. The clients' operations remain the same (forward-back propagation), where their models are sent periodically to the Fed Server to aggregate them and generate a global (learner-side) model.
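Putting the pieces together, one SFL round with server-side aggregation can be sketched as follows. This is our own schematic sketch: `train_split_pair` stands for the cut-layer forward/backward routine sketched in the SplitNN section, and `fed_avg` for the averaging helper sketched in the FL section; both helpers are assumptions for illustration:

```python
import copy

def splitfed_round(client_nets, server_net, client_data, client_sizes):
    """One SplitFed round: per-client split training (conceptually in
    parallel), then FedAvg on both the server- and client-side models."""
    server_states = []
    for net, (x, y) in zip(client_nets, client_data):
        srv = copy.deepcopy(server_net)    # per-client server-side copy
        train_split_pair(net, srv, x, y)   # forward/backward across the cut layer
        server_states.append(srv.state_dict())

    # Main server: average its per-client copies into one server-side model.
    server_net.load_state_dict(fed_avg(server_states, client_sizes))

    # Fed Server: average the client-side models and redistribute them.
    global_client = fed_avg([n.state_dict() for n in client_nets], client_sizes)
    for net in client_nets:
        net.load_state_dict(global_client)
```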
To summarize, SplitFed splits the neural network among the involved learners and the server, as in SplitNN, to optimize both data/model privacy and compute resource use. Moreover, SplitFed improves on training time, as compared to SplitNN, by integrating the parallel model update paradigm of FL. Table III compares the three collaborative learning forms, FL, SplitNN, and SplitFed, according to six main criteria: privacy of the learning model, model aggregation, training time overhead, needed computation resources, distributed computing, and access to the raw data.
It is clear that the three collaborative learning approaches enable a high degree of computation distribution, without any access to the raw data. However, SplitFed offers more advantages as compared to both FL and SplitNN by optimizing model privacy, learners' computation resources, and training time overhead. This represents the main motivation of our work to focus on SplitFed and demonstrate its benefits when leveraging it over B5G/6G wireless networks.
### _Background on Sixth-Generation (6G) Networks_
6G mobile networks are expected to evolve towards connected intelligence with the support of a wide range of services with diverse and stringent requirements. In this section, we provide an overview of the emerging 6G mobile networks, ranging from mobile network evolution, to today's 6G vision and development timeline of 6G-enabled networks.
#### Iv-C1 Evolution of Mobile Networks
During the last four decades, mobile networks have transformed through five different generations. Each new generation integrates more capabilities and technologies to enhance and empower our lifestyle and work. Before the 1980s, the pre-cellular mobile generation was referred to as the zeroth generation (0G) of mobile networks. It offered basic voice communication using devices such as walkie-talkies [57]. In the 1980s, the first generation (1G) of cellular networks was launched, to support analog cellular telephony [58]. Second-generation (2G) cellular telephony was introduced in the early 1990s. 2G featured a transition from analog to digital technology, providing new services.
Fig. 5: Mobile Networks Evolution.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Collaborative Learning** & **Model Privacy** & **Model Aggregation** & **Training Time Overhead** & **Computation Resources** & **Distributed Computing** & **Raw Data Access** \\ \hline \hline
**Federated Learning** & L & H & L & H & H & L \\ \hline
**Split Learning** & H & L & H & L & H & L \\ \hline
**Split Federated Learning** & **H** & **H** & **M** & **L** & **H** & **L** \\ \hline \end{tabular}
\end{table} TABLE III: Comparison of Collaborative Learning Algorithms (H: High, M: Medium, L: Low).
These services include MMS, picture messages, and text messages, in addition to voice communication [59]. The International Telecommunication Union (ITU) then launched initiatives to unify a frequency band in the 2000 MHz range, supporting a single wireless communication standard for all countries. The third generation (3G) was introduced based on this standard to enable new and advanced services, while optimizing network capacity [60]. These services include video calls, multimedia messages, mobile TV, GPS (global positioning system), etc. The fourth generation (4G) succeeded 3G cellular networks, introducing further improved mobile services, including Voice over IP (VoIP), online gaming, high-definition mobile TV, mobile web access, and 3D television [61].
Currently, multiple network operators are deploying 5G mobile communication worldwide to support further advanced services, such as ultra Reliable Low Latency Communication (uRLLC) to ensure a communication latency down to 1 ms, enhanced Mobile Broadband (eMBB) to achieve data throughput up to 10 Gbps, and massive Machine Type Communication (mMTC) to support a massive deployment of devices, in particular over 100x more devices per unit area as compared to 4G. In 5G, network availability and reliability are expected to reach 99.999% [62]. Indeed, network slicing and network softwarization are the main technology enablers of 5G that introduce more programmability, dynamicity, and abstraction of networks [63]. These capabilities have enabled promising applications including Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), Internet of Things (IoT), autonomous vehicles, and Industry 4.0 [64][65]. Recent developments have brought about several new concepts, such as Non-Orthogonal Multiple Access (NOMA), beyond sub 6GHz to THz communication, Edge Intelligence (EI), Self-Sustaining Networks (SSN), swarm networks, and Large Intelligent Surfaces (LIS) [3][4]. These concepts are expected to play a vital role in empowering the next generations of wireless networks. In addition, these concepts are also expected to be the main enablers of many new applications such as Unmanned Aerial Vehicles (UAV), smart grid 2.0, Extended Reality (XR), Holographic Telepresence (HT), space and deep-sea tourism, and Industry 5.0. However, the requirements of these applications, including accurate sensing and localization, availability of powerful computing resources, ultra-high data rates, extremely low latency, very high reliability and availability, surpass the 5G network capabilities [21][15]. This has motivated the research and industrial communities to envision 6G communication networks, which are expected to consider the emerging communication concepts and applications.
Fig. 5 gives the evolution of cellular networks, while showing the key features of each generation. It also illustrates the main envisaged 6G enablers, vision, requirements, and applications.
#### Ii-B2 Today's 6G Vision
As envisioned today, extreme peak data rates over 1 Tbps and a very low end-to-end delay of 0.1 ms are expected to be provided by 6G networks. To attain such goals, processing delays in the sub-microsecond range for specific tasks may be required, and this will be facilitated by the pervasive use of edge intelligence, a core theme in the 6G vision. Both network reliability and availability are expected to go beyond 99.99999%. 6G networks are expected to provide a high connection density of over 10\({}^{7}\) devices/km\({}^{2}\), and thus support the Internet of Everything (IoE), which connects massive numbers of Cyber-Physical Systems (CPS), devices, actuators, and sensors. Furthermore, extreme mobility up to 1000 km/h is expected to be supported by 6G, in addition to a spectrum efficiency estimated at 5\(\times\) that of 5G [3].
To enable emerging applications, 6G networks are expected to meet multiple new requirements, including massive Low-Latency Machine Type communication (mLLMT), Mobile BroadBand and Low-Latency (MBBLL), ultra-massive Machine-Type Communication (umMTC), and Further enhanced Mobile Broadband (FeMBB). These new requirements are enabled thanks to new emerging technologies in terms of Compressive Sensing (CS), distributed/collaborative learning, Edge AI, the THz spectrum, 3D networking, and blockchain/Distributed Ledger Technologies (DLT). To this end, considerable efforts are being made by the research community to develop and specify 6G technologies, applications, services, vision, and standards [66][67].
#### Ii-B3 Expected 6G Development Timeline
Development of 6G networks progresses alongside the finalization of 4G LTE-C, which followed LTE-B and LTE-Advanced, as well as the commercialization and deployment of 5G networks [29]. By 2023, the definition of the 6G vision in terms of requirements, development evaluation, standards, technologies, etc. is expected. The technical specifications of 6G are expected to be developed by standardization bodies like the 3rd Generation Partnership Project (3GPP) and ITU by 2026-2027 [29]. Network operators are also expected to initiate 6G research and development by 2026-2027, in order to perform 6G network trials by 2028-2029, and to start deploying 6G communication networks by 2030 [21, 29][22]. Fig. 6 illustrates the expected timeline of 6G standardization, development, and deployment.
## III SFL for 6G: Technical Aspects
### _Intelligent Physical Layer_
#### Iii-A1 Introduction, requirements, and existing solutions
One of the major features of 6G is the ability to connect a massive number of intelligent devices (IoE) [1]. Undoubtedly, this will lead to a dramatic growth in the number of users and emerging applications that impose diverse performance requirements: extremely high data rate, extremely high reliability, and ultra-low latency. As with previous cellular generations, spectrum scarcity remains an issue, making these goals challenging to achieve. Notwithstanding the wide spectrum offered by 5G, it is insufficient to cover future 6G needs. For that, additional frequency bands are a necessity. Multiband spectrum that combines sub-6 GHz, millimeter wave (30 - 300 GHz), Terahertz (0.06 - 10 THz) and non-radio frequencies (RFs) (visible and optical bands such as Li-Fi) will be a central solution for 6G networks [2]. To usefully exploit the limited resources, smart techniques for sharing and managing spectrum are mandatory. AI at the physical layer will further help to deal with problems that are cumbersome to model accurately using conventional mathematical methods.
In the literature, many works are devoted to the utilization of AI paradigms (ML, DL, FL) in the physical layer [68][69]. AI will render the transmission more reliable by improving different physical layer aspects, e.g., signal modulation, channel estimation, and error control. In [16], the authors proposed a channel estimation model based on Federated Learning, and their results show that their approach offers 16\(\times\) lower overhead than centralized learning.
Similarly, Automatic Modulation Classification (AMC) is an attractive solution widely used for intelligent radio systems. In a typical communication environment, the modulation scheme is shared between both the transmitter and receiver. However, this would increase the signalling overhead, while a sniffer could interpret and identify the modulation scheme used for transmission. AMC consists of identifying the modulation type of the received signal without prior knowledge of the transmitter modulation. A reliable modulation classifier needs to sustain a high accuracy and low loss under various channel conditions and SNR (Signal-to-Noise Ratio) levels. Lately, many studies address the integration of deep learning algorithms to replace conventional classifiers. For example, in [70] the authors proposed a new AMC multi-class model with four possible outputs (BPSK, QPSK, 8-PSK, 16-QAM) based on a Recurrent Neural Network (RNN), which is the most suitable for sequential data. The classifier exhibits noteworthy results under different noise conditions. A CNN architecture has also been successfully employed by [71] for the processing of graphical representations of signals in a spectrogram form. Recently, a distributed classification method for AMC based on federated learning has been introduced in [72]. The proposed solution, so-called FedeAMC, shows a slight performance gap with a centralized solution, i.e., CentAMC (less than 2%). Another solution to set up a reliable communication link at the physical layer in 6G systems is to use a huge number of antennas at the transmitter and receiver sides. This technology is called Massive MIMO (Multiple-Input, Multiple-Output) [73]. It enables the transmission/reception of signals from/to multiple users simultaneously. Beamforming is a thriving technique used in Massive MIMO. It is based on smart small antennas that focus the transmitted energy toward a specific direction, in order to create narrow beams destined for particular users. This method can boost the link's capacity by reducing interference, providing more signal paths and higher throughput. On the other hand, the orchestration of the massive number of antennas would be complex and hardly manageable. In this regard, many researchers leverage AI capabilities, by introducing new antenna design processes through ML. For instance, in [74], a FL-CNN based model was designed for analog beamformers, where simulation results demonstrated that the framework minimizes the overhead of channel state information (CSI) collection and transmission, and it is more tolerant to channel changes and imperfections.
#### Iii-B2 Challenges and how SFL can help
Despite the promising performance of traditional ML and FL in physical layer design, they present some drawbacks. Most studies apply centralized ML, which entails the availability of datasets at a central node, e.g., the base station (BS), wherein a transmission of local data from user equipment (UE) is a prerequisite. Thus, the BS starts the training process after collecting the required datasets from the respective sources. Further, in existing FL-based approaches, the generated datasets, e.g., the received pilots, are kept intact at the client side, and the base station forwards a replica of the same model to all clients (e.g., mobile phones). Accordingly, each client independently trains the whole model, which is prohibitive in terms of computing resources that may not always be available, due to heterogeneous hardware constraints. Another problem of FL stems from the migration of the total parameters of the physical model, which causes privacy and security issues. To cope with these limitations, SplitFed (SFL) could be a better alternative.
Fig. 6: Expected Timeline of 6G development [29].
Contrary to FL, the SFL technique does not share the entire model. As a matter of fact, the model is cleaved into two parts, one part for the clients and the other for the main server. Then, only the dedicated sub-model is migrated towards each client. Therefore, this will reduce the computing and energy consumption and raise the level of the model's privacy. In contrast to SplitNN, the SFL client-side model is trained by all the wireless devices at once, using their own raw data (CSI, received signal, beamformer information, etc.), which accelerates the learning stage. As mentioned earlier, SFL can be applied to various practical physical layer applications, ranging from channel estimation to error control. The base station can act as a bridge between the clients and the Fed Server to send the model updates for aggregation purposes. Indeed, SFL can be seen as a hybrid solution that combines the advantages of both centralized and distributed ML paradigms, in order to confront the performance parameters expected to surge in 6G networks.
#### Iv-A3 Realistic Scenario
In order to elucidate the use of SFL at the PHY layer, we propose an illustration of a realistic scenario for modulation recognition based on SFL (see Fig. 7). The model proposed in [71] used a CNN architecture, where the input is the corresponding spectrograms of the signals with a dimension of \(100\times 100\times 3\). The model comprises four convolutional layers with a kernel size of \(3\times 3\) and 64, 32, 12, and 8 filters, respectively. The size of both zero-padding and stride is set to 1. The pooling size of the max-pooling layer is (2, 2). The fully connected layer consists of 128 neurons. To apply SFL, we consider a system with \(k\) clients [\(c_{1}\), \(c_{2}\), \(\ldots\), \(c_{k}\)], and the client with the highest computing resources will represent the main server; we assume it to be the last client \(c_{k}\). First, we divide the global model \(W\) (see step 1 in Fig. 7) into \(W_{C}\) (client side) and \(W_{S}\) (server side) (step 2 in the figure). Then, we allocate the first five layers to the clients (Conv, Max Pooling, Conv, Max Pooling and Conv) and the remaining layers to the main server (Max Pooling, Conv, Fully Connected and Softmax). The used dataset is RadioML2016.10a [75], which considers 11 modulation methods, 20 different signal-to-noise ratios (SNRs), and 1000 signals per modulation mode per SNR. \(700\) random signals, per modulation mode per SNR, are chosen as training data, and the remaining are divided into validation and test data. We distribute the dataset between the \((k-1)\) clients for training, so each client will have [\(700\times 11\times 20\,/\,(k-1)\)] signals. In the first iteration, the clients from \(c_{1}\) to \(c_{k-1}\) train \(W_{C}\) up to the third Conv layer. Then, each of them applies the ReLU activation function and transmits the output to the main server (client \(c_{k}\)) (step 3 in the figure). The rest of the forward operation is performed by the main server and the recognition accuracy is measured. Next, the client \(c_{k}\) back-propagates the model up to its Max Pooling layer and sends the activation gradients to the clients, which continue the back-propagation on their client-side local models (step 4 in the figure). After some iterations, the \(k-1\) clients forward their local weights to the Fed Server to reconstruct the new global \(W_{C}\). The resulting averaged model is then re-forwarded to all clients and the process restarts (step 6 in the figure). The training is stopped once the validation loss stops decreasing (i.e., stabilizes). Using SplitFed in this scenario is beneficial on several levels. First, it saves time and energy on the client side, as each learner trains just five layers instead of nine. It is also advantageous in terms of storage requirements because of the small number of trainable parameters involved, compared to FL, which typically requires a large number of parameters due to the learnable layers. All these points make SplitFed more suitable, especially when only limited computational capabilities are available.
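The client/server partition described above can be written down directly. The PyTorch sketch below uses the layer sizes from the scenario (four \(3\times 3\) convolutions with 64, 32, 12, and 8 filters, \((2,2)\) max-pooling, a 128-neuron fully connected layer, and 11 output classes); the flattened dimension follows from the \(100\times 100\times 3\) input, and the ReLU placements are our assumption:

```python
import torch.nn as nn

# Client side: Conv(64) -> MaxPool -> Conv(32) -> MaxPool -> Conv(12).
client_part = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                         # 100x100 -> 50x50
    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                         # 50x50 -> 25x25
    nn.Conv2d(32, 12, kernel_size=3, padding=1), nn.ReLU(),  # cut layer
)

# Server side: MaxPool -> Conv(8) -> FC(128) -> 11-way classifier.
server_part = nn.Sequential(
    nn.MaxPool2d(2),                                         # 25x25 -> 12x12
    nn.Conv2d(12, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 12 * 12, 128), nn.ReLU(),
    nn.Linear(128, 11),   # softmax applied inside the cross-entropy loss
)
```

Each client runs `client_part` on its spectrograms and transmits the resulting \(12\times 25\times 25\) smashed data to the main server, which completes the pass through `server_part`.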
### _Resource Management_
#### Iv-B1 Introduction, requirements, and existing solutions
Resource Management (RM) in 6G will face significant challenges. Owing to the unprecedented number of expected connected devices, network resources are projected to be under high strain. Accordingly, to meet the diverse needs of each device and/or service, RM algorithms should be able not only to allocate the resources but also to optimize, adapt, prioritize and secure the allocation process. Typically, RM problems are solved using optimization- and heuristics-based methods. However, the foreseeable services and their performance requirements in 6G will hinder the utilization of traditional solutions. For instance, one of the key capabilities of 5G and beyond is Network Slicing. The general rule behind this technology is to build multiple virtual networks (aka slices) on top of a single physical infrastructure [76]. A slice is a set of resources (memory, computing, and network) and functions (virtual network functions) customized to support a specific service, and deployed at different levels, such as RAN, core network, edge and cloud computing facilities [77]. The allocation of resources to the different running slices should be managed in an automatic and flexible way. For that, a range of research works has tackled AI-assisted network slicing along the complete lifecycle of a slice [78, 79], from admission control [80] and resource orchestration [81, 82] to radio scheduling [83]. It should be noted that while significant efforts have been put in these directions in the context of 5G, emerging 6G use cases will challenge Network Slice customization, management and orchestration in multiple ways: application requirements will cut across the traditional 5G service classes (eMBB, URLLC, and MIoT) [22], which implies that slice resource management mechanisms will need to address new throughput-reliability-latency (and potentially privacy) trade-offs. Furthermore, handling massive numbers of potentially short-lived slices can strain the control and management planes.
Another aspect of RM is the power allocation problem, where the transmitted power should preserve the signal's quality without causing interference. In [84], the authors developed a DL model for MIMO power control that predicts the power allocation profile of any UE based on its position; the model was trained to learn the mapping between the UE's position and the optimal power allocation. The limited radio resources and the massive channel access expected in 6G make the orthogonal multiple access schemes (e.g., TDMA, FDMA and CDMA) used in previous generations of cellular networks unable to fulfill the needs of users. To handle this, NOMA (Non-Orthogonal Multiple Access) is a good candidate for 6G. It serves multiple users on the same resource block (same time and frequency), which brings about inter-user interference [85]. To mitigate the latter, a successive interference cancellation (SIC) process is applied. In [86], a DNN-aided SIC architecture was studied, where all the users' signals are successively decoded by the base station from the strongest to the weakest. The input of each DNN is the composite signal that contains all received signals, plus the decoded signals of all previous users (the input for the first user is only the composite signal), and the output is the decoded signal of the corresponding user. Besides, the high speed and high mobility of some nodes (e.g., drones and air-taxis) and the use of mm-waves in 6G may lead to many handover (HO) events [87]. Cell selection and handover management are major issues that must be handled in 6G to allow users to continue communicating smoothly without interruption while moving from one AP/BS to another. DL-based HO techniques have proven effective in the literature. In [88], Hu et al. proposed an intelligent HO control method where the handover decision is based on the results of a deep learning-based trajectory prediction model. According to the test results, the method achieves higher accuracy than traditional systems (8% better on average).
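The successive structure of the DNN-aided SIC receiver in [86] can be sketched as follows. Only the input/output interface (composite signal plus previously decoded signals in, one user's decoded signal out) follows the text; the MLP architecture, layer sizes, and block length are illustrative assumptions.

```python
import torch
import torch.nn as nn

N = 64          # complex samples per block, represented as 2N reals (assumption)
K = 3           # number of NOMA users, ordered from strongest to weakest

class SICStage(nn.Module):
    """Decodes user i from the composite signal plus previously decoded signals."""
    def __init__(self, num_prev):
        super().__init__()
        in_dim = 2 * N * (1 + num_prev)   # composite + previously decoded signals
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * N),        # estimate of user i's signal
        )
    def forward(self, composite, decoded_prev):
        return self.net(torch.cat([composite] + decoded_prev, dim=-1))

stages = [SICStage(i) for i in range(K)]

composite = torch.randn(8, 2 * N)         # batch of received composite signals
decoded = []
for stage in stages:                      # successive decoding, strongest first
    decoded.append(stage(composite, decoded))
```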
#### Iv-B2 Challenges and how SFL can help
The frequent changes in network conditions and configurations (fluctuating numbers of users, dynamic network status) degrade traditional resource allocation methods, which assume static networks and depend on fixed network models. Over the last decade, a considerable number of works leveraging data-driven techniques to solve resource management issues in B5G/6G networks have been proposed [89, 90]. Data-driven approaches permit training a model on a large amount of data, learning the relationship between input and output, and forecasting an optimal resource allocation. Among the manifold data-driven algorithms that can be used for resource allocation, we find traditional DL schemes, reinforcement learning with its DL-based variants such as Deep Q-Learning, and FL. However, the proposed techniques cannot deal with all the challenges, especially when executed on resource-constrained client devices. Training a complex FL-based resource management model on low-resource devices can lead to many problems: the whole model may not fit into the small memory of the device, and it may not be trainable due to the lack of compute resources. In addition, training the model will consume considerable energy and time. In this setting, to enable training on resource-constrained devices, the model needs to be split into shards between the different entities, which optimizes resource utilization with a fast convergence time. Hence, SFL is a recommended technique for 6G network resource allocation optimization, especially when device resources are scarce.
#### Iv-B3 Realistic Scenario
To demonstrate the efficiency of SFL in solving resource management issues, we project its application onto Network Slicing architectures in 5G and beyond. In [90], the authors propose a framework called ADAPTIVE6G based on the Transfer Learning (TL) paradigm, which offers clear advantages by reusing an already trained neural network model, instead of developing a fresh model from scratch, to solve related problems [91].
Fig. 7: SFL-based modulation recognition application.
ADAPTIVE6G considers three slices: A (eMBB), B (mIoT), and C (URLLC). Based on the data collected from all slices, it builds a conventional deep neural model \(M_{DNN}\) to predict the total network loads. \(M_{DNN}\) contains five neural layers: the input layer, the output layer, and three hidden layers. After optimizing \(M_{DNN}\), the learned weights are retained as TL parameters to train a new model called \(M_{ADAPTIVE6G}\) using the dataset of each slice individually, yielding three models \(M_{eMBB}\), \(M_{mIoT}\) and \(M_{URLLC}\) (one per slice). Training the entire model on the entire dataset may cost considerable time and energy for small, resource-limited devices. In this scenario, SFL may be applied in two ways:
* First, with the \(M_{DNN}\) model, by splitting the five layers between the clients within each slice and the main server. As depicted in Fig. 8, the first two layers run on the eMBB devices whereas the last three layers run on the main server, represented by the in-slice manager entity (ISM). In this context, the in-slice manager plays a twofold role, acting as both the main server and the Fed Server for all slice devices. After some iterations, each in-slice manager forwards the obtained model to the Slice Orchestrator Manager (SOM) to create the global model.
* Second, applying SFL on the \(M_{ADAPTIVE6G}\) model inside each slice would be immensely beneficial in terms of efficient use of network resources. To this end, we suggest a resource-aware split strategy where the number of layers running in each slice is not fixed but varies per slice based on its device capabilities (see the sketch after this list). Using this approach, we propose to assign just one layer of the \(M_{ADAPTIVE6G}\) learning model as the client segment inside the mIoT slice, due to its low-power devices, while the remaining four layers are sent to the main server. Following the same concept, eMBB and URLLC devices can host more layers, as they have more computing resources and larger memory than mIoT slice devices. Concretely, we suggest reserving two layers for eMBB and URLLC clients and running the other three layers on the main server. In this scenario, the ISM acts as the main server in close proximity to slice clients whilst the SOM entity acts as a Fed Server that aggregates the sub-model updates contributed by the participating clients.
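A hedged sketch of such a resource-aware split decision is given below; the capability thresholds and scores are illustrative assumptions, not values from [90].

```python
# Resource-aware split: decide how many of the model's layers a slice's
# devices should host. Thresholds and capability scores are illustrative.
TOTAL_LAYERS = 5  # layers in the M_ADAPTIVE6G model

def client_side_layers(slice_name: str, mem_mb: float, cpu_score: float) -> int:
    """Return the number of client-side layers for a given slice."""
    if slice_name == "mIoT" or mem_mb < 64 or cpu_score < 0.2:
        return 1          # low-power devices train a single layer
    return 2              # eMBB / URLLC devices can host two layers

for s, mem, cpu in [("eMBB", 512, 0.8), ("mIoT", 32, 0.1), ("URLLC", 256, 0.6)]:
    k = client_side_layers(s, mem, cpu)
    print(f"{s}: {k} client-side layer(s), {TOTAL_LAYERS - k} on the main server")
```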
### _Intelligent Edge Computing_
#### Iv-C1 Introduction, requirements, and existing solutions
Copious volumes of data are generated incessantly by various kinds of ubiquitous devices. The transmission of this amount of data to remote cloud servers puts a strain on the network infrastructure, while its centralized storage and processing require massive cloud resources. In the reverse direction, massive content and service delivery, as well as the bounded latency requirements of real-time applications, call for resource disaggregation and the delivery of services from locations closer to end users. The Multi-Access Edge Computing (MEC) paradigm offers answers to these challenges [92]. It consists in moving computing resources close to the data source and executing network operator or third-party services at telco-operated edge data centers close to the radio access network [93, 94]. ETSI provides a comprehensive set of standards specifying various aspects of the MEC architecture [95]. The integration of AI algorithms into the edge has enabled the emergence of a new form called Intelligent Edge Computing (IEC), a missing element in 5G networks [96]. IEC will be an important component of future 6G networks, making services more intelligent, secure, autonomous, reliable and scalable. The appearance of distributed learning schemes such as FL and SplitNN has supported its progress by utilizing edge capabilities to train and share models, providing added value and optimized services. A wave of applications stimulates the deployment of edge-native solutions, such as live video-based facial recognition in smart spaces and air pollution monitoring, to name a few, that call for real-time data processing. IEC has started to draw keen interest among specialists and research communities, wherein a series of works studies the intersection between AI and edge computing. A recent study in [97] provides a comprehensive survey on IEC technologies in 6G. This article describes the necessary IEC concepts and raises new open challenges and future directions for IEC within 6G networks. In [14], the authors propose a self-learning architecture based on self-supervised Generative Adversarial Nets (GANs) to illustrate the performance improvement that can be achieved by automatic data learning and synthesizing at the edge of the network. Self-learning is a prominent field in ML that allows automatic data collection, label generation, feature extraction, and model construction without human involvement. Self-supervised learning is one of the principal axes of self-learning and permits the generation of labeled datasets from unlabeled data [98].
#### Iv-C2 Challenges and how SFL can help
Although intelligent edge computing is an attractive technology to cover the limitations of the cloud, some challenges need to be addressed, notably security- and privacy-related ones, wherein different types of attacks may be launched, such as data poisoning, data evasion, and privacy attacks. At the same time, there is a persistent need for the cloud, in particular for big data processing, due to the limited computation and storage capabilities of edge servers. Therefore, lightweight AI algorithms must be utilized to provide smart applications for edge scenarios. In effect, SplitFed and IEC form a complete unit, where each enhances and emphasizes the qualities of the other. The integration of SplitFed will accommodate IEC's technology requirements: thanks to its model segmentation feature, a large model can be optimally trained at the edge on the massive data generated in smart spaces, permitting a more efficient use of device resources. Also, with IEC, both the Fed Server and the main server would be deployed at the edge network instead of the cloud, which reduces the distance between servers and edge devices, leading to low-latency connections in the forward and backward propagation steps, higher training speed, and less network traffic. Furthermore, in addition to protecting user data, SFL improves model privacy and ensures high reliability due to the high number of edge nodes (user devices and servers) that participate in the training process. Moreover, the proximity feature makes the prediction of the end-user's location easier and helps in training SFL models for localization-based services.
#### Iv-C3 Realistic Scenario
To depict how SFL can be applied in Intelligent Edge Computing, we examine a realistic scenario. In [99], the authors propose a MEC deep learning-powered framework for the optimal execution of vehicle collision detection and avoidance services. The proposed architecture involves first predicting the density of vehicles to be covered by a MEC host; then, according to the observed vehicle density, the required MEC computing resources are deduced. To predict vehicle mobility, a long short-term memory (LSTM) based model is used. It consists of five layers: a fully connected input layer of \(56\) neurons, three stacked LSTM layers, each with \(56\) neurons, and an output layer of \(45\) neurons. The evaluation was performed on real-world taxi GPS data, consisting of 464019 entries recorded over 30 days in the city of San Francisco [100]. The system assumes that each taxi periodically forwards the vector (timestamp, ID, GPS coordinates) to a central server. In this regard, the incorporation of SFL could bring significant improvements. Fig. 9 illustrates how SFL can help in enhancing mobility prediction in the proposed framework. First, the data remain on the participating taxis and each taxi carries out a part of the model training. As stated before, only model parameters and smashed data would be transferred over the network. Moreover, both servers involved in the SFL paradigm would be at the edge, near the taxis, which will enhance learning performance. Once a vehicle completes its local training task, it sends the smashed data to the nearest base station, which plays the role of a bridge between the taxis and the main server. Next, the main server continues with the forward/backward propagation procedures and returns, through the closest base station, the adjusted gradients to the taxis' cut layers for the rest of the training. The process is repeated until an expected level of accuracy is reached.
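The five-layer mobility model and a possible SFL cut point can be sketched as follows; the layer widths follow [99], while the input feature size and the position of the cut layer are our assumptions.

```python
import torch
import torch.nn as nn

class TaxiClientModel(nn.Module):
    """Client side: dense input layer + first LSTM layer, trained on the taxi."""
    def __init__(self, in_features=3):           # (timestamp, lat, lon) per step
        super().__init__()
        self.fc_in = nn.Linear(in_features, 56)
        self.lstm1 = nn.LSTM(56, 56, batch_first=True)
    def forward(self, x):
        h, _ = self.lstm1(torch.relu(self.fc_in(x)))
        return h                                  # smashed data sent to the edge

class TaxiServerModel(nn.Module):
    """Server side at the edge: remaining two LSTM layers + 45-neuron output."""
    def __init__(self):
        super().__init__()
        self.lstm23 = nn.LSTM(56, 56, num_layers=2, batch_first=True)
        self.fc_out = nn.Linear(56, 45)
    def forward(self, smashed):
        h, _ = self.lstm23(smashed)
        return self.fc_out(h[:, -1, :])           # prediction from last time step

traj = torch.randn(4, 30, 3)                      # 4 taxis, 30 GPS samples each
pred = TaxiServerModel()(TaxiClientModel()(traj))  # shape: (4, 45)
```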
### _Privacy, Trust, and Security_
#### Iv-D1 Introduction, requirements, and existing solutions
6G is a hyper-connected network that allows communication over ground, air, sea and space [101], connecting the digital, virtual, and physical worlds [102]. Undoubtedly, this opens up a range of issues in terms of privacy, trust, and security. How to deal with threats, vulnerabilities, and attacks in an ultra-dense, heterogeneous, and complex network? How to assure the CIA triad (Confidentiality, Integrity, and Availability) and offer the best user experience? How to build a trustworthy network and defend users' personal data against bad practices? In the literature, a plethora of thematic solutions applying varied approaches has been proposed.
In recent years, Distributed Ledger Technologies (DLT) such as Blockchain have been among the most explored techniques in this field due to their exclusive features such as decentralization, replication, immutability, transparency, and traceability [103]. This is manifested in the ongoing discussion on the integration of Blockchain with networks beyond 5G [104, 105]. In such networks, deployment-cost challenges arise from operating smaller (and denser) cells at higher radio frequencies, along with accountability challenges from supporting multi-vendor settings. These challenges call for efficient and trustworthy _resource and infrastructure sharing_ among different providers.
Fig. 8: An illustration of SFL for Resource Management (First Scenario).
Blockchain can be seen as the fabric to support the automated, transparent, and accountable SLA management mechanisms needed to implement such sharing [106]. Accountability and transparency are critical to build trust among the involved stakeholders, such as network operators and device vendors. From a technical standpoint, smart contracts executing on the Blockchain can be used, among other things, to encode rules that create and regulate a RAN resource marketplace [107, 108], or to implement automated negotiation mechanisms among infrastructure providers for offering end-to-end network slices to vertical service providers [109].
On the federated learning front, Issa et al. [110] and Zhu et al. [111] survey recent works that integrate Blockchain into federated learning design to address specific security and performance issues of the latter. This body of works addresses such issues in two major ways: (i) Using Blockchain and the associated payment mechanisms to incentivize FL nodes to contribute computation resources in exchange for specific rewards. (ii) Replacing the traditional centralized FL design with a Blockchain-powered, decentralized one, where there is no single node that acts as the FL server; instead, this role is shared by multiple FL nodes in a peer-to-peer manner, where nodes exchange encrypted model updates and smart contracts execute secure model aggregation, with the global model being recorded in the Blockchain in an immutable, transparent, and reliable way.
Contemporary cybersecurity solutions leverage the capabilities of machine learning, since it enables the automatic detection of abnormal behaviors. Several ML algorithms have confirmed their reliability in analyzing traffic data and detecting malicious attacks. For instance, in [112], the authors proposed a centralized solution combining an RNN with an autoencoder to detect DDoS attacks. In another work [113], CNNs and RNNs were used for Android malware detection. Recently, FL has also been actively introduced for malicious attack detection. In [114], the authors proposed an FL framework using MLP and AE models to detect malicious software on IoT devices. In another work [115], the authors leverage FL to enhance privacy protections in 6G cybertwin networks.
#### V-B2 Challenges and how SFL can help
With its vision of connected intelligence, 6G is expected both to feature AI-driven operation and to support key use cases that make heavy use of AI (see Section IV). Therefore, it inherits AI security and privacy challenges that have received significant attention. Based on the model life cycle (training and testing), four types of relevant attacks can be distinguished: model extraction and model inversion attacks (privacy threats), and adversarial and poisoning attacks (security threats) [116]. The goal of a model extraction attack is to generate a substitute model architecture that closely approximates the original model, based on a query dataset and the relation between input and output pairs. The model inversion attack, proposed in 2015 by Fredrikson et al. [117], consists of finding an input that highly resembles a record used in the training set. In adversarial attacks, the attacker leads the target model to report false predictions with high confidence. In poisoning attacks, the adversary focuses on polluting the model's training data. A variety of techniques have been employed for privacy-preserving DL, for instance differential privacy (DP) and homomorphic encryption (HE). The former shields against the inversion attack by applying noise to the input data, whilst the latter is a cryptographic technology that enables computation and processing on encrypted data without revealing the original data.
Fig. 9: SFL-enabled vehicle mobility prediction for a collision avoidance system.
However, both mechanisms raise model privacy-accuracy trade-offs. The defenses against DL security threats fall into two main categories: adversarial defenses, such as pre-processing and malware detection, and poisoning defenses that aim to remove poisoned samples during the training phase. At present, there is no universal approach to address all deep learning privacy and security issues. A synergy between FL and SplitNN can be used to mitigate some of these problems, particularly model privacy violations. Given that the model is split into sub-models and the client-side part exists in duplicate instances, the model is more robust and reliable against model-targeted attacks. For instance, if one sub-model is compromised, the rest remain intact and the attacker cannot infer all the attributes and parameters of the model.
#### V-D3 Realistic Scenario
The core objective of an Intrusion Detection System (IDS) is guarding data, services and applications against threats and attacks. Standard data-driven IDS requires very large amounts of network traffic data in a central location for training purposes. However, gathering data at a single site invites intrusion attempts and facilitates data theft. In addition, the transmission of data over a vulnerable environment endangers its security. Furthermore, centralized training on huge network traffic volumes can affect latency, whereas IDS demands analysis responsiveness. The authors in [118] propose an unsupervised deep learning approach for IDS that encompasses a one-dimensional convolutional auto-encoder (1D CAE) and a one-class SVM classifier (OCSVM). The former (CAE) is utilized as a feature representation learning method while the OCSVM is used for attack detection. Note that one-class classification is considered because of the imbalanced IDS dataset (the model is trained only with knowledge of normal traffic). SFL applied to Intrusion Detection Systems can provide an effective strategy by maximizing work division. For instance, in the feature learning phase, each host in the network downloads and trains its client-side CAE model (the encoder part) locally. Afterwards, the server side retrieves the compressed data (stored in the bottleneck layer) and decompresses it (the decoder part). It then calculates the MSE reconstruction loss and back-propagates the model parameters. The cycle continues until convergence is reached. The application of SFL at this stage alleviates the computational complexity of the central processing server, preserves data privacy and enhances bandwidth utilization.
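A minimal sketch of this split 1D CAE follows; the filter counts and the 78-dimensional feature vector (e.g., as in CIC-IDS2017) are illustrative assumptions rather than the exact architecture of [118].

```python
import torch
import torch.nn as nn

FEATURES = 78    # flow features per record, e.g., as in CIC-IDS2017 (assumption)

client_encoder = nn.Sequential(          # runs on each monitored host
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 8, kernel_size=3, padding=1), nn.ReLU(),  # bottleneck
)

server_decoder = nn.Sequential(          # runs on the main server
    nn.Upsample(scale_factor=2),
    nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=3, padding=1),
)

x = torch.randn(32, 1, FEATURES)         # batch of normal-traffic records
code = client_encoder(x)                 # compressed representation (smashed data)
recon = server_decoder(code)
loss = nn.MSELoss()(recon, x)            # reconstruction loss on the server
loss.backward()                          # back-propagation crosses the cut layer
```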
### _Zero Touch System Management_
#### V-E1 Introduction, requirements, and existing solutions
The distributed network architecture and multi-service support enabled by new technologies (SDN, MEC, network slicing, and NFV) increase the complexity of managing 5G and beyond networks, where existing solutions are inefficient at managing, monitoring and orchestrating all the operations and services of the network. In 2017, ETSI adopted a new framework called Zero-touch Network and Service Management (ZSM) [119]. It aims to minimize human intervention by enabling the full automation of all processes and networking services. Emerging disciplines such as AI, ML, DL and Big Data play a significant role in the self-governing of the network (for instance: self-configuration, self-optimization, self-healing and self-protection). Note that the aforementioned attributes can be expanded to self-* to support more of the network's autonomic capabilities. This helps reduce typical causes of human error, improve network performance, and likewise shorten time and operational costs. Many research works analyze the benefits of integrating AI/DL algorithms into ZSM [120, 121] in order to create an automated network for customers. However, the success of the full automation process depends on multiple parameters, such as the learning algorithm being used and the quality of the input data. Besides, many relevant challenges will accompany this network transformation, as we will see in the next section.
#### V-E2 Challenges and how SFL can help
To deal with the increased network complexity of beyond-5G systems, full E2E automation is needed, and AI-based ZSM offers a good answer. However, it also has limitations. For instance, achieving high accuracy with a short training time is one of the fundamental challenges in AI/ML model-based ZSM systems. Interface-level security issues (e.g., Open API security threats), E2E management, scalability, privacy, and near-real-time operation are among the challenges that confront the application of ZSM. As we have seen in the earlier sections, SplitFed could be introduced at any of these points. For instance, it could be applied in the case of network multi-tenancy, where multiple slices have a variety of resource requirements (including radio access network, core network, and cloud computing resources) provided by different administrative domains. The latter possess relevant and pertinent data for global management and orchestration procedures, but are less willing to share it with the global orchestrator. Under these circumstances, the collaboration feature of SplitFed among the administrative domains preserves data privacy while enabling network learning.
#### V-E3 Realistic Scenario
It would be tricky to disentangle ML/DL and ZSM; both concepts are elemental parts of the automation of network operations. ML/DL can be integrated into the management of the several network categories described under the label of the FCAPS model (Fault, Configuration, Accounting, Performance and Security). In this example, we focus on the work presented in [122], where the authors formulate an FL-based solution for performance and security functions. The proposed model predicts slices' service-oriented Key Performance Indicators (KPIs) so as to react quickly to the degradation of one or more QoS parameters of a running network slice and maintain the specifications of a Service Level Agreement (SLA). The authors assume the presence of an in-slice manager to monitor the service-level KPIs and train the FL local model. SplitFed, as an intelligent, decentralized and cooperative solution, could be used to further enhance the performance of the model. Instead of training the entire deep neural network model, each in-slice manager trains a sub-model using its private data subset. After some forward-backward passes between the participating clients and the main server, the client-side sub-models are aggregated by the Fed Server. Considering the parallel and continuous aspects of SFL, the convergence rate of the proposed model is expected to improve.
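The Fed Server's aggregation step in this scenario reduces to a plain federated average over the client-side sub-models \(W_{C}\) uploaded by the in-slice managers. The following is a minimal sketch; it assumes each manager uploads a PyTorch `state_dict` whose entries are all floating-point tensors.

```python
import copy

def fedavg(client_states, weights=None):
    """Average a list of state_dicts, optionally weighted by local data volume."""
    n = len(client_states)
    weights = weights or [1.0 / n] * n
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(w * s[key] for w, s in zip(weights, client_states))
    return avg

# Usage: global_Wc = fedavg([m.state_dict() for m in in_slice_manager_models])
```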
As can be observed throughout this section, all the examined papers apply either traditional centralized learning or federated algorithms as a distributed solution. In Table IV, we highlight, for a list of selected works, how SFL can enhance both techniques by identifying the corresponding SFL agents (clients and servers), as well as the benefits that would accrue from the application of SFL.
## IV SFL for 6G Use Cases
### _Industry 5.0, Digital Twin, and Autonomous Robots_
#### Iv-A1 Motivation
Digital Twin technology provides great impetus to the development of governments and businesses in many fields such as healthcare, industry, education and sports. It consists in creating a connection between the virtual space and the physical world [131] by developing a digital replica of a physical asset (animate or inanimate). It has innumerable added-value applications, for instance creating an online meeting via avatars that have the same features as physical persons (voice, behavior and intelligence). At the Qatar World Cup, the digital twin concept was applied by bridging the digital and physical stadiums: the digital replica ingests real-time data from millions of IoT devices that help monitor the situation at every stadium (climate, security, lights, etc.) so that the appropriate action can be taken at the right time. Likewise, the introduction of automation and robotics has largely changed the way of working in several areas, wherein some jobs have completely disappeared and been replaced by machines. Examples include agricultural robots for fruit harvesting, plant irrigation and scanning, medical robots for assisted surgery, and commercial robots for delivery and sale. Both aforementioned technologies have a profound impact on the development of the current industry, which has passed through many phases, from water power, the steam engine, electricity, oil and computers to Industry 5.0, which fuses all the emerging technologies. Digital Twin has been used in the maritime industry for shipbuilding processes in order to comprehend a ship's behavior under different conditions by designing a cyber-physical system (CPS), and therefore improve the safety of marine transportation. Cobots, or collaborative robots, constitute one of the most remarkable supporting technologies in Industry 5.0. Unlike traditional robots that work for humans, Cobots focus on integrating the human into the manufacturing process by assigning dull tasks to machines and entrusting duties that demand critical and cognitive thinking to humans [132]. This collaboration and communication make it possible to leverage human intellect and enhance the industrial process.
#### Iv-A2 How SFL can help
AI and ML play a critical role in the future industry to concretise the use of Autonomous Robots and Digital Twins. For instance, to create a digital model for predictive maintenance systems [133, 134], a pool of real-time data generated by the sensors embedded in the physical system is collected, then processed and analyzed by a central unit to be leveraged by the DT model (for potential enhancements to the real asset). However, the recurring transfer of complex system data to a central entity may result in massive network traffic and data privacy leaks. This was the basic motivation for adopting new advanced distributed solutions. For instance, in [135] the authors propose a new Federated Learning framework for the Industrial IoT where industrial devices (robots, excavators, construction machinery, etc.) perform their manufacturing tasks locally (e.g., training a defect detection model) without sharing their own datasets. This increases data privacy and decreases communication costs. However, the authors do not consider some network constraints, such as bandwidth utilization and the potential vulnerabilities that could be caused by transferring the whole global model to the industrial devices. SFL, as a new technology, could be integrated with Industry 5.0, Digital Twins and Autonomous Robots to deal with issues not previously considered in the development of smart manufacturing models. For instance, for Digital Twin-enabled 6G networks, SFL can improve the efficiency of machine learning models dedicated to various twin sources to manage the real system and predict its future states, by ensuring cost efficiency, resource-optimized operation and instant wireless connectivity that keeps the digital plane synchronized with the physical system. Likewise, SFL models embedded on robots and Cobots could improve learning performance by providing distributed and parallel training of complex systems over shared learning models (client- and server-side) and various data islands.
### _Connected and Autonomous Vehicles_
#### Iv-B1 Motivation
Preventing car crashes and saving human lives are among the root motivations for smart traffic systems. Recent developments in wireless communication technologies have brought rapid and massive changes to the automotive industry. A new era of safer and smarter transportation is dawning, namely via connected autonomous vehicles (CAVs) [136]. The concept of a connected vehicle (CV) means that the vehicle is equipped with sophisticated communication and sensing modules (GPS, radars, cameras, sensors, etc.) that allow it to exchange information with its surroundings (other vehicles, infrastructure and personal devices), giving birth to many applications summarized under the term V2X (Vehicle-To-Everything). An autonomous vehicle (AV) refers to a vehicle that has the capacity to react by itself to any road event without the need for a driver (braking, steering, obstacle avoidance, etc.). A vehicle that can carry out both is termed a CAV. To enable these activities, two sorts of technologies are used. The first pertains to connectivity and features many developed standards such as DSRC and C-V2X [137]. The second is AI technology and its branches (see Section II), dedicated to the automation part. In view of the stringent and diverse Quality-of-Service (QoS) needs imposed by smart-transportation applications, which are data-intensive and delay-sensitive, 6G is expected to be the foundation of ITS deployment because of its uniqueness in terms of reliability, latency and massive connectivity. Artificial Intelligence will likewise be one of the prime pillars of the future 6G-supported ITS, and research in this field is attracting contributors who discuss a diversity of issues and viewpoints around the topic. In [138], an in-depth review of how machine learning can enhance the main tasks of next-generation ITS, in terms of perception, prediction and management, is presented. In [139], the authors investigate the key enabling technologies for 6G-V2X and their effect on the different 6G vehicular network aspects, namely communication, computing, and security. The authors subdivide these technologies into two categories: revolutionary and evolutionary V2X technologies. Besides tactile communications, quantum computing, brain-controlled vehicles and blockchain-aided V2X, one of the appealing revolutionary technologies discussed in the paper that can improve 6G-assisted ITS systems is intelligent reflecting surfaces (IRS) [140]. The authors then present a variety of evolutionary technologies that need some changes to become better suited to 6G-V2X, such as advanced resource allocation.
#### V-B2 How SFL can help
AI-aided solutions are a key part of any smart transportation system strategy. However, the majority of existing research papers on next-generation ITS share the same basic training patterns. One pattern follows the traditional design, in which transport entities (smart car, smart road, smart traffic light, etc.) forward, in a wireless and continuous manner, information observed by embedded equipment (e.g., smart-car velocity, trajectory, direction and geographical coordinates) to be processed at a central location [141]. The transmission of these details threatens the driver's privacy. For instance, if attackers gain access to the future location of a driver, they can easily determine the driver's traveled routes and therefore deduce sensitive information, such as residence, job, health state, religious beliefs and more. The second pattern applies the reverse operation, i.e., transmitting the whole model to the implicated entities, as in the federated learning paradigm [142, 143]. Model privacy leakage is the common threat of both techniques. Future research directions are expected to strive not only for solutions that consider data protection, but also for model privacy-preserving mechanisms. The implementation of SFL in the transportation and mobility industry would improve on this aspect. SplitFed, as a promising technology, will allow connected and autonomous vehicles to share the training of a complex model without violating its privacy. The model would be decoupled among the transport entities, one part at the client side (vehicular nodes) and the second part at the aggregator side (main server). In addition, the interplay between SFL and edge computing can be considered, to achieve better, more timely, and safer decisions by increasing the proximity of the involved servers, hosting them at the edge. However, the high speed and frequent topology changes of vehicular nodes may lead to intermittent connections between nodes and servers (Fed Server and main server) during the transmission of gradients and smashed data. The duplication of servers could be a way to address this issue.
### _Intelligent eHealth and Body Area Networks_
#### V-C1 Motivation
Healthcare is one of the areas that has witnessed vigorous advances and improvements in both hardware and software platforms through eHealth services and applications. For instance, telemedicine has changed standard practices in healthcare centers, mainly during the COVID-19 pandemic, which hastened the use of teleconsultation, telediagnosis and telemonitoring. These techniques provide patients personalized and easy access to health services irrespective of their geographical positions, especially in rural areas where the healthcare system is unavailable or underdeveloped. A significant development is the Internet of Medical Things (IoMT) [144], also known as H-IoT (Healthcare-IoT). It consists of all implantable sensors and wearable devices (e.g., diabetic pumps, smart watches, fitness bands) that constantly record vital parameters such as blood pressure, body temperature, oxygen saturation, glucose level and heart rate, allowing users to track their health state and detect any abnormal signs. The interconnection of all these devices forms a new extension of sensor networks, called Wireless Body Area Networks (WBANs) [145]. Through AI-assisted analysis of the big health-related data produced by distinct types of H-IoT devices, several medical applications of diverse benefits become possible [146, 147], like lung cancer detection, Alzheimer's and Parkinson's disease prediction and diabetic retinopathy recognition. Because medical imaging is the most used clinical examination for disease diagnosis, CNN architectures have taken the lead in the wellness research space and demonstrated high performance on various fundamental tasks. The deployment of WBANs must consider many QoS features such as latency, power consumption, stable communication (even if the person is moving) and interference mitigation [148]. Therefore, the selection of the most appropriate communication technology is of great importance. 6G is envisioned to streamline many aspects of smart healthcare systems by providing efficient and economic remote services (e.g., surgical intervention via video streaming, out-of-hospital care using holographic communication) that would support healthcare practitioners in their daily tasks [149].
#### V-C2 How SFL can help
In 2020, Rieke et al. [150] introduced a paper that explores the future of digital health with federated learning (FL) and its potential impact on the various healthcare stakeholders. The study shows how FL addresses data security and privacy by sharing the updated weights of locally trained models instead of the raw data of edge users. However, some points make FL-based approaches not always efficient for healthcare use cases. First, images are the type of data most heavily used in the e-health domain, and they are generally characterized by a large size that requires models with huge numbers of parameters. The frequent transfer of the whole model over unreliable channels and limited bandwidth is a big issue. Second, in contrast to some medical equipment such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scanners, clients are not always powerful; they could be low-power electronic devices. In this case, training the high-dimensional model is not feasible. Third, the proportional relation between the size of the model and the attack success rate (the higher the dimensionality of a model, the higher its probability of being perturbed by an attacker) [151] renders the model less secure. Using SplitFed, medical devices would only hold a lower-dimensional part of the complete model (SplitFed has the ability to vary/decrease the model portion of clients). This would save more energy and make the model more robust to poisoning attacks. Besides, given that the computation is performed in parallel, fast model training is envisioned. In sum, SFL would be favorable for many future healthcare applications that are based on resource-constrained devices and require real-time services.
### _Multisensory XR Applications and Holographic Telepresence_
#### V-D1 Motivation
Developments at the intersection of diverse research fields, including computer vision, sensing technology, wearables, holographic display technology, edge computing, specialized AI hardware technology, and high-capacity/low-latency wireless communication, have made it possible to offer users new experiences via **eX**tended **R**ealities (XR): **V**irtual **R**eality (VR), **A**ugmented **R**eality (AR) and **M**ixed **R**eality (MR). These experiences are difficult, even impossible, to engage with in real life, such as visiting the moon's surface or climbing the deadliest mountain. VR consists in creating an entire 3D virtual environment that closely simulates the real world. The basic idea behind VR systems is to display computer-generated images of the virtual space in three forms: visual, aural and haptic [152], taking into consideration the person's attitude (position, orientation, eye motion, etc.). This representation immerses the user (physically and mentally) in the virtual universe. The AR concept, as its name indicates, augments the real environment. Unlike VR, which isolates users totally from their existing world, AR enhances it by adding virtual objects (digital content) through digital devices like smartphones, tablets, or AR glasses [153]. In effect, the definition of the term MR is debatable, even among experts [154]. E.g., in [155], the authors position MR in the middle between AR and AV (Augmented Virtuality), while in [156], the authors categorize MR as an extension of AR, whereas in [157], MR is defined as an amalgamation of both VR and AR; the latter definition is the most common among academic researchers. Numerous XR tools, such as VR headsets, VR gloves, AR glasses, Teslasuit and Holosuit, are used to support a wide range of XR applications in various fields, including tourism, education, marketing, agriculture, and medicine [158]. Holography is another immersive media technology that would surmount the distance barrier and provide real-time presence. It permits people to collaborate and connect with each other by offering a natural conversation experience, to such an extent that they feel like they are in the same room [159]. Unlike AR and VR, holographic telepresence does not require wearable devices. At a remote site, images of humans and their surrounding items are compressed and optimized before being sent over a high-bandwidth network connection. Afterwards, these images are reconstructed (decompressed and laser-projected) at the users' site. All the aforementioned technologies require QoS guarantees (high processing capability, sufficient computation power and extra-reliable connections) that surpass the limits of the 5G network. Due to its specific technological and technical aspects, the future 6G is a suitable candidate to fulfill the requirements of XR/Holographic Telepresence systems.
#### V-D2 How SFL can help
AI/ML-aided solutions are key in automating the complex decision-making processes needed to realize future XR/Holographic Telepresence systems. However, the huge quantity of data generated by VR and AR users makes the use of a centralized learning paradigm almost impossible because of the high network resource utilization. SFL, as a distributed learning algorithm, is an effective way to reduce network load by forwarding only the model updates, which are smaller than the user data. In addition, if 6G communication is coupled with SFL algorithms, the performance of XR classification/prediction models could be enhanced. For instance, data confidentiality and privacy are important in all XR applications since sensitive information is implicated in controlling the XR contents (e.g., eye motion and finger position for the estimation of 3D human body pose). In this respect, SFL can offer several benefits. First, the model is subdivided into multiple sections (sub-models) between XR devices and the main server. Clients start processing their local data before forwarding their intermediate outputs to the main server, which proceeds with the training. This reduces the computational load on individual devices, ensures both user and model privacy, and captures context-aware information (e.g., user preferences), leading to customized XR content and interactions. As well, 6G will guarantee high-bandwidth and reliable communication for smashed-data and gradient transfer. Similarly, this also holds at inference time for an object detection model that needs accurate and rapid identification (size, shape, color, location, motion) in order to mix virtual and physical environments instantaneously. With SFL, the client first extracts low-level features locally (e.g., color or motion information) from the data collected in the physical environment; then it sends the intermediate results to the server for extended analysis (such as object localization). Accordingly, this reduces privacy concerns, since sensor data from the physical environment remain on the XR devices, minimizes data transmission requirements (only smashed data are shared with the server) and optimizes bandwidth usage. In the same way, with Holographic Telepresence, where the assimilation of the real environment depends on a thorough understanding of all its components, high throughput and deep analysis of the data collected from the real world are needed for perfect virtual-human training. The combination of SFL and 6G will accelerate the understanding of the real environment and enable customers to live an interactive and convincing experience.
### _Smart Grid 2.0_
#### V-E1 Motivation
Most electrical power distribution systems rely on fossil-fuel generators, which are detrimental to the environment due to the high air pollution induced by this technique. According to [160], almost 40% of \(CO_{2}\) emissions are due to power generation. Furthermore, producing a large amount of power at one site increases the delivery cost, particularly for distant consumers where long transmission cables are required. Likewise, a centralized grid topology is more prone to reliability issues, as a sudden fault implies a full blackout. Introducing new information and communication technologies is a good way to deal with these energy issues [161]. The Smart Grid (SG) is an intelligent and distributed digital power system designed to utilize the electricity network effectively. In conventional power networks, there is only one source and only one way to feed end-users. With SG, power emanates from various sources (e.g., solar farms, wind farms) using multi-way communication [162]. In order to build a flawless system, incorporating intelligent and high-performance technologies into the diverse smart grid subsystems (generation, storage, transmission, monitoring and distribution) is essential [163]. Indeed, the Massive Internet of Things (MIoT) is one of the cutting-edge technologies supporting smart power grids, including, for instance, smart meters, automated meter reading, vehicle-to-grid systems, and smart sensor and actuator networks [164], to mention a few. All these smart devices participate in the accurate and automated measurement, extraction and transfer of parameter values from the different parts of the smart grid system. With the help of AI/ML techniques, the analysis of the collected data is beneficial to estimate the state of the grid network, boost the quality of experience (QoE), and deploy dynamic pricing and personalized energy services. Several ML/DL approaches have been applied to smart grid networks, including, but not limited to: CNN for load forecasting [165], LSTM-RNN for photovoltaic power prediction [166], KNN for load and price prediction [167], SAE for detection and classification of transmission line faults [168], SVM for cyberattack detection (covert cyber deception assault) [169], and, lastly, random forests combined with CNN for energy theft detection [170].
#### Iv-B2 How SFL can help
For better and faster grid management, energy data collected from smart components should be analyzed quickly and securely. However, it is difficult to address these challenges with centralized learning schemes, which are the most commonly encountered in the literature. In effect, broadcasting all the grid information to a central location (e.g., the cloud) extends the transmission time and heightens security and privacy threats, since customer load profiles reveal a lot of sensitive data (e.g., daily routine, time spent at home). Beyond that, abuse of this information could entail social issues, such as increased burglary threats when residences are unoccupied. Applying distributed learning algorithms in the power and energy domain permits not just understanding grid activities but also protecting system data. Many interesting works have used the federated learning paradigm to tackle transmission delay and data privacy concerns. For instance, in [171], the authors proffer a federated framework for electricity consumer characteristics identification, where smart meter data are kept locally within retailers and only local weights are sent to a computational center. Similarly, in [172] and [173], the authors expose collaborative FL architectures for learning power consumption patterns and energy theft detection, respectively. In [174], a new approach for electrical load prediction based on edge computing and FL using residents' behavior data is proposed. Certainly, all smart-grid FL-based studies ensure data privacy. Nevertheless, all neglect the model privacy aspect. SFL provides a direct answer to this challenge. For instance, to develop an SFL-based approach for fault location and detection in a smart-grid system, the model would not be sent over the network but partitioned into client and server sides. All the grid clients, also called energy data owners (EDOs), e.g., substations, train their sub-models in parallel using local data gathered from sensors and smart meters installed in the smart grid. Then, the obtained intermediate outputs are exchanged with the main server, which resumes the training and forwards the outcome back to the EDOs. After some rounds, the EDOs upload their local models to the Fed Server for aggregation. The process is repeated until a desired accuracy is achieved. As a result, less processing power is required from the grid network nodes, while the split and parallelism features of SFL help protect the model against inference attacks and supply service providers (SPs) with energy-related knowledge within a short delay. Fig. 10 illustrates how smart-grid systems could benefit from the SFL paradigm.
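The round structure just described can be summarized in the short sketch below. It reuses the `fedavg` helper and the split client/server sub-model pattern sketched earlier; the optimizer, loss function, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

def sfl_round(client_models, server_model, loaders, fedavg, lr=1e-3):
    """One SFL round: parallel EDO training, then Fed Server aggregation."""
    opts = [torch.optim.SGD(m.parameters(), lr=lr) for m in client_models]
    opt_s = torch.optim.SGD(server_model.parameters(), lr=lr)
    for cm, opt_c, loader in zip(client_models, opts, loaders):
        for x, y in loader:                  # each EDO trains on its local data
            opt_c.zero_grad(); opt_s.zero_grad()
            smashed = cm(x)                  # forward pass up to the cut layer
            loss = nn.functional.mse_loss(server_model(smashed), y)
            loss.backward()                  # gradients cross the cut layer
            opt_c.step(); opt_s.step()       # both sub-models are updated
    # the Fed Server averages the client-side sub-models and redistributes them
    global_state = fedavg([m.state_dict() for m in client_models])
    for m in client_models:
        m.load_state_dict(global_state)
```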
Table V analyses a selected group of studies related to 6G use cases. In addition, it shows how SFL can be applied and the advantages that this new algorithm presents.
## V Datasets and Frameworks for Successful Implementation of SFL-driven 6G Networks
This section describes various tools that can support the development, evaluation, and validation of SFL-based solutions for 6G networks. We first list multiple existing datasets for different 6G technical aspects and use-cases. Then, we provide multiple existing frameworks related to each 6G aspect/use-case.
### _Existing Datasets for 6G Networks_
ML/DL model outcomes are greatly dependent on data quality (type, size, source, format, etc.). As low-quality data leads to poor results, collecting and preparing appropriate data is the first hurdle in the AI domain. Actually, there is a lack of datasets related to the future 6G since it is still under development, despite some initiatives (e.g., DeepSense 6G). This scarcity is among the major reasons that push researchers to adopt datasets from other relevant network technologies and configurations (e.g., 5G-specific datasets). We discuss in this section some publicly available datasets that could be used for solving both use-case- and technical-related issues. In this context, we distinguish two types of datasets: (a) technical datasets, where the data are used to enhance core technical aspects of 6G networks, and (b) application datasets that can support 6G-enabled split federated learning applications like smart grid, healthcare and smart agriculture.
#### V-1 Datasets for 6G technical aspects
* DeepSense 6G [186]: In order to promote ML/DL research in cellular communication, a public, large-scale and real-world dataset was recently published in [187]. It contains data about multiple dynamic scenarios (34 scenarios, as stated on the official website of the dataset). Each scenario emulates a use case (indoor communication, vehicle-to-infrastructure communication, millimeter-wave drone communication, night/day time, rainy weather, and others). A scenario comprises a set of units (camera, 2D/3D LiDAR, radar, GPS, smart car,
stationary base station, etc.), and collects a set of modalities (e.g., RGB images, position, beam power). These measurements could be used for many SFL-enabled applications such as beam prediction, user identification, positioning, object detection/classification and more.
* 5GMdata [188]: This is a public, simulated telecommunication dataset generated using traffic and ray-tracing simulators (SUMO / Remcom Wireless InSite). The simulations with SUMO and InSite represent the first phase, before the raw data are organized into a 5GMdata database based on the target application. Next, data post-processing is performed by converting the 5GMdata into a format usable by the ML/DL algorithm. Finally, the ML/DL experiment using the associated data is run. This dataset could be customized to fit 6G cellular networks and to develop new split federated learning applications such as channel estimation and beam selection.
* DeepMIMO [189]: This is a public, generated dataset for mmWave/massive MIMO channels introduced in [190]. It includes seven scenarios (according to the official dataset website). For example, the first scenario mimics an outdoor environment of two streets and one intersection, with \(18\) base stations (each BS at a height of 6 m) and more than a million users distributed over three uniform grids. The dataset provides three versions (DeepMIMO v1, DeepMIMO v2, DeepMIMO 5G NR) and supports many applications in the mmWave area, including Intelligent Reflecting Surfaces (IRS), channel estimation, and blockage prediction, to mention just a few.
* Telecom Italia [191]: This is a rich, open, multi-source dataset largely used by academic researchers. It aggregates telecommunication activities, namely SMSs, calls, and Internet usage data, in the city of Milan and the Province of Trentino. The data were recorded every 10 minutes for two months (from 1/11/2013 to 1/1/2014). The dataset could be adopted for user traffic prediction, which plays a major role in designing smart resource management solutions. Recent works [192, 193] have made use of the Telecom Italia dataset for 6G networks.
* Cellular Traffic Analysis Data [194]: This is a real-world, labeled cellular dataset available on GitHub [195]. The traffic was captured on several Android devices using virtual private network tunneling. The dataset could be utilized to predict user traffic and, therefore, ensure adequate traffic-aware resource management.
* 5G dataset [196]: This is a public dataset described in [197]. It was generated while a user ran two services, downloading files and streaming videos (Netflix and Amazon Prime applications), under two separate mobility patterns, namely driving and static. Various channel, context and cell-related details were measured, such as downlink/uplink rates, mobile device speed, GPS coordinates, SNR (Signal-to-Noise Ratio), RSRP (Reference Signal Received Power), RSRQ (Reference Signal Received Quality), etc.
Fig. 10: Split Federated Learning in Smart Grid.
5G and beyond, such as intelligent predictive handover, intelligent resource management, and bandwidth prediction.
* UNSW-NB15 [198][199]: This is one of the most popular datasets used in ML/DL-aided network security. It was released in 2015. It has 44 features and about 2,540,044 data records distributed between normal and attack traffic. UNSW-NB15 uses contemporary methods to better reflect real network traffic, and contains modern attacks divided into 9 types, namely Fuzzers, Backdoors, Analysis (e.g., port scanning), DoS, Exploits, Generic, Shellcode, Reconnaissance, and Worms.
* AWID (Aegean WiFi Intrusion Dataset) [200]: It contains real Wi-Fi network traces of both legitimate and illegitimate IEEE 802.11 WLAN traffic. Each record in the dataset comprises 155 attributes with numeric and nominal values. The latest version, AWID3, focuses on IEEE 802.11w, Wi-Fi 5, and WPA2 enterprise attacks. The dataset is also considered for other wireless communication technologies such as IoT and 5G [201].
* CIC-IDS2017 [202]: It is an intrusion detection labeled dataset published by the Canadian Institute for Cybersecurity in 2017. It contains benign traffic and the most up-to-date common attacks such as DDoS, Brute Force, XSS, SQL Injection, Infiltration, Port Scan, and Botnet. The dataset contains 2,830,743 records split into 8 files with 78 features for each record.
* 5G-NIDD [203]: This is a fully labeled dataset generated from a functional 5G test network that can be used to develop and test AI/ML solutions for the identification and detection of malicious content in network traffic.
* Microservices configurations dataset [204]: In order to verify whether a cloud tenant's configuration (in terms of memory and CPU) is appropriate for its service requirements, the authors in [205] conducted an experiment based on the execution of three concurrent applications under diverse resource configurations, namely: web servers, a RabbitMQ broker, and the OpenAirInterface 5G Core network AMF (Access and Mobility Management Function). The experimental results led to the generation of three different datasets (one for each deployed application). For instance, the webserver dataset was produced from a parallel and increasing number of requests (between 100 and 1000) sent to each web server instance. The experiment yielded 16 features in the web server dataset, 15 features in the 5G AMF dataset, and 12 features in the RabbitMQ dataset. The features include the timestamp of
metrics' collection, the memory and CPU allocated to the container, the memory and CPU used by the container, and other application-related features. The dataset could be adopted to train split federated learning models that manage automatically the configuration of services' resources for an optimal execution and efficient computing resources usage.
#### V-A2 Datasets for 6G Use-Cases
This section describes datasets related to popular 6G use-cases that could be leveraged for SFL methods.
* PlantVillage [206]: It is a public plant disease dataset used in [207] to develop a Digital Twin Framework for Smart Agriculture, which represents one of the main applications of 6G networks in Industry 5.0. It contains 54,305 leaf images from 14 crop species, both healthy and diseased, divided into 38 classes. PlantVillage is the most cited among available plant disease datasets. The authors removed the leaves from the plants and photographed them with a single digital camera. PlantVillage is a real dataset that could be used in implementing SFL-based smart farming solutions. The idea is to partition the data across the different simulated collaborators (drones, cameras, etc.), potentially experimenting with different data distributions (uniform/non-uniform) and different forms of data partitioning (e.g., vertical vs. horizontal).
* Berkeley Deep Drive-X (eXplanation) [208]: It is a real, public, and large-scale dataset that contains 77 hours of driving in 6,970 videos shot under various driving conditions (day/night, highway/urban/rural area, rainy/sunny weather, etc.). Each video is approximately 40 seconds long and comprises 3-4 actions (accelerating, slowing down, stopping, turning left, moving into the right lane, and so on). All actions are annotated with a description and explanation.
* Dataset of Annotated Car Trajectories (DACT) [209]: It is a set of driving trajectories captured in Columbus, Ohio, where each trajectory spans over 10 minutes and can be divided into multiple segments annotated by the operating pattern (e.g., speed-up and slow-down). Furthermore, each trajectory is an ordered set of tuples, and each tuple consists of 11 attributes, such as: trip ID, vehicle speed in mph (miles per hour), vehicle acceleration, latitude, longitude, and type of segment (exit, loop, turn, etc.).
* MIMIC-III [210]: This is the acronym for Medical Information Mart for Intensive Care III. It is a large, anonymized, and freely accessible medical database. It covers data from forty thousand patients admitted to intensive care units of hospitals in Boston, USA, between 2001 and 2012. It provides an important benchmark for evaluating health-related models based on split federated learning.
* COVID-19 image data collection [211]: According to [212], it is the largest public dataset for the diagnosis of coronavirus disease. In addition to the chest X-ray (CXR) images, the dataset includes a list of metadata such as patient ID, sex, age, temperature, time since first symptoms, intensive care unit (ICU) status, incubation status, hospital location, etc. It is a suitable resource for building and evaluating several split federated learning-based applications, for example automatic detection of COVID-19 cases, prediction of patient severity, and prediction of the need for mechanical ventilation.
* VR streaming [213]: It is a publicly available dataset that comprises head tracking of \(48\) users evenly divided between males and females. The data were recorded while users were watching \(18\) spherical videos from \(5\) categories. Taking advantage of its features (users' way of watching, their head movements, and the directions they focus on), the data can serve as a powerful source to enhance user experience in VR applications by building SFL-based models for gaze prediction and user identification.
* Irish CER [214]: The dataset is provided by the Irish Commission for Energy Regulation (CER). It contains customers' electric load profiles from 6435 smart meters, collected over 536 days on a half-hourly basis. In addition, through a questionnaire filled in by the experiment's contributors, the dataset is enriched with many variables on occupant socio-demographic factors, consumption behavior, domestic properties, and home appliances. By segmenting the whole dataset into different parts, it could be used to build several SFL-based models to understand electricity customer behavior, for instance predicting the future load in one or multiple nodes of a smart-grid network, forecasting the electricity demand, and detecting faults and attacks.
* RAE (Rainforest Automation Energy) [215]: The dataset includes 1 Hz energy readings (mains and sub-meters) from two residential dwellings. In addition to power data, environmental and sensor data from the house's thermostat are included. Pertinent sub-meter data for power utilities (heat pump and rental suite) are also captured and incorporated. The dataset recordings could be adopted for various applications including, but not restricted to, energy saving, anomaly detection, occupancy pattern and energy demand prediction.
Table VI presents a comparative view between the above existing datasets according to their properties (public or private), label class (labeled or unlabeled), data distribution (IID or Non-IID), generation procedure (real vs. simulated) and applicable area (core 6G technical aspects or use case-specific ones). In the last column, we provide some potential 6G applications for which the concerned dataset could be used to develop SFL models. Note that datasets are arranged in descending chronological order according to the publication year.

TABLE VI: Benchmark Datasets.

| Dataset | Year | Access | Labels | Distribution | Generation | Area | Potential 6G SFL Applications |
|---|---|---|---|---|---|---|---|
| DeepSense 6G | 2022 | Public | Labeled | Non-IID | Real | Technical | Beam Prediction; Resource Management; Interference Management; User Scheduling |
| DeepMIMO | 2022 | Public | Unlabeled | Non-IID | Simulated | Technical | Intelligent Reflecting Surfaces (IRS); Channel Estimation; Blockage Prediction |
| 5G-NIDD | 2022 | Public | Labeled | Non-IID | Real | Technical | Cellular Network Intrusion Detection |
| Microservices Configurations Data | 2022 | Public | Unlabeled | Non-IID | Simulated | Technical | Applications' Resources Configuration |
| AWID3 | 2021 | Public | Labeled | Non-IID | Real | Technical | Security Issues (Network Intrusion Detection) |
| COVID-19 image data | 2020 | Public | Labeled | Non-IID | Real | Use case | Coronavirus Automatic Detection; Patient's Severity Prediction; Mechanical Ventilation Need Prediction |
| 5G dataset | 2020 | Public | Unlabeled | Non-IID | Real | Technical | Intelligent Predictive Handover; Intelligent Resource Management; Bandwidth Prediction |
| Cellular Traffic Analysis | 2019 | Public | Labeled | Non-IID | Real | Technical | Cellular Load Traffic Forecasting; Resource Management |
| Berkeley Deep Drive-X | 2018 | Public | Labeled | Non-IID | Real | Use case | Explainable Models for Autonomous Cars; Driving Behavior Explanation |
| 5GMdata | 2018 | Public | Labeled | Non-IID | Simulated | Technical | Beam Selection; Channel Estimation |
| DACT | 2017 | Public | Labeled | Non-IID | Real | Use case | Driving Pattern |
| CIC-IDS2017 | 2017 | Public | Labeled | Non-IID | Simulated | Use case | Network Intrusion Detection; Attack Forecasting |
| RAE | 2017 | Public | Unlabeled | Non-IID | Real | Use case | Energy Saving / Anomalous Consumption Detection; Occupancy Pattern / Energy Demand Prediction |
| VR Streaming | 2017 | Public | Labeled | Non-IID | Real | Use case | Gaze Prediction Patterns; User Identification |
| MIMIC-III | 2016 | Public | Labeled | Non-IID | Real | Use case | Predicting Hospital Length of Stay; Early Detection of Diseases (Sepsis, Pancreatitis, etc.) |
| Irish CER | 2015 | Public | Unlabeled | Non-IID | Real | Use case | Electric Demand Prediction; Electric Load Forecasting; Faults and Attacks Detection |
| PlantVillage | 2015 | Public | Labeled | Non-IID | Real | Use case | Digital Twin Smart-Farming Framework; Plant Disease Detection |
| UNSW-NB15 | 2015 | Public | Labeled | Non-IID | Simulated | Use case | Network Anomaly Detection; Identifying Cyber-Attacks |
| Telecom Italia | 2015 | Public | Labeled | Non-IID | Real | Technical | Cellular Traffic Prediction; Resource Management |

### _Existing Implementation Frameworks_

To implement Split Federated Learning in 6G networks, we need two sorts of tools. Firstly, network simulators, which play a significant role in modeling and analyzing the cellular system. Secondly, ML/DL platforms for training, testing, and validating SFL models before they are applied to the 6G network. For that, a plethora of network simulators and AI frameworks could be used. In the following, we present the most popular and widespread tools in both academia and industry.
#### V-B1 Mobility and Network Simulators
* SUMO (Simulation of Urban MObility)5: This is an open-source traffic generator, developed in 2001 by the German Aerospace Center (DLR). It permits the simulation and analysis of realistic user mobility and traffic-related models. It offers many features; we briefly cite a few of them: building roads, considering streets, intersections, traffic lights, high-speed routes, lane and direction changing, etc. The extracted data from the trace file could be used by another network simulator such as OMNET++, NS2 and NS3. Footnote 5: [https://www.eclipse.org/sumo/](https://www.eclipse.org/sumo/)
* NS36: This is a popular event-driven emulator/simulator designed specifically for research and educational purposes in computer communication networks. It is based on two programming languages: C++ and Python. The simulator core is developed entirely in C++ with optional python bindings, which gives users the ability to choose between C++ and Python to write simulation scripts. It supports diverse network technologies, including cellular networks such as 4G (LTE) and 5G (NR). Footnote 6: [https://www.nsnam.org/](https://www.nsnam.org/)
* OMNET++ (Objective Modular Network Testbed in C++)7: It is an extensible, modular, discrete-event and free software simulator, targeted mainly at computer network simulation (wired and wireless). It is programmed exclusively in C++, and can be coupled with several external frameworks such as TensorFlow for ML/DL development. It is widely used for 4G and 5G network simulation. Footnote 7: [https://omnetpp.org/](https://omnetpp.org/)
* NetSim8: NetSim is a C language-based network simulator that allows not only simulation but also emulation of real-time traffic from real devices. In addition, it is available in three versions (Pro, Standard and Academic). Each version has different features, support options and pricing (no free usage). It provides an easy graphical user interface and a packet trace file with all the information needed for further analysis and evaluation of performance metrics. Footnote 8: [https://www.tetcos.com/](https://www.tetcos.com/)
* Riverbed Modeler9: It is a commercial discrete-event simulation environment, formerly known as OPNET, used for the analysis of communication applications, protocols and networks. Its sophisticated graphical interface permits the user to build a network topology (nodes and links), display results, adjust the different parameters, and perform various experiments and scenarios visually and rapidly. Footnote 9: [https://www.riverbed.com/](https://www.riverbed.com/)
#### V-B2 ML/DL Frameworks
* PySyft10: This is an open-source Python library, developed by OpenMined. It integrates secure and private deep learning algorithms such as Federated Learning. It also implements differential privacy and encrypted computation. On GitHub, it has 8.6k stars and 1.9k forks.
Footnote 10: [https://github.com/OpenMined/PySyft](https://github.com/OpenMined/PySyft)
* Federated AI Technology Enabler (FATE)11: This software is an open-source federated learning framework for Linux. It implements secure computation protocols based on homomorphic encryption and multi-party computation (MPC). It has been applied in many domains such as finance and medicine, and has acquired 4.8k stars and 1.4k forks on its Git repository. Footnote 11: [https://github.com/FederatedAI/FATE](https://github.com/FederatedAI/FATE)
* FedML12: This framework is an open research library enabling collaborative machine learning on decentralized data. It was developed at the University of Southern California based on PyTorch. It has 2.4k stars and 572 forks on GitHub. It supports three computing paradigms: on-device training for edge devices, distributed computing, and single-machine simulation. Further, it encourages diverse algorithmic research through the design of flexible APIs and comprehensive reference implementations (optimizers, models, and datasets). Footnote 12: [https://github.com/FedML-AI/FedML](https://github.com/FedML-AI/FedML)
* TensorFlow Federated (TFF)13: It is a free and open-source TensorFlow-based framework developed by Google for federated learning. It has attracted about 2k stars and 532 forks on GitHub. TFF proposes two APIs, namely: the Federated Learning (FL) API for high-level interfaces (training and evaluation of users' models) and the Federated Core (FC) API for low-level interfaces (e.g., developing new FL algorithms). Footnote 13: [https://www.tensorflow.org/federated](https://www.tensorflow.org/federated)
* OpenFL14: It is a Python-based, open-source framework, developed by Intel. OpenFL is a versatile tool that was initially deployed for medical imaging usage (training brain tumor segmentation models). It provides an efficient and reproducible method for developing and evaluating FL algorithms. It gained 1.7k stars and 405 forks on GitHub. Footnote 14: [https://github.com/intel/openfl](https://github.com/intel/openfl)
* Flower15: It is an open-source federated learning framework created by Adaptech Research. It provides a high-level API that enables researchers to experiment with and build various FL use cases. It is compatible with both PyTorch and TensorFlow and supports a large number of clients. It gained 470 forks and 2.2k stars on its GitHub repository. Footnote 15: [https://flower.dev/](https://flower.dev/)
## VI Open Challenges and Future Directions
While SplitFed is a promising technique for collaborative machine learning in decentralized 6G systems, there are still several open challenges that need to be addressed for its effective implementation in 6G networks. We discuss these challenges along two dimensions: (i) SFL-specific issues, which are related to its space of available architectural and algorithmic configuration options, and whose scope goes beyond its application in/for 6G, and (ii) 6G-specific challenges that stem from the expected characteristics of this communication technology and their interplay with SFL.
### _Open Challenges in SFL_
**Splitting strategy:** Sharing the model among all the involved learners is the first stage of the SplitFed algorithm. If we assume a SplitFed model \(\mathcal{M}\) with \(\ell\) layers, the total number of possible splitting combinations is \((\ell-1)\). For each possibility \(\mathrm{P}_{i}\), \(i\in\{1,\dots,(\ell-1)\}\), the client and server sides would have \(i\) and \((\ell-i)\) layers, respectively. Based on these alternatives, the following questions should be answered: Which combination would be suitable to split the model? Is dividing the model in a random way a good strategy? Should the splitting strategy take into consideration certain criteria, such as the number of collaborators and clients' computing resources? A minimal sketch of enumerating these split points is given below.
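To make the \((\ell-1)\) candidate cuts concrete, the following sketch enumerates every split point of a toy PyTorch model and reports the resulting client/server parameter counts, which could feed a resource-aware splitting policy. The toy model and the `split_model` helper are illustrative assumptions, not part of any SFL library.

```python
import torch.nn as nn

def split_model(model: nn.Sequential, cut_layer: int):
    """Hypothetical helper: cut a sequential model after `cut_layer` layers;
    the first part runs on the client, the remainder on the main server."""
    layers = list(model.children())
    assert 1 <= cut_layer < len(layers), "a model with L layers admits L-1 cuts"
    return nn.Sequential(*layers[:cut_layer]), nn.Sequential(*layers[cut_layer:])

# Toy model with L = 4 layers, hence 3 possible split points P_1..P_3.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.Linear(64, 10))
for cut in range(1, len(list(model.children()))):
    client, server = split_model(model, cut)
    n_c = sum(p.numel() for p in client.parameters())
    n_s = sum(p.numel() for p in server.parameters())
    print(f"P_{cut}: client layers={cut}, params={n_c}; server params={n_s}")
```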
**Computation requirements at the server side:** An aspect that is relatively downplayed in SFL research is the fact that the desired parallelization in processing client data is achieved at the expense of increased compute requirements at the server side, as the server needs to maintain a copy of the server-side model portion per client participating in a training round. For settings with massive numbers of clients, this could put significant strain on the server and increase costs. At the same time, this observation reveals interesting resource allocation and orchestration problems for future research. For example, the size of the set of recruited clients per training round can be dynamically tuned based on the available resources on the server side, while multiple main server instances, each responsible for a different client subset, can be introduced, while dealing with synchronization and consistency issues towards building a global shared model.
**Data fairness (imbalanced and non-IID data):** In SplitFed, each participating host collects its local data from various heterogeneous sources and with different features (source, location, period, etc.). Therefore, in many settings, it is unrealistic to assume that all SFL clients will have IID local data. On the contrary, varied and non-stationary data are expected. Training an SFL model under highly skewed data will cause high weight divergence and thus model accuracy degradation [216]. Hence, new techniques and algorithms have to be proposed to handle data heterogeneity and improve the learning process on non-IID data. Furthermore, some parties may contribute fewer or more samples than others, leading to an uneven data distribution that will affect the learning model convergence as well as the performance of the SplitFed training process. To overcome this, each client can apply dataset enhancement strategies locally, for instance using GAN algorithms or data augmentation techniques (e.g., rotation, shearing, and flipping for images) [217]. An alternative option is to support collaborators with supplementary data from the main or Fed server.
**Dataset labeling:** The labeling procedure is an integral part of data preparation for Supervised Split Federated Learning (SSFL). It consists of appending informative tags to data (text, image, audio, and/or video) to help the SFL model identify the class of an unlabeled object. However, the distributed nature of client data may lead to divergences in the labeling results (owing to different annotators' expertise level, biases or even malicious falsification). This can cause noisy labels that severely degrade the performance of the learned model. Ensuring uniform and accurate labeling among all the SFL clients is a challenging task that must be considered to enhance the quality of local datasets and maximize the reliability and accuracy of models. In this context, some techniques could be utilized such as meta learning, label correction, and knowledge distillation [218].
**Aggregation technique:** The aggregation algorithm plays a crucial role in achieving good performance in an SFL design. It permits combining the local sub-model updates from all the SFL nodes participating in the training round. A robust aggregation technique should be able to maximize the accuracy of the global model, enhance the privacy of local updates, optimize the communication bandwidth, and identify suspicious clients. In this respect, multiple questions might be usefully discussed: is it appropriate to adopt the aggregation mechanisms developed for Federated Learning [219], or should new aggregation techniques be designed, since model training in FL and SplitFed differ? Furthermore, is it adequate to apply the same aggregation algorithm on both the client and the server side, or should each part have its own aggregation method that aligns with its specific objectives? A sketch of the baseline aggregation rule is given after this paragraph.
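As a baseline reference point for this discussion, the sketch below shows a FedAvg-style weighted average, which a Fed server could apply to the client-side sub-model weights (and a main server to its per-client server-side copies). This is a generic illustration under our own assumptions about the interface, not an SFL-specific algorithm.

```python
import torch

def fedavg(state_dicts, num_samples):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighting client k by its local dataset size n_k."""
    total = float(sum(num_samples))
    aggregated = {}
    for key in state_dicts[0]:
        aggregated[key] = torch.stack(
            [sd[key].float() * (n / total)
             for sd, n in zip(state_dicts, num_samples)]).sum(dim=0)
    return aggregated
```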
### _Open Challenges in 6G_
**Wireless channel constraints:** Higher frequency bands and Terahertz communication are among the main features that set 6G networks apart from other wireless technologies. These aspects have the potential to provide faster data rates and lower latency than current networks. However, they pose some challenges, including high path loss, signal attenuation, interference, and fading channels. Given these issues, a user in SplitFed may disconnect from the main or Fed server during the transmission of smashed data and/or clients' weights. This will interrupt the training process and require more time to complete it. To enhance the reliability of global model learning, deploying more than one main/Fed server is recommended (server redundancy). However, restarting training on a different main/Fed server from scratch is not a good idea. For instance, if a client moves in the last stages of its sub-model training, the required time to finish the new training would be very lengthy. For that reason, it is very important to develop effective data/service migration techniques to resume users' training after interruption rather than starting over.
**Disproportionate and heterogeneous 6G users:** In effect, the number of SFL contributors depends greatly on the 6G target application. Some scenarios have few participants, where the failure of any party impacts the whole communication system. In contrast, in cases with a great number of participants (IoT devices, mobile phones, etc.), the disconnection of a 6G user does not widely affect the performance of the SFL learning process as a whole. Moreover, 6G clients can have multiple and heterogeneous computing resources (CPU, storage, etc.). This will engender different sub-model training times that penalize the generation of the global model. In this context, an efficient strategy to choose the suitable number of clients with respect to the target application and clients' resource constraints should be applied to allow the participation of
as many clients as possible and accelerate the performance improvement of the SFL model.
**Irrelevant and heterogeneous 6G features:** 6G clients may sense irrelevant features of their private data (due to a lack of domain knowledge, or measurements in imperfect network conditions such as interference, noise, collision, etc.), which can significantly increase the computational and time complexity of the data preparation phase. Extracting only representative information and filtering out inessential details that do not affect the decision-making process is of paramount importance. In this context, a semantic information extraction model trained through the SplitFed algorithm could be implemented. The model will learn from diverse SFL collaborators that integrate multiple features. The parameters of the local sub-models are then transmitted to the main server for aggregation. The obtained model is expected to improve the accuracy of extracted information, clients' energy efficiency, and model training time overhead.
**Black-box and complex deep learning:** One of the main challenges of DL-based models, including SFL, is that they do not provide any details about how and why their decisions are made, and thus such decisions cannot be properly understood and trusted by the different 6G stakeholders such as managers and executive staff. Therefore, the 6G stakeholders may not execute the SFL-based decisions. To deal with this issue, eXplainable Artificial Intelligence (XAI) is an emerging paradigm that provides a set of techniques, e.g., ante-hoc, post-hoc, visualization, model-agnostic, etc., and aims to improve the transparency of black-box DL decision-making processes [220][221][222]. In other words, XAI helps to explain SFL-based decisions to make them trustable and interpretable by the different 6G actors [223].
**Security Issues:** There is no doubt that SplitFed reduces the risk of data and model disclosure. However, this does not mean that SplitFed is entirely immune to attacks. Indeed, it is still prone to security and privacy risks from the client level to the server level. Diverse data-oriented attacks such as data tampering and data poisoning may target the SFL process, causing a significant loss in the global model's accuracy, as demonstrated by recent studies [224][225]. Other threats and vulnerabilities could also impact the success of SplitFed models: compromised/malicious Fed/main servers, unsecured communication channels, client dropout, and free-riding attacks. Hence, new defensive techniques for protecting SFL entities (Fed server, clients' local data, cut-layer activations, main server, etc.) during model training, aggregation, and transmission are mandatory to build a risk-free split federated learning ecosystem. In this regard, Blockchain technology and Secure Multi-Party Computation (MPC) can highly benefit SFL.
**Performance degradation of SFL-based models:** As mentioned before, SFL can be used to optimize different functions/operations related to 6G systems. However, a critical challenge is how to train and deploy SFL-based models while providing stable life-cycle performance. In fact, evolving data profiles may cause performance degradation of AI learning models [226]. Thus, both the models' performance degradation and new data profiles should be studied to ensure stable performance of the intelligent 6G functions/operations over time. Therefore, it is required not only to perform continuous monitoring of both data and model profiles, but also to automate the whole development process of SFL learning models, including data collection/extraction, model training, validation, and deployment [227][228]. In this context, the DevOps paradigm can be leveraged. DevOps includes a set of practices that combine software development (Dev) and IT operations (Ops). DevOps aims not only to reduce the systems' development life cycle, but also to provide continuous software delivery with high quality, by leveraging paradigms and concepts like Continuous Integration and Delivery (CI/CD). When dealing with machine learning operations and automation of the learning process, the paradigm is also called MLOps [227].
**SFL scalability:** The number and stability of SFL collaborators are key factors in the success of SFL-enabled schemes. Consequently, frequent client drop-outs, whatever the reason (intermittent connection, selfish client, malicious client, low battery, mobility, etc.), would have a negative effect on the model convergence time and accuracy. Therefore, proposing a novel strategy to make the SFL system more robust to this issue would be of great value. One solution could be predicting device disconnection based on mobility or resource capacity. Moreover, the incorporation of incentive mechanisms could be beneficial. For instance, in a reputation-based incentive scheme, each client in the network would have a reputation rank based on its participation rate. The SFL clients performing the training in an efficient way would be rewarded. The goal is to encourage the participation of qualified nodes in the SFL training process [229].
Overall, addressing the open challenges of split federated learning in 6G networks will require a combination of novel algorithms, optimization techniques, hardware architectures, and security and privacy mechanisms. An interdisciplinary approach that draws on expertise from computer science, electrical engineering, mathematics, and statistics will be necessary to fully realize the potential of split federated learning in 6G networks.
## VII Limitations of this survey and obstacles faced
In this section, we address the potential gaps that could be attributed to our survey and that should be considered as an opening for further research.
* Lack of existing works: In fact, there are very few studies that focus on the implementation of Split Federated Learning, and specifically for 6G systems. Most works deal with FL or SL separately, and a large number of them are dedicated to federated learning techniques. To identify previous works related to the SplitFed algorithm, we conducted a literature search on the ACM Digital Library, Springer Link, IEEE Xplore, ScienceDirect, Wiley, Taylor & Francis Online, MDPI, and arXiv databases from 2020 (first appearance of SFL) to 2023. To the best of our knowledge, we found only seven articles published in IEEE [230][231][232][233][234][235][236], and three articles in arXiv and MDPI [35][237][238], while no works are published in the other databases 16. These results confirm the scarcity of existing studies on SFL for 6G networks, which was one of the main difficulties we faced in our study. This also shows the need for further development in this emerging scope to enrich the literature with relevant papers. Footnote 16: We have used the advanced search filter to find works with titles containing the term "Split Federated Learning."
* Other 6G aspects and use cases: Our research work covers the most important and representative technical aspects and use cases foreseen for 6G systems. However, the list is not exhaustive, and other new technical aspects and applications can be envisioned, driven by 6G requirements and user demands, such as network slicing, smart governance, unmanned mobility, education, online advertising, sustainable development, etc.
* Experimental studies and results: Another potential limitation is related to the shortage of experiments and results on the implementation of SFL, either for technical aspects or use cases. This is explained by the scarcity of previous research connecting 6G systems and Split Federated Learning. To overcome this limitation, we encourage researchers to combine both technologies by implementing new models, so that the results of future research can improve on this aspect.
## VIII Conclusion
Distributed and collaborative deep learning have taken great strides over recent years and have been applied to many applications. Split Federated Learning, as a nascent technique, provides a secure and faster model training strategy. The core idea is to parallelize the training phase by dividing the global model amongst the participating agents and performing a local training process based on their private, local data. This new method dramatically reduces the training time and, in addition to data privacy, preserves model privacy. As we have seen, work on SFL is still in its growth phase, and it is therefore necessary to explore new research avenues on the topic. The current study takes a deep dive into the potential of using the SplitFed learning algorithm to improve the reliability of future SFL-based 6G systems. At the beginning, we provide the reader with the existing AI ideas for 6G networks. To the best of our knowledge, our survey is the first to present a comprehensive view of the application of Split Federated Learning in 6G networks. Afterwards, we outline the primary contributions and organization of the paper, along with an exhaustive background on artificial intelligence algorithms, collaborative deep learning, and 6G mobile networks as foundations and cornerstones. Following that, several 6G technical aspects are thoroughly examined, with a representative realistic scenario for each aspect. Furthermore, the applicable 6G use cases that would benefit from Split Federated Learning are analyzed. We believe that the synergy between Split Federated Learning and Edge Computing would enable a significant improvement of both 6G applications and technical aspects. Moreover, a series of datasets and development frameworks that can support the implementation of SFL within 6G networks are summarized. In this context, we observed that only few 6G-related datasets exist, which explains the use of other non-6G inputs. Next, we draw researchers' and practitioners' attention to the fact that SplitFed cannot deal with all issues, through an overview of the relevant limitations and challenges. Therefore, we discuss how to overcome these challenges by giving new research hints. To conclude, we expect this article to stimulate researchers to design, test, and deploy innovative SFL-based solutions for 6G technologies.
|
2309.11811 | Multimodal Transformers for Wireless Communications: A Case Study in
Beam Prediction | Wireless communications at high-frequency bands with large antenna arrays
face challenges in beam management, which can potentially be improved by
multimodality sensing information from cameras, LiDAR, radar, and GPS. In this
paper, we present a multimodal transformer deep learning framework for
sensing-assisted beam prediction. We employ a convolutional neural network to
extract the features from a sequence of images, point clouds, and radar raw
data sampled over time. At each convolutional layer, we use transformer
encoders to learn the hidden relations between feature tokens from different
modalities and time instances over abstraction space and produce encoded
vectors for the next-level feature extraction. We train the model on a
combination of different modalities with supervised learning. We try to enhance
the model over imbalanced data by utilizing focal loss and exponential moving
average. We also evaluate data processing and augmentation techniques such as
image enhancement, segmentation, background filtering, multimodal data
flipping, radar signal transformation, and GPS angle calibration. Experimental
results show that our solution trained on image and GPS data produces the best
distance-based accuracy of predicted beams at 78.44%, with effective
generalization to unseen day scenarios near 73% and night scenarios over 84%.
This outperforms using other modalities and arbitrary data processing
techniques, which demonstrates the effectiveness of transformers with feature
fusion in performing radio beam prediction from images and GPS. Furthermore,
our solution could be pretrained from large sequences of multimodality wireless
data, on fine-tuning for multiple downstream radio network tasks. | Yu Tian, Qiyang Zhao, Zine el abidine Kherroubi, Fouzi Boukhalfa, Kebin Wu, Faouzi Bader | 2023-09-21T06:29:38Z | http://arxiv.org/abs/2309.11811v1 | # Multimodal Transformers for Wireless Communications: A Case Study in Beam Prediction
###### Abstract
Wireless communications at high-frequency bands with large antenna arrays face challenges in beam management, which can potentially be improved by multimodality sensing information from cameras, LiDAR, radar, and GPS. In this paper, we present a multimodal transformer deep learning framework for sensing-assisted beam prediction. We employ a convolutional neural network to extract the features from a sequence of images, point clouds, and radar raw data sampled over time. At each convolutional layer, we use transformer encoders to learn the hidden relations between feature tokens from different modalities and time instances over abstraction space and produce encoded vectors for the next-level feature extraction. We train the model on a combination of different modalities with supervised learning. We try to enhance the model over imbalanced data by utilizing focal loss and exponential moving average. We also evaluate data processing and augmentation techniques such as image enhancement, segmentation, background filtering, multimodal data flipping, radar signal transformation, and GPS angle calibration. Experimental results show that our solution trained on image and GPS data produces the best distance-based accuracy of predicted beams at 78.44%, with effective generalization to unseen day scenarios near 73% and night scenarios over 84%. This outperforms using other modalities and arbitrary data processing techniques, which demonstrates the effectiveness of transformers with feature fusion in performing radio beam prediction from images and GPS. Furthermore, our solution could be pretrained from large sequences of multimodality wireless data, on fine-tuning for multiple downstream radio network tasks.
_Index Terms_ -- Beam prediction, multimodal learning, transformer, wireless communications
## 1 Introduction
Wireless communications beyond 5G is exploiting high-frequency bands such as mmWave and THz, in order to boost the system capacity by utilizing large available bandwidth. Massive antenna arrays have been leveraged to create ultra-narrow beams, so as to increase the received signal power and reduce interference on targeted users. Significant challenges in beam management arise in such systems and scenarios especially for high mobility vehicle users, to provide ultra-high reliable and low latency communications.
Multimodality sensory information has the potential to improve wireless communications in a challenging environment. Integrated sensing and communication has been actively studied for 6G [1]. In the vehicular network scenario, a roadside Base Station (BS) unit equipped with a camera, LiDAR, radar, and GPS can produce images, point clouds, radar signals, and location information of the road environment, objects, and vehicle users (UE). Such sensory data is potentially useful in assisting the BS to analyze the radio transmission scenario, so as to produce effective beam management.
### Problem statement
In this paper, we present a transformer-based multimodal deep learning approach for sensing-assisted beam prediction, which is a solution to the DeepSense 6G problem statement in the ITU AI/ML for 5G challenge 2022 [2]. The challenge aims to develop machine learning-based models that can adapt to diverse environmental features and accurately predict optimal beam indices in entirely new locations using a multimodal training dataset. The objective is to enable effective generalization and sensing-aided beam prediction for improved wireless communication systems. The challenge provides large multimodal sensing datasets measured in a real environment. As shown in Fig. 1, each data sample contains five sequential instances of camera images, LiDAR point clouds, and radar signals, plus the first two instances of UE GPS position [3]. The ground truth in this context refers to the corresponding 64\(\times\)1 power vectors obtained through beam training at the receiver using a 64-beam codebook, where omni-transmission is employed at the transmitter. The BS sensors capture LiDAR, radar, and visual data, while the positional data is collected from GPS receivers installed on the mobile vehicle. The dataset is measured in four scenarios (31, 32, 33, 34) shown in Fig. 5. Scenarios 31 and 32 are measured in the daytime while scenarios 33 and 34 are at night. Note that scenarios 32 and 33 are in the same location but at different times. A development dataset is provided with thousands of samples collected in scenarios 32, 33, and 34; and an adaptation dataset is provided
with tens of samples collected in scenarios 31, 32, and 33. Both datasets have the ground-truth best beam of the UE associated with each sample. A test dataset with hundreds of samples is provided in all scenarios without labels. Specifically, most labeled data resides in scenarios 32, 33, and 34, whilst half of the unlabeled data resides in scenario 31. The sampling rate of the sequences in the test set is the same as that of the adaptation set but double that of the development set. The objective is to evaluate how the developed model can generalize to the unseen scenario, in which the sensing data is collected in a different location, Field of View (FoV), time (day, night), and sampling rate.
The evaluation metric is a "Distance-Based Accuracy Score (DBA Score)" with the top-3 predicted beams [2]. The DBA score is defined as
\[\text{DBA score}=\frac{1}{3}(Y_{1}+Y_{2}+Y_{3}), \tag{1}\]
where \(Y_{K}\), \(K\in\{1,2,3\}\), is

\[Y_{K}=1-\frac{1}{N}\sum_{n=1}^{N}\min_{1\leq k\leq K}\min\left(\frac{|\hat{y}_{n,k}-y_{n}|}{\Delta},1\right), \tag{2}\]

where \(y_{n}\) and \(\hat{y}_{n,k}\) are, respectively, the ground-truth and the \(k\)th most-likely predicted beam indices of sample \(n\) in a dataset of size \(N\), and \(\Delta\) is a normalization factor set to 5.
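For concreteness, the metric in Eqs. (1)-(2) can be computed as in the following sketch; the function name and array layout are our own choices, not part of the challenge toolkit.

```python
import numpy as np

def dba_score(y_true, y_pred_top3, delta=5):
    """Distance-Based Accuracy score of Eqs. (1)-(2).
    y_true: (N,) ground-truth beam indices.
    y_pred_top3: (N, 3) predicted beam indices, most likely first."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred_top3 = np.asarray(y_pred_top3, dtype=float)
    scores = []
    for K in (1, 2, 3):
        # per-sample penalty: normalized distance of the closest top-K beam
        dist = np.abs(y_pred_top3[:, :K] - y_true[:, None]).min(axis=1)
        scores.append(1.0 - np.mean(np.minimum(dist / delta, 1.0)))
    return float(np.mean(scores))

# A prediction 2 beams off gives Y_1 = 0.6; with a top-2 beam 1 off, Y_2 = Y_3 = 0.8.
print(dba_score([10], [[12, 9, 40]]))  # (0.6 + 0.8 + 0.8) / 3 = 0.7333...
```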
### Related work
There exist several solutions for multimodal sensor data fusion across multiple downstream tasks, such as the TransFuser framework proposed for autonomous driving [4]. However, that work was developed for computer vision applications such as semantic segmentation, object detection, recognition, and localization, with data collected from sensors mounted on the moving vehicles. In comparison, our task has several unique challenges that the TransFuser model cannot readily solve. Firstly, the sensors mounted on the BS produce a much wider FoV than those on vehicles. There are many static and moving objects in the scene, but there are no labels or bounding boxes indicating the UE. Secondly, we also have radar signals and GPS locations, and how to utilize these modalities to assist our task is unclear. Thirdly, beam prediction is a unique application in wireless communications that has not been well explored with multimodal sensors. In particular, the relations between radio transmission scenarios and visual data in abstraction space are not straightforward, making it hard for deep learning to generalize to unseen scenarios [5].
The use of visual data for wireless communications has been actively studied in recent years, including most work on beam prediction from the DeepSense group. This includes the use of images for beam tracking with Gated Recurrent Unit (GRU)-based deep learning [6]. Radar-aided beam prediction is studied in [7] using a 2D Convolutional Neural Network (CNN); it also proposes the FFT to transform radar signals into range-angle and range-velocity maps for the CNN. LiDAR-aided beam prediction is investigated in [8], which is also based on a GRU plus an embedding block that converts 3D point clouds to 1D vectors. Position-aided beam prediction is studied in [9], which utilizes a Multilayer Perceptron (MLP). A fusion of vision and position has been studied in [10], which concatenates the normalized position with features extracted by a CNN to predict beams with an MLP. Although a number of solutions have been developed in this domain, most of them are not scalable to different modalities of sensory data. To achieve this, we need to build a generalized ML model that can learn the abstracted features between multiple modalities, which is a key target of this paper.
### Contributions
Our contribution can be summarized as follows. First, we develop a multimodal transformer framework for wireless communication application of beam prediction and prove that the model is flexible to adapt to different data modalities in the wireless domain. Second, we investigate several data processing and augmentation techniques in computer vision for wireless applications, alongside model training and validation methods for data imbalance and domain adaptation problems. Third, we validate with real measurement data that our framework is effective in producing beam prediction from multimodal sensory data, and generalize to unseen scenarios. Finally, we discuss that our framework could be extended to a generative model pretrained on sequences of multimodality data and fine-tuned for multiple tasks in radio air interfaces.1
Footnote 1: [https://github.com/ITU-AI-ML-in-5G-Challenge/DeepSense6G_TTI.git](https://github.com/ITU-AI-ML-in-5G-Challenge/DeepSense6G_TTI.git)
The rest of this paper is structured as follows. Section 2 describes our developed methods for multimodal sensor data transformation and processing. Section 3 describes our proposed multimodal transformer framework for sensing-assisted beam prediction, with discussions on the training method and extension capabilities. Section 4 presents experiments of our solution on the multimodal beam prediction applications with discussions on the studied approaches. Finally, the work is concluded in Section 5 with some future research directions for the framework.
## 2 Multimodal data transformation and processing
### Multimodal data transformation
We start by transforming LiDAR point clouds and radar signals into 2D vector space as well as calibrating GPS location data.
For LiDAR data, we convert the raw point clouds
into an image-like representation through a Bird's-Eye View (BEV). Specifically, the height, intensity, and density of the 3D point cloud are mapped to the red, green, and blue channels of a color image to generate the BEV image. Firstly, the point clouds within the Region Of Interest (ROI) are discretized into grid cells. Secondly, the height and intensity are encoded by the maximum values of the points in each grid cell. Finally, the density of the points is calculated [11]. The BEV representation for LiDAR point clouds has certain advantages. It can be used with a CNN [12] to extract hidden features, which can be further processed with images. Moreover, it preserves the basic structure of the point clouds and the depth information, while reducing the computational complexity compared with PointNet [11].
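A minimal sketch of this projection is given below; the grid extent, resolution, and the logarithmic density normalization are assumptions chosen for illustration rather than the exact values used in our pipeline.

```python
import numpy as np

def lidar_to_bev(points, x_range=(0, 80), y_range=(-40, 40), res=0.25):
    """Project an (N, 4) point cloud of [x, y, z, intensity] rows onto a
    3-channel BEV image: max height (R), max intensity (G), density (B)."""
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]
    xi = ((pts[:, 0] - x_range[0]) / res).astype(int)   # row index
    yi = ((pts[:, 1] - y_range[0]) / res).astype(int)   # column index
    H = int((x_range[1] - x_range[0]) / res)
    W = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((H, W, 3), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.float32)
    for r, c, z, inten in zip(xi, yi, pts[:, 2], pts[:, 3]):
        bev[r, c, 0] = max(bev[r, c, 0], z)       # height channel
        bev[r, c, 1] = max(bev[r, c, 1], inten)   # intensity channel
        count[r, c] += 1.0
    bev[:, :, 2] = np.minimum(1.0, np.log1p(count) / np.log(64.0))  # density
    return bev
```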
For radar data, we adopted the processing techniques used in [7]. The objective is to extract the range, the angles, and the velocity of the moving objects in the environment using the 2D Fourier transform. Since the camera and LiDAR do not provide explicit velocity information, we concatenate the Range-Angle maps with the Range-Velocity maps of the radar to preserve the speed information of the moving cars, as illustrated in Fig. 2. Furthermore, radar signals provide reliable speed measurements regardless of weather and lighting conditions [13].
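The sketch below illustrates how such maps can be obtained from a raw FMCW radar cube with NumPy FFTs; the cube shape (antennas × samples × chirps) and the number of angle bins are assumptions for illustration.

```python
import numpy as np

def radar_maps(frame, n_angle=64):
    """frame: complex radar cube of shape (ant, samples, chirps),
    e.g. (4, 256, 128). Returns range-angle and range-velocity maps."""
    range_fft = np.fft.fft(frame, axis=1)          # FFT over fast time -> range
    # Range-Angle: FFT across the antenna array, magnitude summed over chirps
    ra = np.fft.fftshift(np.fft.fft(range_fft, n=n_angle, axis=0), axes=0)
    range_angle = np.abs(ra).sum(axis=2).T         # (samples, n_angle)
    # Range-Velocity: FFT over slow time (chirps), summed over antennas
    rv = np.fft.fftshift(np.fft.fft(range_fft, axis=2), axes=2)
    range_velocity = np.abs(rv).sum(axis=0)        # (samples, chirps)
    return range_angle, range_velocity

# The two maps can then be normalized and concatenated along the second
# axis to keep the velocity information, in the spirit of Fig. 2.
```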
GPS data plays an important role in locating the UE's position. However, it is not always available or accurate in a practical system (due to connection and delay issues). In this challenge, only data from the first two out of the five GPS instances are provided. We first transform the GPS coordinates of the UE and the BS from geodetic (latitude/longitude) to Cartesian coordinates, then calculate the relative position between the UE and the BS of the \(n\)th GPS data, denoted as \((\Delta x_{n},\Delta y_{n})\). Afterward, we get the angle by \(\arctan(\Delta y_{n}/\Delta x_{n})\).
After exploring the dataset, we observed that the beam indices spread from 1 to 64 according to the UE's locations from left to right in the images. As the camera is located close to the BS, the beam indices are associated with the relative position (angle) between the UE and BS. However, the angles of the same beam index are different between scenarios, because roads are located in different positions with reference to the BS. Therefore, we calibrate the angle of the central pixel in the images of all scenarios.
Figure 1: Schematic representation of the input data sequence utilized in this challenge task [2]

Figure 2: Combining radar Range-Angle \(\mathbf{H}_{RA}\) and Range-Velocity \(\mathbf{H}_{RV}\) maps [7].

We first manually select the data samples of these four scenarios where the UE is located in the middle of the images and their corresponding beam indices fall in the range of [31, 34]. We then calculate their angles according to their relative positions, as \([\theta_{1}=-50.52^{\circ},\theta_{2}=44.8^{\circ},\theta_{3}=55.6^{\circ},\theta_{4}=-60^{\circ}]\). We rotate all the possible angles in each scenario by \(\theta_{i}\) (\(i=1,2,3,4\)). Finally, we obtain the calibrated angles of the first two instances.
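A sketch of this computation follows; the equirectangular approximation and the helper names are assumptions (any geodetic-to-Cartesian conversion would do over these short distances).

```python
import numpy as np

EARTH_R = 6371000.0  # mean Earth radius in metres

def gps_to_xy(lat, lon, lat0, lon0):
    """Equirectangular approximation of the local Cartesian offset of
    (lat, lon) from the reference point (lat0, lon0), in metres."""
    x = np.radians(lon - lon0) * EARTH_R * np.cos(np.radians(lat0))
    y = np.radians(lat - lat0) * EARTH_R
    return x, y

def calibrated_angle(ue_lat, ue_lon, bs_lat, bs_lon, theta_scenario):
    """Relative UE angle seen from the BS, rotated by the per-scenario
    calibration angle theta_scenario (degrees)."""
    dx, dy = gps_to_xy(ue_lat, ue_lon, bs_lat, bs_lon)
    return np.degrees(np.arctan2(dy, dx)) + theta_scenario
```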
### Multimodal data processing
In this section, we introduce several data processing techniques on the multimodal data for training the multimodal transformers on the beam prediction task.
#### 2.2.1 Camera data
Beam prediction from camera data is related to object detection and tracking tasks in computer vision. However, since there are no labels of the targeted UE in the image, we cannot distinguish it from other vehicles or pedestrians. Therefore, we tried to enhance the visual information of the vehicles in the images to allow the model to better recognize our targeted object.
**Brightness enhancement**: To overcome the darkness issue in the night scenarios 33 and 34, we utilize MIRNet [14] to enhance the brightness of these images. The vehicles become clearer, as shown in Fig. 3(b), compared to the raw image in Fig. 3(a).
**Segmentation**: To highlight the vehicles in the camera data, we use the PIDNet [15] to segment the vehicles from images in the daytime scenarios 31 and 32 shown in Fig. 4. We also test this method on the brightened images in the night scenarios 33 and 34, but the performance is poor, which may be due to loss of background information.
**Background masking**: We also tried to mask the background with the black color and keep the street scene. The images in the same scenario have the same background because the camera is stable. Beam prediction can be partially seen as trajectory prediction over the horizontal axis. We can potentially make the neural network focus on the vehicle's trajectory by making it dominant in the images, as shown in Fig. 5.
#### 2.2.2 LiDAR data
The LiDAR produces on average more than 16000 3D points in each time step. In order to reduce the size of the point-cloud data to speed up training the model, we preprocessed LiDAR data in the following ways:
**Background filtering:** We removed data points that correspond to static objects, i.e., buildings. As with the images, these regions are not in the Line-Of-Sight (LOS) link between the BS and UE, so they have less effect on beam prediction, and these points potentially add complexity and bias to the model. We subtract the background points from each point-cloud frame using the moving average of all the frames in each scenario and then keep the desired region surrounding the moving vehicles.
**FoV calibration:** We crop the BEV projection of the LiDAR data to keep its FoV consistent with the view in the images. This could potentially assist the CNN focus on the region aligned with the images, to better allow transformers to learn the relations between them.
#### 2.2.3 Data augmentation
Due to data imbalance between scenario 31 and others, we investigate data augmentation techniques to increase the dataset size for this scenario.
**Image:** Beam selection relies mainly on the transmitter/receiver locations and the geometry/characteristics of the surrounding environment [7]. In order to preserve this geometric information, we use only photometric transformations that are 'safe' for the beam prediction application [16]. We augment each image by randomly changing the brightness, contrast, gamma correction, hue channel, color saturation, and sharpness, and by performing Gaussian blurring on the image.
**Point-cloud:** Similar to the camera data, we perform two 'safe' data augmentation techniques for each point-cloud frame without deteriorating the geometric information of the environment: randomly down-sampling the point cloud by a factor of 10%, and adding a small and random 3D position deviation to each point. These transformations preserve the position and general shape of the objects in the environment (cars, buildings, pedestrians, etc.).

Figure 3: Image enhancement in night scenario

Figure 4: Image segmentation on vehicles (blue) in day scenario
**Radar signal:** In order to augment the radar data, we add a small and random noise to each normalized FFT coefficient. The added noise is limited to 10% of each FFT component's amplitude in order to preserve the shape of the spectrum. Hence, this transformation is 'safe' in the spectral domain.
**Multimodal data flipping:** Based on the observation that beam indices spread from 1 to 64 from left to right in the images, we horizontally flip the images, the radar maps, and the point-cloud data for augmentation. Meanwhile, to keep the GPS data and beam indices consistent with the flipped multimodal data, we negate the calibrated GPS angles and obtain the new beam indices by subtracting the original indices from 65. A minimal sketch of this consistent flipping is given after this list.
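The sketch below applies the flip consistently across modalities and labels; the container layout of a sample is an assumption, and the velocity axis of the range-velocity map is left untouched here for simplicity.

```python
import numpy as np

def flip_sample(images, bev, range_angle, angles, beam_idx, n_beams=64):
    """Horizontally flip all modalities and remap the labels consistently:
    the calibrated angles change sign, and beam i becomes beam 65 - i."""
    images_f = [img[:, ::-1].copy() for img in images]   # mirror image width
    bev_f = bev[:, ::-1].copy()                          # mirror BEV columns
    ra_f = range_angle[:, ::-1].copy()                   # mirror angle bins
    angles_f = [-a for a in angles]                      # reverse GPS angles
    beam_f = n_beams + 1 - beam_idx                      # 65 - original index
    return images_f, bev_f, ra_f, angles_f, beam_f
```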
## 3 Multimodal Transformers for Beam Prediction
In this section, we introduce our solution of a multimodal transformer framework for wireless communications and deep learning algorithms for sensing-assisted beam prediction.
### Multimodal transformer architecture
With multimodality data transformed into 2D vector spaces, we leverage a CNN to extract the higher-order features, and then learn the relations between them using transformers. Since the image, point-cloud, and radar signal raw data reside in very different representation spaces, it is difficult to create effective mathematical functions, i.e., through category theory, to transform them into a common abstraction space in order to learn their structures. However, with multiple layers of feature extraction on the CNN and relation learning on the transformers, the deep learning model could potentially converge on an effective representation of multiple modalities. Such a representation can be fine-tuned for different downstream tasks, which is essentially a structure minimization process.
In this context, we build a multimodal transformer architecture as illustrated in Fig. 6. We first employ a deep residual network (ResNet) [17] to encode the image, point cloud, and radar signal in feature space. Specifically, the ResNet is applied to each of the five instances of the RGB image, LiDAR BEV, and radar range angle-velocity map, after normalization, and scales it to a \(512\times 1\) feature vector.
Each ResNet block of convolution, batch normalization, non-linear activation, and pooling produces an abstracted feature vector as tokens. Note that for each modality we have five tokens sampled in different time steps. We use transformer encoder layers after each convolutional block to fuse the intermediate abstractions between the modalities of the image, point cloud, and radar map. The transformer uses linear projections for computing a set of queries, keys, and values. Scaled dot products are used between queries and keys to compute the attention weights and then aggregate the values for each query. Finally, a non-linear transformation is used to calculate the output features. It applies the attention mechanism multiple times throughout the structure, resulting in attention layers with multiple heads to generate several queries, keys, and values. Since each convolutional block encodes different aspects of the scene at different layers, thus several transformer blocks are used to fuse these features at multiple scales throughout the encoder.
The transformer learns the correlations between the data at different modalities and time steps. In theory, the fusion of image and point cloud can better represent the scene, especially in dark night scenarios. Furthermore, the radar velocity and angle maps can localize the moving objects in the scene. In this manner, the transformer can estimate the position of the target UE in the scene at the \(5^{th}\) instance.
The fused feature maps of the different modalities are propagated to the next convolutional blocks; this is repeated several times with transformer blocks, and the outputs are finally added together into a \(512\times 1\) feature vector. Because the calibrated GPS locations (angles) carry more direct information than the other three modalities and only the first two instances are available, these two angles are concatenated with the \(512\times 1\) vector and passed through MLP layers to produce weights over the 64 beam indices using the softmax function.
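As a simplified, single-fusion-stage PyTorch sketch of this architecture (the actual model interleaves transformer blocks between several convolutional stages; the head count, layer depth, and the assumption of 3-channel LiDAR/radar inputs are illustrative, not the exact implementation):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class BeamPredictor(nn.Module):
    def __init__(self, num_beams=64, d_model=512):
        super().__init__()
        def make_encoder():
            backbone = resnet34()  # assumes 3-channel inputs for all modalities
            backbone.fc = nn.Linear(backbone.fc.in_features, d_model)
            return backbone
        self.encoders = nn.ModuleDict({"image": make_encoder(),
                                       "lidar": make_encoder(),
                                       "radar": make_encoder()})
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # fused 512-d feature + 2 calibrated GPS angles -> 64 beam weights
        self.head = nn.Sequential(nn.Linear(d_model + 2, 256), nn.ReLU(),
                                  nn.Linear(256, num_beams))

    def forward(self, image, lidar, radar, gps_angles):
        # image/lidar/radar: (B, 5, 3, H, W); gps_angles: (B, 2)
        B, T = image.shape[:2]
        tokens = torch.cat([
            self.encoders[name](x.flatten(0, 1)).view(B, T, -1)
            for name, x in (("image", image), ("lidar", lidar),
                            ("radar", radar))
        ], dim=1)                                # 5 tokens per modality -> (B, 15, 512)
        fused = self.fusion(tokens).sum(dim=1)   # add tokens into one 512-d vector
        logits = self.head(torch.cat([fused, gps_angles], dim=-1))
        return logits.softmax(dim=-1)            # weights over the 64 beams
```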
Figure 5: Image background masking
### Training and optimization for beam prediction
We develop a number of training and optimization mechanisms to customize the model to the beam prediction task. Firstly, we transform the one-hot beam indices into a Gaussian-shaped distribution by positioning the peak at the best beam and cutting the weight off to 0 beyond its five neighboring beams. This adapts the cross-entropy loss function to the DBA score, where higher weights are given to beams that are closer to the best beam.
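A small sketch of this label transformation (the Gaussian width `sigma` is an assumed hyperparameter):

```python
import numpy as np

def gaussian_soft_label(best_beam, num_beams=64, sigma=1.5, cutoff=5):
    """Turn a one-hot beam label into a Gaussian-shaped target.

    The peak sits at `best_beam` (0-indexed) and the weight is cut to 0
    beyond its `cutoff` neighbouring beams.
    """
    idx = np.arange(num_beams)
    dist = np.abs(idx - best_beam)
    target = np.exp(-0.5 * (dist / sigma) ** 2)
    target[dist > cutoff] = 0.0        # zero outside the five-beam neighbourhood
    return target / target.sum()       # normalise to a proper distribution

# Usage with a cross-entropy-style loss: loss = -(target * log_probs).sum()
```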
We further apply a focal loss [18] method to improve training on a sparse set of hard examples. Data imbalance is a significant challenge in this task: there are far fewer data samples from scenario 31 than from the other scenarios, some beams have a much lower probability of being the best beam than others, and the adaptation dataset has a different sampling rate than the development dataset. To differentiate between easy and hard examples, a modulating factor \((1-p_{t})^{\gamma}\) is added to the cross-entropy loss, with tunable focusing parameter \(\gamma\geq 0\). Intuitively, it reduces the loss contribution from easy examples and extends the range of examples receiving a low loss.
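A sketch of a focal-loss-weighted cross entropy adapted to the soft Gaussian targets above (the value \(\gamma=2\) is the common default from the focal loss literature and is an assumption here):

```python
import torch

def focal_loss(log_probs, soft_targets, gamma=2.0):
    """Focal-loss-weighted cross entropy for soft beam targets.

    log_probs: (B, 64) log-softmax outputs; soft_targets: (B, 64)
    Gaussian-shaped labels.
    """
    probs = log_probs.exp()
    # (1 - p)^gamma down-weights easy, confidently predicted beams
    modulator = (1.0 - probs).pow(gamma)
    return -(soft_targets * modulator * log_probs).sum(dim=-1).mean()
```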
We also employ several training methods to stabilize convergence and make the model robust. We maintain the Exponential Moving Average (EMA) of the parameters during training and use it for evaluation instead of the final trained values. This smooths out the fluctuations of the final training steps and makes the model more robust.
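A minimal sketch of this EMA bookkeeping (the decay value is an assumption):

```python
import torch

class EMAWeights:
    """Exponential moving average of model parameters."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone()
                       for n, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model):
        for n, p in model.named_parameters():
            # shadow <- decay * shadow + (1 - decay) * current weights
            self.shadow[n].mul_(self.decay).add_(p, alpha=1.0 - self.decay)

    @torch.no_grad()
    def copy_to(self, model):
        # use the averaged weights for evaluation instead of the final ones
        for n, p in model.named_parameters():
            p.copy_(self.shadow[n])
```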
## 4 Performance evaluations and discussions
We performed experiments to train and evaluate our proposed multimodal transformer and data processing frameworks for beam prediction on the DeepSense challenge dataset [3]. The performance is measured by the DBA score defined in Eq. (1), where the distances of the predicted beam to the top three ground-truth beams are averaged according to the mmWave received signal power from the vehicle UE to the BS.
### Beam prediction accuracy
We combine the development and adaptation datasets, then randomly split them into 90% for training and 10% for validation (to choose the best model weights and hyperparameters). The learning rate starts from \(10^{-4}\). We validate and compare the performance of the different proposed data preprocessing, augmentation, ResNet (ResNet18 and ResNet34), and model training approaches according to the accuracy scores evaluated on the test dataset provided by the organizer. The configuration with the best performance on the validation dataset is submitted for evaluation. Since the training and test datasets have a large imbalance in scenario 31, this setup reveals the generalization capability of the trained model in the unseen scenario.
The experimental results of the different model training and data processing schemes are shown in Table 1. We compare the performance of the model on camera, radar, LiDAR, GPS, and multimodal data. We first observe that the experiment using ResNet34 to encode images achieves higher accuracy than the one with ResNet18, and that the experiments with camera data on all five instances of raw images already achieve an overall accuracy of 75%. This largely outperforms using only the last instance, indicating that the transformer can effectively utilize the relations between images sampled at different times to predict the beams, even though the car user is not indicated in the image. We can also see that its performance is better than, or similar to, most data preprocessing techniques, such as brightness enhancement, segmentation, and background masking, which further shows that the multimodal transformer model can generalize to different data domains without handcrafted processing. For example, its performance in the unseen scenario 31 is close to that of the same-day scenario 32, at nearly 70%, without any data augmentation. Furthermore, it performs 10% better in the night scenarios 33 and 34, mainly because the moving car lights in the images are easier to identify than the multiple objects appearing in the day scenarios.
Figure 6: Multimodal Transformers for Sensing-assisted Beam Prediction.
Regarding the radar and LiDAR data, the model achieves lower accuracy than with images, and ResNet18 outperforms ResNet34 when encoding these two data types. This is because the radar signals and point clouds received at the BS are reflected by all the moving vehicles and objects, making it hard for the model to detect the UE, while deeper residual layers lead to overfitting. Moreover, these signals have coverage constraints, causing issues in detecting UEs that are far away. Specifically, for the radar data, combining the range-angle and range-velocity maps performs 5% better, which validates that velocity information can help the transformer to predict the UE mobility and select the beam. For the LiDAR data, filtering the background degrades the performance, because reference information about the UE's environment can be cut out. This also explains why the model with LiDAR performs better than with radar, which only contains information about moving objects. Regarding the GPS data, angle calibration on the first two instances achieves the best accuracy in scenario 34, at 89%, and outperforms the combined distance and angle calibration on the 2nd instance. This indicates that only two instances of GPS data can predict the beam very effectively, reaching an accuracy of 74%, which is very close to using images. Our angle calibration scheme is very effective, while the distance information is less useful.

\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline
**Data Type** & **Scheme** & **Overall** & **Scenario 31** & **Scenario 32** & **Scenario 33** & **Scenario 34** \\ \hline
\multirow{8}{*}{Camera} & Raw Image18 & 0.6535 & 0.5124 & 0.7457 & 0.7705 & 0.8137 \\
 & **Raw Image34** & **0.7548** & **0.6982** & **0.7160** & **0.8024** & **0.8494** \\
 & 5th instance34 & 0.6546 & 0.5171 & 0.7568 & 0.7548 & 0.8204 \\
 & Brightness Enhancement34 & 0.7327 & 0.6853 & 0.7469 & 0.7371 & 0.8305 \\
 & Background Masking34 & 0.7571 & 0.6896 & 0.7383 & 0.8157 & 0.8570 \\
 & Image Segmentation34 & 0.6979 & 0.5873 & 0.7556 & 0.7824 & 0.8372 \\
 & EMA Model Weights34 & 0.7146 & 0.6178 & 0.7642 & 0.7852 & 0.8402 \\
 & Cross Entropy Loss34 & 0.7395 & 0.7018 & 0.7420 & 0.7410 & 0.8234 \\ \hline
\multirow{3}{*}{Radar} & Range - Angle \& Velocity34 & 0.2807 & 0.1840 & 0.2827 & 0.4429 & 0.3282 \\
 & Range - Angle \& Velocity18 & 0.3563 & 0.2936 & 0.3160 & 0.4800 & 0.3842 \\
 & Range - Angle18 & 0.3092 & 0.2462 & 0.1926 & 0.4686 & 0.3313 \\ \hline
\multirow{4}{*}{LiDAR} & Raw Point-Cloud34 & 0.4362 & 0.3171 & 0.4037 & 0.6781 & 0.4636 \\
 & Raw Point-Cloud18 & 0.4422 & 0.3260 & 0.4272 & 0.6705 & 0.4707 \\
 & FoV Calibration18 & 0.4223 & 0.2964 & 0.4370 & 0.6781 & 0.4310 \\
 & Background Filtering18 & 0.2794 & 0.2598 & 0.2123 & 0.2986 & 0.3313 \\ \hline
\multirow{2}{*}{GPS} & **Angle calibration** & **0.7425** & **0.6353** & **0.7704** & **0.8229** & **0.8906** \\
 & Angle calibration + distance on 2nd instance & 0.6266 & 0.4718 & 0.6704 & 0.8481 & 0.7262 \\ \hline
\multirow{4}{*}{Multimodal} & Images34 + Radar (Angle)34 & 0.6992 & 0.6304 & 0.6938 & 0.7533 & 0.8010 \\
 & Images34 + Radar (Angle)34 & 0.7206 & 0.6378 & 0.7383 & 0.8033 & 0.8148 \\
 & Images34 + Radar (Angle)18 & 0.7847 & 0.7846 & 0.7845 & 0.7845 & \\
 & **Images34 + GPS** & **0.7844** & **0.7298** & **0.7852** & **0.8462** & **0.8433** \\ \hline \hline
\multicolumn{2}{l}{Best score on the leaderboard of the challenge} & 0.7162 & 0.6536 & 0.7074 & 0.8576 & 0.712 \\ \hline
\end{tabular}
\end{table}
Table 1: Distance-based Accuracy of Beam Prediction on Multimodal Test Dataset
The best performance is achieved on multimodal data using images and GPS. The transformer on these two modalities produces an overall accuracy of 77%, which is better than using either of them separately. The gain is much more significant in the unseen scenario 31, with 10% higher accuracy than using GPS only. This proves the advantage of the feature-level multimodal fusion in our transformer framework. The GPS information can assist the model in identifying the UE in the images, which improves the accuracy in day scenarios. The fusion of images with radar and LiDAR data also largely outperforms using them separately, by 30% to 45%. Moreover, when fusing with images, employing ResNet34 for encoding the radar and LiDAR data yields considerable performance enhancements compared to ResNet18. Furthermore, we also apply the data augmentation techniques to scenario 31, demonstrating that flipping the images further enhances the performance, reaching the best overall accuracy of 78% among all schemes. Finally, we compare our solution with the best score on the leaderboard of the challenge, which uses convolutional autoencoders to fuse the images and GPS data [19]. Our multimodal transformer achieves 7% better accuracy overall and in scenarios 31 and 32, and 13% better accuracy in scenario 34. This further proves the effectiveness of this framework in solving the beam prediction problem.
### Model complexity
We investigate the complexity of our proposed framework in terms of Multiply-Accumulate Operations (MACs) and number of parameters (Params), and compare it with the best solution on the leaderboard of the challenge described in [19]. Note that the best solution of [19] utilized images, radar, and GPS data. The authors extract features from the images using CNN-based autoencoders. They threshold the radar heatmaps using a 2D Constant False Alarm Rate (CFAR) detector and then apply Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to obtain the object angle. They also calibrate the GPS data in a similar way to ours. Finally, these three pieces of preprocessed data are concatenated and passed through a dense model to predict the best beam. As the detailed parameters and code of the CFAR and DBSCAN steps in [19] are not available, we only calculate the MACs and Params of the feature extraction and the dense model. For our model, we study the main blocks in Fig. 6 with an input of five-instance data, together with three of the best schemes in Table 1. From Table 2, we observe that the MACs and Params of ResNet34 are nearly double those of ResNet18, and that the MACs and Params roughly quadruple from 'Transformer 1' through 'Transformer 4'. The most complex block is the 4th transformer. Our overall best scheme with transformers is also the most computationally costly. However, when considering only the 5th image, the scheme achieves a substantial reduction of \(\frac{4}{5}\) in terms of MACs. On the other hand, the best scheme relying solely on GPS is the simplest among all solutions. It is important to note that this scheme demonstrates significantly lower complexity while delivering superior performance compared to the best solution presented in [19]. Therefore, our low-complexity scheme is a viable option for scenarios with limited computational resources, while the high-complexity scheme is best suited for scenarios that demand high accuracy and possess abundant computational resources.

\begin{table}
\begin{tabular}{l l r r} \hline \hline
**Source** & **Block or Method** & **MACs** & **Params** \\ \hline
\multirow{6}{*}{Main blocks in Fig. 6 (5 instances)} & ResNet18 & 2,368,733,184 & 11,166,912 \\
 & ResNet34 & 4,784,652,288 & 21,267,648 \\
 & Transformer 1 & 127,221,760 & 400,000 \\
 & Transformer 2 & 506,101,760 & 1,586,432 \\
 & Transformer 3 & 2,018,836,480 & 6,318,592 \\
 & Transformer 4 & 8,064,204,800 & 25,220,096 \\ \hline
\multirow{3}{*}{This paper in Table 1} & Overall best scheme & 34,740,378,624 & 54,982,784 \\
 & Best scheme of camera data (5th instance34) & 6,948,213,248 & 54,982,272 \\
 & Best scheme of GPS data (Angle calibration) & 41,472 & 41,920 \\ \hline
\multirow{2}{*}{In [19]} & Feature extraction & 191,949,184 & 39,998,304 \\
 & Dense model & 303,616 & 304,704 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: MACs and Params
## 5 Conclusions and Future Work
In this paper, we present a multimodal transformer deep-learning solution for wireless communications and perform a case study on mmWave beam prediction for a target vehicle user. The transformer encoder is used to learn abstracted relations between the features of images, point clouds, and radar signals at multiple time instances, extracted by convolutional layers. Multiple layers of transformers and ResNets are stacked to learn higher-level abstractions for downstream tasks. We employed data transformation techniques, namely point cloud projection, radar range-angle and range-velocity FFTs, and GPS calibration, to convert the multimodal data into 2D vector spaces. Furthermore, we developed data processing techniques to improve the task, including image brightness enhancement, segmentation, background masking, LiDAR field-of-view calibration, and background filtering. We also proposed data augmentation, including flipping the images, to reduce overfitting in training. We trained the model by applying focal loss and exponential moving average techniques.
Experimental results show that our proposed multimodal transformer solution using image and GPS data achieves the best distance-based accuracy of the predicted beams at 78.44%, with effective generalization to each of the scenarios at 73%, 78.5%, 84.6%, and 84.3%, respectively. This significantly outperforms using LiDAR and radar, as well as each single modality alone. Specifically, the transformer effectively utilizes the GPS information to detect the target UE in the images, whilst the images assist the GPS data in generalizing better to the unseen scenario. Furthermore, it also performs 7% better than the best state-of-the-art solution using autoencoders. We conclude that our proposed multimodal transformer can effectively perform tasks across the visual and radio domains and generalize to different scenarios without customized data preprocessing and augmentation.
More advanced deep learning models and techniques are worth studying to improve performance further, especially for feature extraction from data in other modalities. Domain generalization is an important issue in this task, because the data in scenario 31 and the changed sampling rate in the test dataset follow a different distribution than the training dataset. The Batchformer [20] algorithm is potentially effective in making the model robust to imbalanced data by exploring relationships between data samples. Moreover, semi-supervised learning, such as the FixMatch [21] algorithm, can improve the model on unlabeled data by training on pseudo-labels derived from evaluation confidences. These methods are useful in practice as they introduce no additional computational complexity.
The multimodal transformer framework can be utilized to build a foundation model to empower multiple downstream tasks in wireless communications. We can pretrain with self-supervised learning to build a generative model from sequences of images, LiDAR, radar, and radio signals collected at different times, frequencies, and locations, which learns high-level abstractions and relations among them. The transformer output can be stacked with classification or regression layers and fine-tuned for downstream tasks related to this data, such as channel prediction, beam management, and modulation. The pretrained model can also be used on devices with fewer sensors and less computing power by adapting the model branch and depth. It is a promising research direction to investigate such architecture for foundation models in the wireless communication domain.
## Acknowledgment
We would like to thank TII for supporting our participation in the ITU AI/ML for 5G Challenge 2022.
|
2309.08398 | Exploring Meta Information for Audio-based Zero-shot Bird Classification | Advances in passive acoustic monitoring and machine learning have led to the
procurement of vast datasets for computational bioacoustic research.
Nevertheless, data scarcity is still an issue for rare and underrepresented
species. This study investigates how meta-information can improve zero-shot
audio classification, utilising bird species as an example case study due to
the availability of rich and diverse meta-data. We investigate three different
sources of metadata: textual bird sound descriptions encoded via (S)BERT,
functional traits (AVONET), and bird life-history (BLH) characteristics. As
audio features, we extract audio spectrogram transformer (AST) embeddings and
project them to the dimension of the auxiliary information by adopting a single
linear layer. Then, we employ the dot product as compatibility function and a
standard zero-shot learning ranking hinge loss to determine the correct class.
The best results are achieved by concatenating the AVONET and BLH features
attaining a mean unweighted F1-score of .233 over five different test sets with
8 to 10 classes. | Alexander Gebhard, Andreas Triantafyllopoulos, Teresa Bez, Lukas Christ, Alexander Kathan, Björn W. Schuller | 2023-09-15T13:50:16Z | http://arxiv.org/abs/2309.08398v2 | # Exploring Meta Information for Audio-Based Zero-Shot Bird Classification
###### Abstract
Advances in passive acoustic monitoring and machine learning have led to the procurement of vast datasets for computational bioacoustic research. Nevertheless, data scarcity is still an issue for rare and underrepresented species. This study investigates how meta-information can improve zero-shot audio classification, utilising bird species as an example case study due to the availability of rich and diverse meta-data. We investigate three different sources of metadata: textual bird sound descriptions encoded via (S)Bert, functional traits (Avonet), and bird life-history (BLH) characteristics. As audio features, we extract audio spectrogram transformer (AST) embeddings and project them to the dimension of the auxiliary information by adopting a single linear layer. Then, we employ the dot product as compatibility function and a standard zero-shot learning ranking hinge loss to determine the correct class. The best results are achieved by concatenating the Avonet and BLH features attaining a mean F1-score of \(.233\) over five different test sets with \(8\) to \(10\) classes.
Alexander Gebhard\({}^{1}\), Andreas Triantafyllopoulos\({}^{1}\), Teresa Bez\({}^{1}\),
Lukas Christ\({}^{1}\), Alexander Kathan\({}^{1}\), Bjorn W. Schuller\({}^{1,2}\)
\({}^{1}\)Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
\({}^{2}\)GLAM - Group on Language, Audio, & Music, Imperial College London, UK
**Index Terms:** bioacoustics, zero-shot classification, machine learning, computer audition
## 1 Introduction
The field of computational bioacoustics has recently witnessed tremendous growth thanks to rapid technological progress, and in particular by exploiting the recent advances in machine learning [1]. The availability and affordability of good hardware, such as microphones or storage devices, vastly expand the capabilities of bioacoustic monitoring [1]. It is now possible to continuously capture audio in multiple and large areas at the same time, which leads to an enormous amount of audio data that needs to be processed [1, 2]. As a result, experts do not have enough time and resources to analyse or annotate the data on their own without automated processes, which makes the use of computational methods imperative. These methods, once trained on sufficient amounts of data, can be used to speed up the detection of species through their vocalisations. However, there is a plethora of rare or threatened species for which there may not be enough data to train an initial model [3]; yet, they in particular are most interesting from a biodiversity perspective, making their successful detection a crucial aspect of monitoring and conservation efforts. This is where zero-shot learning (ZSL) could be applied to annotate audio samples without any previous labelling efforts, relying only on external meta information. This auxiliary information can be textual annotations of sound classes or events [4, 5], can come from other modalities like images [6], or even from multiple modalities at the same time [7].
Important advancements in ZSL have been primarily achieved in the computer vision domain, and rely on semantic information such as text data [8, 9, 10]. After the initial success in the visual domain, the computer audition community also adopted and refined ZSL approaches for their tasks [5, 6]. Among the recent breakthroughs are adaptations of the CLIP method from computer vision, such as AudioClip [7], Wav2CLIP [11], or CLAP [4].
The visual recognition of avian species has become a standardised benchmark for ZSL, owing to the importance of the problem and the availability of suitable metadata: binary description attributes [12], textual descriptions [13], field-guide visualisations [14], and even DNA data [15] can all serve as mediating attributes for ZSL. Yet, while the visual recognition of birds is a vital aspect of biodiversity research, the more pressing issue of zero-shot auditory recognition has not received as much attention, despite the fact that audio offers improved monitoring capabilities for birds in the high-occlusion conditions of their natural habitats. While considerable efforts have been expended in closed-set [16] and few-shot recognition [17, 18], and some recent interest in open-set recognition [19], we have found no previous works investigating the application of ZSL on audio-based recognition of birds. This is a gap we attempt to mitigate in the present contribution.
Specifically, we aim to identify the most promising form of metadata that can serve as mediating variables for ZSL. Our starting point is a dataset of \(95\) European bird species assembled from Xeno-Canto. These particular species have been selected based on a recent survey from Jung _et al._[20]
on the presence of avian species in the areas monitored by an ongoing, large-scale biodiversity programme, the biodiversity exploratories (BE)1. We explore the following alternative forms of metadata: a) vocalisation descriptions extracted from a standardised field-guide, b) aggregated morphological attributes, and c) life-history traits. For our modelling, we draw on recent works on audio-based ZSL and rely primarily on a simple ZSL model [5] - our goal is to establish a baseline and not go after state-of-the-art results.
Footnote 1: [https://www.biodiversity-exploratories.de/en/](https://www.biodiversity-exploratories.de/en/)
## 2 Dataset
For our experiments, we utilise audio data and meta information from \(95\) European bird species. The 95 birds were chosen based on a previous survey by Jung _et al._[20] from the BE 2, since our ultimate goal will be to deploy our ZSL model to automatically annotate the audio data of this project. The audio data was collected from xeno-canto while the auxiliary information was taken from three different sources to investigate their influence on the model performance. The audio data from xeno-canto are in MP3 format and comprise roughly \(725\) hours. The species distribution is quite imbalanced as can be seen in Figure 1, highlighting the relative dearth of information for some species.
Footnote 2: [https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)
The first type of metadata was taken from the standard Princeton Field Guide by Svensson _et al._[21], which contains descriptions of bird sounds for the 95 species. That is, the sound of a bird species is described in a textual manner w. r. t. specific patterns and peculiarities, while relying heavily on onomatopoeia. The following quote, belonging to the species _phoenicurus ochruros_, gives an impression of these descriptions:
Call a straight, slightly sharp whistle, 'vist', often repeated impatiently. When highly agitated, a discreet clicking is added, 'vist, tk-tk-tk'. Song loud, frequently given at first light from a high perch, usually consists of four parts: starts with a few whistles and a rattling repetition of the same note, followed by a pause c. 2 sec. long, then a peculiar crackling sound (not very far-carrying), after which the verse terminates with some brief whistled notes, e. g. 'si-sru TILL-ILL-ILL-ILL-ILL...... (krschkrschkrsch) SRU svisvi'.
For the SBert embeddings, we employ the paraphrase-multilingual-mpnet-base-v2 model5. For each bird, we extract both Bert and SBert feature vectors of size \(768\) by feeding the bird's entire sound description into the respective model. From the Bert model, the averaged embeddings of the final layer are taken as the text representation, while the SBert model's embedding is obtained via the provided pooling method.
Footnote 5: [https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
In this context, we analyse the pairwise cosine similarities among the bird species' embeddings to see which of the two textual embedding methods creates stronger distinctions. That is, for each bird species we compute the cosine similarity between the (S)Bert embeddings of this species and every other species, yielding a matrix of cosine similarities w. r. t. those embeddings. These matrices are visualised as heat maps in Figure 2 and suggest that SBert yields more distinguishable representations for the species. Therefore, we expect SBert to achieve a better model performance than Bert. The corresponding results are reported and discussed in Section 4.
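A short Python sketch of this similarity analysis (embedding matrices of shape (95, 768) are assumed):

```python
import numpy as np

def pairwise_cosine(embeddings):
    """Pairwise cosine similarities for a (num_species, 768) embedding
    matrix, as visualised in the heat maps of Figure 2."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T  # (num_species, num_species)

# e.g. sim_bert = pairwise_cosine(bert_embeddings)
#      sim_sbert = pairwise_cosine(sbert_embeddings)
```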
Regarding the Avonet features, we keep only the information unrelated to the species name or family and also discard the geographical attributes, such as latitude and longitude. We likewise omit the species- and family-related information from the BLH dataset. Additionally, all attributes that have more than \(10\) NaN values are dropped for both feature sets, leaving \(23\) and \(77\) features, respectively. All remaining NaN values are set to \(0\). Finally, each string-valued feature is encoded to a numerical representation with a common label encoder. Before these features are fed to the model, each of them is scaled to the range \([0,1]\) via min/max normalisation.
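A minimal pandas sketch of this preprocessing, assuming the name-, family-, and geography-related columns have already been removed (column names are dataset-specific):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def preprocess_meta(df, max_nan=10):
    """Drop sparse columns, fill NaNs, encode strings, and scale to [0, 1]."""
    df = df.loc[:, df.isna().sum() <= max_nan]  # drop columns with >10 NaNs
    df = df.fillna(0)                            # remaining NaNs set to 0
    for col in df.select_dtypes(include="object").columns:
        df[col] = LabelEncoder().fit_transform(df[col].astype(str))
    # min/max normalisation per feature (epsilon guards constant columns)
    return (df - df.min()) / (df.max() - df.min() + 1e-12)
```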
### Zero-Shot Classification
The ZSL procedure we employ in this study builds on the approach presented in [5]. They rely on previous work from Weston _et al._[27] and Akata _et al._[28] and apply a compatibility function to an acoustic-semantic projection in order to classify a sound class. Their training procedure involves a ranking hinge loss that exploits the compatibility of the projections. The sound class with the highest compatibility is considered the correct class.
Out of the features from Section 3.1, we employ the AST representations as our acoustic embeddings \(A(x)\) and the (S)Bert embeddings, the Avonet, as well as the BLH features as class embeddings \(C(y)\). In order to project the acoustic to the class embeddings, we utilise a single linear layer which has as many neurons as the size of the respective class embeddings, as done by Xie and Virtanen [5]:
\[P(A(x))=W^{T}A(x) \tag{1}\]
In order to check the compatibility between the projected acoustic and the class embeddings, we adopt the dot product.
\[F(P(A(x)),C(y))=P(A(x))^{T}C(y) \tag{2}\]
This compatibility function is later exploited in the ranking loss function which is the same as in [5, 27]. The goal is that the highest ranked class embeddings best describe the audio sample. Thus, after computing the compatibility, the ranks \(r_{y_{n}}\) for each batch element are determined and the corresponding penalties \(\rho(r_{y_{n}})\) are calculated as
\[\rho(r_{y_{n}})=\sum_{i=1}^{r_{y_{n}}}\frac{1}{i} \tag{3}\]
with \(\rho(0)=0\). Then, a version of the hinge loss \(h\) is computed by employing the compatibility function as in [5], with \(\Delta(y_{n},y)=0\) if \(y_{n}=y\) and 1 otherwise:
\[h(x_{n},y_{n},y)=\Delta(y_{n},y)+F(P(A(x_{n})),C(y))\] \[-F(P(A(x_{n})),C(y_{n})) \tag{4}\]
Following the weighted approximate-rank pairwise (WARP) loss from Weston _et al._[27], our final ranking hinge loss is
\[\frac{1}{N}\sum_{n=1}^{N}\frac{\rho(r_{y_{n}})}{r_{y_{n}}}\sum_{y}\max(0,h(x_{n },y_{n},y)), \tag{5}\]
where \(0/0=0\) if \(r_{y_{n}}=0\).
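For illustration, a compact PyTorch sketch of Eqs. (1)-(5), batched over audio samples (shapes and names are assumptions, not the exact implementation of [5]):

```python
import torch
import torch.nn as nn

class ZeroShotRanker(nn.Module):
    """Acoustic-semantic projection (Eq. 1), dot-product compatibility
    (Eq. 2), and the WARP-style ranking hinge loss (Eqs. 3-5)."""

    def __init__(self, audio_dim, class_dim):
        super().__init__()
        self.proj = nn.Linear(audio_dim, class_dim, bias=False)  # Eq. (1)

    def forward(self, audio_emb, class_emb, labels):
        # audio_emb: (B, audio_dim), class_emb: (K, class_dim), labels: (B,)
        scores = self.proj(audio_emb) @ class_emb.T              # Eq. (2)
        true = scores.gather(1, labels[:, None])                 # F(x_n, y_n)
        delta = 1.0 - torch.nn.functional.one_hot(
            labels, scores.size(1)).float()                      # Delta(y_n, y)
        hinge = (delta + scores - true).clamp(min=0)             # Eq. (4)
        rank = ((delta + scores) > true).sum(dim=1)              # r_{y_n}
        harmonic = torch.cumsum(
            1.0 / torch.arange(1, scores.size(1) + 1,
                               dtype=scores.dtype,
                               device=scores.device), dim=0)
        penalty = harmonic[(rank - 1).clamp(min=0)]              # Eq. (3)
        weight = torch.where(rank > 0, penalty / rank.clamp(min=1),
                             torch.zeros_like(penalty))          # 0/0 = 0
        return (weight * hinge.sum(dim=1)).mean()                # Eq. (5)
```

At inference, the predicted class is simply the argmax over the compatibility scores.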
### Experimental Setup
In order to obtain robust results, we create five different splits, each consisting of a training (\(\sim\)80%), development (\(\sim\)10%), and test (\(\sim\)10%) set. We make sure that each dev and test set comprises different species than the other four dev/test sets, i. e., they are disjoint. Our experiments investigate how well the ZSL approach described in Section 3.2 performs with the three meta information sources described in Section 2. For this purpose, we employ the Bert, SBert, Avonet, and BLH features introduced in Section 3.1. The experiments are conducted on the five splits, and the mean performance on the dev/test sets is reported. We train for \(30\) epochs and utilise a stochastic gradient descent (SGD) optimiser with a learning rate of \(.0001\) and a batch size of \(16\). Furthermore, we apply the dot product as the compatibility function for our ranking loss explained in Section 3.2. These settings and parameters were determined in preliminary experiments. The model state that performs best on the development set is then employed for the evaluation on the test set; it is selected based on its macro F1-score, which balances precision and recall. The results and the corresponding discussion are presented in Section 4.
Figure 2: The pairwise cosine similarities between the \(95\) bird species for the Bert embeddings (left) and the SBert embeddings (right) visualised as heat maps. The SBert embeddings show a stronger difference among the birds.
## 4 Results
For each test set of the five splits, we have \(8\) to \(10\) species, which implies a chance level between \(10\)% and \(12.5\)% of picking the correct class. Regarding the development sets, we have \(9\) to \(11\) species, entailing a chance level between \(9.1\)% and \(11.1\)%. The results of our experiments are listed in Table 1 and show the mean performance over all five splits. Since our experiments were optimised w. r. t. the F1-score, this is also the decisive metric for their evaluation. The best-performing setting is therefore the concatenation of the Avonet and BLH features. This outcome makes particular sense considering that the BLH features achieved the second-best and the Avonet features the third-best performance.
Interestingly, the bird sound descriptions, which we encoded with Bert and SBert, perform worse than the morphological, ecological, and life-historical meta features. Avonet and BLH even outperform both encodings on all three tabulated metrics of Table 1. This may be because the pre-trained language models have likely never seen bird-specific onomatopoeia such as 'vist, tk-tk-tk' or 'si-sru TILL-ILL-ILL-ILL-ILL... (krschkrschkrsch) SRU svisvi' in the description example quoted in Section 2 and thus might omit this information completely.
Since both language model embeddings performed on an equal level, we only investigated the concatenation of the Bert embeddings with the other meta feature sets. Unlike the concatenation of Avonet with BLH, the concatenation of Bert with the functional and life-historical feature sets does not lead to an improvement, but rather a deterioration.
Since Figure 2 in Section 3.1 suggests that SBert creates more distinguishable embeddings than plain Bert, we would furthermore expect to see a noticeable difference in their performances. However, SBert even performs slightly worse than Bert. This might be due to SBert putting more focus on semantic meaning than Bert, which is difficult to achieve when the onomatopoeia are not properly considered and thus crucial information is discarded. The difference becomes more obvious when consulting the results in Table 1: the mean F1-score over the development and the test sets is nearly the same for Bert, while there is a large discrepancy for the SBert features, which may be a sign of overfitting.
## 5 Conclusion
We investigated three different sources of meta information for zero-shot audio classification of bird species: (S)Bert embeddings of textual descriptions of bird sounds, Avonet features comprising functional traits, and BLH features containing bird life-history characteristics. Our results suggest that the concatenation of Avonet and BLH achieves the best performance, with a mean F1-score of \(.233\) over five disjoint test sets, followed by solely utilising the BLH and Avonet features, with mean F1-scores of \(.221\) and \(.191\), respectively. The morphological, ecological, and life-historical meta information therefore outperformed the encoded bird sound descriptions. We hypothesise that this is due to the language models not being pre-trained or fine-tuned on bird-specific onomatopoeic words or sentences.
Future work could be to pre-train or fine-tune existing language models to better deal with onomatopoeic words and sentences, so that this information is included in the embeddings. Furthermore, images of the bird species should be considered as another source of auxiliary information, which can be properly encoded and processed together with the audio samples. Since our goal was not to achieve state-of-the-art performance but to investigate different meta features for our task, next steps could also be to improve the general model performance by employing and adapting more sophisticated ZSL models.
## 6 Acknowledgement
This work was funded by the DFG project No. 512414116 (HearTheSpecies) and the DFG project No. 442218748 (AUDI0NOMOUS).
\begin{table}
\begin{tabular}{l c c c c c c}
 & \multicolumn{3}{c}{**Dev**} & \multicolumn{3}{c}{**Test**} \\ \cline{2-7}
**Embeddings** & **ACC** & **UAR** & **F1** & **ACC** & **UAR** & **F1** \\ \hline
Bert & .220 & .195 & .169 & .188 & .208 & .167 \\ \hline
Avonet & .372 & **.298** & .262 & .267 & .215 & .191 \\ \hline
BLH & **.384** & _.288_ & **.265** & **.289** & .286 & .221 \\ \hline
SBert & .306 & .238 & .219 & .197 & .185 & .163 \\ \hline
Bert+Avonet+BLH & .181 & .175 & .154 & .175 & .168 & .151 \\ \hline
Bert+Avonet & .254 & .193 & .178 & .169 & .158 & .141 \\ \hline
Bert+BLH & .198 & .183 & .164 & .164 & .178 & .141 \\ \hline
Avonet+BLH & _.335_ & .281 & .244 & .287 & **.295** & **.233** \\ \hline
\end{tabular}
\end{table}
Table 1: The mean results over the development (Dev) and test (Test) sets of the five splits from Section 3.3. The best performance for each metric is marked in bold, the second best in italics. The F1-score is the main evaluation metric. |
2309.16818 | MEM: Multi-Modal Elevation Mapping for Robotics and Learning | Elevation maps are commonly used to represent the environment of mobile
robots and are instrumental for locomotion and navigation tasks. However, pure
geometric information is insufficient for many field applications that require
appearance or semantic information, which limits their applicability to other
platforms or domains. In this work, we extend a 2.5D robot-centric elevation
mapping framework by fusing multi-modal information from multiple sources into
a popular map representation. The framework allows inputting data contained in
point clouds or images in a unified manner. To manage the different nature of
the data, we also present a set of fusion algorithms that can be selected based
on the information type and user requirements. Our system is designed to run on
the GPU, making it real-time capable for various robotic and learning tasks. We
demonstrate the capabilities of our framework by deploying it on multiple
robots with varying sensor configurations and showcasing a range of
applications that utilize multi-modal layers, including line detection, human
detection, and colorization. | Gian Erni, Jonas Frey, Takahiro Miki, Matias Mattamala, Marco Hutter | 2023-09-28T19:55:29Z | http://arxiv.org/abs/2309.16818v1 | # MEM: Multi-Modal Elevation Mapping for Robotics and Learning
###### Abstract
Elevation maps are commonly used to represent the environment of mobile robots and are instrumental for locomotion and navigation tasks. However, pure geometric information is insufficient for many field applications that require appearance or semantic information, which limits their applicability to other platforms or domains. In this work, we extend a 2.5D robot-centric elevation mapping framework by fusing multi-modal information from multiple sources into a popular map representation. The framework allows inputting data contained in point clouds or images in a unified manner. To manage the different nature of the data, we also present a set of fusion algorithms that can be selected based on the information type and user requirements. Our system is designed to run on the GPU, making it real-time capable for various robotic and learning tasks. We demonstrate the capabilities of our framework by deploying it on multiple robots with varying sensor configurations and showcasing a range of applications that utilize multi-modal layers, including line detection, human detection, and colorization.
## I Introduction
Autonomous unmanned ground vehicles deployed in outdoor environments need to understand their surroundings to operate in a safe and reliable manner. Onboard range sensors, such as LiDARs, stereo cameras, or RADARs can provide the robot with a spatial understanding of its surroundings. However, these different representations have varying nature and noise, which motivates their aggregation into a single unifying map.
Different types of maps have been developed for robotics. The simplest are 2D maps [24], which store the occupancy information in each cell. 3D volumetric maps are another popular representation [15, 20] that better encodes the geometry of the environment, though at a higher computational and memory cost. 2.5D maps (also called elevation or height maps) [5, 6, 18] are a compromise between both and are well suited for ground platforms. All the aforementioned maps only contain geometric information about the environment. To successfully capture different environments, these representations need to be _multi-modal_; here, _multi-modal_ refers to different information content types, i.e., they encode semantic classes, friction, traversability, color, or other task-specific information, which is an active area of research nowadays [4, 10, 14, 23]. The multi-modal information can significantly enhance the performance of a variety of downstream tasks. For example, semantic information enables robots to differentiate concrete from grass and mud, which allows them to plan a path on the less cost-intensive road surface.
In this work, we aim to contribute to the development of multi-modal robotic perception by presenting a flexible, real-time capable Multi-Modal Elevation Map (MEM) framework. It builds upon our previous work [18] to allow the seamless integration of multi-modal information such as
geometry, semantics, and other data in various modalities from different sources. It is implemented on a Graphics Processing Unit (GPU) to accelerate the heavy calculation of large data structures. In addition, our framework provides customizable post-processing plugins as introduced in [18], enabling the map to be adopted in a variety of tasks, including its use in learning approaches. This allowed us to develop different robotics and learning applications in a simple way, only using the single unified MEM framework. Specifically, the contributions are:
* A flexible multi-modal elevation mapping framework that can be fed with different representations such as point clouds and monocular images.
* Fusion strategies to deal with multi-modal data, such as geometry, RGB information, high-dimensional features, and semantic classes.
* Demonstration of multiple applications for robotics and learning that exploit the flexibility of the approach in terms of sensing and data modalities.
* Open source implementation of the system for the benefit of the community1. Footnote 1: [https://github.com/leggedrobotics/elevation_mapping_cupy](https://github.com/leggedrobotics/elevation_mapping_cupy)
## II Related Work
In this section, we review existing literature on map representations and semantic maps.
### _Map representations_
First, we review discrete representations for mapping, such as 2D occupancy maps, 2.5D elevation maps, and 3D volumetric representations. 2D occupancy maps are a common representation of the environment that efficiently represents free and occupied space on a horizontal grid [24]. They are suited for flat terrain and are computationally efficient. However, they cannot capture uneven surfaces, making them unsuitable for complex terrains. 3D volumetric representations overcome this limitation and are able to represent the 3D complexity of the environment [8], making them ideal for platforms such as drones, which can freely move in 3D space. Common approaches include octrees [15] or voxel maps [20]. These maps are capable of handling overhanging obstacles, but creating volumetric maps comes at the cost of increased memory and computation. For ground robots, such as wheeled and legged platforms, 2.5D representations such as elevation maps [5] built from 3D sensing are a good compromise between expressiveness and efficiency, making them a popular choice for robotic applications. These maps discretize the horizontal plane into cells but store a continuous height value for each cell, allowing them to represent complex and rough terrain such as staircases, steep slopes, or cluttered surfaces, which are particularly relevant for ground platforms. Multiple implementations of elevation mapping systems have been developed for Central Processing Unit (CPU) systems, such as those of Fankhauser et al. [6] and Ewen et al. [4]. Pan et al. [21] and Miki et al. [18] introduce a GPU-based implementation that enables real-time use of 2.5D elevation maps even with large amounts of data. Our work builds upon the latter and extends it for RGB sensors, as well as adding new features to store semantic information.
### _Semantic Mapping_
To successfully perform certain tasks such as navigation and locomotion, purely geometric information does not suffice and semantic information is imperative [10]. Accumulation of the generated multi-task data in a map is desirable for downstream applications. This has been done for 3D volumetric maps [14, 23], as well as for 2.5D semantic elevation maps [4, 17]. In the following, we will review the semantic mapping approaches applied to ground robots and provide an overview of the existing methods and their features in Tab. I.
In [17], a robot-centric 2.5D elevation map is generated by fusing a LiDAR point cloud into the map. A pixelwise semantic segmentation is extracted from images with a neural network and fused into the map using Bayesian inference. Hence, the map mainly contains fixed class probabilities given by the network. While our work is inspired by this fusion approach, we aim to extend the formulation to any kind of data modality. Additionally, as shown in Tab. I, this work is closed-source.
In [10] the authors present a multi-tasking neural network that takes RGB images as input and outputs class probabilities and traversability estimations. The semantics in combination with their depth maps are fused into a multi-layer semantic 3D volumetric map with a Bayesian inference approach [11], that leverages local correlations of the grid. This method runs on a CPU and it is fixed to the map frame targeting large-scale semantic reconstructions. This volumetric approach runs with logarithmic complexity. While it is able to extend to new information modalities, new tasks need to be added to the multi-task network which requires retraining the entire custom network. As shown in Tab. I, the method is not able to process monocular images containing multi-modal information.
In [4] the authors present a method that generates a robot-centric elevation map that takes data from a single RGB-D camera as input and fuses the semantics into the map by associating the pixels to the elevation map with the depth information. To update the semantic information, a Bayesian inference technique is employed. This approach focuses on
inputting class probabilities while we aim to provide a general framework that accepts multi-modal data and can potentially be extended for different platforms.
In this work, we extend our framework presented in [18] to additionally capture multi-modal information. We present a highly configurable GPU-based robot-centric elevation mapping, that can take multi-modal point clouds as input as well as additional data modalities from multi-modal images. The latter do not provide geometrical information but merely extend the modalities of the map. The MEM has already proven its effectiveness for fusing visual traversability in concurrent work [9]. It runs in real-time and it is open source as shown in Tab. I.
## III Approach
The goal of this approach is to extend the elevation mapping presented in [18] by additionally fusing multi-modal information into the map.
### _Overview_
As shown in Fig. 2, the framework can be divided into three core modules: Data Association (Sec. III-B), Fusion Algorithms (Sec. III-C), and Post-Processing (Sec. III-D). Additionally, before inputting the sensor data into the core modules, the raw sensor data can be pre-processed to extract multi-modal information, such as visual features or semantic segmentations. The framework can take both multi-modal point clouds with additional fields as well as multi-modal images with custom channels as input. These structures can contain multi-modal information such as class probabilities, RGB colors, and semantic features in each data point, which will be fused into the map. In the Data Association module, the data is matched to the cells in the map either by associating each data point of the point cloud to a cell or by projecting each cell into the image. After establishing the correspondence between the data and the cells, the Fusion Algorithm updates the values of the cells by fusing the new information. The map consists of pure geometric as well as multi-modal layers. The latter are task-agnostic information containers and thus do not differentiate between different information modalities. The fusion algorithm is layer-specific and defines which information modality can be fused into these layers, as different modalities might need different fusion techniques. These two modules combined generate a unified manner to input multi-modal data into the map. As a final step, the generated layers can be used for post-processing. The post-processing module consists of a plugin system where the user can choose which map layers to input and whether to generate new layers, modify the information within existing layers, or perform additional processing for external components.
### _Data association_
The proposed framework establishes correspondences between map cells and the data of a multi-modal point cloud or a monocular multi-modal image. Multi-modal point clouds can be directly associated to cells according to the horizontal coordinates of each point in parallel. The point cloud can be additionally used to update the elevation information according to [18].
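A small numpy sketch of this point-to-cell association (grid parameters and frame conventions are illustrative assumptions, not the framework's API):

```python
import numpy as np

def point_to_cell(points, center, resolution, map_size):
    """Map each 3D point to a cell index of the robot-centric grid.

    points: (N, 3) in the map frame; center: (2,) robot xy position;
    resolution: cell edge length in metres; map_size: cells per side.
    """
    rel = (points[:, :2] - center) / resolution + map_size / 2.0
    idx = np.floor(rel).astype(int)                        # (N, 2) cell coords
    valid = np.all((idx >= 0) & (idx < map_size), axis=1)  # inside the map
    return idx[valid], valid
```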
In contrast, multi-modal images do not provide elevation information, and we therefore rely on the elevation information accumulated within the map and a ray-casting scheme for the data association, shown in Fig. 3. For most applications, the number of potentially visible cells within the robot-centric elevation map is smaller than the number of pixels contained within the image; we therefore choose to project the cells onto the image. For each cell in the Field of View (FoV) of the camera, we evaluate if the cell is visible by checking the height of each intermediate cell along the ray connecting the camera's focal point and the cell. The intermediate cells are computed using Bresenham's algorithm [2]. A correspondence is successfully established between the cell and the image plane if, for all intermediate cells along the ray, the map elevation is smaller than the height of the ray connecting the camera's focal point and the cell. In other words, the cell can be seen from the camera's perspective and is not occluded. The respective pixel coordinate in the image plane for the cell is computed using the pin-hole camera model with the known camera intrinsics and extrinsics. This process is performed for each cell in parallel.
Fig. 2: Overview of our multi-modal elevation map structure. The framework takes multi-modal images (purple) and multi-modal (blue) point clouds as input. This data is input into the elevation map by first associating the data to the cells and then fused with different fusion algorithms into the various layers of the map. Finally the map can be post-processed with various custom plugins to generate new layers (e.g. traversability) or process layers for external components (e.g. line detection).
Fig. 3: Association of pixel-wise semantic information to individual cells within the map. Each cell within the frustum of the corresponding camera is projected onto the image plane. An efficient ray-casting approach checks if the cell is visible or occluded by another cell based on the cell's height.
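A simplified, non-parallel sketch of the visibility check for a single cell (the GPU implementation evaluates all cells in parallel; the linear interpolation of the ray height along the Bresenham steps is an approximation):

```python
import numpy as np

def cell_visible(elevation, cam_cell, cam_height, target_cell, target_height):
    """Occlusion check along the camera-to-cell ray (cf. Fig. 3).

    Walks the intermediate cells produced by Bresenham's line algorithm
    and rejects the cell if any stored elevation rises above the ray.
    elevation is a 2D height map; cells are integer (row, col) pairs.
    """
    (x0, y0), (x1, y1) = cam_cell, target_cell
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err, x, y = dx - dy, x0, y0
    steps, n = max(dx, dy), 0
    while (x, y) != (x1, y1):
        # approximate height of the ray above the current intermediate cell
        ray_h = cam_height + (target_height - cam_height) * (n / max(steps, 1))
        if elevation[x, y] > ray_h:
            return False  # occluded by an intermediate cell
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
        n += 1
    return True
```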
### _Fusion algorithms_
Once the correspondence of the new data and the cell is defined, the semantic data is fused into the respective layers, updating their values. To support different types of information and also a variety of use cases we implemented four different fusion algorithms: _Latest_ (Sec. III-C2), _Exponential Averaging_ (Sec. III-C3), _Gaussian Bayesian Inference_ (Sec. III-C4), and _Dirichlet Bayesian Inference_ (Sec. III-C5). The fusion algorithms can be configured by the user for each input source separately. In addition, the user can select which layer of the MEM the data for each input source is fused to, allowing to fuse data from different sources into the same layer or in individual layers. The fusion algorithm needs to be selected based on the information modality since certain fusion algorithms cannot be applied to every modality. The algorithms are applied to each layer of each data point in parallel. Given the modular design, new fusion algorithms can be added easily according to the user's needs. Class probabilities can be input into the semantic mapping framework by either selecting a subset of all the classes or by inputting the top k classes. The latter is convenient for memory optimization if there are many different classes.
#### III-C1 Notation
Vectors are written in bold and lowercase letters, and the index \(i\) is used for measurement data points such as point cloud points and pixels. The subscript \(j\) is used to index elevation map cell elements and \(t\) is the time step of an update. Each point cloud consists of \(N\) data points and for cell \(c_{j}\) there are \(N_{j}\) data points that correspond to that cell. The class information can be represented with class likelihood \(\mathbf{p}\in\mathbb{R}^{K}\) where \(p_{k}\in[0,1]\) where \(k\) is the class index and \(\sum_{k}^{K}p_{k}=1\) and \(K\) is the total amount of classes, or in one-hot encoded vectors \(\mathbf{p}\in\mathbb{R}^{K}\) where \(p_{k}\in\{0,1\}\) and \(\sum_{k}^{K}p_{k}=1\). \(\theta_{j}\) is the value of a layer of cell \(j\). Features are vectors associated with the image content at a certain location of a pixel or of a 3D point. The extracted features are encoded in \(d\)-dimensional vectors \(\mathbf{f}\in\mathbb{R}^{d}\).
#### III-C2 Latest
A straightforward method to fuse the information is to only retain the latest measurement per cell. For the case of point clouds, it might be that multiple data points fall into one cell. In that case the information is fused into the semantic elevation map by layer-wise averaging all the data points that fall within the same cell \(c_{j}\) according to:
\[\mathbf{a_{t,j}}=\frac{1}{N_{j}}\sum_{i=0}^{N}\mathbf{m_{i}}\cdot\mathds{1}_{ \{\mathbf{m_{i}}\in c_{j}\}} \tag{1}\]
where \(\mathbf{m_{i}}\) is the \(i^{th}\) observation.
This fusion method can be used for all different information modalities. However, it does not consider any past measurements making it more prone to noise.
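As a sketch of this per-cell averaging (Eq. (1)), assuming the points have already been binned into a flat cell index (the names and array layout are hypothetical, not the framework's API):

```python
import numpy as np

def fuse_latest(values, cell_index, num_cells):
    """Eq. (1): layer-wise average of all measurements m_i falling into the
    same cell, which then overwrites the previous cell value ('Latest').
    values: (N, L) measurements; cell_index: (N,) cell id of each point."""
    sums = np.zeros((num_cells, values.shape[1]))
    counts = np.zeros(num_cells)
    np.add.at(sums, cell_index, values)   # scatter-add measurements per cell
    np.add.at(counts, cell_index, 1.0)
    hit = counts > 0                      # only cells that received data
    return hit, sums[hit] / counts[hit, None]
```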
#### III-C3 Exponential averaging
Exponential averaging consists of assigning greater weight to more recent data and exponentially decreasing the significance of previous data as shown in Fig. 4b. This method improves the robustness towards noise by considering previous measurements. For point clouds, the algorithm first computes the average of all the measurements that correspond to a cell \(c_{j}\) according to Eq. (1). Then the new cell value is computed as:
\[\boldsymbol{\theta_{t,j}}=w\cdot\mathbf{a_{t,j}}+(1-w)\cdot\boldsymbol{\theta _{t-1,j}} \tag{2}\]
where \(w\) is a user-defined weight. This fusion technique can be used for all semantic information types, as it accommodates real-valued data in \(\mathbb{R}\) as well as probabilities in \([0,1]\). Since exponential averaging does not follow a probabilistic approach, the information stored in the layer cannot be regarded as a probability measure. This motivates the two following probabilistic fusion algorithms.
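Before turning to those, Eq. (2) itself can be sketched in one line (the default weight is an arbitrary placeholder, with \(\mathbf{a}_t\) obtained from the per-cell averaging of Eq. (1)):

```python
def fuse_exponential(theta_prev, a_t, w=0.3):
    """Eq. (2): exponentially weighted update of a cell's layer value.
    Recent data gets weight w; older data decays geometrically."""
    return w * a_t + (1.0 - w) * theta_prev
```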
#### III-C4 Bayesian inference of Gaussian distributions
This method is suited for \(d\)-dimensional data \(\boldsymbol{f}\in\mathbb{R}^{d}\) and uses Bayesian inference with the assumption that the data \(\boldsymbol{f_{i}}\) is drawn from a Gaussian distribution with known variance. The likelihood function is the probability of the observations given the mean.
\[p(\mathcal{F}|\boldsymbol{\mu_{f,j}})=\mathcal{N}(\mathcal{F}|\boldsymbol{\mu_ {f,j}},\boldsymbol{\sigma_{f,j}^{2}}I) \tag{3}\]
where \(\boldsymbol{\mu_{f,j}},\boldsymbol{\sigma_{f,j}^{2}}\) are the parameter vectors describing the cell's mean and variance and \(\mathcal{F}=\{\boldsymbol{f_{0}},...,\boldsymbol{f_{N_{j}}}\}\) is the set of \(N_{j}\) observations. We choose the prior to be Gaussian distributed:
\[p(\boldsymbol{\mu_{f,j}})=\mathcal{N}(\boldsymbol{\mu_{f,j}}|\boldsymbol{\mu_ {0,j}},\boldsymbol{\sigma_{0,j}^{2}}\mathbf{I}) \tag{4}\]
where \(\boldsymbol{\mu_{0,j}},\boldsymbol{\sigma_{0,j}^{2}}\) are the parameter vectors describing the prior's mean and variance for cell \(j\). This prior is a conjugate prior with respect to the likelihood, thus the posterior will also be a Gaussian distribution, and allows for closed form solution for the posterior using Bayes rule [1]:
\[p(\boldsymbol{\mu_{f,j}}|\mathcal{F})=\mathcal{N}(\boldsymbol{\mu_{f,j}}| \boldsymbol{\mu_{N,j}},\boldsymbol{\sigma_{N,j}^{2}}\mathbf{I}) \tag{5}\]
where
\[\mu_{N,d}=\frac{\sigma_{f,d}^{2}}{N\sigma_{0,d}^{2}+\sigma_{f,d}^{2}}\mu_{0,d} +\frac{N\sigma_{0,d}^{2}}{N\sigma_{0,d}^{2}+\sigma_{f,d}^{2}}\mu_{ML,d} \tag{6}\]
\[\sigma_{N,d}^{2}=\frac{\sigma_{f,d}^{2}\sigma_{0,d}^{2}}{N\sigma_{0,d}^{2}+\sigma_{f,d}^{2}} \tag{7}\]
where \(\mu_{ML,d}=\frac{1}{N_{j}}\sum_{i=1}^{N}f_{i,d}\cdot\mathds{1}_{\{\mathbf{f}_{i}\in c_{j}\}}\).
In equations Eq. (6) and Eq. (7), we neglect the cell's index \(j\) for readability purposes and \(d\) indicates the index of the dimension. The closed-form solution allows for real-time updating of the cell values. This method, as previously mentioned, is used for real-valued data \(\mathbb{R}^{d}\) and thus it is suited for e.g. features.
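The closed-form update of Eqs. (6) and (7) can be sketched as follows (per cell, vectorised over the \(d\) dimensions; the function name and argument layout are illustrative):

```python
import numpy as np

def gaussian_update(mu_0, var_0, var_f, f_obs):
    """Conjugate Gaussian update of a cell's d-dimensional mean layer.
    mu_0, var_0: prior mean/variance per dimension; var_f: known measurement
    variance; f_obs: (N, d) feature observations falling into the cell."""
    N = f_obs.shape[0]
    mu_ml = f_obs.mean(axis=0)                                     # ML mean
    denom = N * var_0 + var_f
    mu_n = (var_f / denom) * mu_0 + (N * var_0 / denom) * mu_ml    # Eq. (6)
    var_n = (var_f * var_0) / denom                                # Eq. (7)
    return mu_n, var_n
```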
#### III-C5 Bayesian inference of Dirichlet distributions
This fusion algorithm is suited for class probabilities. Cell-wise class probabilities are represented by a categorical distribution \(\mathbf{\theta_{j}}\in[0,1]^{K}\), where each element represents the probability of cell \(j\) belonging to class \(k\) from a fixed set of \(K\) classes \(\mathcal{K}=\{1,2,...,K\}\).
The likelihood of the map cell is described with a categorical distribution \(p(\mathbf{m_{i}}|\mathbf{\theta_{j}})=\prod_{k=1}^{K}{\theta_{j,k}}^{m_{i,k}}\), where \(\mathbf{m_{i}}=[m_{i,1},...,m_{i,K}]\) is the semantic measurement. Thus the likelihood over a set of \(\mathcal{D}\) observations is:
\[p(\mathcal{D}|\mathbf{\theta_{j}})=\prod_{k=1}^{K}{\theta_{j,k}}^{m_{k}} \tag{8}\]
where \(m_{k}=\sum_{i}^{N_{j}}m_{i,k}\) and the value \(N_{j}\) is the number of measurements that fall within the cell \(j\).
Our goal is to seek the posterior over \(\mathbf{\theta_{j}}\). Similar to Sec. III-C4 we choose a conjugate prior, in this case, the Dirichlet distribution:
\[p(\mathbf{\theta_{j}}|\mathbf{\alpha}_{j})=\frac{\Gamma(\sum_{k=1}^{K}\alpha_{j,k})}{ \prod_{k=1}^{K}\Gamma(\alpha_{j,k})}\prod_{k=1}^{K}\theta_{j,k}^{\alpha_{j,k} -1}\propto\prod_{k=1}^{K}\theta_{j,k}^{\alpha_{j,k}-1} \tag{9}\]
where \(\Gamma(\alpha)=\int_{0}^{\infty}t^{\alpha-1}e^{-t}dt\) is the gamma function and \(\mathbf{\alpha_{j}}\) are the parameters of the distribution. The resulting posterior is again a Dirichlet distribution:
\[p(\mathbf{\theta_{j}}|\mathcal{D},\mathbf{\alpha}_{j})\propto p(\mathcal{D}|\mathbf{ \theta_{j}})\cdot p(\mathbf{\theta_{j}}|\mathbf{\alpha}_{j})\propto\prod_{k=1}^{K} \theta_{j,k}^{\alpha_{j,k}+m_{k}-1}, \tag{10}\]
with the posterior mean given in closed form by:
\[p(\mathbf{\theta_{j}}|\mathcal{D},\mathbf{\alpha}_{j})=\frac{1}{\sum_{k=1}^{K}\alpha _{j,k}}\mathbf{\alpha}_{j} \tag{11}\]
\[\mathbf{\alpha}_{j,t}=\mathbf{\alpha}_{j,t-1}+\sum_{i}^{N_{j}}\mathbf{m_{i}}, \tag{12}\]
following the derivation by Bishop et al. [1]. The posterior is the updated class probability for each cell. This Bayesian inference approach can be used for \(K\) different classes and keeps track of all past observations as shown in Fig. 4.
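A minimal sketch of the Dirichlet update of Eq. (12) and the posterior mean of Eq. (11); the measurements may be one-hot or soft class probabilities:

```python
import numpy as np

def dirichlet_update(alpha, m_obs):
    """alpha: (K,) Dirichlet parameters of one cell; m_obs: (N_j, K) class
    measurements that fall into the cell (one-hot or probabilistic)."""
    alpha_new = alpha + m_obs.sum(axis=0)        # Eq. (12)
    theta = alpha_new / alpha_new.sum()          # Eq. (11): posterior mean
    return alpha_new, theta
```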
### _Post processing_
Once all the sensor information is contained in the map, the framework allows the user to post-process the map layers to generate task-relevant data. These updates are event-triggered and thus run at a user-defined frequency. The different post-processing options are implemented as plugins that can be configured by the user with a few lines of code, as described in [18].
We enhance this system by also providing additional layers such as RGB or semantics to the plugins. These plugins can utilize the layer information that is already on GPU, thus reducing the data transferring time. We demonstrate the usability with several examples such as neural networks that take map layers as input and generate a new layer (Fig. 5b), modify existing layers within the MEM, or perform additional processing for external components (Fig. 5a). Please refer to Sec. IV for a detailed description.
## IV Experiments
We first show the implementation details of the framework (Sec. IV-A). We then present the results by profiling the time performance of the implementation (Sec. IV-B). Finally, we showcase how our MEM framework can benefit a variety of robotic applications with multiple case studies performed on different robots (Sec. IV-C, Sec. IV-D, Sec. IV-E).
### _Implementation_
The map is implemented in Python, using CuPy [19] which allows GPU acceleration and the generation of user-defined low-level Compute Unified Device Architecture (CUDA) kernels for customized highly parallel data processing. It uses ROS [22] for inter-process communication and the core modules implemented in Python are wrapped by a C++ framework to accelerate the serialization of ROS data. The map is published at a user-defined rate as a GridMap message [7].
Fig. 4: Fusion algorithms behaviour over time. In Figure (a) the prior at \(t_{0}\) and the measurement that falls into one cell is shown. (b) depicts the exponential decay of Class 0 confidence as a result of exponential averaging fusion. (c) illustrates the Bayesian inference fusion method, which exhibits a more gradual decrease in the Class 0 confidence.
### _Performance analysis_
For the performance analysis, we run the elevation mapping framework with a \(10\,\mathrm{m}\times 10\,\mathrm{m}\) grid with a resolution of \(4\,\mathrm{cm}\) resulting in a grid of \(250\times 250\) cells. The map is tested on a robot with one Stereolabs ZED 2i camera, publishing an image of \(360\times 640\) pixels at a framerate of \(3\,\mathrm{Hz}\). The image stream is processed with a pre-trained semantic segmentation network [16]. All the time profiling tests were performed over 300 iterations.
We measure the time performance of the MEM update as shown in Tab. II, following [18]. The measurements are performed on a workstation with a GeForce RTX 4090 GPU and Nvidia Jetson Orin. The total time required for the entire update is \(2.6\,\mathrm{ms}\) / \(23.6\,\mathrm{ms}\) corresponding to an update rate of \(385.2\,\mathrm{Hz}\) / \(42.3\,\mathrm{Hz}\) on the respective hardware. Fusing the points into the elevation map (height update & ray cast) is the step that takes the most time as it needs to fuse a high number of points into the map.
Comparing the time performance with [4], we note that even with four times more cells, our framework runs at approximately the same speed. By adding multi-modal data to the elevation map [18], the time performance of the map stays in the same order of magnitude, making it real-time capable.
We test the scalability of the approach by running the framework on a Jetson Orin with the previously presented configuration and an increasing number of multi-modal layers. In Fig. 6 we show that the computational time of the multi-modal update increases linearly with the number of multi-modal layers. The total processing time of the Jetson Orin for 20 layers is \(28.9\,\mathrm{ms}\) for exponential averaging and \(29.0\,\mathrm{ms}\) for Bayesian inference.
Finally, we inspect the time performance of the point cloud creation, which consists of the computation
Fig. 5: Post-processing. This figure shows how the map can be employed to generate insight and a new layer. On the left, the input image and the feature extraction step are displayed. This data is fed into the elevation map in the middle of the figure. The top right side displays a post-processing plugin that generates the left (red) and right (blue) tree lines predictions of a vineyard. On the lower right side a plugin takes feature layers as input and generates a PCA layer.
Fig. 6: Time performance of the semantic fusion step with increasing numbers of layers recorded on a Jetson Orin.
of the points starting from the RGB-D setup presented in Sec. IV-B which publishes images at a rate of \(3\,\mathrm{Hz}\), processing the images, and publishing the point cloud. The bottleneck in this configuration proves to be running the image through a Lite R-ASPP Network with a MobileNetV3-Large backbone [16] that takes \(31\,\mathrm{ms}\). However, given that our framework does not depend on the specific network the user can select the network according to the time and performance requirements.
Our approach is memory efficient. A point cloud with 100,000 points with x, y, z, RGB, and 5 semantic channels needs only \(7.2\,\mathrm{MB}\) of memory. The 2.5D MEM with \(200\times 200\) cells and the same number of semantic channels requires \(3.8\,\mathrm{MB}\) of memory. Hence, we only require \(1.6\,\mathrm{MB}\) of additional memory in comparison to the elevation map of [18], which allocates \(2.2\,\mathrm{MB}\). Additionally, as described in Sec. III-C5, the memory consumption can be further reduced by storing only the top k class probabilities.
### _Colorization from multi-modal data_
Our framework is able to fuse color information from different data modalities, whereas previous approaches always required a colorized point cloud or a depth-aligned RGB image. This is particularly useful for robots with a diverse set of sensors. In Fig. 7, a qualitative analysis shows that the map can accurately represent the colors of the environment. The sensing setup consists of a Sevensense Alphasense Core camera unit with three monocular global-shutter color cameras, capturing images at a frame rate of \(9\,\mathrm{Hz}\) with a resolution of \(1080\times 1440\) pixels, and four Intel RealSense D435 depth cameras available on the ANYbotics ANYmal C legged robot, capturing images at a frame rate of \(15\,\mathrm{Hz}\) with a resolution of \(480\times 848\) pixels. The layer can be used to improve the interpretability of the map by humans, as well as for downstream learning tasks.
### _2.5D Semantic Segmentation_
In this second application, we demonstrate that the framework can be used to project class probabilities into the map. In this specific case study, we use a similar setup as described in Sec. IV-B where a stereo camera with a resolution of \(360\times 640\) pixels is used. The image is processed with Detectron2 [13] which outputs pixel-wise class probabilities. The class probabilities and the depth map are used to generate a semantic point cloud. These points are then fused into the MEM according to the Bayesian inference of Dirichlet distributions described in Sec. III-C5. We showcase the pipeline in an agricultural setting as shown in Fig. 8. The figure depicts a person lying in the grass, who is identified by the segmentation network. A purely geometric perception of the environment is not sufficient to recognize the human in the high grass. By integrating the semantics into the elevation map it can be used by a local planning module, which can take into account geometric as well as semantic obstacles. This example showcases the importance of multi-modal and more specifically semantic information for real-world robotic applications.
### _Line detection in agricultural setting_
We show that the proposed framework can be extended to take features as input, fuse them into the map, and train a custom network that takes multiple layers as input. Specifically, we showcase visual features representing latent embeddings of RGB images in an agricultural setting where we predict the tree lines in special crops as shown in Fig. 5a. The described framework is tested on a wheeled robot with one RGB-D camera, publishing an image of \(360\times 640\) pixels. In this application, we extract features from the RGB image with a self-supervised pre-trained vision transformer (ViT) [3]. We fuse these features according to the Bayesian inference of Gaussian distributions described in Sec. III-C4. In this use case, we use elevation and semantic information
Fig. 8: Semantic segmentation mask layer of the elevation map. The colors of the elevation map encode the highest class probabilities. The human lying in the high grass is not distinguishable using the height (geometric) information but the semantics clearly detect the person (orange).
Fig. 7: Color layer: The semantic elevation map is used in an outdoor environment with three point clouds and three images as input and displays the color layer.
as input to an end-to-end learnable network that predicts two tree lines in special crop fields. The goal of the model is to predict the line parameters in the robot's frame. The lines are modeled as second-order polynomials. The machine learning pipeline consists of two sequential blocks and follows the work of [12]. The first block is a Convolutional Neural Network (CNN) that generates weight maps from the elevation and feature data. The second block takes the weight maps as input and predicts the line parameters of the tree line. The CNN consists of the Efficient Residual Factorized ConvNet (ERF Net). It predicts a weight map with 2 channels, one for each predicted line. The output of the ERF Net is fed into the second block consisting of a least squares layer (LSQ) predicting the line parameters. The predicted lines resulting from the approach can enable robot navigation, even in the presence of state estimation inaccuracies.
## V Conclusion
In this work, we extended a state-of-the-art 2.5D elevation map with multi-modal information about the environment, in order to provide an easy-to-use and computationally efficient tool for various robotics and learning tasks. We presented a highly configurable framework that allows the user to configure the map content and enables a great variety of downstream tasks. We have shown the high performance and low memory consumption of our MEM framework running on GPU. To demonstrate that our approach can be used for various use cases, we have shown three different example applications. We showed that RGB colors can be injected into the map, and we demonstrated that various pre-trained semantic segmentation networks can be used to update the layers. Finally, we introduced a learning-based tree line detection for special crops which uses the feature and elevation layers of the MEM as input. We hereby demonstrate that our MEM framework serves as a highly configurable environmental representation tool which is easy to use and openly accessible to the research community.
|
2305.19626 | The probability density function of the arrival time of Čerenkov
light | The probability density function of the arrival time of \v{C}erenkov light on
a photo-multiplier tube has been studied. This study covers light production,
transmission and detection. The light production includes the light from a
muon, the light from a shower and the light due to the energy loss of a muon.
For the transmission of light, the effects of dispersion, absorption and
scattering in the medium are considered. For the detection of light, the
angular acceptance and the quantum efficiency of the photo-multiplier tube are
taken into account. | M. de Jong, E. van Campenhout | 2023-05-31T07:51:03Z | http://arxiv.org/abs/2305.19626v1 | # The probability density function
###### Abstract
The probability density function of the arrival time of Cerenkov light on a photo-multiplier tube has been studied. This study covers light production, transmission and detection. The light production includes the light from a muon, the light from a shower and the light due to the energy loss of a muon. For the transmission of light, the effects of dispersion, absorption and scattering in the medium are considered. For the detection of light, the angular acceptance and the quantum efficiency of the photo-multiplier tube are taken into account.
## 1 Introduction
The generic topology of a muon or a shower producing light that is detected on a photo-multiplier tube (PMT) is shown in figure 1. The coordinate system is defined such that the muon or shower direction is pointed along the \(z-\)axis and the position of the PMT is located in the \(x-z\) plane. The orientation of the PMT is defined by the zenith angle, \(\theta_{\wp}\), and the azimuth angle, \(\phi_{\wp}\). The distance of closest approach of the muon or shower to the PMT is denoted by \(R\). This topology is obtained after the following operations:
1. Rotation - Muon or shower direction is pointed along the \(z-\)axis.
2. Translation - Extrapolation of muon or shower trajectory passes through the coordinate origin.
3. Rotation - PMT is located in the \(x-z\) plane.
The rotation of the coordinate system, \(\mathcal{R}\), can be expressed as a \(3\times 3\) matrix:
\[\mathcal{R}=\left(\begin{array}{ccc}\cos\theta\,\cos\phi&\cos\theta\,\sin \phi&-\sin\theta\\ -\sin\phi&\cos\phi&0\\ \sin\theta\,\cos\phi&\sin\theta\,\sin\phi&+\cos\theta\end{array}\right) \tag{1}\]
For the first rotation, \(\theta\) and \(\phi\) correspond to the zenith and the azimuth angle of the direction of the muon or shower in the original frame, respectively. The \(x\) and \(y\) values of the position of the muon or shower in the rotated system (i.e. after step 1) are then used to translate the coordinate system such that the extrapolation of the muon or shower trajectory passes through the origin. A second rotation is applied to the coordinate system such that the position of the PMT is located in the \(x-z\) plane. For this rotation, \(\theta=0\) and \(\phi=\mathrm{atan2}(y,x)\), where \(x\) and \(y\) refer to the position of the PMT after step 2.
The topology of a muon producing light that is detected on a PMT after a single scattering of the light is shown in figure 2. The complete path of the photon from a position along the muon trajectory to the position of the PMT can be expressed as the sum of the two vectors \(\bar{u}\) and \(\bar{v}\). The scattering angle, \(\theta_{s}\), is then defined as:
\[\cos\theta_{s} \equiv \hat{u}\cdot\hat{v} \tag{2}\]
Figure 1: Topology of a muon or shower producing light that is detected on a PMT. The muon or shower direction is pointed along the \(z-\)axis and the PMT is located at position \((R,0,0)\). The zenith and azimuth angle of the orientation of the PMT are denoted by \(\theta_{\wp}\) and \(\phi_{\wp}\), respectively. The compass refers to the orientation of the PMT when its axis lies within the \(x-z\) plane (i.e. \(\sin\phi_{\wp}=0\)).
where \(\hat{u}\) (\(\hat{v}\)) corresponds to the unit direction vector of the photon before (after) the scattering. In addition to the zenith angle, \(\theta_{0}\), the azimuth angle, \(\phi_{0}\), is required to describe the direction of the emitted photon (i.e. \(\hat{u}\)). This angle is defined as the angle between the \(x-z\) plane and the \(\hat{u}-z\) plane (see figure 2).
As a result of the coordinate transformations, the time response of the PMT to the various sources of light is completely determined by the distance between the PMT and the muon or shower and the orientation of the PMT. In the following sections, the calculation of the probability density function of the arrival time of Cerenkov light is explained and in section 5, results are presented for the deep-sea water and PMT used in KM3NeT (see www.km3net.org).
## 2 Light production, transmission and detection
In the following, the production, transmission and detection of light is presented. The light production includes the Cerenkov light from a muon, the light from showers and the light due to the energy loss of a muon. For the transmission of light, the effects of dispersion, absorption and scattering in the medium are considered. For the detection of light, the angular acceptance and the quantum efficiency of the photo-multiplier tube are taken into account.
### Cerenkov light
The number of Cerenkov photons produced per unit path length of a particle with charge, \(ze\), moving with speed, \(\beta\), through a medium can be expressed as [8]:
\[\frac{d^{2}N}{dxd\lambda} = \frac{2\pi\alpha z^{2}}{\lambda^{2}}\left(1-\frac{1}{\beta^{2}n^ {2}}\right) \tag{3}\]
Figure 2: Topology of a muon or a shower producing light that is detected on a PMT after a single scattering of the photons (left \(x-z\) view and right \(x-y\) view). The muon or shower direction is pointed along the \(z-\)axis and the PMT is located at position \((R,0,0)\). The angle \(\theta^{\prime}_{0}\) is defined by \(\tan\theta^{\prime}_{0}\,=\,\frac{\sin\theta_{0}\cos\phi_{0}}{\cos\theta_{0}}\).
where \(\lambda\) is the wavelength of the light, \(\alpha\) the Electro-Magnetic coupling constant, and \(n\) the index of refraction of the medium. The index of refraction of the medium and the characteristic angle \(\theta_{0}=\theta_{C}\) of the Cerenkov cone are related in the following way:
\[\cos\theta_{C} = 1/n \tag{4}\]
where we have assumed a highly relativistic particle, such that \(\beta\simeq 1\).
In the hypothesis of a light cone, the length of a track segment and the height of the light cone are related (see figure 3). The number of detectable photons per unit wavelength and per unit area at a distance, \(R\), can thus be formulated as:
\[\Phi_{0}(R,\lambda) = \frac{d^{2}N}{dxd\lambda}\;\frac{1}{2\pi R\,\sin\theta_{C}} \tag{5}\]
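As a numerical illustration of equations 3-5 (a sketch, not code from the paper; the value of the fine-structure constant and the water index used below are assumptions):

```python
import numpy as np

ALPHA = 1.0 / 137.036          # fine-structure constant (assumed value)

def cherenkov_yield(lam, n, z=1.0, beta=1.0):
    """Equation 3: photons per unit track length and unit wavelength,
    with the wavelength lam in metres."""
    return 2.0 * np.pi * ALPHA * z**2 / lam**2 * (1.0 - 1.0 / (beta * n)**2)

def cone_flux(R, lam, n):
    """Equation 5: detectable photons per unit wavelength and unit area
    at distance R from the track, under the light-cone hypothesis."""
    sin_tc = np.sqrt(1.0 - 1.0 / n**2)   # equation 4: cos(theta_C) = 1/n
    return cherenkov_yield(lam, n) / (2.0 * np.pi * R * sin_tc)

# Integrating equation 3 analytically between 350 nm and 450 nm for n = 1.35
# gives roughly 1.3e4 photons per metre of track.
```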
### Light from showers
The detectable signal from showers is primarily due to the Cerenkov light produced by charged particles in the shower. As the hadronic interaction length and the radiation length of water (and ice) are very similar, the light from both hadronic and Electro-Magnetic showers is treated in the same way. It is convenient to express the number of detectable photons per unit wavelength and per unit shower energy in terms of the equivalent track length per unit shower energy and the number of detectable photons per unit wavelength and per unit track length (equation 3), i.e:
\[\frac{d^{2}N}{dEd\lambda} = \frac{dx}{dE}\frac{d^{2}N}{dxd\lambda} \tag{6}\]
The equivalent track length depends only on the medium and not on the PMT. For water (and ice) it typically amounts to about \(4.7\;\mathrm{m/GeV}\).
Figure 3: Relation between the length of a track segment (\(dz\)) and the height of the light cone (\(dz^{\prime}\)). The area of the light cone is \(A=2\pi Rdz\,\sin\theta_{0}\).
The angular distribution of light emission from an Electro-Magnetic shower has been studied extensively and is presented in references [5, 4]. For energies in excess of \(1\,{\rm GeV}\), it has been found that the angular distribution is rather independent of the energy of the Electro-Magnetic shower. The angular distribution can be parametrised reasonably well as [3]:
\[\frac{d^{2}P_{\star}}{d\cos\theta_{0}\;d\phi_{0}} = c\,e^{b\left|\cos\theta_{0}-\cos\theta_{C}\right|^{a}} \tag{7}\]
where
\[a = +0.35\] \[b = -5.40\] \[c = \frac{1}{2\pi}\frac{1}{0.06667}\]
The constant, \(c\), is defined such that \(P_{\star}\) is normalised to unity for the full solid angle. The result of the parametrisation is shown in figure 4.
The number of detectable photons per unit wavelength, per unit energy and per unit solid angle as a function of the angle of emission can then be formulated as:
\[\Phi_{1}(\cos\theta_{0},\lambda) = \frac{d^{2}N}{dEd\lambda}\;\frac{d^{2}P_{\star}}{d\cos\theta_{0} \,d\phi_{0}} \tag{8}\]
The longitudinal profile of a shower has been presented in reference [4]. It can be parametrised reasonably well as:
Figure 4: Parametrisation of the angular distribution of light emission from an EM-shower.
\[\frac{dP}{dz} = z^{a-1}\frac{e^{-z/b}}{b^{a}\,\Gamma(a)} \tag{9}\]
where
\[a = 1.85\,+\,0.62\times\log\frac{E}{\rm GeV}\] \[b = 0.54\]
where \(E\) is the energy of the shower. The normalisation is defined such that the integral from \(0\) to \(\infty\) is normalised to unity.
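For reference, equations 7 and 9 can be evaluated as follows (a sketch; whether "log" in the parametrisation of \(a\) is the natural or base-10 logarithm is not specified above, so the natural logarithm is assumed here):

```python
import numpy as np
from math import gamma, log

def shower_angular_pdf(cos_t0, cos_tc, a=0.35, b=-5.40):
    """Equation 7: angular emission profile of an EM shower,
    normalised to unity over the full solid angle."""
    c = 1.0 / (2.0 * np.pi * 0.06667)
    return c * np.exp(b * np.abs(cos_t0 - cos_tc)**a)

def shower_longitudinal_pdf(z, E):
    """Equation 9: longitudinal shower profile (a Gamma distribution in z),
    for shower energy E in GeV and position z in metres."""
    a = 1.85 + 0.62 * log(E)   # natural logarithm assumed
    b = 0.54
    return z**(a - 1.0) * np.exp(-z / b) / (b**a * gamma(a))
```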
### Light due to energy loss of a muon
The energy loss of the muon per unit track length can be expressed as [8]:
\[-\frac{dE}{dx} = a(E)+b(E)\,E \tag{10}\]
where \(a\) refers to the ionisation energy loss and \(b\) to the sum of \(e^{+}e^{-}\) pair production and bremsstrahlung. The \(e^{+}e^{-}\) pair production and bremsstrahlung both contribute to the detectable signal. The number of detectable photons per unit wavelength and per unit track length due to the energy loss of a muon can then be formulated as:
\[\frac{d^{2}N(E)}{dxd\lambda} = b(E)\,E\,\frac{d^{2}N}{dEd\lambda} \tag{11}\]
The number of detectable photons per unit wavelength, per unit track length and per unit solid angle as a function of the energy of the muon and the angle of emission can then be formulated as:
\[\Phi_{2}(\cos\theta_{0},E,\lambda) = b(E)\,E\,\Phi_{1}(\cos\theta_{0},\lambda) \tag{12}\]
There is also a contribution of energetic knock-on electrons (\(\delta\) rays). The energy loss due to \(\delta\) rays can be expressed as [8]:
\[T\frac{d^{2}N}{dTdx} = \frac{1}{2}Kz^{2}\frac{Z}{A}\frac{1}{\beta^{2}}\frac{F(T)}{T} \tag{13}\]
where \(T\) is the kinetic energy of the knocked-on electron. The minimal and maximal kinetic energy can be expressed as:
\[T_{min} = m_{e}c^{2}\frac{1}{\sqrt{n^{2}-1}}\] \[T_{max} = \frac{2m_{e}c^{2}\beta^{2}\gamma^{2}}{1+2\gamma m_{e}/M_{\mu}+(m_ {e}/M_{\mu})^{2}}\]
where \(\beta\) and \(\gamma\) refer to the speed and the Lorentz factor of the muon, respectively.
The number of detectable photons per unit wavelength, per unit track length and per unit solid angle as a function of the energy of the muon and the angle of emission can then be formulated as:
\[\Phi_{3}(\cos\theta_{0},E,\lambda) = \frac{d^{2}N}{dEd\lambda}\frac{1}{4\pi}\int_{T_{min}}^{T_{max}}dT\, T\,\frac{d^{2}N}{dTdx} \tag{14}\]
In this, it is assumed that the emission of photons from \(\delta\) rays is isotropic.
### Light propagation
It is commonly assumed that the phase velocity of light is related to the Cerenkov angle and the group velocity to the speed at which the light propagates through the medium. The index of refraction, \(n\), is defined as:
\[n \equiv c/v \tag{15}\]
where \(c\) refers to the speed of light (in vacuum) and \(v\) to the phase velocity of the light in the given medium. The index of refraction, \(n\), and inverse of the relative group velocity, \(n_{g}=c/v_{g}\) are shown in figure 5 as a function of the wavelength of the light, \(\lambda\).
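The two curves are linked by the standard relation \(n_{g}=n-\lambda\,dn/d\lambda\) (not stated above, but a well-known fact); given a tabulated phase index, the group index can be approximated numerically:

```python
import numpy as np

def group_index(lam, n):
    """Inverse relative group velocity n_g = c / v_g from a tabulated phase
    index n(lambda), using n_g = n - lambda * dn/dlambda."""
    dn_dlam = np.gradient(n, lam)   # finite-difference derivative
    return n - lam * dn_dlam
```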
### Light absorption
In general, light can be absorbed in the medium. This will affect the detected amount of light. Absorption of light can be taken into account by introducing an extra term in the expression for the number of detectable photons, \(\Phi\):
Figure 5: Index of refraction and inverse of relative group velocity (left) and absorption length and scattering length (right) as a function of the wavelength of the light.
\[\Phi^{\prime} = \Phi\times e^{-d/\lambda_{abs}} \tag{16}\]
where \(\lambda_{abs}\) refers to the absorption length and \(d\) to the distance traveled by the light. The absorption length depends on the wavelength of the light. A typical absorption length for deep-sea water as a function of the wavelength of the light is shown in figure 5[6].
### Light scattering
Various models exist that describe the effects of light scattering in deep-sea water [1]. Because the light scattering is rotationally symmetric, the scattering probability depends only on the space angle, \(\theta_{s}\), which is defined as the angle between the direction of the light before and after the scattering. Two commonly used light scattering models are presented in the following.
* The f4 model is based on the so-called "medsea" parametrisation which is a combination of two Henyey-Greenstein functions, each of which is defined as: \[f(a;\cos\theta_{s}) = \frac{1}{4\pi}\frac{1-a^{2}}{(1+a^{2}-2a\cos\theta_{s})^{\frac{3} {2}}}\] (17) where \(a\) is the average cosine of the scattering angle. This function is normalised to unity for the full solid angle. The parametrisation of the probability density function is then defined as: \[\frac{dP_{s}}{d\Omega_{s}} = p\times f(a_{1};\cos\theta_{s})\ +\ (1-p)\times f(a_{2};\cos\theta_{s})\] (18) In the f4 model, the values of \(p\), \(a_{1}\) and \(a_{2}\) are respectively \(1\), \(0.77\) and \(0\).
* The p0.0075 model is based on a combination of Rayleigh scattering and (geometric) scattering off large particles. Rayleigh scattering is the elastic scattering of light by particles that are typically much smaller than the wavelength of the light. The corresponding cross section can be expressed as[2]: \[\frac{d\sigma}{d\Omega_{s}} = \frac{\pi^{4}}{8}\left(\frac{n^{2}-1}{n^{2}+2}\right)^{2}\frac{d ^{6}}{\lambda^{4}}(1+\cos^{2}\theta_{s})\] (19) where \(n\) is the index of refraction of the medium, \(d\) the diameter of the particle and \(\lambda\) the wavelength of the light. In the p0.0075 model, a slightly different parametrisation for the angular distribution is assumed to take into account the anisotropy of the water molecules: \[g(a,b;\cos\theta_{s}) = a\ (1+b\cos^{2}\theta_{s})\] (20) where \(a=0.06225\) and \(b=0.835\). The (geometric) scattering off large particles is well described by Mie's solution of the Maxwell equations. In the p0.0075 model, the distribution for the scattering off large particles is obtained from measurements _in situ_. The average cosine of the scattering angle has been measured and is found to be \(0.924\). A Henyey-Greenstein function is used which leads to the same average cosine. The parametrisation of the probability density function is then defined as:
\[\frac{dP_{s}}{d\Omega_{s}} = p\times g(\cos\theta_{s})\ +\ (1-p)\times f(a;\cos\theta_{s}) \tag{21}\]
where \(a=0.924\). In the p0.0075 model, the relative contribution of the Rayleigh function is set to \(p=0.17\).
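Both phase functions follow directly from equations 17, 20 and 21 (a sketch using the constants quoted above):

```python
import numpy as np

def henyey_greenstein(a, cos_ts):
    """Equation 17: normalised to unity over the full solid angle."""
    return (1.0 - a * a) / (4.0 * np.pi * (1.0 + a * a - 2.0 * a * cos_ts)**1.5)

def f4_phase(cos_ts, p=1.0, a1=0.77, a2=0.0):
    """Equation 18 with the f4 model constants."""
    return p * henyey_greenstein(a1, cos_ts) + (1.0 - p) * henyey_greenstein(a2, cos_ts)

def p00075_phase(cos_ts, p=0.17, a=0.924):
    """Equation 21: Rayleigh-like term (equation 20) plus a Henyey-Greenstein
    term for the scattering off large particles."""
    g = 0.06225 * (1.0 + 0.835 * cos_ts**2)
    return p * g + (1.0 - p) * henyey_greenstein(a, cos_ts)
```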
The distributions of the scattering angles of the f4 and p0.0075 models are shown in figure 6. The number of light scatterings per unit track length can be expressed as:
\[-\frac{dN}{dx} = \frac{N}{\lambda_{s}} \tag{22}\]
where \(N\) refers to the number of photons and \(\lambda_{s}\) to the scattering length. Due to the scattering of light, the path of the photon is not uniquely defined (see below). The scattering will also affect the amount of direct (i.e. not scattered) light that can be detected. This can be taken into account by introducing an extra term in the expression for the number of detectable photons, \(\Phi\):
\[\Phi^{\prime} = \Phi\times e^{-d/\lambda_{s}}\]
where \(d\) refers to the distance traveled by the light. For indirect light, an effective attenuation length is used which is applied to the calculated intensity of single-scattered light:
\[\frac{1}{\lambda_{att}} = \frac{1}{\lambda_{abs}}+\frac{w}{\lambda_{s}} \tag{23}\]
For direct light, \(w=1\). For indirect light, \(w\) is usually defined as \(1-\langle\cos\theta_{s}\rangle\). Here, the weight is defined as:
\[w(\cos\theta_{s}) = \int_{-1}^{\cos\theta_{s}}\frac{dP_{s}}{d\Omega_{s}}\ 2\pi\ d\cos\theta\]
As a result, light which scattered at a small (large) angle is more (less) attenuated compared to the usual definition.
For the application of either model in the Monte Carlo simulation (see below), it is assumed that the dependence of the scattering length on the wavelength of the light is identical for the different contributions to the light scattering. The assumed common scattering length is shown in figure 5. As can be seen from figure 5, the scattering length increases with the wavelength of the light, roughly between linearly and quadratically. For scattering off large particles ("Mie scattering"), the scattering length does not depend strongly on the wavelength of the light. For Rayleigh scattering, the scattering length should increase with the fourth power of the wavelength of the light (see equation 19). This apparent discrepancy has not been resolved.
### Light detection
The light is detected using a PMT. The acceptance of a PMT depends primarily on the wavelength, \(\lambda\), and the angle of incidence, \(\theta_{\os}\), of the photon. This angle is defined as the angle between the direction of the photon and the axis of the PMT (see figure 1). Here, it is implicitly assumed that the acceptance of the PMT is independent of the azimuth angle and the impact point of the photo-cathode area. The angular
acceptance and the quantum efficiency of a typical 3" PMT are shown in figure 7. The quantum efficiency, QE, shown in figure 7 includes the collection efficiency and the transparency of the glass sphere.
The cosine of the angle of incidence can be determined from the direction of the photon and the orientation of the PMT, i.e:
Figure 6: Parametrisations of the angular distributions of light scattering of two commonly used models.
Figure 7: Parametrisations of the angular acceptance of the PMT as a function of the cosine of the angle of incidence (left) and the quantum efficiency of the PMT as a function of the wavelength of the light (right).
\[\cos\theta_{\odot} = \left(\begin{array}{c}\sin\theta_{\wp}\cos\phi_{\wp}\\ \sin\theta_{\wp}\sin\phi_{\wp}\\ \cos\theta_{\wp}\end{array}\right)\cdot\left(\begin{array}{c}\sin\theta_{1} \cos\phi_{1}\\ \sin\theta_{1}\sin\phi_{1}\\ \cos\theta_{1}\end{array}\right) \tag{24}\]
where \(\theta_{\wp}\) and \(\phi_{\wp}\) correspond to the zenith and azimuth angle of the orientation of the PMT and \(\theta_{1}\) and \(\phi_{1}\) to the zenith and azimuth angle of the direction of the photon. In the absence of light scattering, equation 24 reduces to:
\[\cos\theta_{\odot} = \sin\theta_{\wp}\cos\phi_{\wp}\sin\theta_{0}+\cos\theta_{\wp} \cos\theta_{0} \tag{25}\]
The solid angle of a PMT, \(d\Omega\), is defined as:
\[d\Omega \equiv \frac{A}{d^{2}} \tag{26}\]
where \(A\) refers to the photo-cathode area and \(d\) to the distance traveled by the light.
## 3 Probability Density Functions
In this section, the probability density functions (PDFs) of the arrival times of photons due to various sources of light are presented. In the absence of light scattering, the expected arrival time is completely determined by the position \(z\) or the angle \(\theta_{0}\) of the emitted photon and the velocity of light (see figure 1).
\[ct = z+n_{g}\sqrt{z^{2}+R^{2}} \tag{27}\] \[= -\frac{R}{\tan\theta_{0}}+n_{g}\frac{R}{\sin\theta_{0}} \tag{28}\]
In the absence of light dispersion, the earliest possible arrival time, \(t_{0}\), is sharply defined. This time corresponds by definition to the shortest optical path. Due to light dispersion, the shortest optical path gets smeared. The dependence of the arrival time on the wavelength of the light, \(\lambda\), should then be considered. If the light is emitted from a fixed position (e.g. in the case of light from a shower), the dependence of the arrival time on the wavelength of the light can be formulated as:
\[\frac{\partial ct}{\partial\lambda} = d\,\frac{\partial n_{g}}{\partial\lambda} \tag{29}\]
where \(d\) refers to the distance traveled by the light. When the wavelength dependence of the Cerenkov angle should be taken into account as well, the derivative of the arrival time as a function of \(\lambda\) becomes:
\[\frac{\partial ct}{\partial\lambda} = R\,\left(\frac{1}{\sin\theta_{0}}\frac{dn_{g}}{d\lambda}+\frac{ n-n_{g}}{\tan^{3}\theta_{0}}\frac{dn}{d\lambda}\right) \tag{30}\]
For the light due to the energy loss of a muon, the dependence of the arrival time on the position \(z\) should be considered. The derivative of the arrival time as a function of \(z\) can be formulated as:
\[\frac{\partial ct}{\partial z} = 1-n_{g}\cos\theta_{0} \tag{31}\]
As can be seen from equation 31, the derivative of the time is zero at \(\theta_{0}=\theta_{C}\). The second derivative of the arrival time as a function of \(z\) can be formulated as:
\[\frac{\partial^{2}ct}{(\partial z)^{2}} = n_{g}\frac{\sin^{3}\theta_{0}}{R} \tag{32}\]
As can be seen from equation 32, the second derivative is strictly positive. This shows that indeed there is an arrival time, \(t_{0}\), that corresponds to the shortest optical path. Assuming that the effect of dispersion of light is small, this implies that the distribution of the arrival time of any light will exhibit a leading edge at \(t=t_{0}\).
Due to scattering of light, the path of the photon is not uniquely defined. Assuming a single scattering of the light, the arrival time can be expressed as:
\[ct = z+n_{g}(u+v) \tag{33}\]
where \(u\equiv|\bar{u}|\) and \(v\equiv|\bar{v}|\) refer to the distances traveled by the photon before and after the scattering, respectively (see figure 2). The various paths of the photon are constrained by the following geometrical condition:
\[\left(\begin{array}{c}R\\ 0\\ 0\end{array}\right) = \left(\begin{array}{c}0\\ 0\\ z\end{array}\right)+u\left(\begin{array}{c}\sin\theta_{0}\cos\phi_{0}\\ \sin\theta_{0}\sin\phi_{0}\\ \cos\theta_{0}\end{array}\right)+v\left(\begin{array}{c}\sin\theta_{1}\cos \phi_{1}\\ \sin\theta_{1}\sin\phi_{1}\\ \cos\theta_{1}\end{array}\right) \tag{34}\]
The term on the left hand side corresponds to the position of the PMT and the terms on the right hand side correspond to the point of emission of the photon along the muon trajectory, the path upstream of the scattering (\(\bar{u}\)) and the path downstream of the scattering (\(\bar{v}\)), respectively.
In order to evaluate the total PDF, one should integrate over the full range of directions of the emitted photons provided that equations 33 and 34 are satisfied. For the PDF of light from energy loss processes, one should also integrate over all \(z-\)positions. In general, this requires a summation using discrete values for \(\cos\theta_{0}\), \(\phi_{0}\) and \(z\). For each set of values, equations 33 and 34 should then be solved explicitly. This is possible because there are 4 equations and 4 unknowns, namely \(u\), \(v\), \(\theta_{1}\) and \(\phi_{1}\). The solution to this problem can be summarised as:
\[d = \frac{ct-z}{n_{g}} \tag{35}\] \[u = \frac{R^{2}+z^{2}-d^{2}}{2R\sin\theta_{0}\cos\phi_{0}-2z\cos \theta_{0}-2d}\] (36) \[v = d-u \tag{37}\]
The cosine of the scattering angle can be obtained directly from equation 34 by multiplying the left hand side and the right hand side with \(\hat{u}\) (see equation 2 for the definition of the scattering angle), i.e:
\[\cos\theta_{s}=\frac{R\sin\theta_{0}\cos\phi_{0}-z\cos\theta_{0}-u}{v} \tag{38}\]
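Equations 35-38 translate directly into a small solver (a sketch; physical solutions additionally require \(u>0\) and \(v>0\)):

```python
import numpy as np

def scatter_geometry(ct, z, R, cos_t0, phi0, n_g):
    """Solve the single-scattering geometry for a photon emitted at position z
    with direction (cos_t0, phi0) that arrives at time t on a PMT at distance R."""
    sin_t0 = np.sqrt(1.0 - cos_t0**2)
    d = (ct - z) / n_g                                                 # equation 35
    u = (R**2 + z**2 - d**2) / (
        2.0 * R * sin_t0 * np.cos(phi0) - 2.0 * z * cos_t0 - 2.0 * d)  # equation 36
    v = d - u                                                          # equation 37
    cos_ts = (R * sin_t0 * np.cos(phi0) - z * cos_t0 - u) / v          # equation 38
    return u, v, cos_ts
```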
Due to the scattering of the light, the solid angle of the PMT should now be evaluated from the point where the scattering took place instead of the point where the photon has been emitted. In general, the number of
scattered photons is equal to the product of some photon flux, \(\Phi\), the volume \(V=A\,du\) and the inverse of the scattering length. For a Cerenkov light cone, \(\Phi=\Phi_{0}(u\sin\theta_{C})\) and \(A=u\sin\theta_{C}\,d\phi_{0}\,\sin^{2}\theta_{C}\,dz\). For light from a shower, \(\Phi=\Phi_{1}(\cos\theta)\,u^{-2}\) and \(A=u^{2}d\cos\theta_{0}d\phi_{0}\). In both cases, the number of scattered photons does not depend on the distance \(u\) [3]. The solid angle is thus:
\[d\Omega = \frac{A}{v^{2}} \tag{39}\]
It should be noted that for small \(v\), the solid angle is of course limited to \(2\pi\).
The direction of the photon after the scattering is needed to determine the angle of incidence on the PMT.
The unit direction vector \(\hat{v}\) is completely determined by the solution above, and is given here for completeness.
\[\hat{v} = \frac{1}{v}\left(\begin{array}{c}R-u\sin\theta_{0}\,\cos\phi_{0 }\\ -\,u\sin\theta_{0}\,\sin\phi_{0}\\ -z-u\cos\theta_{0}\end{array}\right) \tag{40}\]
In the case of light scattering, the dependence of the arrival time on the length \(u\) should be considered (\(\cos\theta_{0}\), \(\phi_{0}\), and \(z\) have been fixed). The derivative of the arrival time as a function of the length \(u\) can be formulated as (see equation 33):
\[\frac{\partial ct}{\partial u} = n_{g}\left(1+\frac{\partial v}{\partial u}\right) \tag{41}\]
where
\[\frac{\partial v}{\partial u} = -\cos\theta_{s} \tag{42}\]
The derivation is given in Appendix A. It is obvious but worth noting that for very small scattering angles, the arrival time does not depend on the length \(u\). In other words, the arrival time of the light no longer depends on the location of a scattering point along a line that is almost straight. As a result, the probability density function, \(P_{s}\), for the scattering of light is weighted with a function that exhibits a pole at \(\cos\theta_{s}=1\).
For the light from a shower, the corresponding PDF should be convoluted with the shower profile which depends on the energy of the shower (see equation 9).
### Direct light from a muon
For direct light from the muon, the zenith angle at which the photons are emitted can be considered fixed (\(\theta_{0}=\theta_{C}\)). The distribution of the arrival times of the photons is then mainly determined by the dispersion of light in the medium. The probability density function for the distribution of the arrival times can then be expressed as:
\[\frac{d{\cal P}}{dt} = \Phi_{0}(R,\lambda)\,A\,\,\left(\frac{\partial t}{\partial\lambda }\right)^{-1}\,\varepsilon(\cos\theta_{\odot})\,\,QE(\lambda)\,\,e^{-d/ \lambda_{abs}}\,\,e^{-d/\lambda_{s}} \tag{43}\]
where \(\Phi_{0}()\) is the detectable photon flux per unit wavelength as a function of \(R\) (equation 5), \(A\) the photo-cathode area of the PMT, \(\varepsilon\) the angular acceptance of the PMT as a function of the angle of
incidence of the photon, and \(QE\) the quantum efficiency of the PMT as a function of the wavelength. The wavelength, \(\lambda\), is constrained by equation 27. The derivative of the time is given by equation 30. The distance traveled by the photons and the Cerenkov angle are related as \(d=\sqrt{R^{2}+z^{2}}=R/\sin\theta_{C}\). It is interesting to note that due to the \(R\) dependence of the time derivative, the number of photons detected in a small time window decreases with the square of the distance \(R\) (see equation 30). The integrated signal decreases, as expected from the Cerenkov cone hypothesis, linearly with the distance \(R\).
### Direct light from a shower
In general, the light from a shower is emitted at all angles. As a consequence, the amount of detected light is proportional to the solid angle of the PMT. The probability density function per unit energy for the distribution of the arrival times can then be expressed as:
\[\frac{d^{2}{\cal P}}{dE\,dt}\ =\ \Phi_{1}(\cos\theta_{0},\lambda)\ \left(\frac{\partial t}{\partial\lambda}\right)^{-1}\ \varepsilon(\cos\theta_{\odot})\ QE(\lambda)\ e^{-d/\lambda_{abs}}\ e^{-d/\lambda_{s}}\ d\Omega \tag{44}\]
where \(\Phi_{1}()\) refers to the detectable photon flux per unit energy as a function of \(\cos\theta_{0}\) (equation 8) and \(d\Omega\) to the solid angle of the PMT (equation 26). The wavelength, \(\lambda\), is constrained by equation 27. The derivative of the time is given by equation 29.
### Direct light due to the energy loss of a muon
For an arrival time that is later than the shortest optical path, equation 27 has two solutions, namely:
\[z_{1,2} = \frac{-b\pm\sqrt{b^{2}-4ac}}{2a} \tag{45}\]
where
\[a = n_{g}^{2}-1\] \[b = 2ct\] \[c = (Rn_{g})^{2}-(ct)^{2}\]
The corresponding distance traveled by the photon and the angle at which the light is emitted can be formulated respectively as:
\[d = \sqrt{R^{2}+z^{2}}\] \[\cos\theta_{0} = -z/d\]
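The two solutions of equation 45, together with the corresponding path length and emission angle, can be computed as follows (a sketch; the discriminant is non-negative only for arrival times at or after the shortest optical path):

```python
import numpy as np

def emission_points(ct, R, n_g):
    """Equation 45: the two track positions z whose direct light arrives
    at time t, with the photon path length d and emission angle cos(theta_0)."""
    a = n_g**2 - 1.0
    b = 2.0 * ct
    c = (R * n_g)**2 - ct**2
    sqrt_disc = np.sqrt(b**2 - 4.0 * a * c)   # real for t at or after t_0
    z = np.array([-b + sqrt_disc, -b - sqrt_disc]) / (2.0 * a)
    d = np.sqrt(R**2 + z**2)
    cos_t0 = -z / d
    return z, d, cos_t0
```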
For \(\cos\theta_{0}=1/n_{g}\), the derivative of the arrival time as a function of \(z\) becomes zero (see equation 31). The second derivative of the arrival time as a function of \(z\) should therefore also be considered (equation 32). To second order approximation, the dependence of the arrival time as a function of \(z\) can then be formulated as:
\[\frac{dt}{dz} = \frac{\partial t}{\partial z}\ +\ \frac{1}{2}\frac{D}{\sin\theta_{ 0}}\frac{\partial^{2}t}{(\partial z)^{2}} \tag{46}\]
where \(D\) is the diameter of the photo-cathode area (\(D=2\sqrt{A/\pi}\)). The probability density function for the distribution of the arrival times can then be expressed as:
\[\frac{d{\cal P}}{dt}\ =\ \int d\lambda\,\sum_{z=z_{1},z_{2}}\,\left(\frac{dt}{dz}\right)^{-1}\,\Phi_{2,3}(\cos\theta_{0},E,\lambda)\,d\Omega\ \varepsilon(\cos\theta_{\odot})\ QE(\lambda)\ e^{-d/\lambda_{abs}}\ e^{-d/\lambda_{s}} \tag{47}\]
where \(\Phi_{2,3}()\) refers to the detectable photon intensity per unit wavelength, per unit track length and per unit solid angle as a function of \(\cos\theta_{0}\) and \(E\) (equation 12, 14, respectively), and \(d\Omega\) to the solid angle of the PMT (equation 26). The derivative of the time is given by equation 46.
### Indirect light from a muon
For indirect light from the muon, one has to integrate over the full range of azimuth angles and positions of the photon emission points. The probability density function for the distribution of the arrival times can then be expressed as:
\[\frac{d{\cal P}}{dt}\ =\ \iiint d\lambda\,dz\,d\phi_{0}\ \frac{1}{2\pi}\frac{d^{2}N}{dxd\lambda}\,\frac{1}{\lambda_{s}}\ \left(\frac{\partial t}{\partial u}\right)^{-1}\varepsilon(\cos\theta_{\odot})\ QE(\lambda)\ e^{-d/\lambda_{att}}\ \frac{dP_{s}}{d\Omega_{s}}\ d\Omega \tag{48}\]
where \(d\Omega\) refers to the solid angle of the PMT (equation 39). The factor \(2\pi\) takes into account the re-normalisation of the number of detectable photons per unit track length (equation 5) due to the explicit integration over \(\phi_{0}\). The term \(1/\lambda_{s}\) corresponds to the probability for the scattering of the light per unit length. The derivative of the time is given by equation 41. The distance traveled by the photons is defined as \(d=u+v\) (see above).
The lower and upper limit for the integral of \(z\) can be determined using equation 45. These values correspond to the case that the photon would scatter immediately after the point of emission and continues to travel to the PMT in a straight line. Apart from the orientation of the PMT, equation 48 exhibits a mirror symmetry in the \(x-z\) plane. One can thus integrate \(\phi_{0}\) between \(0\) and \(\pi\) and evaluate for each \(\phi_{0}\) the angle of incidence of the PMT twice using equation 24, i.e. substituting \(\phi_{0}\) and \(-\phi_{0}\) in equation 40.
### Indirect light from a shower
For indirect light from a shower, one has to integrate over the full range of zenith and azimuth angles of the photon emission profile. The probability density function per unit energy for the distribution of the arrival times can then be expressed as:
\[\frac{d^{2}{\cal P}}{dE\,dt} = \iiint d\lambda\,d\phi_{0}\,d\mbox{cos}\,\theta_{0}\,\Phi_{1}(\cos\theta_{0},\lambda)\,\frac{1}{\lambda_{s}}\ \left(\frac{\partial t}{\partial u}\right)^{-1}\varepsilon(\cos\theta_{\odot})\ QE(\lambda)\ e^{-d/\lambda_{att}}\ \frac{dP_{s}}{d\Omega_{s}}\ d\Omega \tag{49}\]
where \(\Phi_{1}()\) refers to the detectable photon flux per unit energy as a function of \(\cos\theta_{0}\) (equation 8) and \(d\Omega\) to the solid angle of the PMT (equation 39). The term \(1/\lambda_{s}\) corresponds to the probability for the scattering of the light per unit length. The derivative of the time is given by equation 41. The distance traveled by the photons is defined as \(d=u+v\) (see above).
### Indirect light due to the energy loss of a muon
For indirect light due to the energy loss of a muon, one has to integrate over the full range of zenith and azimuth angles and positions of the photon emission points. The probability density function for the distribution of the arrival times can then be expressed as:
\[\frac{d{\cal P}}{dt}\ =\ \iiiint d\lambda\,dz\,d\phi_{0}\,d\!\cos\theta_{0}\ \Phi_{2,3}(\cos\theta_{0},E,\lambda)\,\frac{1}{\lambda_{s}}\,\left(\frac{ \partial t}{\partial u}\right)^{-1}\varepsilon(\cos\theta_{\odot})\ QE(\lambda) \ e^{-d/\lambda_{att}}\ \frac{dP_{s}}{d\Omega_{s}}\ d\Omega \tag{50}\]
where \(\Phi_{2,3}()\) refers to the number of detectable photons per unit wavelength, per unit track length and per unit solid angle as a function of \(\cos\theta_{0}\) and \(E\) (equation 12 and 14, respectively) and \(d\Omega\) to the solid angle of the PMT (equation 39). The term \(1/\lambda_{s}\) corresponds to the probability for the scattering of the light per unit length. The derivative of the time is given by equation 41. The distance traveled by the photons is defined as \(d=u+v\) (see above).
The lower and upper limit for the integral of \(z\) can be determined using equation 45. Apart from the orientation of the PMT, equation 50 exhibits a mirror symmetry in the \(x-z\) plane. One can thus integrate \(\phi_{0}\) between \(0\) and \(\pi\) and evaluate for each \(\phi_{0}\) the angle of incidence of the PMT twice using equation 24, i.e. substituting \(\phi_{0}\) and \(-\phi_{0}\) in equation 40.
## 4 Numerical computation
The PDF of the direct light from a muon can be computed directly using equation 43. The PDFs of light from showers and scattered light from a muon involves a (multi-dimensional) integral. In order to evaluate these integrals accurately, designated variable transformations are introduced.
The integral over \(\lambda\) is evaluated by integrating over the inverse of the relative group velocity of light (\(n_{g}\)) and finding the corresponding wavelength at each point. As a result, the sampling is denser when the dependence of arrival time on the wavelength (i.e. the dispersion of light) is stronger.
The integral over \(z\) is evaluated using the variable, \(x\):
\[x \equiv e^{-a(z-z_{2})} \tag{51}\]
where the value of the slope parameter, \(a\), is set to \(a=1/\lambda_{abs}\).
The integral over \(\phi_{0}\) in equation 44 is evaluated using the variable, \(y\):
\[y \equiv e^{-b\phi_{0}} \tag{52}\]
The value of the slope parameter, \(b\), is defined as:
\[b = \frac{1}{\pi}\log\frac{v_{\pi}^{2}}{v_{0}^{2}} \tag{53}\]
where \(v_{\pi}\) represents the longest possible path length of the photon after the scattering (\(\phi_{0}\simeq\pi\)) and \(v_{0}\) the shortest possible path length (\(\phi_{0}\simeq 0\)). The two path lengths can be expressed as:
\[v_{\pi} = l \tag{54}\] \[v_{0} = d-\frac{1}{2}\frac{(d+l)\times(d-l)}{d-l\cos(\theta_{C}-\theta)} \tag{55}\]
where \(l=\sqrt{z^{2}+R^{2}}\) and \(\theta=\arctan(-R/z)\). The distance \(d\) is defined in equation 35. For \(z\) close to the position of the Cerenkov cone (\(\theta\simeq\theta_{C}\)), the expression for the shortest possible path length reduces to \(v_{0}\simeq(d-l)/2\). For \(\Delta t\simeq 0\,\mathrm{ns}\), the \(\phi\) dependence of the PDF is expected to be very strong. The value of the slope parameter is then correspondingly large. For \(z\) far away from the position of the Cerenkov cone or large \(\Delta t\), the \(\phi\) dependence of the PDF is expected to be rather weak. The value of the slope parameter is then correspondingly small.
The integral over \(\cos\theta_{0}\) and \(\phi_{0}\) in equation 50 is evaluated using the variables \(\sin\beta\) and \(\phi\) instead. These variables are defined in figure 8. The integral over \(\cos\theta_{0}\) and \(\phi_{0}\) can then be expressed as:
\[d\mathrm{cos}\,\theta_{0}d\phi_{0} = d\mathrm{cos}\,\alpha d\phi \tag{56}\] \[= \frac{v^{2}}{u^{2}}\tan\beta\;d\mathrm{sin}\,\beta d\phi \tag{57}\]
The Jacobian in equation 57 compensates completely the \(v^{-2}\) term in the expression of the PDF due to the solid angle of the PMT (equation 39) and compensates partly the \((1-\cos\theta_{s})^{-1}\) term due to the time derivative (equation 41). This ensures a proper sampling of the phase-space (starting from the muon trajectory it is very unlikely to find the right \(\cos\theta_{0}\) and \(\phi_{0}\) that produce a hit on the PMT at a given time, in particular when the PMT is looking away from the muon).
Finally, the integrals are evaluated using the Gauss-Legendre technique [7]. As a result of the variable transformations, the integrals in equations 47, 44 and 50 can be evaluated rather accurately with a relatively small number of integration points (typically 25 for each variable). It is obvious but worth noting that for each \(\Delta t\), the determination of the value of the PDF of scattered light from a muon and scattered light due to the energy loss of a muon thus take only \(25^{3}\) and \(25^{4}\) steps, respectively.
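For illustration, the mapping of Gauss-Legendre nodes and weights onto a finite interval, as used for each of the transformed integration variables, can be sketched as follows (using NumPy's built-in Legendre module; the node count of 25 follows the text):

```python
import numpy as np

def gl_nodes(lo, hi, npts=25):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] onto [lo, hi]."""
    x, w = np.polynomial.legendre.leggauss(npts)
    return 0.5 * (hi - lo) * x + 0.5 * (hi + lo), 0.5 * (hi - lo) * w

# A triple integral such as equation 48 then becomes three nested weighted
# sums over 25 nodes each, i.e. 25**3 evaluations of the integrand.
```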
Figure 8: Definition of the integration angles, \(\beta\) and \(\phi\), used in the numerical computation of equation 50.
## 5 Example plots
In the following, some example plots are shown. For this, it is useful to introduce the time difference, \(\Delta t\), with respect to the expected arrival time assuming the Cerenkov cone hypothesis, i.e:
\[\Delta t \equiv t-t_{0} \tag{58}\]
The expected arrival time, \(t_{0}\), is defined as:
\[t_{0} \equiv \frac{R\ \tan\theta_{C}}{c} \tag{59}\]
where \(c\) is the speed of light in vacuum. The value of the angle \(\theta_{C}\) corresponds here to a typical value. The definition of the arrival time \(t_{0}\) corresponds to the shortest optical path from any point on the muon trajectory to the position of the PMT.
Unless stated otherwise, the PMT is located at a distance of \(50\ \mathrm{m}\), the angular acceptance and the QE of the PMT are taken from figure 7 and the probability density function of the light scattering is set to the one labeled p0.0075 in figure 6. The absorption length and the scattering length are set to those shown in figure 5. The values of the other parameters required for the evaluation of the PDFs are summarised in table 1.
The PDFs of light due to the energy loss of a muon have been evaluated for a muon energy fixed at \(1\ \mathrm{GeV}\). Because the number of detectable photons is proportional to the muon energy (see equation 11), the results should be scaled by a factor 1000 for a muon with an energy of \(1\ \mathrm{TeV}\). The various orientations of the PMT that have been considered are listed in table 2 (see figure 1 for the definition of the quoted angles):
The PDFs as a function of time are shown in figure 9 for different orientations of the PMT. As can be seen from figure 9, all PDFs exhibit a leading edge at \(\Delta t\simeq 0\ \mathrm{ns}\). For a PMT pointing West or South, the PDFs show an outstanding peak at the leading edge. For direct light from a muon, this is in line with the
\begin{table}
\begin{tabular}{|c|c|c|} \hline parameter & symbol & value \\ \hline \hline photo-cathode area & \(A\) & \(0.00454\ \mathrm{m}^{2}\) \\ Energy loss & \(a(E)\) & \(0.267\ \mathrm{GeV/m}\) \\ Energy loss & \(b(E)\) & \(3.4\times 10^{-4}\ \mathrm{m}^{-1}\) \\ shower light & \(\frac{dx}{dE}\) & \(4.7319\ \mathrm{m/GeV}\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter values
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\theta_{\wp}\) & \(\phi_{\wp}\) & label \\ \hline \hline
0 & 0 & \(\mathcal{N}\) North \\ \(\pi/2\) & 0 & \(\mathcal{E}\) East \\ \(\pi\) & 0 & \(\mathcal{S}\) South \\ \(\pi/2\) & \(\pi\) & \(\mathcal{W}\) West \\ \hline \end{tabular}
\end{table}
Table 2: PMT orientations.
characteristics of the light cone and the definition of \(\Delta t\) (equation 58). For direct light from the energy loss processes, this is due to the angular distribution of the emitted light, which peaks at \(\theta_{0}=\theta_{C}\) (see figure 4). Under the influence of the scattering of light, the main signature is preserved. The preservation of the peak is due to the shape of the angular distribution of the scattering probability (figure 6) and the pole of the PDF at small \(\Delta t\) (equations 31 and 41). The PDFs of the direct light from the muon for the West and South orientations are identical because the angle of incidence of the light on the PMT is the same. The heights of the peaks of the other PDFs are also very similar. For small \(\Delta t\), the optical path is close to that of the Cerenkov cone. Hence, the PDFs should indeed be very similar because the angles of incidence of the light on the PMT are the same. The tails of the distributions are, however, quite different. In general, the light originates from anywhere along the track segment between \(z_{1}\) and \(z_{2}\) (c.f. equation 45). Although the track segments are identical in both cases, due to the absorption of light the downstream end contributes more (c.f. equations 16 and 35). Consequently, the detected light yield depends on the coverage of the track segment by the field of view of the PMT, which depends on its orientation. For a PMT pointing North or East, the PDFs do not show a significant peak at the leading edge. No direct light from a muon is detected. The shapes of the PDFs of scattered light from the muon and of that due to the energy loss are quite similar. In general, the PDF for a PMT pointing North is higher than that of a PMT pointing East due to the different coverages of the track segment between \(z_{1}\) and \(z_{2}\) by the corresponding field of view of the PMT. This effect is particularly strong for direct light from energy loss processes.
## Appendix A Derivation of \(\frac{\partial v}{\partial u}\)
For the determination of the derivative of the arrival time as a function of the length \(u\), the quantity \(\frac{\partial v}{\partial u}\) should be evaluated. To this end, equation 34 is multiplied by \(\hat{u}\):
\[Ru_{x}-zu_{z} = u+v\cos(\theta_{s})\]
This yields an expression for \(v\cos(\theta_{s})\). Taking the square of equation 34 yields:
\[R^{2}+z^{2} = u^{2}+v^{2}+2uv\cos(\theta_{s})\] \[\Rightarrow R^{2}+z^{2}-u^{2} = v^{2}+2u\left(Ru_{x}-zu_{z}-u\right)\] \[\Rightarrow v = \sqrt{R^{2}+z^{2}+u^{2}-2u\left(Ru_{x}-zu_{z}\right)}\] \[\Rightarrow\frac{\partial v}{\partial u} = -\cos(\theta_{s})\]
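The final step can be verified symbolically; a minimal sympy check of the algebra above (variable names mirror the text):

```python
import sympy as sp

R, z, u, ux, uz = sp.symbols("R z u u_x u_z", real=True)

# v from the third line above; cos(theta_s) from R*u_x - z*u_z = u + v*cos(theta_s)
v = sp.sqrt(R**2 + z**2 + u**2 - 2*u*(R*ux - z*uz))
cos_theta_s = (R*ux - z*uz - u) / v

# dv/du + cos(theta_s) should vanish identically
assert sp.simplify(sp.diff(v, u) + cos_theta_s) == 0
```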
Figure 9: PDFs as a function of time for a PMT located at a distance of 50 m. The labels refer to the PMT orientations listed in table 2. The colour coding is as follows: black: muon direct; red: muon indirect; green: energy loss direct; blue: energy loss indirect light. The PDFs for energy loss processes have been normalised to 1 TeV muon energy. |
2309.07915 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context
Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing vision-language Model with Multi-Modal
In-Context Learning(MMICL), a new approach to allow the VLM to deal with
multi-modal inputs efficiently; 2) proposing a novel context scheme to augment
the in-context learning ability of the VLM; 3) constructing the Multi-modal
In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to
understand complex multi-modal prompts. Our experiments confirm that MMICL
achieves new state-of-the-art zero-shot performance on a wide range of general
vision-language tasks, especially for complex benchmarks, including MME and
MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge
of complex multi-modal prompt understanding and emerges the impressive ICL
ability. Furthermore, we observe that MMICL successfully alleviates language
bias in VLMs, a common issue for VLMs that often leads to hallucination when
faced with extensive textual context. Our code, dataset, dataset tool, and
model are available at https://github.com/PKUnlp-icler/MIC | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | 2023-09-14T17:59:17Z | http://arxiv.org/abs/2309.07915v3 | # MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning
###### Abstract
Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and exhibits an impressive ICL ability. Furthermore, we observe that MMICL successfully alleviates language bias in VLMs, a common issue for VLMs that often leads to hallucination when faced with extensive textual context. Our code, dataset and model are available at https://github.com/PKUnlp-icler/MIC.
## 1 Introduction
General-purpose vision-language pre-trained models (VLMs) have made significant advancements (Li et al., 2022, 2023; Zhu et al., 2023; Li et al., 2023). Recent VLMs mostly augment a large language model (LLM) with a visual encoder and exhibit impressive zero-shot capabilities in various visual tasks. However, unlike LLMs that can extract rich background knowledge and task information from the prompt with _in-context learning_ (ICL), most VLMs still struggle to understand complex multi-modal prompts that include multiple images. Previous studies (Li et al., 2023;b) primarily focus on handling user queries with a single image rather than multi-modal prompts with interleaved multiple images and text. Although some VLMs like Flamingo (Alayrac et al., 2022) and Kosmos-1 (Huang et al., 2023) can handle user queries with multiple images, their pre-training data cannot provide more sophisticated multi-modal prompts than interleaved image and text crawled from the web (Awadalla et al., 2023). Hence, there is a gap between the prompts used in pre-training these VLMs and the user queries in real-world scenarios, which often contain multiple images and more sophisticated text. Specifically, these VLMs may suffer from the following three limitations, which make VLMs less effective in downstream vision-language tasks.
**Hard to Understand Text-to-Image Reference**: Previous studies rarely attempt to address the issue of text-to-image reference in the multi-modal prompts. However, there are often intricate referential relationships between the text and images in user queries, with different words mentioning different
images. For example, the user may ask a specific question about multiple images (Fig. 1.c and Fig. 1.f) or use multiple images as exemplars to ask a question about only a specific image (Fig. 1.d). However, the training data used in previous studies (Li et al., 2023; Alayrac et al., 2022; Huang et al., 2023a) are crawled from the web and may lack explicit text-to-image references. VLMs thus might fail to handle user queries involving intricate text-to-image references.
**Hard to Understand the Relationships between Multiple Images**: There are often spatial, temporal, and logical relationships between multiple images, and correctly understanding them allows the model to handle user queries better. However, the pre-training data used by previous VLMs (Alayrac et al., 2022) are collected from the internet, lacking close connections among images, especially when these images are far apart on the same webpage. It hampers the ability of VLMs to understand the intricate relationships among the images and further limits their reasoning ability.
**Hard to Learn from In-Context Multi-Modal Demonstrations**: Previous studies have shown that pretrained LLMs can benefit from a few in-context demonstrations (Brown et al., 2020; Dong et al., 2023). However, the ICL ability of current VLMs is rather limited. Specifically: 1) VLMs like BLIP-2 (Li et al., 2023d) and LLaVA (Li et al., 2023b) only support multi-modal prompts with a single image, hampering their ability to use multiple multi-modal demonstrations to enhance their performance during inference; 2) Although VLMs such as Flamingo (Alayrac et al., 2022) support multi-image inputs during pretraining and emerge with ICL abilities, their context schemes fail to provide text-image references and closely related images. This inhibits them from offering sophisticated enough prompts to the VLMs, thereby limiting the effectiveness of their ICL ability. Besides, the lack of further supervised instruction tuning hinders their effectiveness across downstream tasks.
In this paper, to address the aforementioned limitations: 1) We present MMICL, a new approach to allow VLMs to efficiently deal with multi-modal inputs, including relationships among multiple images and text-to-image references. 2) We propose a novel context scheme in which incorporating
Figure 1: Examples of vision-language dialogue generated by MMICL, which typically contain prompts with interleaved images and text. MMICL understands spatial (**a**), logical (**b**), and temporal (**e**) relationships among images. MMICL can also grasp text-to-image references, as in (**c**), (**d**) and (**f**).
an extra image declaration section, along with the inclusion of image proxy tokens, enhances the ICL ability of the VLM. 3) We construct a multi-modal in-context learning dataset in accordance with the proposed scheme. The dataset is adapted from a range of existing datasets and can be used to provide support for the training of more capable VLMs.
Our experiments show that MMICL achieves new state-of-the-art performance on a variety of vision-language benchmarks including MME (Fu et al., 2023) and MMBench (Liu et al., 2023c)*. Comprehensive examinations of the three limitations we aim to address reveal that MMICL exhibits exceptional ability in understanding text-to-image references (13-point improvement on the vision-language compositionality benchmark, Winoground (Thrush et al., 2022a)) and intricate relationships among images (12-point improvement on the multi-image reasoning benchmark, RAVEN (Huang et al., 2023a)). Moreover, MMICL demonstrates impressive multi-modal ICL performance across various tasks. We also observe that MMICL efficiently mitigates language bias, which often causes VLMs to ignore visual content when facing extensive textual contexts, leading to hallucinations.
Footnote *: Results of MMICL are submitted on August 28th, 2023.
## 2 MMICL
### Model Architecture
Most VLMs utilize Visual-Prompt Generators (VPG) (e.g., Resampler (Alayrac et al., 2022), Q-former (Li et al., 2023d)) to extract visual embeddings from the image features encoded by vision backbones and use these visual embeddings to help LLMs understand visual inputs. The model architecture shown in Fig. 2.a belongs to VLMs that focus on prompts with a single image, such as BLIP-2 (Li et al., 2023d), which always place the image at the top of the entire input and cannot handle inputs with multiple images. In Fig. 2.b, VLMs with few-shot ability, such as Flamingo (Alayrac et al., 2022), encode images into image embeddings with a fixed number of visual tokens and use cross-attention in the LLM to mix the visual and text content. Different from previous work, MMICL, shown in Fig. 2.c, treats image and text representations equally and establishes the reference between image and text via image declarations. It gives users the flexibility to input multiple images and text in any desired order, with no restrictions on the quantity or placement of images in contexts. As shown in Fig. 4, each given image is encoded by a vision encoder (e.g., ViT (Radford et al., 2021)) to get the image representation. Then, we use the Q-former as the VPG to extract the visual embedding. We utilize a fully connected layer as the projection layer to convert each visual embedding to the same dimension as the text embedding of the LLM. Finally, we combine the visual embeddings of multiple images with text embeddings in an interleaved style and feed them into the LLM. We set the weights for mapping the query and value vectors in the attention layers of the LLM as learnable to better adapt to multi-modal prompts with multiple images. More details are presented in Appendix D.
### The Design of Context Scheme of MMICL
In this section, we outline the design of the Context Scheme for MMICL. The proposed scheme is devised to proficiently transform the interleaved image-text data into the training context for MMICL.
Figure 2: **Comparison of different VLM architectures:** VLMs focused on a single image, VLMs with few-shot ability, and MMICL with equal treatment of image and text representation.
#### 2.2.1 Image Declaration
Users may use textual descriptions to refer to particular images in their queries. Such references can provide information about the visual content mentioned in the text to the VLM, allowing it to learn the alignment between the two modalities. To precisely link text and image, we form image declaration templates for each image in mixed inputs, as shown in Fig. 3.a. Firstly, we allocate a unique image proxy (**[IMG]**) to reference the visual embedding of image \(j\), which provides a unique identifier for VLMs to index and distinguish between visual and text embeddings. Then, we utilize natural language prompts to establish references between text and image. Incorporating the explicit text-to-image reference in the image declaration assists the model in correlating the text with the appropriate image. Meanwhile, the image declaration, maintained as textual content, can also preserve the flexibility to appear at any position within the prompt. Each instance \(\mathbf{I}_{i}\) follows the structure below, where \(\mathbf{X}_{i}\) symbolizes the set of image declarations that can be placed anywhere within the instance \(\mathbf{I}_{i}\). \(\mathbf{q}_{i}\) and \(\mathbf{a}_{i}\) denote the question with instruction and the corresponding answer, respectively.
\[\mathbf{I}_{i}=(\mathbf{X}_{i},\mathbf{q}_{i},\mathbf{a}_{i}) \tag{1}\]
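As an illustration only, the following Python sketch shows how such an image declaration might be assembled; the proxy-token format `[IMG{j}]` and the template wording are assumptions, not the released MIC templates.

```python
# Illustrative only: the proxy-token format "[IMG{j}]" and the template
# wording are hypothetical, not the released MIC templates.
def image_declaration(j: int) -> str:
    # The [IMG] proxy token marks where the visual embedding of image j
    # is spliced into the text embedding sequence.
    return f"image {j} is [IMG{j}]."

def build_instance(num_images: int, question: str) -> str:
    declarations = " ".join(image_declaration(j) for j in range(num_images))
    return f"{declarations} {question}"

print(build_instance(2, "What is the difference between image 0 and image 1?"))
```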
#### 2.2.2 Multi-Modal Data with Interconnected Images
To incorporate abundant multi-image information within the context scheme of MMICL, we generate interconnected multi-image data that includes spatial, logical, and temporal relationships. It aids MMICL in understanding the intricate relationships among images in user queries. Specifically, we derive frames from videos to build multi-image data. The frames extracted from a video inherently sustain close temporal and spatial relations, which infuses spatial and temporal correlation information among images into the context scheme. Besides, we build multi-image data from images depicting multiple object interactions. We detect the objects within the image and generate bounding boxes for each object. We acquire multiple sub-images of different objects by cropping the image according to the bounding boxes. We then replace the textual references to these objects with their corresponding cropped images, thus forming interleaved multi-modal data with logically and causally interconnected images, as delineated in Fig. 3.b. Each instance \(\mathbf{I}_{i}\) comprises a question-answer text pair along with \(K\) images, where \(\mathbf{x}_{i,k}\in\mathbf{X}_{i}\) represents the image declaration for the \(k\)-th image.
\[\mathbf{I}_{i}=(\{\mathbf{x}_{i,1},\mathbf{x}_{i,2},\dots,\mathbf{x}_{i,K}\}, \mathbf{q}_{i},\mathbf{a}_{i}) \tag{2}\]
#### 2.2.3 Unified Multi-modal In-Context Format for Different Tasks
We propose a design for producing multi-modal in-context learning data for different tasks to enrich the context scheme of MMICL. It aims to improve the instruction-aware ability of VLM and expand
Figure 3: Context scheme for MMICL, which seamlessly transforms the interleaved image-text data into training context in a unified format.
its abilities for proficient multi-modal in-context learning. Specifically, we start by crafting diverse instructions for each task and generate different templates for the task utilizing these instructions. We then fill in a randomly selected template with the original task to assemble data equipped with instructions, as shown in Appendix F. Moreover, we convert the data into a multi-modal in-context format by constructing few-shot exemplars generated by sampling instances from the data. These exemplars are combined with the input instance to produce the multi-modal in-context data. In this way, we can transform all tasks into a unified multi-modal in-context format, as illustrated in Fig. 3.c. This method facilitates amassing an extensive amount of high-quality data from different tasks, enriching the context scheme of MMICL with an abundant diversity of multi-modal in-context data teeming with diverse instructions. Ultimately, this improves the model's ability to follow instructions and its multi-modal in-context learning ability. Each instance \(\mathbf{I}_{i}\) comprises \(N\) exemplars.
\[\mathbf{I}_{i}=(\{\mathbf{P}_{1},\cdots,\mathbf{P}_{N}\},\mathbf{X}_{i}, \mathbf{q}_{i},\mathbf{a}_{i}) \tag{3}\]
Each exemplar \(\mathbf{P}_{j}=(\mathbf{X}_{j},\mathbf{q}_{j},\mathbf{a}_{j})\), \(\mathbf{X}_{j}\) denotes the image declaration of the \(j\)-th exemplar. \(\mathbf{q}_{j}\) and \(\mathbf{a}_{j}\) denote the question and answer for the \(j\)-th exemplar, respectively.
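A hypothetical sketch of how an instance \(\mathbf{I}_{i}\) with \(N\) exemplars could be flattened into a single training prompt; all names and the formatting are illustrative assumptions, not the released data pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Exemplar:          # P_j = (X_j, q_j, a_j)
    declarations: str    # X_j: image declarations with [IMG] proxy tokens
    question: str        # q_j
    answer: str          # a_j

def format_in_context(exemplars: List[Exemplar],
                      declarations: str, question: str) -> str:
    # Flatten ({P_1, ..., P_N}, X_i, q_i) into one prompt; a_i is the target.
    shots = "\n".join(f"{p.declarations} {p.question} {p.answer}"
                      for p in exemplars)
    return f"{shots}\n{declarations} {question}"
```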
### Multi-Modal In-Context Learning (MIC) Dataset Construction
To help VLMs understand complex prompts, we construct the MIC dataset by gathering data from public data resources and converting them based on the context scheme. It has three key aspects: 1) image declaration, 2) multi-modal data with closely related images, and 3) multi-modal in-context data for different tasks. The training set of MIC comes from 16 datasets across 8 categories, while the test set comes from 18 datasets across 10 categories. Additional details can be found in Appendix B and Appendix C.
Firstly, we create an image declaration per instance in all datasets using Algorithm 1 to generate datasets with explicit text-to-image
Figure 4: Illustration of MMICL architecture and training paradigm. The upper part denotes the overview of model architecture and the bottom denotes the pipeline of the two-stage training paradigm.
reference. We then have annotators scrutinize every dataset's samples and provide task instructions. This practice aids in gaining a comprehensive understanding of the task and helps craft high-quality templates. Next, we employ ChatGPT† to rewrite the instructions to describe the key characteristics of each task accurately. After ChatGPT generates the instructions, we undergo a manual review to guarantee the high quality of the instructions. We select ten suitable templates as candidates, then merge the original dataset's input into a randomly chosen template. We assemble demonstrations for each instance from the dataset by selecting a small amount of data and arranging them sequentially. These demonstrations are integrated with the input instance to generate multi-modal in-context data. We construct multi-image data by extracting eight frames per video from the MSRVTT (Xu et al., 2016) and MSRVTT-QA (Xu et al., 2016) datasets. We also crop images from the VCR (Zellers et al., 2019) dataset using object bounding boxes to produce intertwined multi-modal data with closely related images. We convert all data into a vision-language Q&A format to create high-quality multi-modal training data and accumulate 5.8M samples in the MIC dataset. Due to resource constraints, we use approximately 10% of MIC with the sampling strategy described in Appendix E to finetune MMICL. It is anticipated that a larger model trained on all of our data would yield a more promising result.
Footnote †: We use the _gpt-3.5-turbo_ version of ChatGPT.
### Training Paradigm
**Stage I: Pretraining.** This stage aims to assist the model in aligning the image and text embeddings. During this stage, both the vision encoder and the LLM remain frozen. The VPG (i.e., Q-Former) and projection layer are trained to learn visual embeddings that can be interpreted by the LLM.
**Stage II: Multi-Modal In-Context Tuning.** In this stage, we aim to address the aforementioned limitations and take our model a step further by extending it to multi-modal in-context learning. Specifically, we aim to make the model understand the intricate referential relationships between the text and images and the complex relationships among multiple images, and ultimately acquire a proficient multi-modal in-context learning ability. Therefore, we perform multi-modal in-context tuning on the MIC dataset. During Stage II, we freeze the image encoder, Q-former, and LLM while jointly training the projection layer and the query and value vectors.
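A minimal PyTorch-style sketch of the Stage II freezing scheme described above; the module and parameter names (`projection`, `llm`, `q_proj`, `v_proj`) are assumptions about the implementation rather than the released code.

```python
import torch

def configure_stage2(model: torch.nn.Module):
    """Freeze the vision encoder, Q-former and LLM; keep only the
    projection layer and the attention query/value mappings trainable.
    Attribute names below are illustrative assumptions."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.projection.parameters():
        p.requires_grad = True
    for name, p in model.llm.named_parameters():
        if "q_proj" in name or "v_proj" in name:  # query/value vectors
            p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]
```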
## 3 Experiment
### Experimental Setup
**Evaluation Setup.** We aim to develop general-purpose VLMs that can generally adapt to diverse, challenging multi-modal prompts. Therefore, we evaluate our models in several vision-language benchmarks, including tasks that involve images and videos. The metrics used in these benchmarks and further details are shown in Appendix L.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{**Cognition**} & \multicolumn{4}{c}{**Perception**} & \\ \cline{2-13} Model & Comm. & Num. & Text. & Code. & Exit. & Comm. & Pos. & Color & OCR & Poster & Cele. & Scene & Land. & Art. & **Total Avg.** \\ \hline LLVA & 57.14 & 50.00 & 57.50 & 50.00 & 50.00 & 50.00 & 55.00 & 50.00 & 50.00 & 48.82 & 50.00 & 50.00 & 49.00 & 51.25 \\ MiniGPT-4 & 59.29 & 45.00 & 60.00 & 40.00 & 68.33 & 55.00 & 43.33 & 55.00 & 57.50 & 41.84 & 54.41 & 71.75 & 54.00 & 60.50 & 51.85 \\ MultiModal-GPT & 49.29 & 62.50 & 60.00 & 55.00 & 61.67 & 55.00 & 68.33 & 63.33 & 52.00 & 57.28 & 73.82 & 60.00 & 69.75 & 59.00 & 62.97 \\ VisualML-6B & 39.29 & 45.00 & 50.00 & 47.05 & 50.00 & 50.00 & 48.33 & 55.00 & 42.50 & 65.99 & 53.34 & 16.45 & 82.37 & 57.25 & 63.36 \\ VPGTrans & 64.29 & 50.00 & 77.50 & 57.00 & 70.80 & 60.00 & 63.33 & 73.33 & 70.50 & 84.01 & 53.43 & 14.72 & 64.75 & 72.75 & 74.27 \\ LAYN & 87.14 & 65.00 & 57.00 & 58.00 & 85.83 & 63.33 & 70.50 & 77.50 & 77.59 & 73.45 & 136.75 & 93.50 & 82.75 & 86.66 \\ LLLA-Adaptive-V2 & 81.43 & 62.50 & 50.00 & 58.00 & 120.00 & 50.00 & 40.38 & 75.00 & **12.50** & **99.66** & 86.18 & 148.45 & 150.25 & 69.75 & 87.26 \\ mPHC-OMP & 78.57 & 60.00 & 58.00 & 57.10 & 120.00 & 50.00 & 55.00 & 55.00 & 16.05 & 10.65 & 150.50 & 150.95 & 19.925 & 96.25 & 88.82 \\ InetrackHIP & 129.29 & 400.00 & 65.00 & 57.50 & 185.00 & 143.33 & 66.67 & 153.33 & 72.50 & 123.81 & 101.18 & 153.00 & 79.75 & 134.25 & 107.47 \\ BLI-2 & 110.00 & 40.00 & 65.00 & 78.00 & 160.10 & 153.00 & 73.33 & 143.33 & 143.00 & 114.84 & 105.45 & 143.00 & 136.50 & 113.13 \\ I-V2 & 110.71 & 17.50 & 42.00 & 55.00 & 55.00 & 15.51 & 55.00 & 10.00 & 170.70 & 77.50 & 123.83 & 114.64 & 154.50 & **16.200** & 119.50 & 113.50 \\ GIT2 & 99.29 & 50.00 & 67.50 & 45.00 & 160.00 & 181.33 & **96.76** & 185.33 & 60.120 & 119.58 & 145.85 & 158.00 & **146.26** & **113.55** \\ Outer & 160.43 & 75.20 & 50.00 & 70.00 & **95.80** & 83.86 & 167.13 & 73.250 & 138.78 & 124.78 & **158.75** & 137.25 & 129.00 & 141.19 \\ Chebot & 98.77 & 75.50 & 75.00 & **87.90** & 80.06 & 96.70 & 166.00 & 166.70 & 166.70 & 162.42 & 164.50 & 156.00 & 145.73 & 113.05 & 115.79 \\ LV-Instructing & 100.71 & 70.00 & 58.00 & 72.50 & 165.00 & 111.87 & 166.60 & 166.00 & 110.90 & 1129.41 & 147.98 & 140.53 & 101.25 & 162.29 \\ BLIVA & 136.43 & 57.50 & 72.50 & 60.00 & 180.00 & 138.33 & 81.67 & **180.00** & 87.50 & **155.10** & 140.85 & 151.50 & 89.50 & 133.25 & 119.23 \\ \hline MMICL & **136.43** & **82.50** & **132.90** & **77.50** & 170.00 & **160.00** & 81.67 & 156.67 & 170.000 & 146.26 & 141.76 & 153.75 & 136.13 & 138.50 & **129.33** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation results on the MME. Top two scores are highlighted and underlined, respectively.
**Models and Baselines.** We provide two versions of MMICL: (1) MMICL (FLAN-T5), which uses BLIP-2 (Li et al., 2023d) as the backbone, and (2) MMICL (Instruct-FLAN-T5), which uses InstructBLIP (Dai et al., 2023) as the backbone. We adopt both the XL and XXL versions of the FLAN-T5 (Chung et al., 2022) model for both versions. We compare MMICL with the following strong baselines: **Flamingo** (Alayrac et al., 2022), **KOSMOS-1** (Huang et al., 2023a), **BLIP-2-FLAN-T5**, **InstructBLIP-FLAN-T5**, **Shikra** (Chen et al., 2023), **Otter** (Li et al., 2023a), **Ying-VLM** (Li et al., 2023e). The details of MMICL and the baselines are given in Appendix G and Appendix M.
### General Performance Evaluations
We evaluate the general performance of MMICL on both the MME (Fu et al., 2023) and MMBench (Liu et al., 2023c) benchmarks8. MME evaluates VLMs with 14 sub-tasks that encompass cognition and perception abilities. Results in Table 1 show that MMICL achieves the best average scores compared with current VLMs on cognition and perception tasks. MMICL also demonstrates outstanding performance and significantly surpasses other VLMs on the MMBench benchmark, which thoroughly evaluates the diverse skills of VLMs. The detailed results are presented in Table 21. See Appendix H and Appendix I for MMICL's evaluation details and comparisons with other VLMs.
Footnote 8: All the reported performance for the baseline methods is from the leaderboards of MME (Fu et al., 2023) and MMBench (Liu et al., 2023c). We report the result of MMICL with the FLAN-T5-XXL backbone.
### Performance Probing
#### 3.3.1 Understanding Text-to-Image Reference
Winoground (Thrush et al., 2022b) proposes the task of correctly matching two given images and captions, as depicted on the left of Fig. 5. The challenge lies in the fact that both captions consist of exactly the same words, albeit in a different order. VLMs must compare both images and texts to discern their subtle differences and capture the implicit references between them. Therefore, we select Winoground to evaluate whether VLMs understand text-to-image references. Results in Table 2 demonstrate that MMICL captures the referential relationships between images and text, surpassing previous baselines.
#### 3.3.2 Understanding Complex Image-to-Image Relationship
The RAVEN test (Zhang et al., 2019; Huang et al., 2023a) is widely used to evaluate the nonverbal reasoning ability of VLMs. It requires visual and logical skills to understand the relationships among images.
\begin{table}
\begin{tabular}{l c c c} \hline Model & Text & Image & Group \\ \hline M Turk Human & 89.50 & 88.50 & 85.50 \\ Random Chance & 25.00 & 25.00 & 16.67 \\ \hline \multicolumn{4}{c}{CLIP-based Model} \\ \hline VQ2 (Yarom et al., 2023) & **47.00** & 42.20 & 30.50 \\ \hline \multicolumn{4}{c}{Vision-language Model} \\ \hline PaLI (Chen et al., 2022) & 46.50 & 38.00 & 28.75 \\ BLIP-2 (Li et al., 2023d) & 44.00 & 26.00 & 23.50 \\ MMICL (FLAN-T5-XXL) & 45.00 & **44.99** & **43.00** \\ \hline \end{tabular}
\end{table}
Table 2: Results on Winoground across text, image and group score metrics.
Figure 5: Illustration of two complex vision language reasoning tasks: **Winoground** (Thrush et al., 2022b) (Left) and **RAVEN** (Zhang et al., 2019) (Right).
We conduct zero-shot experiments on the RAVEN test to evaluate the VLM's ability to understand image-to-image relationships. Each instance has \(3\) or \(8\) images as inputs and \(6\) candidate images with a unique answer, and the goal is to predict the right image, as shown on the right of Fig. 5. The result in Table 3 shows that MMICL achieves a 12-point improvement compared to KOSMOS-1. It indicates that MMICL is able to capture complex image-to-image relationships and conduct nonverbal visual reasoning tasks.
### Learning from In-Context Multi-Modal Demonstrations
As shown in Table 4, we evaluate the multi-modal in-context learning ability of MMICL across various vision-language tasks. MMICL outperforms other VLMs on both the held-in and held-out datasets and achieves state-of-the-art few-shot performance. For example, the few-shot evaluation (4-shot) of MMICL on the VizWiz benchmark outperforms the baselines Flamingo-9B (Alayrac et al., 2022) and KOSMOS-1 (Huang et al., 2023) by \(15.38\) and \(14.98\) points, respectively. Since VizWiz was never exposed in the training data, this superiority suggests the ability of MMICL to generalize to new tasks with a few exemplars. The few-shot performance on Flickr30K decreases as examples are given because the caption examples may introduce noise for the VLM (i.e., in-context exemplars generally do not provide hints for models to perform image captioning tasks).
### Hallucination and Language Bias of VLMs
Current VLMs exhibit significant visual hallucinations (Li et al., 2023), preventing them from benefiting from multi-modal ICL. Especially when dealing with complex prompts with multiple images (e.g., a multi-modal chain of thought (Zhang et al., 2023)), VLMs often overlook visual content when facing extensive text. This language bias reduces their efficiency in answering questions that require both images and text. ScienceQA-IMG (Lu et al., 2022) is a challenging task that requires a model to use both modalities to answer the question. We manually split the dataset into two groups: questions needing images to answer and those not. Extensive experiments in Table 5 demonstrate that MMICL effectively mitigates language bias, as it performs equally well in both groups. On the other hand, other VLMs suffer from language bias and exhibit vastly different performances in the two groups. Specifically, MMICL achieves a significant improvement compared to other VLMs with a similar model structure (e.g., InstructBLIP and Ying-VLM) in reducing language bias. Comparison
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Flickr 30K & WebSRC & VQAv2 & Hateful Memes & VizWiz \\ \hline Flamingo-3B (Alayrac et al., 2022) (Zero-Shot) & 60.60 & - & 49.20 & 53.70 & 28.90 \\ Flamingo-3B (Alayrac et al., 2022) (4-Shot) & 72.00 & - & 53.20 & 53.60 & 34.00 \\ Flamingo-9B (Alayrac et al., 2022) (Zero-Shot) & 61.50 & - & 51.80 & 57.00 & 28.80 \\ Flamingo-9B (Alayrac et al., 2022) (4-Shot) & 72.60 & - & 56.30 & 62.70 & 34.90 \\ KOSMOS-1 (Huang et al., 2023) (Zero-Shot) & 67.10 & 3.80 & 51.00 & 63.90 & 29.20 \\ KOSMOS-1 (Huang et al., 2023) (4-Shot) & 75.30 & - & 51.80 & - & 35.30 \\ \hline \hline \multicolumn{6}{c}{Zero-Shot Evaluation} \\ \hline BLIP-2 (Li et al., 2023d) (FLAN-T5-XL) & 64.51 & 12.25 & 58.79 & 60.00 & 25.52 \\ BLIP-2 (Li et al., 2023d) (FLAN-T5-XXL) & 60.74 & 10.10 & 60.91 & 62.25 & 22.50 \\ \hline InstructBLIP (Dai et al., 2023) (FLAN-T5-XL) & 77.16 & 10.80 & 36.77 & 58.54 & 32.08 \\ InstructBLIP (Dai et al., 2023) (FLAN-T5-XXL) & 71.13 & 11.50 & 63.69 & 61.70 & 15.11 \\ \hline \multicolumn{6}{c}{Zero-Shot Evaluation} \\ \hline MMICL (FLAN-T5-XL) & 60.56 & 12.55 & 62.17 & 60.28 & 25.04 \\ MMICL (FLAN-T5-XXL) & 78.64 & 18.85 & 69.99 & 60.32 & 29.34 \\ MMICL (Instruct-FLAN-T5-XL) & **78.89** & 14.75 & 69.13 & 61.12 & 29.92 \\ MMICL (Instruct-FLAN-T5-XXL) & 44.29 & 17.05 & 70.30 & 62.23 & 24.45 \\ \hline \multicolumn{6}{c}{Few-Shot (4-Shot) Evaluation} \\ \hline MMICL (FLAN-T5-XL) & 71.95 & 12.30 & 62.63 & 60.80 & 50.12 \\ MMICL (FLAN-T5-XXL) & 75.37 & 18.70 & 69.83 & 61.12 & 33.16 \\ MMICL (Instruct-FLAN-T5-XL) & 74.27 & 14.80 & 69.16 & 61.12 & 33.16 \\ MMICL (Instruct-FLAN-T5-XXL) & 72.04 & **19.65** & **70.56** & **64.60** & **50.28** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Main results of multi-modal in-context learning ability of MMICL across vision-language tasks. All evaluation metrics used in the evaluation is introduced as Table 24.
\begin{table}
\begin{tabular}{l c} \hline \hline Model & Accuracy \\ \hline Random Choice & 17\% \\ KOSMOS-1 (Huang et al., 2023) & 22\% \\ MMICL (FLAN-T5-XXL) & **34\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zero-shot generalization on Raven IQ test.
with Otter shows that a lack of understanding of text-to-image references and multiple-image relationships can result in significant language bias for Otter, even with multi-modal instruction in-context tuning. Shikra1 mitigates the language bias by including spatial coordinate inputs and achieves the lowest performance gap except for MMICL. We also examine object hallucination in MMICL in Appendix K, which shows impressive performance.
Footnote 1: We use the 0708 version of Shikra, which performs better for multi-choice questions, to ensure a fair comparison.
### Ablation Study on Training Paradigm
We conduct an ablation study on various tasks to evaluate the effect of multi-modal in-context tuning. Table 6 displays a significant enhancement of MMICL's performance due to the multi-modal in-context tuning. Significant improvements can be observed across all types and sizes of models, especially for tasks that involve multiple images. Specifically, MMICL (Stage I + Stage II) gains improvements of 15.75 and 21.05 points on IconQA-img and Bongard-HOI, respectively, compared to the Stage I-only model. This indicates that, with the help of Stage II, MMICL can handle complex multi-modal prompts and accomplish challenging tasks with multiple images. Results in Appendix J also confirm this point, with the outstanding performance of MMICL across various video datasets.
## 4 Related Work
**Vision-Language Pretraining:** Recent VLMs (Zhu et al., 2023; Liu et al., 2023; Li et al., 2022; Alayrac et al., 2022; Dai et al., 2023) have been proven effective for aligning visual inputs and frozen LLMs to obtain cross-modal generalization ability. However, previous works overlooked multi-image VLMs, mainly focusing on handling single-image prompts. Tsimpoukelli et al. (2021) support multi-image inputs using self-attention for images but perform poorly in downstream tasks. Although Flamingo (Alayrac et al., 2022) supports few-shot learning in VLMs and uses cross-attention to capture text-image relationships, it still struggles to make exact references to specific images.
**Multi-Modal Instruction Tuning:** Instruction tuning (Kung and Peng, 2023; Wei et al., 2022) achieves great success in cross-task generalization for LLMs. However, multi-modal instruction tuning still requires further exploration. Multiinstruct (Xu et al., 2023) introduces instruction tuning to enhance the performance of VLMs in instruction-following ability. Due to the architectural design,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & VSR & IconQA text & VisDial & IconQA img & Bongard HOI \\ \hline \multicolumn{6}{l}{Stage I} \\ Stage I (BLIP-2-FLAN-T5-XL) & 61.62 & 45.44 & 35.43 & 48.42 & 52.75 \\ Stage I (BLIP-2-FLAN-T5-XXL) & 63.18 & 50.08 & 36.48 & 48.42 & 59.20 \\ Stage I (InstructBLIP-FLAN-T5-XL) & 61.54 & 47.53 & 35.36 & 50.11 & 53.15 \\ Stage I (InstructBLIP-FLAN-T5-XXL) & 65.06 & 51.39 & 36.09 & 45.10 & 63.35 \\ \hline \multicolumn{6}{l}{Stage I + Stage II} \\ Stage I + Stage II (BLIP-2-FLAN-T5-XL) & 62.85 & 47.23 & 35.76 & 51.24 & 56.95 \\ Stage I + Stage II (BLIP-2-FLAN-T5-XXL) & 64.73 & 50.55 & 37.00 & 34.93 & 68.05 \\ Stage I + Stage II (InstructBLIP-FLAN-T5-XL) & **P.54** & **52.55** & 36.87 & 47.27 & **74.20** \\ Stage I + Stage II (InstructBLIP-FLAN-T5-XXL) & 66.45 & 52.00 & **37.98** & **60.85** & 67.20 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on Training Paradigm across five datasets: VSR (Liu et al., 2022), IconQA-text (Lu et al., 2021), VisDial (Das et al., 2017), IconQA-img, and Bongard-HOI (Jiang et al., 2022).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & Average & Don't Require & Require & Performance \\ & Performance & Visual Information & Visual Information & Gap \\ \hline Random Guess & 35.50 & 35.80 & 34.90 & - \\ Ying-VLM (Li et al., 2023e) & 55.70 & 66.60 & 44.90 & 21.70 \\ InstructBLIP (Dai et al., 2023) & 71.30 & 82.00 & 60.70 & 21.30 \\ Otter (Li et al., 2023a) & 63.10 & 70.90 & 55.70 & 15.20 \\ Shikra (Chen et al., 2023) & 45.80 & 52.90 & 39.30 & 13.60 \\ MMICL & **82.10** & **82.60** & **81.70** & **0.90** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Zero-shot performance of different VLMs on ScienceQA-IMG dataset in different split. MMICL outperforms other VLMs by successfully alleviating language bias. |
2310.20211 | Calibration by Distribution Matching: Trainable Kernel Calibration
Metrics | Calibration ensures that probabilistic forecasts meaningfully capture
uncertainty by requiring that predicted probabilities align with empirical
frequencies. However, many existing calibration methods are specialized for
post-hoc recalibration, which can worsen the sharpness of forecasts. Drawing on
the insight that calibration can be viewed as a distribution matching task, we
introduce kernel-based calibration metrics that unify and generalize popular
forms of calibration for both classification and regression. These metrics
admit differentiable sample estimates, making it easy to incorporate a
calibration objective into empirical risk minimization. Furthermore, we provide
intuitive mechanisms to tailor calibration metrics to a decision task, and
enforce accurate loss estimation and no regret decisions. Our empirical
evaluation demonstrates that employing these metrics as regularizers enhances
calibration, sharpness, and decision-making across a range of regression and
classification tasks, outperforming methods relying solely on post-hoc
recalibration. | Charles Marx, Sofian Zalouk, Stefano Ermon | 2023-10-31T06:19:40Z | http://arxiv.org/abs/2310.20211v1 | # Calibration by Distribution Matching: Trainable Kernel Calibration Metrics
###### Abstract
Calibration ensures that probabilistic forecasts meaningfully capture uncertainty by requiring that predicted probabilities align with empirical frequencies. However, many existing calibration methods are specialized for post-hoc recalibration, which can worsen the sharpness of forecasts. Drawing on the insight that calibration can be viewed as a distribution matching task, we introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression. These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization. Furthermore, we provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions. Our empirical evaluation demonstrates that employing these metrics as regularizers enhances calibration, sharpness, and decision-making across a range of regression and classification tasks, outperforming methods relying solely on post-hoc recalibration.
## 1 Introduction
Probabilistic forecasts are valuable tools for capturing uncertainty about an outcome of interest. In practice, the exact outcome distribution is often impossible to recover [51], leading to overconfident forecasts [15]. Calibration provides statistical guarantees on the forecasts, ensuring that the predicted probabilities align with the true likelihood of events. For example, consider a weather forecast that assigns an 80% probability of rain tomorrow. A well-calibrated forecast would imply that, on average, it rains on 80% of the days with such predictions. By appropriately quantifying uncertainty, calibration empowers decision-makers to efficiently allocate resources and mitigate risks, even when the outcome distribution cannot be recovered exactly.
Many forms of calibration have been proposed in both classification [8; 21; 35] and regression [42; 23; 53] to ensure that uncertainty estimates match true frequencies. From a methodological perspective, the literature on enforcing calibration is fractured; many algorithms to enforce calibration are specialized for a particular form of calibration [39; 29; 52; 10; 17] or are designed exclusively for post-hoc calibration [27; 23]. Although post-hoc methods are effective at enforcing calibration, they often lead to a degradation in the predictive power, i.e. sharpness, of the forecaster [26]. This motivates the need for versatile metrics that can enforce various forms of calibration during training.
In this work, we introduce a unified framework wherein existing notions of calibration are presented as distribution matching constraints. To enforce distribution matching, we use the Maximum Mean Discrepancy (\(\mathrm{MMD}\)), giving us unbiased estimates of miscalibration that are amenable to gradient-based optimization. We frame these metrics as regularizers and optimize them alongside proper scoring rules. This allows us to enforce calibration while preserving sharpness.
Framing calibration as distribution matching [38] allows us to use the Maximum Mean Discrepancy (\(\mathrm{MMD}\)) [14] to give unbiased estimates of miscalibration that are amenable to gradient-based optimization. An advantage of this approach is that we express existing calibration measures by varying the kernel in the \(\mathrm{MMD}\) metric. Moreover, we can express new notions of calibration specific to a downstream decision task. This allows us to prioritize errors that impact decision-making [52]. Our contributions can be summarized as follows:
* We propose a framework based on distribution matching to unify many notions of calibration.
* We provide differentiable training objectives to incentivize a wide variety of popular forms of calibration during empirical risk minimization in supervised learning. We frame these metrics as regularizers that can be optimized alongside proper scoring rules. This allows us to enforce calibration without degrading sharpness.
* We show how to define and enforce new calibration metrics tailored to specific downstream decision problems. When these metrics are successfully optimized, decision makers can accurately estimate the decision loss on unlabeled data.
* We perform an empirical study across 10 tabular prediction tasks (5 regression and 5 classification) and find that using our metrics as regularizers improves calibration, sharpness, and decision-making relative to existing recalibration methods.
## 2 Related Work
**Calibration.** Our work builds on the literature on calibrated forecasting [31; 8; 9; 13]. Recently, many forms of calibration have been proposed in classification [5; 25; 24; 32; 35; 10; 17; 28; 36; 27], regression [23; 42; 53; 39; 51], and beyond [47; 22]. Algorithms to enforce calibration are often designed specifically for one form of calibration. For example, significant attention has been devoted to effectively estimating the Expected Calibration Error (ECE) to calibrate confidence levels in binary classification [19; 44; 4; 15; 35]. Our work aims to supplement this work by providing general calibration metrics that can be used as training objectives. These methods can be used in concert with post-hoc calibration, to encourage calibration during model training and correct any remaining miscalibration after training is complete.
**Decision Calibration.** An interesting line of recent work focuses on calibrating predictions for downstream decision problems [39; 52]. Zhao et al. [52] shows that decision calibration can be achieved when the set of possible actions is finite. We show cases in which this result can be extended to infinite action spaces by leveraging a low-dimensional sufficient statistic of the decision problem. Whereas Zhao et al. [52] compute calibration by approximating a worst-case decision problem with a linear classifier, we use the kernel mean embedding form of an MMD objective, making it simple to calibrate with respect to a restricted set of loss functions by adjusting the kernel.
**Kernel-Based Calibration Metrics.** Motivated by issues arising from the histogram binning of the Expected Calibration Error, a recent stream of research uses kernels to define smooth calibration metrics. Zhang et al. [50] and Popordanoska et al. [37] use kernel density estimates to estimate calibration in classification settings. In contrast, our work applies to regression and takes a distribution matching perspective. Widmann et al. [47] and Kumar et al. [26] are the most similar existing works to our own, and also introduce MMD-based calibration metrics. Our work differs from Widmann et al. [46] due to our focus on trainability and optimization, connections to existing forms of calibration, and applications to decision-making. Kumar et al. [26] can be seen as a special case of our method applied to top-label calibration in multiclass classification (see Table 1). Our work extends the literature on MMD-based calibration metrics, showing how to express and optimize 11 existing forms of calibration by varying the kernel in the MMD objective (see Tables 1 and 2).
## 3 Calibration is Distribution Matching
We consider the regression setting, where, given a feature vector \(x\in\mathcal{X}=\mathbb{R}^{d}\), we predict a label \(y\in\mathcal{Y}\subset\mathbb{R}\). We are given access to \(n\) i.i.d. examples \((x_{1},y_{1}),\ldots,(x_{n},y_{n})\) distributed according to some true but unknown cumulative distribution function (cdf) \(P\) over \(\mathcal{X}\times\mathcal{Y}\). A lower case \(p\) denotes the probability density function (pdf) or probability mass function (pmf), when it exists. We use
subscripts for marginal and conditional distributions, so \(P_{Y|x}(\cdot)\) is the conditional cdf for \(Y\) given \(X=x\) and \(p_{Y}(\cdot)\) is the marginal pdf for \(Y\). When the input \(X\) is random, we write \(P_{Y|X}\) as the cdf-valued random variable that takes value \(P_{Y|x}\) upon the event that \(X=x\).
Our goal is to learn a forecaster \(Q\) that, given a feature vector \(x\), forecasts a cdf \(Q_{Y|x}(\cdot)\) over \(\mathcal{Y}\). A forecaster implicitly defines a joint distribution over \(\mathcal{X}\times\mathcal{Y}\) by combining the forecasted conditional pdf with the marginal pdf for the features, i.e. \(q(x,y)=p_{X}(x)q_{Y|x}(y)\). The forecaster's joint cdf \(Q(x,y)\) is defined by integrating the pdf \(q(x,y)\) over the appropriate region of \(\mathcal{X}\times\mathcal{Y}\). In the same way as \((X,Y)\sim P\) is a sample from the true distribution, we write a sample from \(Q\) as \((X,\widehat{Y})\sim Q\), where \(X\sim P_{X}\) and \(\widehat{Y}\) is a label drawn from the forecast \((\widehat{Y}|X=x)\sim Q_{Y|x}\).
An ideal forecaster perfectly recovers the true probabilities so that \(Q=P\). However, in practice we often only observe each feature vector \(x\) once, making perfect forecasts unattainable [51]. Instead, calibration enforces that forecasts are accurate on _average_, leaving us with a tractable task that is strictly weaker than perfect forecasting. Each form of calibration requires the forecasts to respect a particular law of probability. For example, if \(Y\) is binary, distribution calibration states that on examples for which the forecast is \(q_{Y|x}(Y=1)=c\), we should observe that \(Y=1\) with frequency \(c\). In regression, quantile calibration [23] states that \(y\) should exceed each quantile \(c\in[0,1]\) of the forecast with frequency \(c\). Calibration can also be enforced within subgroups of the data (i.e. group calibration [36]), in smooth neighborhoods of the input space (i.e. local calibration [27]), and even for a single feature vector (i.e. individual calibration [51]) by using randomization.
Calibration can be expressed as a conditional distribution matching problem, requiring that certain variables associated with the true distribution match their counterparts from the forecasts.
**Lemma 3.1** (informal).: _Quantile calibration is equivalent to distribution matching between \(Q_{Y|X}(Y)\) and \(P_{Y|X}(Y)\). Distribution calibration is equivalent to distribution matching between \(\widehat{Y}\) and \(Y\) given \(Q_{Y|X}\). Individual calibration is equivalent to distribution matching between \(\widehat{Y}\) and \(Y\) given \(X\)._
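As a concrete illustration of the first equivalence in Lemma 3.1, quantile calibration can be diagnosed by checking that the probability integral transform values \(Q_{Y|X}(Y)\) are uniform on \([0,1]\). A minimal sketch with synthetic Gaussian data and forecasts (all values here are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=5000)    # labels from P
mu, sigma = 1.0, 2.0                              # forecasted Gaussian Q_{Y|x}
pit = stats.norm.cdf(y, loc=mu, scale=sigma)      # Q_{Y|X}(Y)

# If the forecaster is quantile calibrated, pit ~ Uniform[0, 1];
# the KS statistic against the uniform cdf is a simple diagnostic.
print(stats.kstest(pit, "uniform").statistic)
```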
The choice of random variables that we match in distribution determines the form of calibration recovered. We show how to express many popular forms of calibration in terms of distribution matching in Table 1, and include additional details on these correspondences in Appendix A. In Section 4, we describe how to express and estimate these forms of calibration using kernel-based calibration metrics. In Section 5, we show how to tailor these metrics to downstream decision problems. Finally, in Section 6, we apply these metrics as regularizers and explore the empirical effects on calibration, sharpness, and decision-making.
## 4 Calibration Metrics as Trainable Regularizers
In this section, we introduce a framework to enforce calibration using distribution matching. We quantify calibration using integral probability metrics which admit unbiased estimates from data
\begin{table}
\begin{tabular}{l l l l} \hline \hline Calibration Objective & Forecast Variable & Target Variable & Conditioning Variable \\ \hline Quantile Calibration [23] & \(Q_{Y|X}(Y)\) & \(P_{Y|X}(Y)\) & – \\ Threshold Calibration [39] & \(Q_{Y|X}(Y)\) & \(P_{Y|X}(Y)\) & \(\mathds{1}\left\{Q_{Y|X}(y_{0})\leq\alpha\right\}\) \\ Marginal Calibration [12] & \(\widehat{Y}\) & \(Y\) & – \\ Decision Calibration [52] & \(\widehat{Y}\) & \(Y\) & \(\delta_{\epsilon}^{*}(Q_{Y|X})\) \\ Group Calibration [36] & \(\widehat{Y}\) & \(Y\) & Group(\(X\)) \\ Distribution Calibration [42] & \(\widehat{Y}\) & \(Y\) & \(Q_{Y|X}\) \\ Individual Calibration [51] & \(\widehat{Y}\) & \(Y\) & \(X\) \\ Local Calibration [27] & \(\widehat{Y}\) & \(Y\) & \(\phi(X)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: We express popular forms of calibration in terms of distribution matching, where \(P\) is the true distribution and \(Q\) is the forecast. The forecaster achieves calibration if the forecast variable and target variable are equal in distribution, given the conditioning variable. Here, \(\delta_{\epsilon}^{*}(Q_{Y|X})\) is a Bayes optimal action under \(Q_{Y|X}\), \(y_{0}\) is a fixed label in \(\mathcal{Y}\), \(\alpha\in[0,1]\) is a threshold, and \(\phi(X)\) is a feature mapping. See Appendix A for additional details.
and are amenable to gradient-based optimization. This allows us to optimize calibration alongside a standard training objective, achieving calibration without degrading sharpness. Jointly optimizing calibration and sharpness has been shown to outperform only enforcing calibration post-hoc [26].
Integral probability metrics (IPMs) are a flexible set of metrics that quantify the difference between two distributions [30] in terms of a family of witness functions. Since we need to perform distribution matching _conditionally_, we consider witness functions over both the label \(Y\) and a conditioning variable \(Z\) that assumes the same distribution under \(P\) and \(Q\).
\[D(\mathcal{F},P,Q)=\sup_{f\in\mathcal{F}}\left|\mathbb{E}_{P}\left[f(Y,Z) \right]-\mathbb{E}_{Q}\left[f(\widehat{Y},Z)\right]\right|. \tag{1}\]
Different choices for the witness functions \(\mathcal{F}\) and the conditioning variable \(Z\) measure different notions of calibration. Similar to Widmann et al. [47], we focus on the case where \(\mathcal{F}\) is the unit ball of a Reproducing Kernel Hilbert Space (RKHS) \(\mathcal{H}\), so that the IPM corresponds to the Maximum Mean Discrepancy [14], denoted \(\mathrm{MMD}(\mathcal{F},P,Q)\). MMD can either be expressed in terms of the canonical feature map \(\phi(y,z)\in\mathcal{H}\) or the reproducing kernel \(k((y,z),(\widehat{y},z^{\prime}))=\left\langle\phi(y,z),\phi(\widehat{y},z^{ \prime})\right\rangle_{\mathcal{H}}\) for \(\mathcal{H}\):
\[\mathrm{MMD}^{2}(\mathcal{F},P,Q) =\left\|\mathbb{E}_{P}\left[\phi(Y,Z)\right]-\mathbb{E}_{Q}\left[ \phi(\widehat{Y},Z)\right]\right\|_{\mathcal{H}}^{2} \tag{2}\] \[=\mathbb{E}_{P}\left[k((Y,Z),(Y^{\prime},Z^{\prime}))\right]+ \mathbb{E}_{Q}\left[k((\widehat{Y},Z),(\widehat{Y}^{\prime},Z^{\prime})) \right]-2\mathbb{E}_{P}\mathbb{E}_{Q}\left[k((Y,Z),(\widehat{Y}^{\prime},Z^{ \prime}))\right]\]
Given a dataset \(D\) consisting of \(n\) i.i.d. pairs \((x_{i},y_{i})\sim P\), we can generate forecasted labels \(\widehat{y}_{i}\sim Q_{Y|x_{i}}\). The conditioning variables \(z_{i}\) are computed from \((x_{i},y_{i})\) according to the expressions given in Table 1. The plug-in MMD estimator is unbiased and takes the form [14]:
\[\widehat{\mathrm{MMD}}^{2}(\mathcal{F},D,Q)=\frac{1}{n(n-1)}\sum_{i=1}^{n} \sum_{j\neq i}^{n}h_{ij}, \tag{3}\]
where \(h_{ij}\) is a one-sample U statistic:
\[h_{ij}:=k((y_{i},z_{i}),(y_{j},z_{j}))+k((\widehat{y}_{i},z_{i}),(\widehat{y}_{j},z_{j}))-k((y_{i},z_{i}),(\widehat{y}_{j},z_{j}))-k((y_{j},z_{j}),(\widehat{y}_{i},z_{i}))\]
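A differentiable PyTorch sketch of this plug-in estimator, assuming the product-of-RBF-kernels construction discussed below (Lemma 4.1); the bandwidths and tensor shapes are illustrative choices.

```python
import torch

def rbf(a, b, bw=1.0):
    # a: (n, d), b: (m, d) -> (n, m) RBF Gram matrix
    return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * bw**2))

def mmd2_calibration(y, y_hat, z, bw_y=1.0, bw_z=1.0):
    """Unbiased plug-in MMD^2 between (Y, Z) and (Y_hat, Z) with the
    product kernel k = k_y * k_z; y, y_hat: (n, 1), z: (n, d_z).
    Differentiable w.r.t. y_hat (e.g. reparameterized samples)."""
    n = y.shape[0]
    kz = rbf(z, z, bw_z)
    h = (rbf(y, y, bw_y) + rbf(y_hat, y_hat, bw_y)
         - rbf(y, y_hat, bw_y) - rbf(y_hat, y, bw_y)) * kz
    h = h - torch.diag(torch.diag(h))  # drop the i == j terms
    return h.sum() / (n * (n - 1))
```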
The variance of this estimator can be reduced by marginalizing out the simulated randomness used to sample \(\widehat{y}_{i}\sim Q_{Y|x_{i}}\). This can be done either analytically, or empirically by resampling the forecasted labels multiple times per example. We find that empirically marginalizing out the simulated randomness improves training stability in practice. The full training objective combines a proper scoring rule (we use negative log-likelihood) with an MMD estimate to incentivize calibration. To train, we divide the dataset \(D\) into batches \(D_{b}\subset\{1,\dots,n\}\) large enough so that the MMD estimate on a batch has low enough variance to provide useful supervision (we find that \(D_{b}\) of size 64 suffices in practice). Given a family of predictive models \(\{Q_{\theta}:\theta\in\Theta\}\) (e.g., a neural network), we optimize the following training objective on each batch:
\[\min_{\theta\in\Theta}\sum_{i\in D_{b}}-\log q_{Y|x_{i};\theta}(y_{i})+ \lambda\cdot\widehat{\mathrm{MMD}}^{2}(\mathcal{F},D_{b},Q_{\theta}) \tag{4}\]
In order to differentiate \(\widehat{\mathrm{MMD}}^{2}(\mathcal{F},D_{b},Q_{\theta})\) with respect to \(\theta\), we require that the kernel \(k\) is differentiable and the samples can be expressed as a differentiable function of the model parameters (e.g., using a reparameterization trick). For parameterizations that do not admit exact differentiable sampling, we can apply a differentiable relaxation (e.g., if \(Q_{\theta}\) is a mixture distribution, we can use a Gumbel-Softmax approximation). In principle, it is possible to remove the need for differentiable sampling altogether by optimizing our MMD objective using Monte-Carlo RL techniques (e.g., REINFORCE [48]). However, this is beyond the scope of our work. Note that regularizing for calibration at training time does not give calibration guarantees, unless one can guarantee the objective will be optimized out-of-sample. In this sense, regularization and post-hoc calibration are complementary: regularization mitigates the trade-off between calibration and sharpness while post-hoc methods can provide finite-sample calibration guarantees (e.g., [45, 29]). For this reason, we suggest using regularization and post-hoc calibration together (see Table 3).
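Putting the pieces together, the following is a sketch of one training step for the objective in equation 4, reusing the `mmd2_calibration` estimator sketched above; the Gaussian forecaster interface and the weight \(\lambda\) are assumptions, not a prescribed implementation.

```python
import torch

def training_step(model, optimizer, x, y, z, lam=1.0):
    # model(x) is assumed to return a reparameterizable forecast, e.g.
    # a torch.distributions.Normal; lam is the regularization weight.
    q = model(x)
    nll = -q.log_prob(y).sum()
    y_hat = q.rsample()                        # differentiable sampling
    reg = mmd2_calibration(y, y_hat, z)        # estimator sketched above
    loss = nll + lam * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```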
**On the Choice of the Kernel.** When the kernel \(k\) is universal, the MMD metric is zero if and only if \(Y\) and \(\widehat{Y}\) are equal in distribution, given \(Z\) [14]. Universality is preserved when we decompose the kernel over \(\mathcal{Y}\times\mathcal{Z}\) into a pointwise product of two universal kernels, one over \(\mathcal{Y}\) and one over \(\mathcal{Z}\) [43].
**Lemma 4.1**.: _Let \(k\) be the function defined by the pointwise product \(k((y,z),(y^{\prime},z^{\prime}))=k_{y}(y,y^{\prime})k_{z}(z,z^{\prime})\) where \(k_{y}\) and \(k_{z}\) are universal kernels over spaces \(\mathcal{Y}\) and \(\mathcal{Z}\) respectively. Then, under mild conditions, \(k\) is a universal kernel over \(\mathcal{Y}\times\mathcal{Z}\)._
See [43, Theorem 4] for a proof. In our experiments, we explore using a universal RBF kernel over \(\mathcal{Y}\times\mathcal{Z}\), since it satisfies the mild conditions needed for Lemma 4.1. To enforce decision calibration, we also consider a kernel \(k_{Y}\) that is intentionally not universal, and instead only measures differences between \(P\) and \(Q\) that are relevant for decision-making (see Section 5). Furthermore, to measure Quantile Calibration and Threshold Calibration, we must match the distributions of the probability integral transforms \(P_{Y|X}(Y)\) and \(Q_{Y|X}(Y)\) instead of \(Y\) and \(\widehat{Y}\). Thus, we replace \(k_{Y}\) with a universal kernel over \([0,1]\), the codomain of the probability integral transform.
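For instance, the quantile-calibration variant can reuse the same estimator on PIT values; the following sketch assumes a Gaussian forecaster with batch tensors `mu`, `sigma`, `y` (names illustrative):

```python
import torch

# Quantile calibration: the PIT of the data under the forecast, Q_{Y|x}(y),
# should match the PIT of forecast samples, which is U[0, 1] by construction.
dist = torch.distributions.Normal(mu, sigma)
u = dist.cdf(y).unsqueeze(1)      # data-side PITs, shape (n, 1)
v = torch.rand_like(u)            # forecast-side PITs

def rbf(a, b, bw=0.1):
    # universal RBF kernel on [0, 1], the codomain of the PIT
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bw ** 2))

h = rbf(u, u) + rbf(v, v) - rbf(u, v) - rbf(u, v).T
n = u.shape[0]
mmd2_quantile = (h.sum() - h.diagonal().sum()) / (n * (n - 1))
```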
### Calibration Metrics for Classification
Thus far, we have focused on estimating calibration metrics in the regression setting. We now describe how our framework applies to classification.
In the classification setting, given a feature vector \(x\in\mathcal{X}=\mathbb{R}^{d}\) we predict a label \(y\in\mathcal{Y}=\{1,\ldots,m\}\). As before, we are given access to \(n\) i.i.d. examples \((x_{1},y_{1}),\ldots,(x_{n},y_{n})\) distributed according to some unknown distribution \(P\) over \(\mathcal{X}\times\mathcal{Y}\). Our goal is to learn a forecaster \(Q\) that, given a feature vector \(x\), forecasts a probability mass function (pmf) \(q_{Y|x}(\cdot)\) over labels \(\mathcal{Y}\).
We express popular notions of calibration used in classification (i.e., canonical, top-label, marginal) in terms of distribution matching in Table 2. See Appendix A.2 for additional details on calibration and distribution-matching in classification. We use the same unbiased plug-in estimator for MMD shown in Equation 3. In the classification setting, we can analytically marginalize out the randomness originating from the forecast when computing the MMD estimate. After performing this marginalization, the one-sample U-statistic \(h_{ij}\) is given by
\[h_{ij}=k((y_{i},z_{i}),(y_{j},z_{j}))+\sum_{y\in\mathcal{Y}}\sum_{y^{\prime} \in\mathcal{Y}}q_{i}(y)q_{j}(y^{\prime})k((y,z_{i}),(y^{\prime},z_{j}))-2 \sum_{y\in\mathcal{Y}}q_{i}(y)k((y,z_{i}),(y_{j},z_{j})),\]
where the conditioning variables \(z_{i}\) are computed from \((x_{i},y_{i})\), as detailed in Table 2. Here, \(q_{i}(y)=q_{Y|x_{i}}(y)\) is the predicted probability of the label \(Y=y\) given features \(x_{i}\). When the forecast and target variables depend on variables other than \(Y\) (e.g., \(Y^{*}\) depends on \(q_{Y|X}\) in top-label calibration), we add those variables as inputs to the kernel. Note that the plug-in MMD estimator with this U-statistic is differentiable in terms of the predicted label probabilities, and that it can be computed in \(O(n^{2}m^{2})\) time.
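A sketch of this marginalized estimator, assuming a product kernel \(k=k_{Y}\cdot k_{Z}\) with the label kernel stored as an \(m\times m\) matrix (tensor names are illustrative):

```python
import torch

def classification_mmd(y, z_gram, probs, K_Y):
    # y:      (n,) integer labels
    # z_gram: (n, n) Gram matrix of k_Z on the conditioning variables
    # probs:  (n, m) forecasted pmfs q_i(.)
    # K_Y:    (m, m) label kernel matrix, assuming k = k_Y * k_Z
    term_yy = K_Y[y][:, y]           # k(y_i, y_j)
    term_qq = probs @ K_Y @ probs.T  # sum_{a,b} q_i(a) q_j(b) k(a, b)
    term_qy = (probs @ K_Y)[:, y]    # sum_a q_i(a) k(a, y_j)
    # term_qy + term_qy.T, summed over i != j, reproduces the
    # -2 sum_a q_i(a) k((a, z_i), (y_j, z_j)) cross term above.
    h = (term_yy + term_qq - term_qy - term_qy.T) * z_gram
    n = y.shape[0]
    return (h.sum() - h.diagonal().sum()) / (n * (n - 1))
```

For such product kernels, the contractions above cost \(O(n^{2}m+nm^{2})\), below the general \(O(n^{2}m^{2})\) bound.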
## 5 Calibration and Decision-Making
Some of the most practically useful results of calibration are the guarantees provided for downstream decision-making [52]. Universal kernels guarantee distribution matching in principle, but in some cases require many samples. However, if we have information about the decision problem in advance, we can design calibration metrics that are only sensitive to errors relevant to decision-making. In this section, we show how to design such metrics and demonstrate that they enable us to (1) accurately evaluate the loss associated with downstream actions and (2) choose actions that are in a sense optimal.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Calibration Objective & Forecast Variable & Target Variable & Conditioning Variable \\ \hline Canonical Calibration [44, Eq 1] & \(\widehat{Y}\) & \(Y\) & \(q_{Y|X}\) \\ Top-label Calibration [44, Eq 2] & \(\mathds{1}\{\hat{Y}=Y^{*}\}\) & \(\mathds{1}\{Y=Y^{*}\}\) & \(q_{Y|X}(Y^{*})\) \\ Marginal Calibration [44, Eq 3] & \(\mathds{1}\{\hat{Y}=y\}\) & \(\mathds{1}\{Y=y\}\) & \(q_{Y|X}(y),\forall y\in\mathcal{Y}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Popular forms of calibration expressed as distribution matching for a classification problem with classes \(\mathcal{Y}=\{1,\ldots,m\}\). Recall that \(q_{Y|X}(\cdot)\) is the forecasted pmf. The forecaster is calibrated if the forecast variable and target variable are equal in distribution, given the conditioning variable. For top-label calibration, we use \(Y^{*}:=\arg\max_{y\in\mathcal{Y}}q_{Y|X}(y)\), which is random due to \(X\). The marginal calibration condition expresses \(m\) distribution matching constraints, one for each \(y\in\mathcal{Y}\).
We begin by introducing a general decision problem. For each example, a decision-maker chooses an action \(a\in\mathcal{A}\) and incurs a loss \(\ell(a,y)\) that depends on the outcome. Specifically, given information \(z\) (such as the forecast), a decision policy \(\delta:\mathcal{Z}\rightarrow\mathcal{A}\) assigns an action \(\delta(z)\). The set of all such policies is denoted \(\Delta(\mathcal{Z})\). The Bayes optimal policy under a label distribution \(P_{Y}\) is denoted \(\delta^{*}_{\ell}(P_{Y}):=\arg\min_{a\in\mathcal{A}}\mathbb{E}_{P}\left[\ell(a,Y)\right]\). Decision calibration [52] provides two important guarantees:
\[\mathbb{E}_{Q}\left[\ell(a,\widehat{Y})\mid Z=z\right] =\mathbb{E}_{P}\left[\ell(a,Y)\mid Z=z\right],\quad\forall a\in \mathcal{A},z\in\mathcal{Z}\qquad\text{(loss estimation)}\] \[\mathbb{E}_{P}\left[\ell(\delta^{*}_{\ell}(Q_{Y|x}),Y)\mid Z=z \right] \leq\mathbb{E}_{P}\left[\ell(\delta(z),Y)\mid Z=z\right],\quad\forall z \in\mathcal{Z},\delta\in\Delta(\mathcal{Z})\qquad\text{(no regret)}\]
_Loss estimation_ says that the expected loss of each action is equal under the true distribution and the forecast. This means we can estimate the loss on unlabeled data. _No regret_ says that the Bayes optimal action according to the forecast \(\delta^{*}_{\ell}(Q_{Y|x})\) achieves expected loss less than or equal to any decision policy \(\delta\in\Delta(\mathcal{Z})\) that depends only on \(z\). If we condition on the forecast by choosing \(z=Q_{Y|x}\), then a decision-maker who only has access to the forecast is incentivized to act as if the forecast is correct. Additionally, we consider cases where the loss function is not known in advance. For example, when a vendor releases a foundation model to serve a large population of users, it is likely that different users have distinct loss functions. Thus, we also show settings in which we can achieve accurate loss estimation and no regret for all loss functions \(\ell\) included in a family of losses \(\mathcal{L}\).
**A General Recipe for Decision Calibration.** We now describe an approach to measure decision calibration using our kernel-based metrics. For each action \(a\in\mathcal{A}\) and loss \(\ell\in\mathcal{L}\), we want to measure the discrepancy between the expectation of the loss \(\ell(a,Y)\) under \(P\) and \(Q\). To achieve this, we define the feature map that computes the losses of all actions \(\phi(y)=\left(\ell(a,y)\right)_{a\in\mathcal{A},\ell\in\mathcal{L}}\). This assigns to \(P\) and \(Q\) the mean embeddings \(\mu_{P}=\mathbb{E}_{P}\left[\left(\ell(a,Y)\right)_{a\in\mathcal{A},\ell\in\mathcal{L}}\right]\) and \(\mu_{Q}=\mathbb{E}_{Q}\left[\left(\ell(a,\widehat{Y})\right)_{a\in\mathcal{A},\ell\in\mathcal{L}}\right]\), respectively. Letting \(\mathcal{F}\) be the unit ball of the Hilbert space associated with this feature map, the mean embedding form of MMD gives us that \(\operatorname{MMD}^{2}(\mathcal{F},P,Q)=\left\|\mu_{P}-\mu_{Q}\right\|_{2}^{2}=\sum_{a\in\mathcal{A}}\sum_{\ell\in\mathcal{L}}\left(\mathbb{E}_{P}\left[\ell(a,Y)\right]-\mathbb{E}_{Q}\left[\ell(a,\widehat{Y})\right]\right)^{2}\). When either \(\mathcal{A}\) or \(\mathcal{L}\) is infinite, the corresponding sum is replaced by an integral. This metric measures whether each action has the same expected loss _marginally_ under the true distribution and the forecasts. Thus, accurate loss estimation is only guaranteed marginally, and the no regret result is relative to constant decision functions that choose the same action for all examples.
Next, we refine the decision calibration results to hold conditionally on a variable \(Z\). Recall that the kernel over \(y\) is given by \(k_{y}(y,y^{\prime})=\left\langle\phi(y),\phi(y^{\prime})\right\rangle\). We lift this kernel into \(\mathcal{Y}\times\mathcal{Z}\) by defining \(k((y,z),(y^{\prime},z^{\prime}))=k_{y}(y,y^{\prime})k_{z}(z,z^{\prime})\) where \(k_{z}\) is a universal kernel. The MMD metric corresponding to this kernel measures decision calibration conditioned on \(Z\). If we choose \(Z\) to be the forecast \(Z=Q_{Y|X}\), then the no regret guarantee says that acting as if the forecasts were correct is an optimal decision policy among all policies depending only on the forecast.
Note that the feature maps defined above are infinite dimensional when \(\mathcal{A}\) or \(\mathcal{L}\) are infinite. However, applying the kernel trick sometimes allows us to estimate the MMD metric efficiently. Still, optimizing the MMD metric may be difficult in practice, especially when the conditioning variable \(Z\) is very expressive. We suggest using the metric as a regularizer to incentivize decision calibration during training, and then still apply post-hoc recalibration if needed. We now give two concrete examples, one where the action space is infinite and one where the set of loss functions is infinite.
**Example 1** (Point Estimate Decision).: Consider a regression problem in which a decision-maker chooses a point estimate \(a\in\mathbb{R}\) and incurs the squared loss \(\ell(a,y)=(a-y)^{2}\). The expected loss of an action \(a\) is \(\mathbb{E}_{P}\left[(a-Y)^{2}\right]=\left(a-\mathbb{E}_{P}\left[Y\right]\right)^{2}+\mathbb{E}_{P}\left[Y^{2}\right]-\mathbb{E}_{P}\left[Y\right]^{2}\). Observe that the expected loss depends on \(Y\) only through the first two moments, \(\mathbb{E}_{P}\left[Y\right]\) and \(\mathbb{E}_{P}\left[Y^{2}\right]\). We use the representation \(\phi(y)=(y,y^{2})\), so that the kernel is \(k(y,y^{\prime})=\left\langle(y,y^{2}),(y^{\prime},y^{\prime 2})\right\rangle\). From the mean embedding form of MMD, we see that \(\operatorname{MMD}^{2}(\mathcal{F},P,Q)=\left\|\mu_{P}-\mu_{Q}\right\|^{2}=(\mathbb{E}_{P}[Y]-\mathbb{E}_{Q}[\widehat{Y}])^{2}+(\mathbb{E}_{P}[Y^{2}]-\mathbb{E}_{Q}[\widehat{Y}^{2}])^{2}\). Therefore, the metric is zero exactly when the expected loss of each action \(a\) is the same under the true distribution and the forecasts. The MMD metric incentivizes the forecasts to accurately reflect the marginal loss of each action. However, we often want to know the optimal action given a particular forecast, not marginally. We can now choose the conditioning variable to refine the guarantee. Let \(Z\) be the forecast \(Z=Q_{Y|X}\) and choose the kernel \(k((y,z),(y^{\prime},z^{\prime}))=k_{y}(y,y^{\prime})k_{z}(z,z^{\prime})\), where \(k_{y}(y,y^{\prime})=\left\langle(y,y^{2}),(y^{\prime},y^{\prime 2})\right\rangle\) as before and \(k_{z}(z,z^{\prime})\) is universal. For this kernel, the MMD is zero if and only if \(\mathbb{E}_{P}\left[Y\mid Q_{Y|X}\right]=\mathbb{E}_{Q}\left[\widehat{Y}\mid Q_{Y|X}\right]\) and \(\mathbb{E}_{P}\left[Y^{2}\mid Q_{Y|X}\right]=\mathbb{E}_{Q}\left[\widehat{Y}^{2}\mid Q_{Y|X}\right]\) almost surely. The Bayes optimal action suggested by the forecast is optimal among decision policies that depend only on the forecast:
\[\mathbb{E}_{P}\left[\ell\left(\delta^{*}_{\ell}(Q_{Y|X}),Y\right)\right]\leq\mathbb{E}_{P}\left[\ell\left(\delta(Q_{Y|X}),Y\right)\right],\quad\forall\delta\in\Delta(Q_{Y|X}) \tag{5}\]
In other words, the decision maker is incentivized to behave as if the forecasts are correct.
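A sketch of the corresponding kernel (shapes and names illustrative; `y1`, `y2` are \((n,1)\) tensors):

```python
import torch

def moment_kernel(y1, z1, y2, z2, k_z=None):
    # k((y,z),(y',z')) = <(y, y^2), (y', y'^2)> * k_z(z, z').  With k_z
    # universal, zero MMD matches E[Y | Z] and E[Y^2 | Z] between the
    # data distribution and the forecasts.
    ky = y1 @ y2.T + (y1 ** 2) @ (y2 ** 2).T
    return ky if k_z is None else ky * k_z(z1, z2)
```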
**Example 2** (Threshold Decision).: The above example had an infinite action space and a single loss function. Here, we consider a problem where the action space is small but the set of loss functions is infinite. This setting applies for a model vendor who serves a model to multiple decision makers, each with a potentially different loss function. For example, suppose that \(Y\in[0,T]\) is a blood value for a patient for some \(T>0\), and the decision task is to determine whether \(Y\) exceeds a threshold \(t\in[0,T]\) that indicates eligibility for a clinical trial. Formally, the decision maker chooses an action \(a\in\{\pm 1\}\) to minimize the loss \(\ell_{t}(a,y)=\mathds{1}\left\{a\neq\operatorname{sign}(y-t)\right\}\). Our goal is to simultaneously calibrate our forecasts for all thresholds \(t\in[0,T]\), i.e. \(\mathcal{L}=\{\ell_{t}:t\in[0,T]\}\). In this case, we choose the feature map \(\phi(y)=(\mathds{1}\left\{y\geq t\right\})_{t\in[0,T]}\). The corresponding kernel is \(k(y,y^{\prime})=\langle\phi(y),\phi(y^{\prime})\rangle=\min\{y,y^{\prime}\}\). The mean embeddings are \(\mathbb{E}_{P}\left[\phi(Y)\right]=P_{Y}\) and \(\mathbb{E}_{Q}[\phi(\widehat{Y})]=Q_{Y}\), the marginal cdfs of \(Y\) and \(\widehat{Y}\). Thus, the MMD metric measures \(\|P_{Y}-Q_{Y}\|_{2}^{2}=\int_{t=0}^{T}\left(P_{Y}(t)-Q_{Y}(t)\right)^{2}dt\), which is zero exactly when \(P_{Y}=Q_{Y}\). When \(\operatorname{MMD}(\mathcal{F},P,Q)=0\), the expected loss of each action \(a\) is the same under the true distribution and the forecasts, for all \(\ell\in\mathcal{L}\). Similar to the previous example, we can achieve conditional decision calibration by adding a universal kernel over the conditioning variable.
**Discussion.** The connections between decision-making and calibration have been previously studied by Zhao et al. [52] and Sahoo et al. [39]. Here, we extend results from Zhao et al. [52] to include infinite action spaces and further refine the decision guarantees to depend on an arbitrary conditioning variable. In Example 2, we connect our methods to the special case of threshold calibration [39]. Notably, our framework provides a strategy to optimize decision calibration at training time.
## 6 Experiments
The main goals of our experiments are to: (1) compare the performance of our method with other trainable calibration metrics, and with standard post-hoc recalibration methods; (2) study the impact of tailoring the \(\operatorname{MMD}\) kernel to a specific decision task; and (3) study whether trainable calibration metrics can be used to improve calibration across local neighborhoods of data.2
Footnote 2: Code to reproduce experiments can be found at [https://github.com/kernel-calibration/kernel-calibration/](https://github.com/kernel-calibration/kernel-calibration/).
### Datasets
We consider standard benchmark datasets and datasets relating to decision-making tasks. For each dataset, we randomly assign 70% of the dataset for training, 10% for validation, and 20% for testing.
**Regression Datasets.** We use four tabular UCI datasets (superconductivity[16], crime[34], blog[6], fb-comment[41]), as well as the Medical Expenditure Panel Survey dataset (medicalexpenditure[7]). They are common benchmarks in uncertainty quantification literature [23, 39]. The number of features ranges from \(d=53\) to \(280\). The total number of examples \(n\), and features \(d\) for each dataset can be found in Table 3.
**Classification Datasets.** We use five tabular UCI datasets: breast-cancer[49], heart-disease[18], online-shoppers[40], dry-bean[1], and adult[2]. The datasets range from \(m=2\) to \(7\) label classes, and \(d=16\) to \(104\) features. The total number of examples \(n\), features \(d\), and classes \(m\) for each dataset can be found in Table 4.
**Crop Yield Dataset.** We introduce a crop-yield dataset, where the task is to predict yearly wheat crop yield from weather data for different counties across the US. We use the NOAA database to track summary weather statistics across each year (temperature, precipitation, cooling/heating degree days), and the NASS database to track annual wheat crop yield. The dataset spans 1990-2007; data from 1995 was held out for visualization, and the remaining data was used for training. We use this dataset to evaluate local calibration with respect to the latitude and longitude coordinates.
### Experimental Setup and Baselines
**Models.** To represent the model uncertainty, we use a feed-forward neural network with three fully-connected layers. In the classification setting, the model outputs the logits for all \(m\) classes. In the regression setting, the model outputs the predicted mean \(\mu(Y)\) and variance \(\sigma^{2}(Y)\), which are used to parameterize a Gaussian probability distribution. This allows us to tractably compute the inverse cdf \(Q_{Y|X}^{-1}\) and perform differentiable sampling, enabling us to test our calibration metrics in multiple settings. See Appendix B for more details on model hyperparameter tuning.
**Training Objectives.** In the regression setting, we consider two training objectives: negative log-likelihood (\(\mathrm{NLL}\)), and our trainable calibration metric as a regularizer (\(\mathrm{NLL}+\mathrm{MMD}\)). For \(\mathrm{MMD}\), we enforce a notion of individual calibration (Table 1) by choosing the conditioning variable \(Z=X\) to match the distributions of \((X,\widehat{Y})\) and \((X,Y)\). More specifically, we optimize an MMD objective where the kernel \(k\) is decomposed among the pair of random variables as \(k((x,y),(x^{\prime},y^{\prime}))=k_{X}(x,x^{\prime})\cdot k_{Y}(y,y^{\prime})\). The form of \(k_{X}\) and \(k_{Y}\) varies across experiments. In the classification setting, we consider cross-entropy (XE) loss, and three trainable calibration metrics as regularizers: \(\mathrm{MMCE}\)[26], \(\mathrm{KDE}\)[37], and \(\mathrm{MMD}\) (_Ours_).
**Baselines.** In the regression setting, we compare our approach against an uncalibrated forecaster trained with \(\mathrm{NLL}\), and with post-hoc recalibration using isotonic regression [23]. In the classification setting, we compare our approach against an uncalibrated forecaster trained with \(\mathrm{XE}\) loss. Furthermore, we compare our approach to two trainable calibration metrics (\(\mathrm{MMCE}\)[26] and \(\mathrm{KDE}\)[37]). We compare all approaches against temperature scaling [15], a standard post-hoc recalibration method. For both classification and regression, we train the post-hoc recalibration methods on a validation set, and subsequently evaluate on the held-out test set.
**Metrics.** To quantify predictive performance in regression, we evaluate the negative log-likelihood, quantile calibration error (QCE) [23], and decision calibration error (see Section 6.3). In classification, we compute accuracy, expected calibration error (calibration), and average Shannon entropy (sharpness) of the forecast. We also report kernel calibration error (KCE), defined as the MMD
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Dataset** & Training Objective & \(\mathrm{NLL}\downarrow\) & Quantile Cal. \(\downarrow\) & Decision Cal. \(\downarrow\) \\ \hline \multirow{3}{*}{\(n=1992\)} & NLL & -0.716 \(\pm\) 0.007 & 0.220 \(\pm\) 0.004 & 0.151 \(\pm\) 0.009 \\ & + Post-hoc & -0.387 \(\pm\) 0.012 & 0.154 \(\pm\) 0.003 & 0.089 \(\pm\) 0.005 \\ \multirow{2}{*}{\(d=102\)} & NLL + MMD (_Ours_) & **-0.778 \(\pm\) 0.008** & 0.164 \(\pm\) 0.004 & 0.042 \(\pm\) 0.011 \\ & + Post-hoc & -0.660 \(\pm\) 0.014 & **0.105 \(\pm\) 0.004** & **0.041 \(\pm\) 0.005** \\ \hline \multirow{3}{*}{\(n=52397\)} & NLL & 0.997 \(\pm\) 0.040 & 0.402 \(\pm\) 0.006 & 4.087 \(\pm\) 0.004 \\ & + Post-hoc & **0.624 \(\pm\) 0.021** & 0.053 \(\pm\) 0.001 & 4.090 \(\pm\) 0.001 \\ \multirow{2}{*}{\(d=280\)} & NLL + MMD (_Ours_) & 0.957 \(\pm\) 0.008 & 0.302 \(\pm\) 0.002 & **4.051 \(\pm\) 0.002** \\ & + Post-hoc & 0.945 \(\pm\) 0.007 & **0.042 \(\pm\) 0.001** & 4.067 \(\pm\) 0.001 \\ \hline \multirow{3}{*}{\(n=33005\)} & NLL & 1.535 \(\pm\) 0.000 & 0.078 \(\pm\) 0.001 & 0.476 \(\pm\) 0.002 \\ & + Post-hoc & 1.808 \(\pm\) 0.002 & 0.059 \(\pm\) 0.001 & 0.472 \(\pm\) 0.002 \\ \multirow{3}{*}{\(d=107\)} & NLL + MMD (_Ours_) & **1.532 \(\pm\) 0.000** & 0.059 \(\pm\) 0.001 & 0.438 \(\pm\) 0.002 \\ & + Post-hoc & 1.780 \(\pm\) 0.001 & **0.045 \(\pm\) 0.000** & **0.431 \(\pm\) 0.001** \\ \hline \multirow{3}{*}{\(n=21264\)} & NLL & 3.375 \(\pm\) 0.008 & 0.062 \(\pm\) 0.003 & 0.211 \(\pm\) 0.003 \\ & + Post-hoc & 3.707 \(\pm\) 0.007 & 0.066 \(\pm\) 0.001 & 0.202 \(\pm\) 0.003 \\ \multirow{3}{*}{\(d=81\)} & NLL + MMD (_Ours_) & **3.269 \(\pm\) 0.012** & **0.042 \(\pm\) 0.003** & **0.182 \(\pm\) 0.003** \\ & + Post-hoc & 3.643 \(\pm\) 0.010 & 0.050 \(\pm\) 0.002 & 0.186 \(\pm\) 0.003 \\ \hline \multirow{3}{*}{\(n=40949\)} & NLL & 0.634 \(\pm\) 0.016 & 0.334 \(\pm\) 0.004 & 3.151 \(\pm\) 0.003 \\ & + Post-hoc & 0.734 \(\pm\) 0.015 & 0.063 \(\pm\) 0.001 & 3.151 \(\pm\) 0.002 \\ \multirow{3}{*}{\(d=53\)} & NLL + MMD (_Ours_) & **0.605 \(\pm\) 0.004** & 0.258 \(\pm\) 0.003 & **3.131 \(\pm\) 0.003** \\ & + Post-hoc & 0.639 \(\pm\) 0.005 & **0.054 \(\pm\) 0.001** & 3.138 \(\pm\) 0.002 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of model performance on five different regression datasets. Models were trained with two objectives: NLL and NLL + MMD (Ours). We display the metrics on the test set for each training procedure, both with and without post-hoc Quantile Calibration [23] fit to the validation set. \(n\) is the number of examples in the dataset and \(d\) is the number of features.
objective for an RBF kernel over \(\mathcal{X}\times\mathcal{Y}\). Each metric is computed on a held-out test set. We repeat all experiments across 50 random seeds and report the mean and standard error for each metric.
**Results.** The results for regression and classification are shown in Table 3 and Table 4, respectively. In regression, our objective (\(\mathrm{NLL}+\mathrm{MMD}\)) achieves better QCE than the \(\mathrm{NLL}\) objective across all datasets while maintaining comparable (or better) \(\mathrm{NLL}\) scores on the test set. On most datasets, we find that our method also reduces the sharpness penalty induced by post-hoc calibration. Importantly, we also observe that post-hoc calibration and our method are complementary for calibrating forecasters; post-hoc methods can guarantee calibration on held-out data, and calibration regularization mitigates the sharpness penalty. For classification, we find that our method improves accuracy, ECE, and entropy relative to the baselines we tested for calibration regularization. These trends also generally hold with post-hoc calibration, where we observe that our method achieves better calibration and accuracy across most datasets (see Table 5).
**Computational Requirements.** Relative to training with an unregularized NLL objective, our method with MMD regularization requires an average wall clock time per epoch of 1.2x for the 5 regression datasets, and 1.3x for the 5 classification datasets.
### Experiment: Decision Calibration
One of the key advantages of our trainable calibration metrics is their flexibility in enforcing multiple notions of calibration. We consider a decision-making task to study the effectiveness of our method in improving decision calibration. The decision task is to choose an action \(a\in\{\pm 1\}\), whose loss \(\ell(a,y)=\mathds{1}\left\{a\neq\mathrm{sign}(y-c)\right\}\) depends only on whether the label \(y\) exceeds a threshold \(c\in\mathbb{R}\). Given a true distribution \(P\) and a forecaster \(Q_{Y|X}\), the **Decision Calibration Error** (\(\mathrm{DCE}\)) is \(\mathrm{DCE}^{2}(P,Q)=\sum_{a\in\mathcal{A}}\left(\mathbb{E}_{P}[\ell(a,Y)]-\mathbb{E}_{X}\mathbb{E}_{\hat{Y}\sim Q_{Y|X}}[\ell(a,\hat{Y})]\right)^{2}\).
**MMD Objective.** To optimize for decision calibration, we tailor the kernels \(k_{X},k_{Y}\) to reflect the decision problem. More concretely, we choose a kernel function that penalizes predicted labels which are on the wrong side of \(c\), namely \(k_{Y}(y,y^{\prime})=+1\) if \(\mathrm{sign}(y-c)=\mathrm{sign}(y^{\prime}-c)\), and \(-1\) otherwise.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
**Dataset** & Training Objective & Accuracy \(\uparrow\) & ECE \(\downarrow\) & Entropy \(\downarrow\) & KCE \(\downarrow\) \\ \hline breast-cancer & XE & 95.372 \(\pm\) 0.160 & 0.194 \(\pm\) 0.003 & 0.453 \(\pm\) 0.004 & 6.781 \(\pm\) 0.601 \\ \(n=569\) & XE + MMCE & 94.770 \(\pm\) 0.147 & 0.060 \(\pm\) 0.001 & 0.076 \(\pm\) 0.002 & 0.392 \(\pm\) 0.086 \\ \(d=30\) & XE + ECE KDE & 94.351 \(\pm\) 0.163 & 0.062 \(\pm\) 0.001 & 0.065 \(\pm\) 0.001 & 0.225 \(\pm\) 0.083 \\ \(m=2\) & XE + MMD (_Ours_) & **95.789 \(\pm\) 0.060** & **0.052 \(\pm\) 0.000** & **0.006 \(\pm\) 0.000** & **0.014 \(\pm\) 0.000** \\ \hline heart-disease & XE & 55.904 \(\pm\) 0.196 & 0.373 \(\pm\) 0.003 & 1.512 \(\pm\) 0.007 & 0.581 \(\pm\) 0.016 \\ \(n=921\) & XE + MMCE & 60.787 \(\pm\) 0.208 & 0.267 \(\pm\) 0.002 & 0.947 \(\pm\) 0.003 & 0.097 \(\pm\) 0.002 \\ \(d=23\) & XE + ECE KDE & 50.036 \(\pm\) 2.801 & 0.304 \(\pm\) 0.012 & 1.392 \(\pm\) 0.016 & 0.450 \(\pm\) 0.050 \\ \(m=5\) & XE + MMD (_Ours_) & **61.516 \(\pm\) 0.255** & **0.261 \(\pm\) 0.003** & **0.945 \(\pm\) 0.006** & **0.077 \(\pm\) 0.002** \\ \hline online-shoppers & XE & 89.816 \(\pm\) 0.037 & 0.129 \(\pm\) 0.001 & 0.221 \(\pm\) 0.003 & 0.022 \(\pm\) 0.001 \\ \(n=12330\) & XE + MMCE & 89.933 \(\pm\) 0.036 & 0.128 \(\pm\) 0.001 & 0.220 \(\pm\) 0.002 & -0.019 \(\pm\) 0.001 \\ \(d=28\) & XE + ECE KDE & **90.019 \(\pm\) 0.034** & **0.127 \(\pm\) 0.000** & 0.225 \(\pm\) 0.002 & -0.022 \(\pm\) 0.001 \\ \(m=2\) & XE + MMD (_Ours_) & 89.976 \(\pm\) 0.031 & **0.127 \(\pm\) 0.001** & **0.218 \(\pm\) 0.002** & **-0.026 \(\pm\) 0.000** \\ \hline dry-bean & XE & 92.071 \(\pm\) 0.025 & 0.113 \(\pm\) 0.000 & 0.264 \(\pm\) 0.001 & 0.061 \(\pm\) 0.002 \\ \(n=13612\) & XE + MMCE & 92.772 \(\pm\) 0.035 & 0.099 \(\pm\) 0.000 & 0.224 \(\pm\) 0.002 & 0.048 \(\pm\) 0.004 \\ \(d=16\) & XE + ECE KDE & 92.760 \(\pm\) 0.037 & 0.097 \(\pm\) 0.000 & 0.232 \(\pm\) 0.001 & 0.041 \(\pm\) 0.003 \\ \(m=7\) & XE + MMD (_Ours_) & **92.894 \(\pm\) 0.035** & **0.089 \(\pm\) 0.000** & **0.174 \(\pm\) 0.002** & **0.040 \(\pm\) 0.004** \\ \hline adult & XE & 84.528 \(\pm\) 0.040 & 0.186 \(\pm\) 0.000 & 0.320 \(\pm\) 0.002 & -0.014 \(\pm\) 0.002 \\ \(n=32561\) & XE + MMCE & 84.203 \(\pm\) 0.042 & 0.191 \(\pm\) 0.000 & 0.340 \(\pm\) 0.002 & -0.015 \(\pm\) 0.002 \\ \(d=104\) & XE + ECE KDE & 84.187 \(\pm\) 0.045 & **0.178 \(\pm\) 0.001** & 0.414 \(\pm\) 0.002 & **-0.020 \(\pm\) 0.003** \\ \(m=2\) & XE + MMD (_Ours_) & **84.565 \(\pm\) 0.035** & **0.178 \(\pm\) 0.001** & **0.315 \(\pm\) 0.002** & -0.018 \(\pm\) 0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental results for five tabular classification tasks, comparing MMCE regularization [26], KDE regularization [37], and MMD regularization (Ours), alongside a standard cross-entropy (XE) loss without post-hoc calibration. Here, \(n\) is the number of examples in the dataset, \(d\) is the number of features, and \(m\) is the number of classes.
To allow for gradient based optimization, we relax the objective to yield the differentiable kernel \(k_{Y}(y,y^{\prime})=\tanh(y-c)\cdot\tanh(y^{\prime}-c)\). Notice that \(k_{Y}(y,y^{\prime})\approx+1\) when \(\operatorname{sign}(y-c)=\operatorname{sign}(y^{\prime}-c)\), and \(-1\) otherwise. To encourage decision calibration for all features, we let \(k_{X}\) be an RBF kernel.
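A sketch of this decision-aware kernel, matching the callable signature of the generic estimator sketched in Section 4 (names illustrative; `y1`, `y2` are \((n,1)\) tensors, `x1`, `x2` are \((n,d)\) feature batches):

```python
import torch

def decision_kernel(y1, x1, y2, x2, c=0.0, bw=1.0):
    # k_Y: smooth surrogate for sign agreement about the threshold c,
    # approximately +1 when y, y' lie on the same side of c, -1 otherwise.
    ky = torch.tanh(y1 - c) @ torch.tanh(y2 - c).T
    kx = torch.exp(-torch.cdist(x1, x2) ** 2 / (2 * bw ** 2))  # RBF over x
    return ky * kx
```

In a training loop, `functools.partial(decision_kernel, c=c)` can be passed wherever a kernel callable is expected.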
**Results.** The results in Table 3 demonstrate that the \(\operatorname{MMD}\) training objective tailored to the decision problem achieves the best decision calibration across all datasets. Additional results in Appendix B.3 show that using an \(\operatorname{MMD}\) kernel tailored to a decision problem (i.e. the \(\tanh\) kernel) improves decision calibration scores across all datasets.
### Experiment: Local Calibration
**Setup.** To provide intuition on how our metrics can facilitate calibration in local neighborhoods of features, we study how location affects the reliability of forecasts on the crop-yield regression dataset. We tailor the kernels \(k_{X},k_{Y}\) to local calibration for geospatial neighborhoods. Namely, we select \(\phi(x)\) which extracts the geospatial features (latitude, longitude) from the full input vector \(x\). We then define \(k_{X}(x,x^{\prime})=k_{\text{RBF}}(\phi(x),\phi(x^{\prime}))\) and \(k_{Y}(y,y^{\prime})=k_{\text{RBF}}(y,y^{\prime})\).
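A sketch of such a geospatial kernel, assuming for illustration that latitude and longitude are the first two feature columns:

```python
import torch

def geo_kernel(x1, x2, bw=1.0):
    # k_X acts only on phi(x) = (latitude, longitude), so the resulting MMD
    # penalizes miscalibration within geographic neighborhoods.
    g1, g2 = x1[:, :2], x2[:, :2]   # assumed positions of the coordinates
    return torch.exp(-torch.cdist(g1, g2) ** 2 / (2 * bw ** 2))
```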
**Metric.** Extending prior work on local calibration [27], given a cdf forecaster \(Q_{Y|X}\), a feature embedding \(\phi(\cdot)\), and a kernel \(k\), we define the **Local Calibration Error** (\(\operatorname{LCE}\)) for regression as \(\operatorname{LCE}_{\text{total}}(x)=\frac{1}{B}\sum_{i=1}^{B}\operatorname{LCE}(x;c_{i})^{2}\), where \(c_{i}=\frac{i}{B}\). Intuitively, \(\operatorname{LCE}\) measures the QCE across local regions determined by \(\phi(\cdot)\). We visualize the results in Figure 1 for \(B=20\).
## 7 Conclusion
We introduced a flexible class of trainable calibration metrics based on distribution matching using \(\operatorname{MMD}\). These methods can be effectively combined with existing post-hoc calibration methods to achieve strong calibration guarantees while preserving sharpness.
**Limitations.** Optimizing the \(\operatorname{MMD}\) metric requires that samples from the forecast are differentiable with respect to the model parameters. This restricts the family of applicable forecasters, but differentiable relaxations such as Gumbel-Softmax can mitigate this challenge. Furthermore, in practice it is often not possible to achieve zero calibration error via regularization due to computational and data limitations. To mitigate this limitation, we suggest pairing our calibration regularizers with post-hoc calibration methods. Lastly, the scale of the \(\operatorname{MMD}\) metric is sensitive to the chosen kernel (e.g., the bandwidth of an RBF kernel), so it can be difficult to understand how calibrated a model is in an absolute sense based on the \(\operatorname{MMD}\) metric.
Figure 1: We visualize Local Calibration Error for a forecaster trained to predict annual wheat crop-yield based on climate and geospatial data. The standard \(\operatorname{NLL}\) objective **(Left)** leads to a forecaster that is miscalibrated across local geospatial neighborhoods, as seen by areas of brighter color. Our kernel-based calibration objective **(Right)** leads to better calibration across local neighborhoods.
Acknowledgements
CM is supported by the NSF GRFP. This research was supported in part by NSF (#1651565), ARO (W911NF-21-1-0125), ONR (N00014-23-1-2159), and CZ Biohub.
|
2304.00048 | Natural optical activity from density-functional perturbation theory | We present an accurate and computationally efficient first-principles
methodology to calculate the natural optical activity. Our approach is based on
the long-wave density-functional perturbation theory and includes
self-consistent field (SCF) terms naturally in the formalism, which are found
to be of crucial importance. The final result is expressed exclusively in terms
of response functions to uniform field perturbations and avoids troublesome
summations over empty states. Our strategy is validated by computing the
natural optical activity tensor in representative chiral crystals (trigonal Se,
$\alpha$-HgS and $\alpha$-SiO$_2$) and molecules (C$_4$H$_4$O$_2$), finding
excellent agreement with experiment and previous theoretical calculations. | Asier Zabalo, Massimiliano Stengel | 2023-03-31T18:04:32Z | http://arxiv.org/abs/2304.00048v1 | # Natural optical activity from density-functional perturbation theory
###### Abstract
We present an accurate and computationally efficient first-principles methodology to calculate the natural optical activity. Our approach is based on the long-wave density-functional perturbation theory and includes self-consistent field (SCF) terms naturally in the formalism, which are found to be of crucial importance. The final result is expressed exclusively in terms of response functions to uniform field perturbations and avoids troublesome summations over empty states. Our strategy is validated by computing the natural optical activity tensor in representative chiral crystals (trigonal Se, \(\alpha\)-HgS and \(\alpha\)-SiO\({}_{2}\)) and molecules (C\({}_{4}\)H\({}_{4}\)O\({}_{2}\)), finding excellent agreement with experiment and previous theoretical calculations.
Natural optical activity (NOA) refers to the first-order spatial dispersion of the macroscopic dielectric tensor [1]. Empirically, it manifests as optical rotation (OR), which is a property of certain structures to rotate the plane of the polarization of light that travels through them [2; 3]; at difference with the Faraday effect, NOA is reciprocal and doesn't require magnetism to be present [4]. It was first measured in quartz crystals back in 1811 by Arago, and historically, most of the studied optically active materials turned out to be chiral. In fact, chirality is a sufficient but not necessary condition for NOA to be present, as optically active achiral systems also exist [5]. Since its discovery, natural optical activity has been attracting increasing research interest, and reliable experimental measurements now exist for many materials, both in molecular [6; 7; 8; 9; 10] and crystalline form [11; 12; 13; 14; 15; 16; 17].
Parallel to the experiments, there have been considerable advances in the theoretical understanding of optical rotation as well [2; 18; 19; 20; 11]. _Ab-initio_ methods like Hartree-Fock (HF) [9], coupled-cluster (CC) [21] and density functional theory (DFT) [7; 22; 6] have recently become popular in the context of NOA. While most of the available literature is about small molecules, notable attempts at calculating optical activity in solids do exist. It is worth mentioning, for example, the pioneering works by Zhong, Levine, Allan and Wilkins [19; 20], based on a numerical long-wavelength expansion of the electromagnetic response function. Later, Malashevich and Souza [10] and Pozo and Souza [24] derived analytical expressions for the NOA, thus reviving the interest in the field; their formalism has been implemented very recently within an _ab initio_ context [25]. The agreement between theory and experiment achieved in these works is quite good, e.g., for trigonal Se [24; 10], \(\alpha\)-quartz [19; 20] and trigonal Te [26].
In spite of the remarkable progress, however, a systematic, first-principles-based and computationally efficient methodology to compute the NOA has not been established yet. The first issue concerns the treatment of the self-consistent (SCF) fields. These were accounted for in Ref. [27] and found to be of crucial importance, but the numerical differentiations with respect to the wave vector \(\mathbf{q}\) that were used therein have limited the widespread application of their method. The existing analytical expressions [24; 10] for the NOA are, in principle, better suited to an _ab initio_ implementation [25], but the SCF contributions are systematically neglected therein. Another disadvantage of the existing techniques is that they require cumbersome sums over empty states; this introduces an additional potential source of error, as the convergence with respect to the number of bands tends to be slow. There are additional technical subtleties that have not been considered in the context of the NOA, for example regarding the correct treatment of the current-density response in the presence of nonlocal pseudopotentials [28]. It is unquestionable that the current limitations rule out the study of many systems of outstanding interest (e.g., electrotoroidic compounds [29; 30]), which are hardly accessible to the currently available schemes.
Here we present, within the framework of first-principles long-wave density functional perturbation theory (DFPT), a method to calculate the natural optical activity that overcomes the aforementioned limitations and is equally valid for molecules and extended solids. Building on Ref. [7], we express the natural optical activity tensor as the first-order spatial dispersion (i.e., derivative with respect to the wave vector \(\mathbf{q}\)) of the macroscopic dielectric function. Crucially, the capabilities of the recently implemented long-wave module [32] of abinit[33; 34] allow for an efficient calculation by combining response functions that are already available in the code (e.g., \(\mathbf{k}\)-derivatives, electric and orbital magnetic field perturbations). This way, summations over excited states are entirely avoided, and the effect of local fields is automatically included without the need for an ad-hoc treatment. We validate our methodology by computing the NOA tensor for well known chiral structures, including trigonal crystals (Se, \(\alpha\)-HgS and \(\alpha\)-SiO\({}_{2}\)) and the C\({}_{4}\)H\({}_{4}\)O\({}_{2}\) molecule. Our numerical results show fast convergence with respect to the main computational parameters, and are in excellent agreement with experiment and earlier theoretical calculations.
Our starting point is the double Fourier transform in frequency \(\omega\) and wave vector \(\mathbf{q}\) of the permittivity function, \(\epsilon_{\alpha\beta}(\omega,\mathbf{q})\). By expanding \(\epsilon_{\alpha\beta}(\omega,\mathbf{q})\) in powers of the wave vector \(\mathbf{q}\), around \(\mathbf{q=0}\), we obtain
\[\epsilon_{\alpha\beta}(\omega,\mathbf{q})=\epsilon_{\alpha\beta}(\omega, \mathbf{q=0})+iq_{\gamma}\eta_{\alpha\beta\gamma}(\omega)+\dots, \tag{1}\]
where \(\eta_{\alpha\beta\gamma}(\omega)\) is the natural optical activity tensor [1]. (From now on, we adopt Einstein summation conventions for the Cartesian indices \(\alpha\beta\gamma\).) In absence of dissipation (i.e., in the transparent regime), \(\epsilon_{\alpha\beta}(\omega,\mathbf{q})\) is a \(3\times 3\) Hermitian matrix, which at \(\mathbf{q=0}\) becomes real symmetric in crystals with time-reversal (TR) symmetry. The frequency-dependent natural optical activity tensor is then also real and satisfies \(\eta_{\alpha\beta\gamma}(\omega)=-\eta_{\beta\alpha\gamma}(\omega)\), which means that only 9 of the 27 components of \(\eta_{\alpha\beta\gamma}\) are independent. As a consequence, \(\eta_{\alpha\beta\gamma}\) is often rearranged into the second-rank _gyration_ or _gyrotropic_ tensor, \(g_{\alpha\beta}\), [1]
\[g_{\alpha\beta}(\omega)=\frac{1}{2}\epsilon_{\gamma\delta\alpha}\eta_{\gamma \delta\beta}(\omega), \tag{2}\]
where \(\epsilon_{\gamma\delta\alpha}\) is the Levi-Civita symbol. Assuming a crystal structure with the point group 32 (trigonal Se, \(\alpha\)-HgS and \(\alpha\)-SiO\({}_{2}\) belong to this crystal class), and considering that the optical axis is oriented along the \(z\) Cartesian direction [19],
\[\mathbf{g}(\omega)=\begin{pmatrix}g_{11}(\omega)&0&0\\ 0&g_{11}(\omega)&0\\ 0&0&g_{33}(\omega)\end{pmatrix}, \tag{3}\]
where \(g_{11}=\eta_{231}\) and \(g_{33}=\eta_{123}\). The optical rotatory power \(\rho\) is then given by [19]
\[\rho(\omega)=\frac{\omega^{2}}{2c^{2}}g_{33}(\omega), \tag{4}\]
where \(c\) is the speed of light. In this work, we shall focus on the \(\omega\to 0\) limit, where the components of both \(\mathbf{g}\) and \(\mathbf{\eta}\) tend to a finite constant,
\[\eta_{\alpha\beta\gamma}=\eta_{\alpha\beta\gamma}(\omega\to 0),\qquad g_{ \alpha\beta}=g_{\alpha\beta}(\omega\to 0). \tag{5}\]
At leading order in the frequency, this yields a rotatory power of
\[\rho(\omega)\simeq(\hbar\omega)^{2}\bar{\rho},\qquad\bar{\rho}=\frac{g_{33}} {2(\hbar c)^{2}}, \tag{6}\]
where \(\hbar\) is the reduced Planck constant. The constant \(\bar{\rho}\) is usually expressed in the units of \(\text{deg}/[\text{mm (eV)}^{2}]\) and can be directly compared to experimental measurements.
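As a unit-consistency check, the contraction in Eq. (2) and the conversion in Eq. (6) can be scripted in a few lines of NumPy; this is an illustrative post-processing sketch (not part of the abinit implementation), with \(g_{33}\) supplied in bohr:

```python
import numpy as np

BOHR_MM = 5.29177210903e-8    # 1 bohr in mm
HBARC_EV_MM = 197.3269804e-6  # hbar * c in eV * mm (197.327 eV nm)

def gyration_tensor(eta):
    # g_ab = (1/2) eps_gda eta_gdb  [Eq. (2)]; eta is a (3, 3, 3) array.
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return 0.5 * np.einsum('gda,gdb->ab', eps, eta)

def rho_bar(g33_bohr):
    # Static rotatory-power constant of Eq. (6), in deg / [mm (eV)^2].
    return np.degrees(g33_bohr * BOHR_MM / (2.0 * HBARC_EV_MM ** 2))

# g33 = -1.913 bohr (trigonal Se) gives about -74.5 deg/[mm (eV)^2],
# consistent with the value reported in Table 1 below.
print(rho_bar(-1.913))
```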
To make further progress, we shall express the dielectric function in the low-frequency limit as a second derivative of the ground state energy with respect to two spatially modulated electric fields (\(\mathbf{\mathcal{E}}\)) [35]
\[\epsilon_{\alpha\beta}(\mathbf{q})=\delta_{\alpha\beta}-\frac{4\pi}{\Omega}E _{\mathbf{q}}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}. \tag{7}\]
This allows us to write the natural optical activity tensor as the first derivative of \(\epsilon_{\alpha\beta}(\mathbf{q})\) with respect to \(q_{\gamma}\),
\[\eta_{\alpha\beta\gamma}=-\frac{4\pi}{\Omega}\text{Im}E_{\gamma}^{\mathcal{E }_{\alpha}\mathcal{E}_{\beta}},\qquad E_{\gamma}^{\mathcal{E}_{\alpha}\mathcal{ E}_{\beta}}=\frac{\partial E_{\mathbf{q}}^{\mathcal{E}_{\alpha}\mathcal{E}_{ \beta}}}{\partial q_{\gamma}}\Big{|}_{\mathbf{q=0}}, \tag{8}\]
where \(\Omega\) is the volume of the unit cell. By virtue of the "\(2n+1\)" theorem [7], \(E_{\gamma}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}\) can be written in terms of uniform-field response functions, which are already available in public first-principles packages like abinit. More specifically, we find
\[E_{\gamma}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}=E_{\text{elst},\gamma}^{ \mathcal{E}_{\alpha}\mathcal{E}_{\beta}}+2s\int_{\text{BZ}}[d^{3}k]E_{\mathbf{ k},\gamma}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}, \tag{9}\]
where \(s=2\) is the spin multiplicity, and the shorthand notation \([d^{3}k]=\Omega/(2\pi)^{3}d^{3}k\) is used for the Brillouin-zone (BZ) integral. (We assume that the system under study is a time-reversal (TR) symmetric insulator.) The electrostatic (elst) term is defined as
\[E_{\text{elst},\gamma}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}=\int_{\Omega} \int n^{\mathcal{E}_{\alpha}}(\mathbf{r})K_{\gamma}(\mathbf{r},\mathbf{r}^{ \prime})n^{\mathcal{E}_{\beta}}d^{3}rd^{3}r^{\prime}, \tag{10}\]
where \(n^{\mathcal{E}_{\beta}}\) is the first-order charge density response to \(\mathcal{E}_{\beta}\), and \(K_{\gamma}(\mathbf{r},\mathbf{r}^{\prime})\) is the first \(\mathbf{q}\)-derivative of the Hartree exchange and correlation (Hxc) kernel. The wave function term of Eq. (9), in turn, can be written as
\[\begin{split} E_{\mathbf{k},\gamma}^{\mathcal{E}_{\alpha}\mathcal{ E}_{\beta}}=&\mathcal{X}_{\mathbf{k}}^{\mathcal{E}_{\alpha}k_{ \gamma}\mathcal{E}_{\beta}}+\mathcal{Y}_{\mathbf{k}}^{\mathcal{E}_{\alpha} \mathcal{E}_{\beta}k_{\gamma}}+\mathcal{Y}_{\mathbf{k}}^{k_{\gamma}\mathcal{E}_{ \alpha}\mathcal{E}_{\beta}}\\ &+\mathcal{W}_{\mathbf{k}}^{\alpha,\beta\gamma}+\left(\mathcal{W}_{ \mathbf{k}}^{\beta,\alpha\gamma}\right)^{*},\end{split} \tag{11}\]
We shall explain Eq. (11) term by term in the following.
For three generic perturbations, \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\), the calligraphic symbols in the first line are defined as
\[\mathcal{X}_{\mathbf{k}}^{\lambda_{1}\lambda_{2}\lambda_{3}}=\sum_{m}\ \langle u_{m\mathbf{k}}^{\lambda_{1}}|\,\hat{\mathcal{H}}_{\mathbf{k}}^{\lambda_ {2}}\,|u_{m\mathbf{k}}^{\lambda_{3}}\rangle \tag{12}\]
and
\[\mathcal{Y}_{\mathbf{k}}^{\lambda_{1}\lambda_{2}\lambda_{3}}=-\sum_{mn}\ \langle u_{m\mathbf{k}}^{\lambda_{1}}|u_{n\mathbf{k}}^{\lambda_{3}}\rangle\ \langle u_{n\mathbf{k}}^{(0)}|\,\hat{\mathcal{H}}_{\mathbf{k}}^{\lambda_{2}}\,|u_{m \mathbf{k}}^{(0)}\rangle\,. \tag{13}\]
(The band indices \(m,n\) run over the occupied states only.) Here, \(\,|u_{m\mathbf{k}}^{\lambda}\rangle\) are the first-order wave functions and the first-order calligraphic Hamiltonian is given by \(\hat{\mathcal{H}}_{\mathbf{k}}^{\lambda}=\hat{H}_{\mathbf{k}}^{\lambda}+\hat{V}^{\lambda}\), where \(\hat{H}_{\mathbf{k}}^{\lambda}\) is the external perturbation and \(\hat{V}^{\lambda}\) is the self-consistent field (SCF) potential response. Note that \(\hat{\mathcal{H}}_{\mathbf{k}}^{k_{\gamma}}=\hat{H}_{\mathbf{k}}^{k_{\gamma}}\) as there is no SCF contribution to the derivative in \(\mathbf{k}\)-space, and \(\hat{\mathcal{H}}_{\mathbf{k}}^{\mathcal{E}_{\alpha}}=\hat{V}^{\mathcal{E}_{ \alpha}}\) in the above equations since the "external potential" is a purely cross-gap operator in the electric-field case [7].
The \(\mathcal{W}\) symbols in the last line of Eq. (11) are defined as
\[\mathcal{W}_{\mathbf{k}}^{\alpha,\beta\gamma}=\sum_{m}i\,\langle u_{m\mathbf{k }}^{\mathcal{E}_{\alpha}}|u_{m\mathbf{k},\gamma}^{A_{\beta}}\rangle\,, \tag{14}\]
where \(\left|u^{A_{\beta}}_{m{\bf k},\gamma}\right\rangle\) indicates the wave function response to an electromagnetic vector potential at first order in the modulation vector \({\bf q}\). (See Sec. IV of Ref. [36] for more details.) We can write \(\mathcal{W}\) as a sum of two contributions that are, respectively, symmetric (\(\mathcal{S}^{\alpha,\beta\gamma}_{\bf k}\)) and antisymmetric (\(\mathcal{A}^{\alpha,\beta\gamma}_{\bf k}\)) with respect to \(\beta\leftrightarrow\gamma\) exchange,
\[\mathcal{W}^{\alpha,\beta\gamma}_{\bf k}=\mathcal{S}^{\alpha,\beta\gamma}_{ \bf k}+\mathcal{A}^{\alpha,\beta\gamma}_{\bf k}. \tag{15}\]
These objects are given by
\[\mathcal{S}^{\alpha,\beta\gamma}_{\bf k}=\frac{i}{2}\sum_{m}\left\langle u^{ \mathcal{E}_{\alpha}}_{m{\bf k}}|\partial^{2}_{\beta\gamma}u^{(0)}_{m{\bf k}}\right\rangle \tag{16}\]
and
\[\mathcal{A}^{\alpha,\beta\gamma}_{\bf k}=\frac{1}{2}\sum_{m}\epsilon_{\delta \beta\gamma}\left\langle u^{\mathcal{E}_{\alpha}}_{m{\bf k}}|u^{B_{\delta}}_{m{ \bf k}}\right\rangle. \tag{17}\]
In Eq. (16), \(\partial^{2}_{\beta\gamma}\) represents a second derivative in \({\bf k}\)-space. The \(|\partial^{2}_{\beta\gamma}u^{(0)}_{m{\bf k}}\rangle\) functions in \(\mathcal{S}\) are the well known \(d^{2}/dk_{\beta}dk_{\gamma}\) wave functions [7; 8; 38]; whereas in Eq. (17), \(|u^{B_{\delta}}_{m{\bf k}}\rangle\) is the wave function response to a uniform orbital magnetic field, \(B_{\delta}\), as defined in Refs. [9; 40].
For finite systems, the above theory nicely recovers the established formulas that are used in quantum chemistry calculations (more details can be found in Sec. VI of Ref. [36]). Our formulation, however, presents many crucial advantages. First, Eq. (9) has been derived within a DFPT framework, and hence avoids the cumbersome summations over unoccupied states that are required by other methods. Second, all contributions to Eq. (11) are individually independent of the choice of the origin, and equally valid for both molecules and extended crystals; this implies that our formulas are free of cancellation errors due to incomplete basis sets. Third, all the aforementioned terms are independent of the choice of the wave function gauge by construction, as they are all expressed as parametric derivatives (with respect to \({\bf q}\)) of multiband gauge-invariant quantities. Fourth, the treatment of the current-density response in presence of nonlocal pseudopotentials complies with the prescriptions of Ref. [28]. Finally, and most importantly, SCF terms naturally appear in our formalism, both directly in \(E_{\rm elst}\) and \(\mathcal{Y}\) (both terms vanish if local fields are neglected), and indirectly in the other terms via the first-order wave functions \(\left|u^{\mathcal{E}_{\alpha}}_{m{\bf k}}\right\rangle\) (see Sec. V. of Ref. [36]).
A natural question to ask at this point is whether Eq. (11) is unique, or whether there are other combinations of the same ingredients that yield the same result. Two inequivalent definitions of \(E^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k},\gamma}\) can, at most, differ by a vanishing Brillouin-zone integral; so the question boils down to asking whether we can combine the individual pieces in Eq. (11) in such a way that the result is the total \({\bf k}\)-derivative of some function \(f({\bf k})\). An obvious choice for \(f({\bf k})\) consists in identifying it with the \({\bf k}\)-derivative of the macroscopic dielectric tensor. Indeed, by applying the "\(2n+1\)" theorem to the stationary expression [41; 35; 42] for \(E^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k},{\bf q}=0}\), we find
\[\begin{split}\frac{\partial E^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k},{\bf q}}}{\partial k_{\gamma}}\Big{|}_{{\bf q}={\bf 0}}=&\mathcal{X}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}k_{\gamma}}_{{\bf k}}+\mathcal{X}^{k_{\gamma}\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k}}+\mathcal{X}^{\mathcal{E}_{\alpha}k_{\gamma}\mathcal{E}_{\beta}}_{{\bf k}}\\ &+\mathcal{Y}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}k_{\gamma}}_{{\bf k}}+\mathcal{Y}^{\mathcal{E}_{\alpha}k_{\gamma}\mathcal{E}_{\beta}}_{{\bf k}}+\mathcal{Y}^{k_{\gamma}\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k}}\\ &+2\mathcal{S}^{\alpha,\beta\gamma}_{\bf k}+2\Big{(}\mathcal{S}^{\beta,\alpha\gamma}_{\bf k}\Big{)}^{*}.\end{split} \tag{18}\]
Then, by subtracting the latter expression from Eq. (11), we obtain another equally valid formula for the NOA,
\[\begin{split}\left[E^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k},\gamma}\right]^{\prime}=&-\left(\mathcal{X}^{\mathcal{E}_{\alpha}\mathcal{E}_{\beta}k_{\gamma}}_{{\bf k}}+\mathcal{X}^{k_{\gamma}\mathcal{E}_{\alpha}\mathcal{E}_{\beta}}_{{\bf k}}+\mathcal{Y}^{\mathcal{E}_{\alpha}k_{\gamma}\mathcal{E}_{\beta}}_{{\bf k}}\right)\\ &-\mathcal{W}^{\alpha,\gamma\beta}_{\bf k}-\left(\mathcal{W}^{\beta,\gamma\alpha}_{\bf k}\right)^{*}.\end{split} \tag{19}\]
Numerical tests confirm the consistency of Eqs. (11) and (19) to a very high degree of accuracy. We therefore conclude that Eq. (9) is not unique; on the contrary, there are infinitely many possible definitions of the gyrotropy tensor, differing from our Eq. (9) by a dimensionless constant times Eq. (18).
This arbitrariness can be regarded as a direct consequence of the _electromagnetic_ (EM) gauge freedom. Indeed, the last lines in both Eq. (11) and Eq. (19) have the physical meaning of Berry curvatures in the parameter space spanned by a uniform magnetic field (\({\bf B}\)) and an electric field. Such curvatures are, as we said, insensitive to the choice of the coordinate origin and the wave function gauge. This result was achieved by expressing the \({\bf B}\)-field response function in a cell-periodic form, consistent with the density-operator theory of Essin et al. [9]. Notwithstanding these undeniable advantages, the aforementioned Berry curvatures retain an inherent dependence on the EM gauge [43]. More specifically, the symbol \(\mathcal{W}^{\alpha,\beta\gamma}\) is expressed in a Landau gauge where the \(\beta\) component of the \({\bf A}\)-field increases linearly along \(\gamma\); so when going from Eq. (11) to Eq. (19) we have essentially switched between two Landau gauges in the last term, and collected the leftovers in the form of \(\mathcal{X}\) and \(\mathcal{Y}\). (It is, of course, possible to define a third variant of Eq. (11), where the contribution of \(\mathcal{S}\) cancels out, at the expense of having a slightly longer list of \(\mathcal{X}\)- and \(\mathcal{Y}\)-symbols.) Ideally, one would like to exploit this freedom to obtain a physically intuitive separation between well-defined (and possibly individually measurable) physical effects; whether such a choice exists is an interesting open question, which we shall defer to a later work.
Our first principles calculations are performed with the open-source abinit[33; 34] package. (Details of the computational parameters are provided in Sec. I of Ref. [36].) Overall, our approach displays a remarkably fast convergence with respect to the main computational parameters (plane-wave energy cutoff and number of \({\bf k}\) points,
see Ref. [36], Sec. III). In Table 1 we show the converged numerical values for the independent components of the gyration tensor and the optical rotatory power in our test set of trigonal crystals: trigonal Se, \(\alpha\)-HgS and \(\alpha\)-SiO\({}_{2}\) (numerical values in brackets are obtained neglecting SCF terms). Our results are in fairly good agreement with literature values, even if a scissor operator was applied in Ref. [27] to correct the LDA band gap. (Trigonal Se is an interesting exception, in that Ref. [27] reports an opposite sign to ours for the non-SCF value of the \(g_{33}\) component; although the reason for this discrepancy is unclear, we remain confident in the accuracy of our results, as other values nicely agree with ours in both magnitude and sign.) Overall, our results confirm the crucial importance of local-field SCF contributions, consistent with the conclusions of Ref. [27].
Given the large impact of SCF fields on the results, we decided to repeat our calculations within the PBE [6] parametrization of the GGA. The corresponding values are reported in Table 2. (Further details can be found in Ref. [36], Sec. II.) Interestingly, for a given crystal structure the choice between LDA and GGA seems to have a relatively small influence on the calculated coefficients, except for the \(g_{33}\) component of Se where such deviation reaches \(\sim\)50%. Conversely, the structural parameters do appear to have a significant impact on the final result. To account for this fact, we have tested various models for the crystal structure, either using the experimental (exp) one, or relaxed to mechanical equilibrium (either within LDA or GGA). Our analysis shows that the fundamental gap depends on the volume of the unit cell, and such a dependence has a strong impact on the calculated \(\mathbf{g}\)-tensor components. For example, in the LDA equilibrium structure of Se the electronic band gap is so small that we were unable to converge \(g_{11}\) and \(g_{33}\) to meaningful values. While GGA displays the usual overcorrection of the equilibrium volume, it yields results that are in much closer agreement with the experiment. It is also interesting to note that the NOA, unlike other linear-response properties (e.g. the dielectric tensor), has a nontrivial dependence on the structure (and hence on the amplitude of the gap). The final result originates from the mutual cancellation of several terms, not all of which are expected to diverge in the metallic limit. This means that some components of \(\mathbf{g}\) may change rather dramatically with structure, while others remain essentially unaltered (see Sec. II of Ref. [36] for more details).
We now focus on the isolated molecule C\({}_{4}\)H\({}_{4}\)O\({}_{2}\). Table 3 shows our computed gyration tensor (multiplied by the volume of the simulation cell \(\Omega\)), with and without SCF terms; as in crystals, the latter have a huge impact on some components. We also report the optical rotatory parameter \(\beta\), which in molecular systems relates to the rotatory power \(\alpha(\omega)\) via [45; 46]
\[\alpha(\omega)=\frac{N_{A}\omega^{2}}{Mc^{2}}\beta,\qquad\beta=\frac{\Omega}{ 4\pi}\frac{1}{2}\sum_{a}\frac{1}{3}g_{aa}. \tag{20}\]
Here \(N_{A}\) is the Avogadro number and \(M\) is the molar mass of the molecule. Our computed value of \(\beta\) almost exactly matches the value of \(\beta=-2.29\) that was reported in Ref. [47]. Although such a level of agreement gives us confidence in the correctness of our implementation, it may be to some extent coincidental, given the differences in our respective approximations and computational schemes.
In summary, we have presented a formulation of optical dispersion within the framework of density-functional perturbation theory. Our methodology brings the first-principles calculation of the gyration tensor to the same level of accuracy and computational ease as standard linear-response properties, e.g., the dielectric tensor. We have also discussed some formal aspects of the theory, e.g., the non-uniqueness of Eq. (9), which we relate to the gauge freedom of electromagnetism. As an
\begin{table}
\begin{tabular}{c c c c c} Structure & \multicolumn{2}{c}{\(g_{11}\) (bohr)} & \multicolumn{2}{c}{\(g_{33}\) (bohr)} \\ & LDA & GGA & LDA & GGA \\ \hline Se(exp) & -1.306 & -1.301 & -1.910 & -1.329 \\ Se(GGA) & -1.408 & -1.431 & -1.802 & -1.216 \\ \hline \(\alpha\)-HgS (LDA) & 0.775 & 0.663 & -1.861 & -1.645 \\ \(\alpha\)-HgS (GGA) & -0.716 & -0.692 & -0.065 & -0.065 \\ \hline \(\alpha\)-SiO\({}_{2}\) (LDA) & -0.071 & -0.071 & 0.125 & 0.125 \\ \(\alpha\)-SiO\({}_{2}\) (GGA) & -0.085 & -0.085 & 0.168 & 0.167 \\ \end{tabular}
\end{table}
Table 2: Comparison between LDA and GGA for the independent components of the gyration tensor for Se, \(\alpha\)-HgS and \(\alpha\)-SiO\({}_{2}\), for different structures. In the Structure column, “exp” refers to the experimental structure, while a functional in parentheses indicates that the structure was relaxed with that functional; Se (GGA), for example, was relaxed within the GGA.
\begin{table}
\begin{tabular}{c c c c c} & \(\Omega g_{11}\) & \(\Omega g_{22}\) & \(\Omega g_{33}\) & \(\frac{\Omega}{2}(g_{12}+g_{21})\) & \(\beta\) \\ \hline With SCF & -69.69 & -68.12 & -33.98 & -267.32 & -2.28 \\ Without SCF & -72.52 & -56.18 & 144.90 & -629.35 & 0.21 \\ \end{tabular}
\end{table}
Table 3: Calculated independent components of the gyration tensor times the volume of the simulation cell (\(\Omega\)) for C\({}_{4}\)H\({}_{4}\)O\({}_{2}\). Values are given in Hartree atomic units.
\begin{table}
\begin{tabular}{l c c c} & \(g_{11}\) & \(g_{33}\) & \(\bar{\rho}\) \\ \hline Se & -1.307 (-1.547) & -1.913 (-0.458) & -74.5 (-17.8) \\ \(\alpha\)-HgS & 0.775 (0.554) & -1.861 (-1.274) & -72.5 (-49.6) \\ \(\alpha\)-SiO\({}_{2}\) & -0.071 (-0.001) & 0.125 (0.019) & 4.9 (0.7) \\ \end{tabular}
\end{table}
Table 1: Calculated independent components of the gyration tensor (in bohr) and the optical rotatory power \(\bar{\rho}\) defined in Eq. (6) (in deg/[mm (eV)\({}^{2}\)] units). Values in brackets are computed neglecting the SCF terms.
As an outlook, a natural step forward consists in generalizing our method to finite frequencies, and to magnetic materials with broken time-reversal symmetry; progress along these lines will be presented in a forthcoming publication.
###### Acknowledgements.
We acknowledge support from Ministerio de Ciencia e Innovación (MICINN-Spain) through Grant No. PID2019-108573GB-C22; from Severo Ochoa FUNFUTURE center of excellence (CEX2019-000917-S); from Generalitat de Catalunya (Grant No. 2021 SGR 01519); and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 724529).
|
2309.07903 | Shell-model study for allowed and forbidden $β^-$ decay properties
in the mass region "south" of $^{208}$Pb | Large-scale shell-model calculations have been performed for the
neutron-rich nuclei in the south region of $^{208}$Pb in the nuclear chart. The
$\beta$-decay properties, such as the $\log ft$, average shape factor values,
half-lives, and partial decay rates are calculated for these neutron-rich
nuclei using recent effective interaction for the $^{208}$Pb region. These
calculations have been performed without truncation in a particular model space
for nuclei $N\leq 126$; additionally, particle-hole excitations are included in
the case of core-breaking nuclei ($Z\leq 82, N>126$). An extensive comparison
with the experimental data has been made, and spin parities of several states
have been proposed. | Shweta Sharma, Praveen C. Srivastava, Anil Kumar, Toshio Suzuki, Cenxi Yuan, Noritaka Shimizu | 2023-09-14T17:51:56Z | http://arxiv.org/abs/2309.07903v3 | Shell-model study for allowed and forbidden \(\beta^{-}\) decay properties in the "south" region of \({}^{208}\)Pb
###### Abstract
Large-scale shell-model calculations have been performed for neutron-rich nuclei in the region south of \({}^{208}\)Pb in the nuclear chart. The weak-interaction properties, such as the \(\log ft\), average shape factor values, half-lives, and neutron emission probabilities, are calculated for these neutron-rich nuclei with the help of the newly developed effective interaction [Yuan _et al._, Phys. Rev. C **106**, 044314 (2022)] for the \({}^{208}\)Pb region. These calculations have been performed without truncation in a given model space; however, for some nuclei a truncation has been employed, and particle-hole excitations are included in the case of core-breaking nuclei. An extensive comparison with the experimental data has been made, and spin parities of several states have been proposed. In this study we have considered 270 transitions.
pacs: 21.60.Cs, 23.40.-s, 27.80.+w
## I Introduction
In the early universe, light elements up to mass number \(A=7\) were produced by big-bang nucleosynthesis. When stars formed later, nucleosynthesis started with the fusion of elements in the hot, dense environment inside the stars [1]. This process continues up to iron; the nuclei heavier than iron are produced via various astrophysical processes such as the \(r\)- (rapid neutron capture) [2], \(s\)- (slow neutron capture) [3], and \(p\)- (proton capture) [4] processes. The \(r\)-process is a major phenomenon responsible for the nucleosynthesis of half of the solar system's elements. Further, the nuclei around the doubly magic nucleus \({}^{208}\)Pb warrant extensive investigation due to their astrophysical significance. For instance, the third peak of the abundance pattern of the solar elemental composition is observed around \({}^{208}\)Pb. To study neutron-rich nuclei, one has to investigate beta-decay theory, as beta decay is the primary decay process in the nucleosynthesis of these nuclei [5; 6].
Beta decay can be classified into three processes based on the type of leptons emitted from the nucleus, i.e., \(\beta^{+}\), \(\beta^{-}\), and electron-capture decay. Further, beta decay can be characterized as allowed or forbidden based on the orbital angular momentum \(l\) and the spin \(S\) of the emitted leptons. If \(l=0\), the transition is of the allowed type; if \(l>0\), it is forbidden. In allowed beta decay, Fermi transitions correspond to \(S=0\), i.e., antiparallel spins of the emitted leptons, and Gamow-Teller transitions correspond to \(S=1\), i.e., parallel spins of the emitted leptons. In forbidden beta decay, forbidden unique (FU) transitions correspond to \(\Delta J=K+1\), i.e., the maximum angular momentum attained by the emitted leptons, while forbidden non-unique (FNU) transitions correspond to \(\Delta J=K-1,K\), where \(K\) is the degree of forbiddenness, with the parity of the transition given by \(\pi=(-1)^{K}\).
The forbidden transitions compete with the allowed transitions, making forbidden transitions the major decay mode in the heavier mass region. To study this competition, the most suitable nuclei are those that have a small number of both negative- and positive-parity levels below the Q-value. In order to test this competition, Carol _et al._ [7] studied the beta decay of \({}^{208}\)Hg into the one-proton-hole, one-neutron-particle nucleus \({}^{208}\)Tl. Further, Brunet _et al._ [8] studied the beta decay of nuclei in the \(N<126\), \(Z>82\) region, i.e., \({}^{208}\)At into \({}^{208}\)Po transitions. Thus, beta-decay studies provide a major opportunity to address this quest.
Further, heavier neutron-rich nuclei have received only limited study, both theoretical and experimental, because of their structural complexity. Additionally, it is challenging to produce the nuclei in the vicinity of \({}^{208}\)Pb experimentally. We have previously studied beta-decay properties of neutron-rich nuclei in the \({}^{132}\)Sn region [9] and in the northeast region of \({}^{208}\)Pb using shell-model calculations [10; 11]. Thus, we also intend to examine neutron-rich nuclei in the south region of \({}^{208}\)Pb with the nuclear shell model.
In this work, we adopt a new effective interaction
for nuclei in the south region of \({}^{208}\)Pb [12], which describes very well the energy spectra and electromagnetic properties of nuclei in the mentioned region. A further motivation is thus to assess the validity and predictive power of this new interaction for weak decay processes.
The wave functions of some of the excited states of doubly magic nuclei such as \({}^{208}\)Pb, and of the nuclei around them, have complex structures because they involve core breaking. Studying these states is difficult because of the increased model space; at the same time, it is interesting to examine these nuclei in order to study the collective nature of these states. This can be done via particle-hole excitations in the nucleus. In \({}^{208}\)Pb, the first excited state, i.e., \(3^{-}\) [13; 14], shows collective behavior, thus making the wave function more complex and mixed. In \({}^{207}\)Tl as well, the high-spin states have been studied by considering particle-hole excitations [15]. Further, Kumar _et al._ [16] studied beta-decay \(\log ft\) values for the decay of \({}^{207}\)Hg into the one-proton-hole nucleus \({}^{207}\)Tl using a particle-hole truncation and predicted some of the spin-parity states in \({}^{207}\)Tl where the experimental assignments were tentative.
Due to the interplay between single-particle and collective degrees of freedom, neutron-rich nuclei with neutron number \(110<N<122\) show structural evolution. These isotopes show collective behavior and transition from a prolate shape in the ground state to an oblate shape as the neutron number approaches \(N=126\). This shape transition was observed in Pt and Os isotopes [17; 18; 19; 20; 21; 22]. Recently, \(\beta-\gamma\) spectroscopy was performed for \({}^{195}\)Os at the KEK isotope separation system, yielding a half-life of 6.5(4) min and a \(3/2^{-}\) ground-state spin parity. Further, beta-decay spectroscopy of \({}^{197,198}\)Os was performed, with a revised half-life for \({}^{197}\)Os of 91(8) s and a measured half-life for \({}^{198}\)Os of 125(28) s. Similarly, new experimental data are available for other Os and Ir isotopes [23; 24; 25; 26; 27], and we have included all of these isotopes in our large-scale shell-model calculations.
The present work is organized as follows: in section II, we briefly explain the shell-model Hamiltonian and the formalism of beta-decay theory, followed by the quenching factor. In section III, the results and their explanation are given; beta-decay properties such as \(\log ft\) values, shape factor values, half-lives, strength functions, and neutron emission probabilities have been calculated in this work. The discussion ends with a conclusion in section IV.
## II Formalism
### Shell-model Hamiltonian
One of the most complex tasks in a shell-model calculation is determining the effective Hamiltonian. To this end, one has to choose a suitable effective interaction that describes well the nuclear properties in the desired mass region. In the construction of the shell-model Hamiltonian, a particular model space is chosen, and the effective single-particle potential and the two-body interaction between the nucleons of the selected model space are determined. Then, the Hamiltonian is constructed and diagonalized to find the eigenvalues and eigenfunctions. The shell-model Hamiltonian [28] can be defined as
\[H=T+V=\sum_{\alpha}\epsilon_{\alpha}c_{\alpha}^{\dagger}c_{\alpha}+\frac{1}{ 4}\sum_{\alpha\beta\gamma\delta}v_{\alpha\beta\gamma\delta}c_{\alpha}^{ \dagger}c_{\beta}^{\dagger}c_{\delta}c_{\gamma}, \tag{1}\]
where \(c_{\alpha}^{\dagger}\) denotes a creation operator of the single-particle state \(\alpha\). \(\epsilon_{\alpha}\) corresponds to the single particle energy of the state \(\alpha\). \(v_{\alpha\beta\gamma\delta}=\langle\alpha\beta|V|\gamma\delta\rangle\) are the antisymmetrized two-body matrix elements.
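To make Eq. (1) concrete, the following minimal sketch builds and diagonalizes the many-body Hamiltonian for a toy system of two identical fermions in four single-particle states; the single-particle energies and two-body matrix elements are random placeholders, not the effective interaction used in this work:

```python
# Minimal sketch of Eq. (1): many-body Hamiltonian for two identical fermions
# in four single-particle states, built from single-particle energies eps and
# antisymmetrized two-body matrix elements v[k,l,i,j] = <kl|V|ij>_AS.
# All numbers below are illustrative placeholders, not a realistic interaction.
import itertools
import numpy as np

n_sp = 4                               # number of single-particle states
eps = np.array([0.0, 1.0, 2.0, 3.0])   # single-particle energies

rng = np.random.default_rng(0)
v = rng.normal(scale=0.1, size=(n_sp,) * 4)
# Impose antisymmetry in each index pair and hermiticity.
v = v - v.transpose(1, 0, 2, 3)
v = v - v.transpose(0, 1, 3, 2)
v = 0.5 * (v + v.transpose(2, 3, 0, 1))

basis = list(itertools.combinations(range(n_sp), 2))  # determinants |ij>, i<j
dim = len(basis)
H = np.zeros((dim, dim))
for a, (k, l) in enumerate(basis):
    for b, (i, j) in enumerate(basis):
        if (k, l) == (i, j):
            H[a, b] += eps[i] + eps[j]   # one-body (diagonal) part
        H[a, b] += v[k, l, i, j]         # antisymmetrized two-body part

print(np.linalg.eigvalsh(H))  # eigenvalues of the toy shell-model Hamiltonian
```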
In the present work, the shell-model Hamiltonian consists of the model space with proton number \(50\leq Z\leq 82\) and neutron number \(82\leq N\leq 184\). There are five proton orbitals (below \(Z=82\)) and thirteen neutron orbitals (six below \(N=126\) and seven above \(N=126\)). The proton orbitals are \(0g_{7/2},1d_{5/2},1d_{3/2},2s_{1/2},0h_{11/2}\) (PO5), and the neutron orbitals are \(0h_{9/2},1f_{7/2},1f_{5/2},2p_{3/2},2p_{1/2},0i_{13/2}\) (NO6) and \(0i_{11/2},1g_{9/2},1g_{7/2},2d_{5/2},2d_{3/2},3s_{1/2},0j_{15/2}\) (NO7). Further, in the case of particle-hole excitation, the six proton orbitals above \(Z=82\), i.e., \(0h_{9/2},1f_{7/2},1f_{5/2},2p_{3/2},2p_{1/2},0i_{13/2}\), are also considered. The Hamiltonian is constructed with the help of the KHIHE interaction [29] for the proton-proton interaction in PO5, the neutron-neutron interaction in NO6, and the proton-neutron interaction between the PO5 and NO6 orbitals. The KHIPE interaction [30] is also included for the proton-proton interaction above \(Z=82\) and the neutron-neutron interaction inside the NO7 orbitals. Further, the monopole-based universal (\(V_{MU}\)) interaction [31], together with the spin-orbit interaction from the M3Y interaction [32], referred to as \(V_{MU}\)+LS, is used for the proton-neutron interaction between PO5 and NO7 and the neutron-neutron interaction between the NO6 and NO7 orbitals.
It is very challenging to perform large-scale shell-model calculations in the entire Pb region without truncation. In our study, we have performed four sets of calculations: (i) the full set, i.e., without truncation, (ii) truncation I, (iii) truncation II, and (iv) a set for nuclei with \(N>126\). In the case of the full set, with \(A\geq 199\) and \(N\leq 126\), we have performed the calculations without any truncation in the model space \(50\leq Z\leq 82\) and \(82\leq N\leq 126\). When we move towards the lower mass region, the dimension starts increasing with increasing hole number, making the shell-model calculations difficult. Thus, for nuclei with \(195\leq A\leq 198\), we have used the \(\pi(0g_{7/2}^{7-8},1d_{5/2}^{0-6},1d_{3/2}^{0-4},2s_{1/2}^{1-2},0h_{11/2}^{1-12})\otimes\nu(0h_{9/2}^{9-10},1f_{7/2}^{7-8},1f_{5/2}^{0-6},2p_{3/2}^{0-4},2p_{1/2}^{0-2},0i_{13/2}^{13-14})\) restrictions, defined as truncation I. Similarly, for the \({}^{192}\)Ir, \({}^{194}\)Ir, \({}^{195}\)Ir and \({}^{194}\)Os nuclei, we have employed the \(\pi(0g_{7/2}^{7-8},1d_{5/2}^{0-6},1d_{3/2}^{0-2},2s_{1/2}^{1-2},0h_{11/2}^{6-12})\otimes\nu(0h_{9/2}^{9-10},1f_{7/2}^{6-8},1f_{5/2}^{0-6},2p_{3/2}^{0-4},2p_{1/2}^{0-0},0i_{13/2}^{13-14})\) restrictions, defined as truncation II. When the neutron
number is \(N>126\), beta decay proceeds via core breaking; therefore, particle-hole excitations are needed in these cases. We have extended our model space, i.e., to \(50\leq Z\leq 126\) and \(82\leq N\leq 184\), for nuclei with neutron number above 126 and used two-particle two-hole (\(2p2h\)) excitations, i.e., exciting one proton and one neutron simultaneously, for these nuclei.
### Formalism for allowed and forbidden \(\beta\) decay transitions
In this work, we have calculated beta-decay properties for both allowed and forbidden beta-decay transitions. Therefore, the beta-decay formalism [33; 34] for both types of decay is given here in brief. This formalism is based on the impulse approximation [28], i.e., the assumption that, at the instant of beta decay, the decaying nucleon does not interact with the remaining nucleons but feels only the weak interaction.
In beta decay, the total half-life can be obtained from the partial half-lives as
\[\frac{1}{T_{1/2}}=\sum_{k}\frac{1}{t_{1/2}^{(k)}}, \tag{2}\]
where \(t_{1/2}^{(k)}\) denotes the partial half-life to the final state \(k\), which can be written in the form
\[t_{1/2}^{(k)}=\frac{T_{1/2}}{B^{(k)}}, \tag{3}\]
where \(B^{(k)}\) is the branching probability to the final state \(k\). Further, in beta decay theory, the \(\log ft\) values are computed and compared. The expression for the \(\log ft\) value is given by
\[\text{log}ft\equiv\text{log}(f_{0}t_{1/2}), \tag{4}\]
where \(f_{0}\) is the phase space factor which has the form
\[f_{0}=\int_{1}^{w_{0}}pw_{e}(w_{0}-w_{e})^{2}F_{0}(Z,w_{e})dw_{e}. \tag{5}\]
Here \(Z\) denotes the proton number of the daughter nucleus and \(w_{0}=W_{0}/m_{e}c^{2}\), \(w_{e}=W_{e}/m_{e}c^{2}\) and \(p=(w_{e}^{2}-1)^{1/2}\) are the dimensionless quantities introduced to make \(f_{0}\) dimensionless and to ease the integration. \(W_{e}\) is the energy of the emitted lepton, and \(W_{0}\) is the endpoint energy, i.e., maximum energy attained by the emitted lepton. The quantity \(F_{0}(Z,W_{e})\) denotes the Fermi function, which is introduced in the expression due to the Coulombic force between the daughter nucleus and the beta particles. The expression for the Fermi function is given later in the text.
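As a rough numerical illustration, Eq. (5) can be evaluated by straightforward quadrature; the minimal sketch below does so in the simplifying limit \(F_{0}(Z,w_{e})\approx 1\) (the \(Z\to 0\) limit), whereas the actual calculations use the Fermi function defined below:

```python
# Numerical sketch of the phase-space factor f0 of Eq. (5), taking
# F0(Z, w_e) ~ 1 (Z -> 0 limit) purely for illustration.
import numpy as np
from scipy.integrate import quad

def f0(w0):
    """f0 = int_1^w0 p * w_e * (w0 - w_e)^2 * F0 dw_e, with F0 = 1 here."""
    integrand = lambda w: np.sqrt(w * w - 1.0) * w * (w0 - w) ** 2
    value, _ = quad(integrand, 1.0, w0)
    return value

# Illustrative endpoint W0 = 2 MeV => w0 = W0 / (m_e c^2), m_e c^2 = 0.511 MeV.
w0 = 2.0 / 0.511
print(f"f0(w0 = {w0:.3f}) = {f0(w0):.3f}")
```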
To evaluate the \(\log ft\) value in Eq. (4), the other required quantity is the partial half-life \(t_{1/2}\), which is given as
\[t_{1/2}=\frac{\kappa}{\tilde{C}}, \tag{6}\]
where the constant \(\kappa\) is evaluated as [35]
\[\kappa=\frac{2\pi^{3}\hbar^{7}\text{ln}(2)}{m_{e}^{5}c^{4}(G_{\text{F}}\text {cos}\theta_{\text{C}})^{2}}=6289\text{ s}, \tag{7}\]
where \(\theta_{\text{C}}\) is the Cabibbo angle, which is defined as the mixing angle between the two generations of quarks. The quantity \(\tilde{C}\) in Eq. (6) is the dimensionless integrated shape function, which is given by
\[\tilde{C}=\int_{1}^{w_{0}}C(w_{e})(w_{e}^{2}-1)^{1/2}w_{e}(w_{0}-w_{e})^{2}F_{ 0}(Z,w_{e})dw_{e}. \tag{8}\]
Here, the term \(C(w_{e})\) is the shape factor that contains the nuclear structure information; in the case of forbidden beta decay, \(C(w_{e})\) has the form
\[\begin{split} C(w_{e})=\sum_{k_{e},k_{\nu},K}\lambda_{k_{e}}\Big{[} M_{K}(k_{e},k_{\nu})^{2}+m_{K}(k_{e},k_{\nu})^{2}\\ -\frac{2\gamma_{k_{e}}}{k_{e}w_{e}}M_{K}(k_{e},k_{\nu})m_{K}(k_{e },k_{\nu})\Big{]}.\end{split} \tag{9}\]
Here the smallest possible value of \(K\), i.e., the forbiddenness order, is chosen, as most contributions come from the terms with the minimal transfer of angular momentum. Thus, \(K=K_{min},K_{min}+1\), where \(K_{min}=|J_{i}-J_{f}|\), with \(J_{i}\) and \(J_{f}\) being the initial and final angular momenta, respectively. The quantities \(k_{e}\) and \(k_{\nu}\) denote the positive integers emerging from the partial-wave expansion of the electron and neutrino wave functions. There are two possibilities for the smallest values of the sum \(k_{e}+k_{\nu}\), i.e., \(k_{e}+k_{\nu}=K+1\) and \(k_{e}+k_{\nu}=K+2\), as contributions from higher-order terms in the leptonic wave-function expansion are small. The quantities \(M_{K}(k_{e},k_{\nu})\) and \(m_{K}(k_{e},k_{\nu})\) are complicated combinations of the different form factors containing the nuclear structure information and the leptonic phase-space factors; more information about these can be found in [36; 37]. \(\gamma_{k_{e}}\) and \(\lambda_{k_{e}}\) (the Coulomb function) are auxiliary quantities, which are given as
\[\gamma_{k_{e}}=\sqrt{k_{e}^{2}-(\alpha Z)^{2}}\quad\text{and}\ \ \lambda_{k_{e}}=F_{k_{e}-1}(Z,w_{e})/F_{0}(Z,w_{e}),\]
where \(\alpha=1/137\) is the fine structure constant and \(F_{k_{e}-1}(Z,w_{e})\) is the generalized Fermi function [36] which is given as
\[F_{k_{e}-1}(Z,w_{e})=4^{k_{e}-1}(2k_{e})(k_{e}+\gamma_{k_{e}})[(2k_{e}-1)!!]^{2}e^{\pi y}\left(\frac{2p_{e}R}{\hbar}\right)^{2(\gamma_{k_{e}}-k_{e})}\left(\frac{|\Gamma(\gamma_{k_{e}}+iy)|}{\Gamma(1+2\gamma_{k_{e}})}\right)^{2}. \tag{10}\]
The auxiliary quantity \(y\) is defined as \(y=(\alpha Zw_{e}/p_{e}c)\), where \(p_{e}\) is the momentum of the emitted lepton.
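Eq. (10) can be transcribed into code directly; the minimal sketch below evaluates \(F_{k_{e}-1}\) and the Coulomb function \(\lambda_{k_{e}}\) for illustrative kinematics, with energies in units of \(m_{e}c^{2}\), momenta in units of \(m_{e}c\), and the nuclear radius \(R\) in units of the reduced electron Compton wavelength (all numerical inputs are placeholders):

```python
# Sketch of the generalized Fermi function of Eq. (10) and the Coulomb
# function lambda_{k_e} = F_{k_e-1}/F_0, with w_e in m_e c^2, p_e in m_e c,
# and R in units of the reduced electron Compton wavelength.
import numpy as np
from scipy.special import gamma, factorial2

ALPHA = 1.0 / 137.0  # fine-structure constant, as in the text

def generalized_fermi(k_e, Z, w_e, R):
    p_e = np.sqrt(w_e**2 - 1.0)                     # electron momentum
    gamma_k = np.sqrt(k_e**2 - (ALPHA * Z) ** 2)    # auxiliary gamma_{k_e}
    y = ALPHA * Z * w_e / p_e                       # auxiliary quantity y
    prefac = (4.0 ** (k_e - 1) * (2 * k_e) * (k_e + gamma_k)
              * factorial2(2 * k_e - 1) ** 2 * np.exp(np.pi * y))
    return (prefac * (2.0 * p_e * R) ** (2.0 * (gamma_k - k_e))
            * (abs(gamma(gamma_k + 1j * y)) / gamma(1.0 + 2.0 * gamma_k)) ** 2)

# Illustrative numbers: daughter Z = 82, w_e = 3; R ~ 7 fm / 386 fm ~ 0.018.
Z, w_e, R = 82, 3.0, 0.018
F0 = generalized_fermi(1, Z, w_e, R)
F1 = generalized_fermi(2, Z, w_e, R)
print(f"F0 = {F0:.3f}, lambda_2 = {F1 / F0:.3e}")
```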
Figure 1: Comparison of calculated and experimental average shape factors for (a) the full set, (b) truncations I and II, and (c) the \(N>126\) set, using different values of the quenching factors for the axial-vector and vector coupling constants calculated via the chi-squared fitting method.

Further, in the case of allowed beta decay, the shape factor is given by \(C(w_{e})=B(\text{F})+B(\text{GT})\), where \(B(\text{F})\) is the Fermi reduced transition probability, which hardly depends on the nuclear structure, and \(B(\text{GT})\) is the nuclear-structure-dependent term known as the Gamow-Teller reduced transition probability; these quantities are given as
\[B_{\rm F}\equiv\frac{g_{V}^{2}}{2J_{i}+1}|\mathcal{M}_{\rm F}|^{2},B_{\rm GT} \equiv\frac{g_{A}^{2}}{2J_{i}+1}|\mathcal{M}_{\rm GT}|^{2}, \tag{11}\]
where \(g_{V}\) and \(g_{A}\) are the vector and axial-vector coupling constants, respectively, and the \(\mathcal{M}_{\rm F}\) and \(\mathcal{M}_{\rm GT}\) stands for the Fermi and Gamow-Teller nuclear matrix elements, respectively. The Fermi and Gamow-Teller nuclear matrix elements are given as
\[\mathcal{M}_{\rm F} \equiv(\xi_{f}J_{f}\parallel\tau_{-}\parallel\xi_{i}J_{i})\] \[=\delta_{J_{i}J_{f}}\sum_{ab}\mathcal{M}_{\rm F}(ab)(\xi_{f}J_{f} \parallel[c_{a}^{\dagger}\tilde{c}_{b}]_{0}\parallel\xi_{i}J_{i}), \tag{12}\]
\[\mathcal{M}_{\rm GT} \equiv(\xi_{f}J_{f}\parallel\sigma\tau_{-}\parallel\xi_{i}J_{i})\] \[=\sum_{ab}\mathcal{M}_{\rm GT}(ab)(\xi_{f}J_{f}\parallel[c_{a}^{ \dagger}\tilde{c}_{b}]_{1}\parallel\xi_{i}J_{i}), \tag{13}\]
where the quantities \(\xi_{f}\) and \(\xi_{i}\) denote all quantum numbers needed to specify the final and initial states, respectively, and \(\tau_{-}|n\rangle=|p\rangle\). The quantities \(\mathcal{M}_{\rm F}(ab)\) and \(\mathcal{M}_{\rm GT}(ab)\) are the Fermi and Gamow-Teller reduced single-particle matrix elements [38], which are nuclear-model independent. The terms \((\xi_{f}J_{f}\parallel[c_{a}^{\dagger}\tilde{c}_{b}]_{0}\parallel\xi_{i}J_{i})\) and \((\xi_{f}J_{f}\parallel[c_{a}^{\dagger}\tilde{c}_{b}]_{1}\parallel\xi_{i}J_{i})\) are nuclear-model dependent and are termed one-body transition densities (OBTDs). The OBTDs are calculated using the shell-model code KSHELL [39].
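Schematically, Eqs. (11)-(13) amount to contracting the nuclear-model-independent single-particle matrix elements with the OBTDs; a minimal sketch with placeholder numbers (not actual KSHELL output) is given below:

```python
# Toy sketch of Eqs. (11)-(13): B(GT) from single-particle matrix elements
# M_GT(a,b) and one-body transition densities (OBTDs). All numbers are
# placeholders, not output of an actual KSHELL run.
import numpy as np

g_A_eff = 0.38   # quenched axial-vector coupling used for the full set
J_i = 2.5        # initial angular momentum (e.g., a 5/2- parent state)

# One entry per (a, b) single-particle orbital pair contributing to the decay.
m_gt_sp = np.array([1.20, -0.45, 0.30])   # reduced single-particle GT elements
obtd = np.array([0.08, 0.15, -0.02])      # OBTDs from the wave functions

M_GT = np.dot(m_gt_sp, obtd)                       # Eq. (13)
B_GT = g_A_eff**2 * M_GT**2 / (2.0 * J_i + 1.0)    # Eq. (11)
print(f"M_GT = {M_GT:.4f}, B(GT) = {B_GT:.3e}")
```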
Further, in the case of unique forbidden transitions, the phase-space factor \(f_{0}\) in Eq. (4) is replaced by \(f_{Ku}\), where \(K\) denotes the forbiddenness order and \(u\) stands for a unique transition. Thus, the phase-space factor for the \(K^{\rm th}\) unique forbidden transition is given by
\[f_{Ku}=\Big{(}\frac{3}{4}\Big{)}^{K}\frac{(2K)!!}{(2K+1)!!}\] \[\times\int_{1}^{w_{0}}C_{Ku}(w_{e})(w_{e}^{2}-1)^{1/2}w_{e}(w_{0} -w_{e})^{2}F_{0}(Z_{f},w_{e})dw_{e}, \tag{14}\]
where \(C_{Ku}(w_{e})\) is the shape function for \(K^{\rm th}\) forbidden unique transition which has the form
\[C_{Ku}(w_{e})=\sum_{k_{e}+k_{\nu}=K+2}\frac{\lambda_{k_{e}}(w_{e}^{2}-1)^{(k_ {e}-1)}(w_{0}-w_{e})^{2(k_{\nu}-1)}}{(2k_{e}-1)!(2k_{\nu}-1)!}. \tag{15}\]
For first forbidden unique transitions, \(f_{1u}=12f_{K=1,u}\).
### Quenching factor
The single-particle states other than those included in our shell-model (SM) model space also play a direct or indirect role in the nuclear properties of a particular nucleus. Thus, because our model space is truncated, the results need to be renormalized before comparison with the experimental data. Usually, these renormalizations are absorbed into the values of the weak coupling constants. The bare values of the weak coupling constants, i.e., the vector and axial-vector coupling constants, are \(g_{V}^{free}=1.00\) and \(g_{A}^{free}=1.27\), respectively. The model-space truncation and other nuclear-medium effects significantly impact these free-nucleon values [40; 41]. Thus, we have calculated quenching factors and defined effective weak coupling constants such that \(g_{V}^{eff}=q_{V}g_{V}^{free}\) and \(g_{A}^{eff}=q_{A}g_{A}^{free}\). The values of the quenching factors are obtained by comparing experimental and theoretical average shape factor values and using the chi-squared fitting method to minimize the rms deviation of this comparison [42; 43]. The experimental average shape factor values have been calculated from the experimental \(\log ft\) values. The average shape factor is given by
\[\overline{C(w_{e})}(\text{fm}^{2K})=\frac{6289\lambda_{\rm Ce}^{2K}}{ft}, \tag{16}\]
where \(\lambda_{\rm Ce}=386.15926796\) fm denotes the reduced Compton wavelength of electrons and \(ft\) is obtained from Eq. (4).
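Eq. (16) can be checked directly against the tabulated entries; for instance, for the first-forbidden (\(K=1\)) ground-state transition \({}^{206}\)Tl\((0^{-})\rightarrow{}^{206}\)Pb\((0^{+}_{1})\) with the experimental \(\log ft=5.1775\), the following minimal sketch reproduces the quoted \([\overline{C(w_{e})}]^{1/2}\approx 78.9\) fm:

```python
# Check of Eq. (16): average shape factor from a log(ft) value.
# Example: 206Tl(0-) -> 206Pb(0+_1), 1st forbidden (K = 1), expt. logft = 5.1775.
LAMBDA_CE = 386.15926796   # reduced Compton wavelength of the electron (fm)
KAPPA = 6289.0             # beta-decay constant of Eq. (7), in seconds

def avg_shape_factor_sqrt(logft, K):
    """Return [C(w_e)]bar^(1/2) in fm^K from Eq. (16)."""
    ft = 10.0 ** logft
    return (KAPPA * LAMBDA_CE ** (2 * K) / ft) ** 0.5

print(f"{avg_shape_factor_sqrt(5.1775, K=1):.3f} fm")  # ~78.9 fm, as in Table 1
```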
Further, in the case of \(\Delta J=0^{-}\) transitions, the matrix element of the time component of the axial-vector current, i.e., of \(\gamma_{5}=\mathbf{\sigma}\cdot\mathbf{p_{e}}\), is enhanced over the impulse approximation with the help of the mesonic enhancement factor [44], represented by \(\epsilon_{\rm MEC}\). We have taken \(\epsilon_{\rm MEC}=2.01\) in our calculations, which is the value recommended for the \({}^{208}\)Pb region in Ref. [29].
## III Results and discussion
In this work, the \(\log ft\) values for the nuclei in the south region of \({}^{208}\)Pb are calculated. These nuclei include the Os\(\rightarrow\)Ir, Ir\(\rightarrow\)Pt, Pt\(\rightarrow\)Au, Au\(\rightarrow\)Hg, Hg\(\rightarrow\)Tl, and Tl\(\rightarrow\)Pb chains. First, the \(\log ft\) values for the mentioned nuclei have been calculated using the bare values of the weak coupling constants, i.e., \(g_{A}^{free}=1.27\), \(g_{V}^{free}=1.00\). From these \(\log ft\) values, the average shape factor values are calculated and compared with the experimental average shape factor values, as shown in Fig. 1. Further, the quenching factor is determined with the help of the chi-squared fitting method, and the average shape factor values calculated with the quenching factor included are also compared with the experimental data in Fig. 1. The quenching factor is applied to both the axial-vector and the vector coupling constants, because including it in both weak coupling constants gives more reliable results than including it in the axial-vector coupling constant only [10].
\begin{table}
\begin{tabular}{c c c c c c c c} & Transition & Decay mode & Energy & \(\log ft\) & & \(\overline{[C(w_{e})]}^{1/2}\) & \\ \hline Initial (\(J_{i}^{x}\)) & Final (\(J_{f}^{x}\)) & & (keV) & Expt. & SM & Expt. & SM \\ \hline \({}^{194}\)Os(\(0^{+}\)) & \({}^{194}\)Ir(\(1_{-}^{-}\)) & 1st FNU & 0.0 & 7.6(2) & 8.378 & 4.854 & 1.981 \\ & \({}^{194}\)Ir(\(0_{1}^{-}\)) & 1st FNU & 43.119(1) & 6.3(1) & 9.062 & 21.680 & 0.902 \\ \({}^{195}\)Os(\(3/2^{-}\)) [27] & \({}^{195}\)Ir(\(3/2_{1}^{+}\)) & 1st FNU & 0.0 & 5.99(4)\({}^{4}\)-6.25(18)\({}^{b}\) & 8.782 & 30.978\({}^{*}\)-2.964\({}^{b}\) & 1.245 \\ & \({}^{195}\)Ir(\(1/2_{1}^{+}\)) & 1st FNU & 69.181(1) & 6.26(21)\({}^{c}\)-7.51(69)\({}^{a}\) & 10.932 & 22.702\({}^{c}\)-5.383\({}^{a}\) & 0.105 \\ & \({}^{195}\)Ir(\(5/2_{1}^{+}\)) & 1st FNU & 175.221(2) & \(<\)7.4 & 9.651 & 6.110 & 0.457 \\ & \({}^{195}\)Ir(\(3/2_{1}^{+}\)) & 1st FNU & 233.512(2) & 7.51(50) & 9.840 & 5.383 & 0.368 \\ & \({}^{195}\)Ir(\(3/2_{3}^{+}\)) & 1st FNU & 286.521(2) & 6.59(15) & 8.380 & 15.526 & 1.976 \\ & \({}^{195}\)Ir(\((5/2_{2}^{+})\)) & 1st FNU & 412.038(4) & 7.01(14) & 9.381 & 9.573 & 0.625 \\ & \({}^{195}\)Ir(\((1/2_{1}^{+})\)) & 1st FNU & 428.672(4) & 7.31(15) & 8.326 & 6.777 & 2.104 \\ & \({}^{195}\)Ir(\((3/2_{1}^{+})\)) & 1st FNU & 428.672(4) & 7.31(15) & 8.425 & 6.777 & 1.877 \\ & \({}^{195}\)Ir(\(3/2_{1}^{+}\)) & 1st FNU & 539.206(5) & 6.77(16) & 7.933 & 12.620 & 3.309 \\ & \({}^{195}\)Ir(\(5/2_{3}^{+}\)) & 1st FNU & 581.795(7) & 7.56(22) & 7.714 & 5.082 & 4.258 \\ \({}^{196}\)Os(\(0^{+}\)) & \({}^{196}\)Ir(\((0_{1}^{-})\)) & 1st FNU & 0.0 & \(>\)5.6 & 7.156 & 48.535 & 8.090 \\ & \({}^{196}\)Ir(\((1_{1}^{-})\)) & 1st FNU & 126.20(20) & \(\leq\)6.6 & 6.998 & 15.348 & 9.707 \\ & \({}^{196}\)Ir(\((1_{2}^{-})\)) & 1st FNU & 207.04(16) & NA & 7.947 & NA & 3.254 \\ \({}^{197}\)Os(\((3/2^{-})\)) & \({}^{197}\)Ir(\(3/2_{1}^{+}\)) & 1st FNU & 0.0 & \(\leq\) 5.93(4) & 6.375 & 33.194 & 19.892 \\ & \({}^{197}\)Ir(\(1/2_{1}^{+}\)) & 1st FNU & 52(5) & 7.5(2) & 7.171 & 5.446 & 7.952 \\ & \({}^{197}\)Ir(\((5/2_{1}^{+})\)) & 1st FNU & 606(5) & 7.1(2) & 6.102 & 8.631 & 27.231 \\ \({}^{197}\)Os(\((5/2^{-})\)) [24] & \({}^{197}\)Ir(\(3/2_{1}^{+}\)) & 1st FNU & 0.0 & \(\leq\) 5.93(4) & 6.058 & 33.194 & 28.647 \\ & \({}^{197}\)Ir(\((2_{1}^{+})\)) & 1st FNU & 51.4(2) & 7.5(2) & 11.258 & 5.446 & 0.072 \\ & \({}^{197}\)Ir(\(5/2_{1}^{+}\)) & 1st FNU & 599.7(3) & 7.1(2) & 5.833 & 8.631 & 37.130 \\ \({}^{198}\)Os(\(0^{+}\)) [24] & \({}^{198}\)Ir(\((0_{1})^{-}\)) & 1st FNU & 0.0 & \(\geq\)5.29(10) & 6.012 & 69.352 & 30.189 \\ & \({}^{198}\)Ir(\((1_{1})^{-}\)) & 1st FNU & 0.0 & \(\geq\)5.29(10) & 5.961 & 69.352 & 32.020 \\ \({}^{192}\)Ir(\(4^{+}\)) & \({}^{192}\)Pt(\(3_{1}^{+}\)) & Allowed & 920.91854(21) & 8.260(7) & 10.279 & 0.006 & 0.001 \\ & \({}^{192}\)Pt(\(3_{1}^{-}\)) & 1st FNU & 1378.03(3) & 8.20(5) & 7.407 & 2.432 & 6.059 \\ & \({}^{192}\)Pt(\((5_{1})^{-}\)) & 1st FNU & 1383.99(15) & 9.51(20) & 10.986 & 0.538 & 0.098 \\ & \({}^{192}\)Pt(\(3_{2}^{+}\)) & Allowed & 1406.37(6) & 8.83(8) & 9.887 & 0.003 & 0.001 \\ \({}^{192}\)Ir(\(1^{-}\)) & \({}^{192}\)Pt(\(0_{1}^{+}\)) & 1st FNU & 0.0 & 8.8 & 7.296 & 1.219 & 6.885 \\ & \({}^{192}\)Pt(\(2_{1}^{+}\)) & 1st FNU & 316.50645(16) & 8.3 & 7.561 & 2.168 & 5.079 \\ & \({}^{192}\)Pt(\(2_{2}^{+}\)) & 1st FNU & 612.46318(18) & 8.4 & 7.936 & 1.932 & 3.298 \\ \({}^{194}\)Ir(\(1_{1}^{-}\)) & \({}^{194}\)Pt(\(0_{1}^{+}\)) & 1st FNU & 0.0 & 8.22(1) & 8.317 & 2.377 & 2.125 \\ \({}^{194}\)Pt(\(2_{2}^{+}\)) & 1st FNU & 328.475(7) & 8.9(1) & 8.541 & 
1.087 & 1.643 \\ & \({}^{194}\)Pt(\(2_{2}^{+}\)) & 1st FNU & 622.023(7) & 9.5(1) & 8.623 & 0.545 & 1.495 \\ & \({}^{194}\)Pt(\(3_{1}^{+}\)) & 1st FNU & 922.773(9) & 10.51\({}^{u}\)(1) & 11.627\({}^{u}\) & 0.172 & 0.047 \\ & \({}^{194}\)Pt(\(0_{2}^{+}\)) & 1st FNU & 1267.203(9) & 8.5(1) & 7.150 & 1.722 & 8.149 \\ &
\end{tabular}
\end{table}
\begin{table}
\begin{tabular}{l c c c c c c c} & \({}^{195}\)Ir(3/2\({}^{+}\)) & \({}^{195}\)Pt(1/2\({}^{-}_{1}\)) & 1st FNU & 0.0 & 7.0(3) & 7.093 & 9.684 & 8.700 \\ & \({}^{195}\)Pt(3/2\({}^{-}_{1}\)) & 1st FNU & 98.830(10) & 6.5(4) & 7.496 & 17.221 & 5.470 \\ & \({}^{195}\)Pt(5/2\({}^{-}_{1}\)) & 1st FNU & 129.71(3) & 6.13(17) & 6.522 & 26.367 & 16.792 \\ & \({}^{195}\)Pt(3/2\({}^{-}_{2}\)) & 1st FNU & 211.30(10) & 7.12(12) & 7.673 & 8.434 & 4.461 \\ \({}^{195}\)Pt(13/2\({}^{+}_{1}\)) & 1st FNU & 259.28(12) & 6.52(10) & 6.044 & 16.829 & 29.107 \\ \({}^{195}\)Pt(9/2\({}^{+}_{1}\)) & 1st FNU & 432.03(12) & 6.87(8) & 7.726 & 11.248 & 4.196 \\ \({}^{195}\)Pt(5/2\({}^{-}_{1}\)) & 2nd FU & 455.12(6) & 7.51(18) & 15.488 & 2078.852 & 0.213 \\ & \({}^{195}\)Pt(7/2\({}^{-}_{1}\)) & 2nd FNU & 508.04(6) & 7.50(19) & 12.758 & 2102.923 & 4.941 \\ & \({}^{195}\)Pt((11/2\({}_{1}\))\({}^{+}\)) & 1st FNU & 547.24(12) & 7.54(25) & 8.230 & 5.201 & 2.349 \\ & \({}^{195}\)Pt(9/2\({}^{-}_{1}\)) & Allowed & 562.78(6) & 6.57(8) & 7.592 & 0.041 & 0.013 \\ & \({}^{195}\)Pt((7/2\({}_{2}\))\({}^{-}\)) & 2nd FNU & 695.26(7) & 7.3(5) & 12.200 & 2647.425 & 9.393 \\ & \({}^{195}\)Pt(9/2\({}^{-}_{2}\)) & Allowed & 814.48(5) & 5.18(8) & 9.320 & 0.020 & 0.002 \\ & \({}^{195}\)Pt(9/2\({}^{-}_{3}\)) & Allowed & 895.34(8) & 6.13(8) & 9.418 & 0.068 & 0.002 \\ & \({}^{195}\)Pt(9/2\({}^{-}_{4}\)) & Allowed & 930.67(7) & 5.54(9) & 7.725 & 0.135 & 0.011 \\ \({}^{196}\)Ir(\((0^{-})\)) & \({}^{196}\)Pt(0\({}^{+}_{1}\)) & 1st FNU & 0.0 & 5.75(4) & 5.419 & 40.837 & 59.795 \\ & \({}^{196}\)Pt(2\({}^{+}_{1}\)) & 1st FU & 355.65(19) & 8.8\({}^{18}\)(3) & 11.571\({}^{11u}\) & 1.219 & 0.050 \\ & \({}^{196}\)Pt(2\({}^{+}_{2}\)) & 1st FU & 689.20(24) & \(>\)9.2\({}^{1u}\) & 11.452\({}^{1u}\) & 0.769 & 0.058 \\ & \({}^{196}\)Pt(0\({}^{+}_{1}\)) & 1st FNU & 1135.61(25) & 5.73(10) & 7.755 & 41.789 & 4.061 \\ & \({}^{196}\)Pt(0\({}^{+}_{1}\)) & 1st FNU & 1402.6(3) & 6.56(14) & 6.968 & 16.072 & 10.053 \\ & \({}^{196}\)Pt(0\({}^{+}_{1}\)) & 1st FNU & 1824.1(3) & 6.24(18) & 7.981 & 23.230 & 3.131 \\ \({}^{196}\)Pt(0\({}^{+}_{5}\)) & 1st FNU & 1918.83(25) & 6.09(17) & 7.158 & 27.609 & 8.076 \\ \({}^{198}\)Ir(1\({}^{-}\)) [24] & 198Pt(0\({}^{+}_{1}\)) & 1st FNU & 0.0 & 5.88(15) & 6.220 & 35.161 & 23.775 \\ & \({}^{198}\)Pt(0\({}^{+}_{2}\)) & 1st FNU & 914.52 & 5.32(6) & 7.618 & 66.997 & 4.756 \\ \({}^{202}\)Ir(\((1^{-})\)) & 202Pt(0\({}^{+}_{1}\)) & 1st FNU & 0.0 & \(>\) 5.9 & 8.014 & 34.360 & 3.012 \\ & \({}^{202}\)Pt(\((2^{+}_{1})\)) & 1st FNU & 534.90(20) & NA & 7.560 & NA & 5.084 \\ & \({}^{202}\)Pt(\((4^{+}_{1})\)) & 3rd FU & 1253.6(3) & NA & 12.635 & NA & 2198.315 \\ \({}^{202}\)Ir(\((2^{-})\)) & \({}^{202}\)Pt(\((0^{+}_{1})\)) & 1st FU & 0.0 & \(\geq\) 5.9 & 9.800 & 34.360 & 0.385 \\ & \({}^{202}\)Pt(\((2^{+}_{1})\)) & 1st FNU & 534.90(20) & NA & 8.387 & NA & 1.961 \\ & \({}^{202}\)Pt(\((4^{+}_{1})\)) & 1st FNU & 1253.6(3) & NA & 9.275 & NA & 0.706 \\ \({}^{197}\)Pt(\(1/2^{-}\)) & 1st FNU & 0.0 & 7.36(12) & 7.150 & 6.398 & 8.147 \\ & \({}^{197}\)Au(1/2\({}^{+}_{1}\)) & 1st FNU & 77.35(4) & 6.310(17) & 7.347 & 21.432 & 6.497 \\ & \({}^{197}\)Au(3/2\({}^{+}_{2}\)) & 1st FNU & 268.78(4) & 6.79(5) & 7.457 & 12.333 & 5.721 \\ \({}^{197}\)Pt\({}^{m}\)(13/2\({}^{+}_{1}\)) & 1st FNU & 409.0(8) & 6.75(6) & 7.735 & 12.914 & 4.154 \\ \({}^{199}\)Pt(5/2\({}^{-}\)) & 1st FNU & 0.0 & 6.30(1) & 6.015 & 21.680 & 30.105 \\ & \({}^{199}\)Au(1/2\({}^{+}_{1}\)) & 1st FNU & 77.170(21) & \(>\)8.8\({}^{1u}\) & 12.195\({}^{1u}\) & 1.219 & 0.024 \\ & 
\({}^{199}\)Au(5/2\({}^{+}_{1}\)) & 1st FNU & 317.174(24) & 7.30(1) & 6.660 & 6.856 & 14.324 \\ & \({}^{199}\)Au(3/2\({}^{+}_{2}\)) & 1st FNU & 323.605(25) & 7.62(3) & 7.258 & 4.743 & 7.199 \\ & \({}^{199}\)Au((7/2\({}_{1}\))\({}^{+}_{1}\)) & 1st FNU & 493.76(3) & 8.03(9) & 7.590 & 2.958 & 4
\end{tabular}
\end{table}
\begin{table}
\begin{tabular}{l c c c c c c c} & \({}^{199}\)Hg(1/2\({}^{-}_{1}\)) & 1st FNU & 0.0 & 7.50(9) & 7.277 & 5.446 & 7.040 \\ & \({}^{199}\)Hg(5/2\({}^{-}_{1}\)) & 1st FNU & 158.37859(10) & 5.850(9) & 5.799 & 36.396 & 38.586 \\ & \({}^{199}\)Hg(3/2\({}^{-}_{1}\)) & 1st FNU & 208.20494(10) & 6.118(9) & 6.141 & 26.733 & 26.040 \\ \({}^{200}\)Au((1\({}^{-}\))) & \({}^{200}\)Hg(0\({}^{+}_{1}\)) & 1st FNU & 0.0 & 6.93(5) & 6.195 & 10.497 & 24.466 \\ & \({}^{200}\)Hg(2\({}^{+}_{1}\)) & 1st FNU & 367.943(10) & 7.85(10) & 8.268 & 3.640 & 2.249 \\ & \({}^{200}\)Hg(0\({}^{+}_{2}\)) & 1st FNU & 1029.344(17) & 8.46(10) & 8.407 & 1.803 & 1.917 \\ & \({}^{200}\)Hg(2\({}^{+}_{2}\)) & 1st FNU & 1254.098(17) & 8.37(11) & 7.162 & 2.000 & 8.036 \\ & \({}^{200}\)Hg(0\({}^{+}_{3}\)) & 1st FNU & 1515.173(18) & NA & 7.275 & NA & 7.058 \\ & \({}^{200}\)Hg(1\({}^{+}_{1}\)) & 1st FNU & 1570.275(17) & 7.12(14) & 6.648 & 8.434 & 14.530 \\ & \({}^{200}\)Hg(2\({}^{+}_{3}\)) & 1st FNU & 1573.665(18) & NA & 7.388 & NA & 6.195 \\ & \({}^{200}\)Hg(2\({}^{+}_{4}\)) & 1st FNU & 1593.434(18) & 5.83(14) & 5.766 & 37.244 & 40.076 \\ & \({}^{200}\)Hg(1\({}^{+}_{2}\)) & 1st FNU & 1630.892(17) & 6.20(15) & 7.781 & 24.325 & 3.942 \\ & \({}^{200}\)Hg(2\({}^{+}_{5}\)) & 1st FNU & 1641.443(17) & 7.55(16) & 7.097 & 5.141 & 8.663 \\ & \({}^{200}\)Hg(1\({}^{+}_{3}\)) & 1st FNU & 1718.304(17) & 7.29(18) & 6.826 & 6.935 & 11.835 \\ & \({}^{200}\)Hg(2\({}^{+}_{6}\)) & 1st FNU & 1730.927(17) & 8.37(24) & 8.660 & 2.000 & 1.432 \\ & \({}^{200}\)Hg(0\({}^{+}_{4}\)) & 1st FNU & 1856.779(18) & 9.38(21) & 7.711 & 0.625 & 4.273 \\ & \({}^{200}\)Hg(2\({}^{+}_{7}\)) & 1st FNU & 1882.859(17) & 6.67(23) & 6.516 & 14.160 & 16.915 \\ & \({}^{200}\)Hg(2\({}^{+}_{8}\)) & 1st FNU & 1972.280(18) & 6.4(3) & 7.299 & 19.322 & 6.860 \\ & \({}^{200}\)Hg(1\({}^{+}_{4}\)) & 1st FNU & 2061.256(18) & 6.1(5) & 6.923 & 27.293 & 10.577 \\ \({}^{200}\)Au\({}^{\rm m}\)(12\({}^{-}\)) & 200Hg(11\({}^{-}_{1}\)) & Allowed & 2641.60(21) & 6.1(3) & 6.676 & 0.071 & 0.036 \\ \({}^{201}\)Au(3/2\({}^{+}\)) & 201Hg(3/2\({}^{-}_{1}\)) & 1st FNU & 0.0 & \(\geq\)5.717 & 6.010 & 42.419 & 30.273 \\ & \({}^{201}\)Hg(1/2\({}^{-}_{1}\)) & 1st FNU & 1.5648(10) & NA & 7.203 & NA & 7.665 \\ & \({}^{201}\)Hg(5/2\({}^{-}_{1}\)) & 1st FNU & 262.738(3) & NA & 5.621 & NA & 47.367 \\ & \({}^{201}\)Hg(3/2\({}^{-}_{2}\)) & 1st FNU & 321.68(14) & \(\approx\)6.7 & 6.764 & 13.679 & 12.712 \\ & \({}^{201}\)Hg(1/2\({}^{-}_{2}\)) & 1st FNU & 167.48(4) & 6.85(8) & 6.919 & 11.510 & 10.628 \\ & \({}^{201}\)Hg((5/2\({}^{-}_{2}\))) & 1st FNU & 384.605(17) & 7.07(8) & 6.839 & 8.934 & 11.659 \\ & \({}^{201}\)Hg(5/2\({}^{-}_{3}\)) & 1st FNU & 4664.41(4) & 7.23(11) & 8.679 & 7.431 & 1.402 \\ \({}^{202}\)Au((1\({}^{-}\))) & 202Hg(0\({}^{+}_{1}\)) & 1st FNU & 0.0 & \(\sim\)5.3 & 5.727 & 68.558 & 41.939 \\ & \({}^{202}\)Hg(2\({}^{+}_{1}\)) & 1st FNU & 439.512(8) & NA & 6.285 & NA & 22.062 \\ & \({}^{202}\)Hg(2\({}^{+}_{2}\)) & 1st FNU & 959.92(5) & 6.9(4) & 6.565 & 10.866 & 15.983 \\ & \({}^{202}\)Hg((1\({}^{+}_{1}\))) & 1st FNU & 1347.92(7) & 5.80(19) & 6.252 & 38.553 & 22.923 \\ \({}^{202}\)Hg((2\({}^{+}_{3}\))) & 1st FNU & 1347.92(7) & 5.80(19) & 6.193 & 38.553 & 24.533 \\ \({}^{202}\)Hg(0\({}^{+}_{2}\)) & 1st FNU & 1411.37(12) & 6.32(20) & 7.032 & 21.186 & 9.333 \\ \({}^{202}\)Hg(0\({}^{+}_{3}\)) & 1st FNU & 1564.78(8) & 5.66(21) & 6.476 & 45.296 & 17.710 \\ \({}^{202}\)Hg(0\({}^{+}_{4}\)) & 1st FNU & 1643.62(10) & 5.63(23) & 6.544 & 46.888 & 16.375 \\ \({}^{202}\)Hg((1\({}^{+}_{2}\))) & 1st FNU & 1746.11(7) & 5.35(24) 
& 7.188 & 64.723 & 7.795 \\ \({}^{202}\)Hg((1\({}^{+}_{1}\))) & Allowed & 1746.11(7) & 5.35(24) & 9.585 & 0.168 & 0.001 \\ \({}^{202}\)Hg((2\({}^{+}_{4}\))) & 1st FNU & 1746.11(7) & 5.35(24) & 5.711 & 64.723 & 42.689 \\ \({}^{202}\)Hg(2\({}^{+}_{5}\)) & 1st FNU & 1852.14(
\end{tabular}
\end{table}
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \({}^{205}\)Hg(1/2\({}^{-}\)) & \({}^{205}\)Tl(1/2\({}^{+}_{1}\)) & 1st FNU & 0.0 & 5.257(11) & 5.240 & 72.037 & 73.456 \\ & \({}^{205}\)Tl(3/2\({}^{+}_{1}\)) & 1st FNU & 203.65(19) & 6.51(21) & 6.133 & 17.024 & 26.265 \\ & \({}^{205}\)Tl(5/2\({}^{+}_{1}\)) & 1st FU & 619.3(3) & 8.73\({}^{1u}\)(22) & 9.271\({}^{1u}\) & 1.321 & 0.709 \\ & \({}^{205}\)Tl(3/2\({}^{+}_{2}\)) & 1st FNU & 1140.6(3) & 7.65(22) & 6.688 & 4.582 & 13.871 \\ & \({}^{205}\)Tl(1/2\({}^{+}_{1}\)) & 1st FNU & 1218.6(4) & 7.03(25) & 6.697 & 9.355 & 13.732 \\ & \({}^{205}\)Tl(3/2\({}^{+}_{3}\)) & 1st FNU & 1340.3(5) & 6.43(22) & 6.159 & 18.666 & 25.499 \\ & \({}^{205}\)Tl(1/2\({}^{+}_{1}\)) & 1st FNU & 1434.0(5) & 5.6(3) & 5.845 & 48.535 & 36.591 \\ \({}^{207}\)Hg(9/2\({}^{+}\)) [15] & 206Tl(1/2\({}^{+}_{1}\)) & 1st FNU & 1348.3(2) & 7.2(4) & 8.279 & 7.692 & 2.220 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{1}\)) & 1st FNU & 2676.0(2) & 7.8(3) & 7.625 & 9.695 & 4.716 \\ & \({}^{207}\)Tl(5/2\({}^{+}_{1}\)) & 1st FU & 2709.3(6) & \(>\)8.7 & 9.840 & 1.368 & 0.368 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{1}\)) & 1st FNU & 2912.6(3) & 6.3(2) & 5.244 & 21.680 & 73.152 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{2}\)) & 1st FNU & 2985.8(3) & 5.42(7) & 6.199 & 59.711 & 24.363 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{1}\)) & 1st FNU & 3013.8(3) & 7.7(2) & 6.386 & 4.326 & 19.645 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{3}\)) & 1st FNU & 3104.9(3) & 5.58(8) & 5.973 & 49.666 & 31.589 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{4}\)) & 1st FNU & 3143.2(3) & 5.95(7) & 5.654 & 32.438 & 45.610 \\ & \({}^{207}\)Tl(5/2\({}^{+}_{2}\)) & 1st FU & 3197.3(5) & \(>\)9.5 & 10.695 & 0.545 & 0.138 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{3}\)) & 1st FNU & 3273.5(2) & 6.34(8) & 6.611 & 20.704 & 15.155 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{5}\)) & 1st FNU & 3296.2(3) & 6.17(8) & 5.756 & 25.180 & 40.538 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{6}\)) & 1st FNU & 3336.5(2) & 5.81(7) & 7.423 & 38.112 & 5.951 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{7}\)) & 1st FNU & 3358.7(2) & 6.01(7) & 6.252 & 30.273 & 22.902 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{4}\)) & 1st FNU & 3430.5(2) & 6.65(8) & 6.931 & 14.490 & 10.489 \\ & \({}^{207}\)Tl(5/2\({}^{+}_{3}\)) & 1st FU & 3493.6(5) & 8.63(9) & 9.976 & 1.483 & 0.315 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{1}\)) & Allowed & 3493.6(5) & 8.63(9) & 11.033 & 0.004 & 0.0002 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{5}\)) & 1st FNU & 3493.6(5) & 8.63(9) & 6.251 & 1.483 & 22.951 \\ & \({}^{207}\)Tl(11/2\({}^{+}_{1}\)) & Allowed & 3569.7(4) & 7.21(10) & 10.902 & 0.020 & 0.0003 \\ & \({}^{207}\)Tl(11/2\({}^{+}_{2}\)) & 1st FNU & 3569.7(4) & 7.21(10) & 6.067 & 7.604 & 28.359 \\ & \({}^{207}\)Tl(9/2\({}^{+}_{8}\)) & 1st FNU & 3581.3(2) & 6.97(8) & 7.664 & 10.024 & 4.507 \\ & \({}^{207}\)Tl(7/2\({}^{+}_{6}\)) & 1st FNU & 3592.4(4) & 7.11(9) & 7.201 & 8.532 & 7.683 \\ & \({}^{207}\)Tl(11/2\({}^{+}_{3}\)) & 1st FNU & 3633.6(3) & 6.34(8) & 5.989 & 20.704 & 31.003 \\ & \({}^{207}\)Tl(11/2\({}^{+}_{4}\)) & 1st FNU & 3644.2(3) & 6.72(9) & 7.585 & 13.368 & 4.939 \\ \({}^{204}\)Tl(2\({}^{-}_{2}\)) & 204Pb(0\({}^{+}_{1}\)) & 1st FU & 0.0 & 10.0980\({}^{1u}\)(15) & 10.888\({}^{1u}\) & 0.273 & 0.110 \\ \({}^{206}\)Tl(0\({}^{-}\)) & 206Pb(0\({}^{+}_{1}\)) & 1st FNU & 0.0 & 5.1775(13) & 5.170 & 78.942 & 79.619 \\ & \({}^{206}\)Pb(2\({}^{+}_{1}\)) & 1st FU & 803.049(25) & 8.601\({}^{u}\)(3) & 9.479\({}^{1u}\) & 1.535 & 0.558 \\ & \({}^{206}\)Pb(0\({}^{+}_{2}\)) & 1st FNU & 1166.4(5) & 5.99(6) & 5.634 & 30.978 & 46.689 \\ \({}^{208}\)Tl(5\({}^{+}_{1}\)) & 208Pb(5\({}^{-}_{1}\)) & 1st FNU & 3197.717(11) 
& 5.61(1) & 5.385 & 47.980 & 62.164 \\ \({}^{208}\)Pb(4\({}^{+}_{1}\)) & 1st FNU & 3475.088(11) & 5.68(1) & 5.729 & 44.265 & 41.830 \\ & \({}^{208}\)Pb(5\({}^{-}_{2}\)) & 1st FNU & 3708.41(7) & 5.38(1) & 5.61
\end{tabular}
\end{table}
Table 1: Comparison between the shell-model (SM) and experimental \(\log ft\) and average shape factor values for the allowed and forbidden \(\beta^{-}\) transitions considered in this work.

These values of the quenching factors have been computed for all three sets of calculations separately, i.e., the full set; truncations I and II; and the \(N>126\) set. The r.m.s. deviation becomes minimal at \((g_{A}^{eff},g_{V}^{eff})=(0.25,1.0)\) and \((1.27,0.50)\) for truncations I and II, respectively, but the minimum is quite shallow: almost any value of \(g_{V}^{eff}\) (\(g_{A}^{eff}\)) could be adopted for truncation I (II), as the corresponding r.m.s. deviations change only within 10% (0.006%). Here, we have adopted the set \((0.25,0.50)\) for truncations I and II, which is smaller than, and close to, the set \((0.38,0.60)\) obtained for the full-configuration case. The effective values of the weak coupling constants thus come out to be \(g_{A}^{eff}=0.38\), \(g_{V}^{eff}=0.60\) for the full set; \(g_{A}^{eff}=0.25\), \(g_{V}^{eff}=0.50\) for truncations I and II; and \(g_{A}^{eff}=0.38\), \(g_{V}^{eff}=0.50\) for \(N>126\).
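For illustration, this chi-squared determination of the quenching factors can be sketched as a simple grid search; in the minimal sketch below the data arrays are placeholders (not the actual data set), and the calculated shape factor is assumed, purely for illustration, to separate linearly into axial-vector and vector pieces:

```python
# Sketch of the chi-squared determination of the quenching factors (q_A, q_V)
# by minimizing the rms deviation between calculated and experimental average
# shape factors. The data arrays are placeholders, not the actual data set.
import numpy as np

# For each transition: experimental shape factor and (assumed, for this sketch)
# separate axial-vector and vector pieces, so C_calc = q_A*C_A + q_V*C_V.
c_expt = np.array([30.9, 21.7, 5.4, 48.5])
c_axial = np.array([60.0, 45.0, 9.0, 90.0])
c_vector = np.array([20.0, 12.0, 4.0, 35.0])

q_grid = np.linspace(0.05, 1.0, 96)
best = min(
    ((np.sqrt(np.mean((qa * c_axial + qv * c_vector - c_expt) ** 2)), qa, qv)
     for qa in q_grid for qv in q_grid),
    key=lambda t: t[0],
)
print(f"rms = {best[0]:.3f} at (q_A, q_V) = ({best[1]:.2f}, {best[2]:.2f})")
```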
Now, using these effective values of the weak coupling constants, the \(\log ft\) values for the mentioned nuclei are calculated and shown in Table 1. The discussion of the \(\log ft\) values for the different chains is given below.
### \(\log ft\) Results
#### iv.1.1 **Os\(\rightarrow\)Ir**
In the case of the Os\(\rightarrow\)Ir chain, the \(\log ft\) values corresponding to the \(\beta^{-}\) decay of \({}^{194,195,196,197,198}\)Os have been calculated. Of these isotopes, the \(\log ft\) values for \({}^{194}\)Os have been computed using truncation II, and truncation I is used for the rest. For the \({}^{194}\)Os(\(0^{+}\)) \(\rightarrow\) \({}^{194}\)Ir(\(1_{1}^{-}\)) transition, the SM-calculated \(\log ft\) value is 8.378, which is not very far from the experimental value, i.e., 7.6(2). Similarly, for the \({}^{195}\)Os(\(3/2^{-}\)) \(\rightarrow\) \({}^{195}\)Ir(\(5/2_{3}^{+}\)) transition, the SM \(\log ft\) value is 7.714 and the experimental value is 7.56(22). The \(\log ft\) results for \({}^{195}\)Os deviate slightly for some states, while for others they are very close to the experimental data. There are also some energy levels in these isotopes where the spin parity is not confirmed experimentally and more than one spin-parity state is tentatively assigned. We have computed \(\log ft\) values for all of the possible spin-parity states using SM calculations and compared them with the corresponding experimental data; based on this comparison, we can assign one particular spin parity to these states. For instance, the ground-state spin and parity are not yet confirmed experimentally for the \({}^{197}\)Os isotope: (\(3/2^{-}\)) and (\(5/2^{-}\)) are tentatively assigned. Thus, SM \(\log ft\) values corresponding to both spin parities of the parent nucleus have been calculated. In the beta decay of \({}^{198}\)Os, two spin-parity states are possible for the ground state of the daughter nucleus (\({}^{198}\)Ir). The shell-model \(\log ft\) values corresponding to both spin-parity states are computed; still, the calculated results for the two are very close to each other, and thus it is difficult to assign a spin parity at this level. The deviation in some results could be due to the adopted truncations and to the experimentally unconfirmed spin-parity states of the nuclei concerned.
#### iv.1.2 **Ir\(\rightarrow\)Pt**
For the Ir\(\rightarrow\)Pt chain, the \(\log ft\) values corresponding to \({}^{192,192m,194,195,195m,196,198,202}\)Ir are computed. Here, the full model space is considered for \({}^{202}\)Ir, truncation I is used for the \({}^{196,198}\)Ir isotopes, and truncation II is used for the \({}^{192,192m,194,195,195m}\)Ir isotopes. For the \({}^{194}\)Ir(\(1^{-}\))\(\rightarrow\)\({}^{194}\)Pt(\(0_{1}^{+}\)) transition, the calculated \(\log ft\) value is 8.317, which is close to the experimental value, i.e., 8.22(1). Similarly, for the \({}^{195}\)Ir(\(3/2^{+}\))\(\rightarrow\)\({}^{195}\)Pt(\(1/2_{1}^{-}\)) transition, the calculated \(\log ft\) value is 7.093 and the experimental value is 7.0(3). We have also calculated \(\log ft\) values with the shell model where experimental data are unavailable. Further, two spin-parity states are tentatively assigned experimentally for the ground state of the \({}^{202}\)Ir isotope. The shell-model \(\log ft\) values corresponding to both spin-parity states have therefore been calculated, and the results are shown in Table 1. Assigning one particular spin-parity state to the ground state of \({}^{202}\)Ir from the corresponding \(\log ft\) values is difficult; alternatively, by analyzing the half-life values (reported later in the text), one can infer that \(1^{-}\) is more suitable for the ground state of \({}^{202}\)Ir. Additionally, there are a few allowed and forbidden unique transitions in the Ir chain for which the shell-model and experimental \(\log ft\) values differ significantly. This may be because the quenching factor for these transitions needs to be evaluated separately. Overall, the SM \(\log ft\) values for this chain show good agreement with experiment.
#### iv.1.3 **Pt\(\rightarrow\)Au**
Moving on to the Pt\(\rightarrow\)Au chain, the \(\log ft\) values corresponding to the \(\beta^{-}\) decay of the \({}^{197,197m,199,202}\)Pt isotopes have been computed, where the full model space is used for the SM calculations of \({}^{199,202}\)Pt and truncation I is used for \({}^{197,197m}\)Pt. For the \({}^{197}\)Pt(\(1/2^{-}\))\(\rightarrow\)\({}^{197}\)Au(\(3/2_{1}^{+}\)) transition, the calculated SM \(\log ft\) value is 7.150, which is close to the experimental value, i.e., 7.36(12). Similarly, for the \({}^{197}\)Pt\({}^{m}\)(\(13/2^{+}\))\(\rightarrow\)\({}^{197}\)Au(\(1/2_{1}^{-}\)) transition, the SM \(\log ft\) value is 7.735 and the experimental value is 6.75(6). In the same manner, for the \({}^{199}\)Pt(\(5/2^{-}\))\(\rightarrow\)\({}^{199}\)Au(\(3/2_{1}^{+}\)) transition, the shell-model and experimental \(\log ft\) values are close to each other. In the \({}^{199}\)Au isotope, more than one spin-parity state is tentatively assigned experimentally at some energy levels. Thus, \(\log ft\) values for all possible spin-parity states are calculated using SM calculations and compared with the corresponding experimental data. Since the SM \(\log ft\) results do not differ significantly from one another, we cannot confirm a specific spin-parity state, but we can rule out some spin-parity states based on the \(\log ft\) comparison. For instance, at the 1103.99(13) keV energy level, \((7/2^{+}_{2})\) can be discarded, as the \(\log ft\) result for this spin-parity state is much higher than the others. Similarly, \((3/2^{+}_{6})\) can be discarded at the 1159.01(7) keV energy level, and \((3/2^{+}_{7})\) can be discarded at the 1249.4(3) keV energy level. Moving forward, for the \({}^{202}\)Pt isotope, the SM-calculated \(\log ft\) value is very low compared to the experimental one. This is because we have considered only one transition for \({}^{202}\)Pt, and the spin parity of the daughter nucleus is not yet confirmed experimentally. Also, there are some forbidden unique transitions in the Pt chain for which the SM \(\log ft\) values are very high compared to the experimental ones.
#### iv.1.4 **Au\(\rightarrow\)Hg**
In the case of the Au\(\rightarrow\)Hg chain, the \(\log ft\) values corresponding to the \(\beta^{-}\) decay of the \({}^{196,198,199,200,200m,201,202,203,204}\)Au isotopes have been calculated. Truncation I is used for the \({}^{196,198}\)Au isotopes, and the full model space is used for the \({}^{199,200,200m,201,202,203,204}\)Au isotopes. For the \({}^{198}\)Au\((2^{-})\rightarrow{}^{198}\)Hg\((0^{+}_{1})\) transition, the SM \(\log ft\) value is \(12.583^{1u}\), which is close to the experimental value, i.e., \(12.28^{1u}(9)\). Similarly, for the \({}^{199}\)Au\((3/2^{+})\rightarrow{}^{199}\)Hg\((5/2^{-}_{1})\) transition, the SM \(\log ft\) value is 5.799 and the experimental value is 5.850(9). The SM \(\log ft\) values corresponding to the beta decay of \({}^{200}\)Au and \({}^{201}\)Au are also good, and we have computed \(\log ft\) values in those cases where experimental data are unavailable. Further, in the \({}^{202}\)Hg isotope, three spin-parity states are tentatively assigned experimentally at the 1746.11(7) keV energy level; comparing the SM \(\log ft\) values with the experimental data, the \((2^{+}_{4})\) spin-parity state can be suggested at this energy level for the \({}^{202}\)Hg isotope. In the beta decay of the \({}^{203}\)Au isotope, all of the calculated SM \(\log ft\) values agree with the corresponding experimental ones. For instance, for the \({}^{203}\)Au\((3/2^{+})\rightarrow{}^{203}\)Hg\((5/2^{-}_{1})\) transition, the SM \(\log ft\) value is 5.323, which is close to the experimental value, i.e., 5.2. Similarly, for the \({}^{203}\)Au\((3/2^{+})\rightarrow{}^{203}\)Hg\(((3/2^{-}_{1}))\) transition, the SM \(\log ft\) value is 5.682 and the experimental value is 5.63(10). The SM \(\log ft\) value for the \({}^{203}\)Au\((3/2^{+})\rightarrow{}^{203}\)Hg\(((1/2^{-}_{1}))\) transition has also been computed and comes out to be 7.611. Further, three spin-parity states are tentatively assigned experimentally for the \({}^{203}\)Hg isotope at the 368.9(3) keV energy level. We have calculated SM \(\log ft\) values for all three spin-parity states and compared them with the experimental data, but assigning one particular spin-parity state at this energy level is difficult, since the SM \(\log ft\) values for all of the possible spins are close to each other.
#### iv.1.5 **Hg\(\rightarrow\)Tl**
The \(\log ft\) values for the Hg\(\rightarrow\)Tl chain have been calculated, covering the \(\beta^{-}\) decay of the \({}^{203,205,207}\)Hg isotopes. For this chain, the \(\log ft\) and average shape factor values calculated using the SM closely resemble the corresponding experimental results. The SM results for the \({}^{203,205}\)Hg isotopes have been obtained with the full model space mentioned in the text, and the SM results for \({}^{207}\)Hg have been obtained with the particle-hole excitations mentioned above. For the \({}^{203}\)Hg \((5/2^{-})\rightarrow{}^{203}\)Tl\((3/2^{+}_{1})\) transition, the SM-calculated \(\log ft\) value (6.386) matches the experimental value, i.e., 6.457(8), quite well. Moving on to the \({}^{205}\)Hg isotope, the SM \(\log ft\) values for \({}^{205}\)Tl\((1/2^{+}_{1})\) and \({}^{205}\)Tl\((1/2^{+}_{3})\) are 5.240 and 5.845, which are very close to the experimental values, i.e., 5.257(11) and 5.6(3), respectively. Similarly, for the \({}^{207}\)Hg isotope, the SM and experimental \(\log ft\) values are close. Further, in the case of the \({}^{207}\)Hg beta-decaying isotope, at the energy 3493.6(5) keV three spin-parity states \((5/2^{-}_{3},7/2^{+}_{1},7/2^{-}_{5})\) are possible experimentally. The SM \(\log ft\) values for all of these spin-parity states are calculated separately, and it is inferred from the results that the SM \(\log ft\) value for the \((5/2^{-}_{3})\) spin-parity state is closer to the experimental value than the others; thus, the \((5/2^{-}_{3})\) spin-parity state can be suggested at the 3493.6(5) keV energy level. Similarly, at the energy 3569.7(4) keV, two spin-parity states \((11/2^{+}_{1},11/2^{-}_{2})\) are possible experimentally, and the SM \(\log ft\) value for the \((11/2^{-}_{2})\) spin-parity state is closer to the experimental value than the other; thus, the \((11/2^{-}_{2})\) spin-parity state can be suggested at the 3569.7(4) keV energy level.
#### iv.1.6 **Tl\(\rightarrow\)Pb**
In the Tl\(\rightarrow\)Pb chain, the \(\log ft\) values corresponding to the beta decay of \({}^{204,206,208,209,210}\)Tl are computed, where the full model space is used for \({}^{204,206,207}\)Tl and particle-hole excitations are used for the \({}^{208,209,210}\)Tl isotopes because of the involvement of core breaking in the beta decay of the mentioned transitions. Our SM-calculated \(\log ft\) results match the experimental data quite well, except for the case of \({}^{210}\)Tl. For the \({}^{204}\)Tl\((2^{-})\rightarrow{}^{204}\)Pb\((0^{+}_{1})\) ground-state transition, the calculated SM \(\log ft\) value, i.e., \(10.888^{1u}\), is close to the experimental value, i.e., \(10.0980^{1u}(15)\). Further, in the case of \({}^{206}\)Tl\((0^{-})\rightarrow{}^{206}\)Pb\((0^{+}_{1})\), the SM-calculated \(\log ft\) value is 5.170, which is in excellent agreement with the experimental value, i.e., 5.1775(13). Moving forward, in \({}^{208}\)Tl, the SM-calculated \(\log ft\) values for all of the transitions match the experimental data quite well, except for the \({}^{208}\)Pb\((4^{+}_{1})\) transition, where the value deviates a little from the experimental one. Similarly, in \({}^{209}\)Tl \(\rightarrow{}^{209}\)Pb, the experimental and SM-calculated \(\log ft\) values are very close to each other, except for \({}^{209}\)Pb\(((5/2^{-}_{1}))\). Despite all these good results, the SM \(\log ft\) values for the case of \({}^{210}\)Tl deviate considerably from the experimental values.
### Half-life Results
The half-life values for the various isotopic chains under consideration are listed in Table 2. Due to the large range of these half-life values, they are represented in Fig. 2 using a logarithmic scale of the half-life in seconds; the half-lives of the isomeric states are not shown. These plots show the neutron number of a given isotopic chain on the \(x\) axis and the half-life on a log scale on the \(y\) axis. In the case of the Os\(\rightarrow\)Ir isotopic chain, the experimental and SM half-life values differ by a factor of approximately 10; this difference decreases as we move towards higher neutron numbers in this chain. Half-life values for both tentatively assigned spin parities (\(3/2^{-},5/2^{-}\)) of the parent nucleus \({}^{197}\)Os are computed; however, we focus on the half-life value (410 s) of the spin parity (\(5/2^{-}\)) because it is closer to the experimental one than the other. Moving on to the Ir\(\rightarrow\)Pt isotopic chain, the experimental and SM half-life values are close to each other at neutron numbers from \(N=117\) to 119, but these values differ by a factor of approximately 10 at neutron numbers 115, 121, and 125. As for the \({}^{197}\)Os nucleus, two spin-parity states are tentatively assigned experimentally to the ground state of the \({}^{202}\)Ir parent nucleus; by comparing the SM half-lives with the experimental half-life, the half-life for the (\(1^{-}\)) state is used in the half-life plot, since it is closer to the experimental one. Moving on to the Pt\(\rightarrow\)Au isotopic chain, the SM and experimental half-life values are close to one another at \(N=119,121\), but our SM calculations give a much lower half-life value at \(N=124\) than the experimental one. Similarly, for the Au\(\rightarrow\)Hg isotopic chain, our SM half-life values match the experimental data quite well. For the Hg\(\rightarrow\)Tl isotopic chain, we find that the half-life values calculated using the SM also agree very closely with the corresponding experimental values. Finally, in the case of the Tl\(\rightarrow\)Pb chain, the SM results are in very good accordance with the experimental data, except for \(N=129\). This discussion leads to the conclusion that the SM half-life results improve as one approaches the shell closure.
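As a side note, the way the entries of Table 2 combine is just Eqs. (2)-(3); a minimal sketch with placeholder partial half-lives:

```python
# Sketch of Eqs. (2)-(3): total half-life from partial half-lives, i.e.
# 1/T_half = sum_k 1/t_half^(k). The partial half-lives below are placeholders.
t_partial = [120.0, 480.0, 950.0]   # partial half-lives to final states (s)

T_half = 1.0 / sum(1.0 / t for t in t_partial)
branchings = [T_half / t for t in t_partial]     # B^(k) of Eq. (3)
print(f"T_1/2 = {T_half:.1f} s, branchings = {[f'{b:.2%}' for b in branchings]}")
```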
### \(N=126\) Isotones
Table 3 shows the \(\log ft\) and average shape factor values for nuclei with neutron number \(N=126\) in the above-mentioned chains. These SM results have been calculated with the effective values of the weak coupling constants for the full set, i.e., \(g_{A}^{eff}=0.38\), \(g_{V}^{eff}=0.60\). There are a few nuclei for which experimental \(\beta^{-}\)-decay data are unavailable; however, we have calculated \(\log ft\) values for these nuclei as well using SM calculations. In the beta decay of the \({}^{202}\)Os nucleus, two spin-parity states are tentatively proposed experimentally for the ground state of the daughter nucleus. The SM \(\log ft\) values for both spin-parity states have been computed. However, the (\(2_{1}^{-}\)) spin
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{Nucleus} & \multicolumn{2}{c}{Half-life} \\ \cline{2-3} & Expt. & SM \\ \hline \({}^{194}\)Os(\(0^{+}\)) & 6.0(2) yr & 138 yr \\ \({}^{195}\)Os(\(3/2^{-}\)) & 6.5(4) min & 520 min \\ \({}^{196}\)Os(\(0^{+}\)) & 34.9(2) min & 653 min \\ \({}^{197}\)Os(\(3/2^{-}\)) & 91(8) s & 748 s \\ \({}^{197}\)Os(\(5/2^{-}\)) & 91(8) s & 410 s \\ \({}^{198}\)Os(\(0^{+}\)) & 125(28) s & 40.8 s \\ \({}^{202}\)Os(\(0^{+}\)) & NA & 5.97 s \\ \({}^{192}\)Ir(\(4^{+}\)) & 73.829(11) d & 6.59 \(\times 10^{3}\) d \\ \({}^{192}\)Ir\({}^{m}\)(\(1^{-}\)) & 1.45(5) min & 512 min \\ \({}^{194}\)Ir(\(1^{-}\)) & 19.18(3) h & 9.98 h \\ \({}^{195}\)Ir\({}^{m}\)(\(3/2^{+}\)) & 2.29(17) h & 9.62 h \\ \({}^{195}\)Ir\({}^{m}\)(\(11/2^{-}\)) & 3.67(8) h & 3.31 h \\ \({}^{196}\)Ir(\((0^{-}\))) & 52(1) s & 26.9 s \\ \({}^{198}\)Ir(\(1^{-}\)) & 8(1) s & 62.0 s \\ \({}^{202}\)Ir(\((1^{-}\))) & 15(3) s & 416 s \\ \({}^{202}\)Ir(\((2^{-}\))) & 15(3) s & 3.83 \(\times 10^{3}\) s \\ \({}^{203}\)Ir(\((3/2^{+}\))) & NA & 7.84 s \\ \({}^{197}\)Pt(\(1/2^{-}\)) & 19.8915(19) h & 68.6 h \\ \({}^{197}\)Pt\({}^{m}\)(\(13/2^{+}\)) & 95.41(18) min & 2.54 \(\times 10^{4}\) min \\ \({}^{199}\)Pt(\(5/2^{-}\)) & 30.8(4) min & 14.9 min \\ \({}^{202}\)Pt(\(0^{+}\)) & 44(15) h & 0.04 h \\ \({}^{204}\)Pt(\(0^{+}\)) & 16\({}^{+6}_{-5}\) s [47] & 49.0 s \\ \({}^{196}\)Au(\(2^{-}\)) & 6.1669(6) d & 406 d \\ \({}^{198}\)Au(\(2^{-}\)) & 2.6941(2) d & 0.57 d \\ \({}^{199}\)Au(\(3/2^{+}\)) & 3.139(7) d & 2.57 d \\ \({}^{200}\)Au(\(1^{-}\)) & 48.4(3) min & 9.80 min \\ \({}^{200}\)Au(\(12^{-}\)) & 18.7(5) h & 77.8 h \\ \({}^{201}\)Au(\(3/2^{+}\)) & 26.0(8) min & 16.3 min \\ \({}^{202}\)Au(\(1^{-}\))) & 28.4(12) s & 55.8 s \\ \({}^{203}\)Au(\(3/2^{+}\)) & 60(6) s & 70.0 s \\ \({}^{204}\)Au(\((2^{-}\))) & 39.8(9) s & 60.7 s \\ \({}^{205}\)Au(\(3/2^{+}\)) & 32.0(14) s & 20.2 s \\ \({}^{203}\)Hg(\(5/2^{-}\)) & 46.610(10) d & 36.0 d \\ \({}^{205}\)Hg(\(1/2^{-}\)) & 5.14(9) min & 4.30 min \\ \({}^{206}\)Hg(\(0^{+}\)) & 8.32(7) min & 7.68 min \\ \({}^{207}\)Hg(\(9/2^{+}\)) & 2.9(2) min & 2.12 min \\ \({}^{204}\)Tl(\(2^{-}\)) & 3.783(12) yr & 18.7 yr \\ \({}^{206}\)Tl(\(0^{-}\)) & 4.202(11) min & 3.73 min \\ \({}^{207}\)Tl(\(1/2^{+}\)) & 4.77(3) min & 4.43 min \\ \({}^{208}\)Tl(\(5^{+}\)) & 3.053(4) min & 2.31 min \\ \({}^{209}\)Tl(\(1/2^{+}\)) & 2.162(7) min & 1.73 min \\ \({}^{210}\)Tl(\(5^{+}\))) & 1.30(3) min & 1.11 \(\times 10^{5}\) min \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between shell-model and experimental [15; 24; 45; 46] half-life values using quenched values of weak coupling constants, i.e., \(g_{A}^{eff}=0.38,g_{V}^{eff}=0.60\) for full set and \(g_{A}^{eff}=0.25,g_{V}^{eff}=0.50\) for truncation I and truncation II and \(g_{A}^{eff}=0.38,g_{V}^{eff}=0.50\) for \(N>126\) with \(\epsilon_{\rm MEC}=2.01\).
Further, the SM calculated \(\log ft\) values agree well with the experimental \(\log ft\) values corresponding to the \({}^{205}\)Au, \({}^{206}\)Hg and \({}^{207}\)Tl isotones. For instance, in the \({}^{206}\)Hg(\(0^{+})\rightarrow\)\({}^{206}\)Tl(\(0^{-}_{1}\)) transition, the SM \(\log ft\) value is 5.356, which is very close to the experimental \(\log ft\) value, i.e., 5.41(6). Similarly, in the \({}^{207}\)Tl(\(1/2^{+})\rightarrow\)\({}^{207}\)Pb(\(1/2^{-}_{1}\)) transition, the SM \(\log ft\) value is 5.122, which is approximately the same as the experimental \(\log ft\) value, i.e., 5.108(6). Further, there are energy levels in the \({}^{205}\)Hg nucleus for which more than one spin parity state is tentatively assigned experimentally. The SM \(\log ft\) values for all the tentatively assigned spin parity states are calculated and compared with the corresponding experimental \(\log ft\) value. For instance, four spin parity states are tentatively proposed at the 1280.61(21) keV energy level. Based on the comparison with the experimental data, the (\(1/2^{-}_{2}\)) and (\(3/2^{+}_{1}\)) assignments can be discarded at the 1280.61(21) keV energy level, as their \(\log ft\) values differ from the experimental value much more than those of the other assignments. Similarly, (\(1/2^{-}_{3}\)) and (\(3/2^{+}_{2}\)) can be discarded at the 1325.08(24) keV energy level, and (\(3/2^{+}_{3}\)) and (\(5/2^{+}_{1}\)) can be discarded at the 1447.2(4) keV energy level. After the calculation of the \(\log ft\) values, the half-life values are calculated; these are included in Table 2. In the half-life calculations of \({}^{202}\)Os, \({}^{203}\)Ir and \({}^{204}\)Pt beta decay, all possible allowed and FF transitions (at most five eigenvalues for each spin parity) below the Q-value have been included for the daughter nuclei using SM calculations, since there is not enough experimental data available for these transitions. These half-life values in seconds are plotted in Fig. 3. As can be seen from the plot, our shell-model half-life values match well with the available experimental data.
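For completeness, we note how the total half-life follows from the computed transition data: each partial half-life is obtained from the corresponding \(\log ft\) value and phase-space (Fermi) integral, and the partial decay rates add up. A minimal sketch in Python (the numerical values below are hypothetical placeholders, not taken from the tables):

```python
import numpy as np

def total_half_life(logft, f):
    """Total beta-decay half-life from the partial transitions.

    logft : log ft values of the individual transitions
    f     : corresponding phase-space (Fermi) integrals
    """
    t_partial = 10.0 ** np.asarray(logft) / np.asarray(f)  # partial half-lives (s)
    return 1.0 / np.sum(1.0 / t_partial)                   # decay rates are additive

# hypothetical example with two transitions feeding different final states
print(total_half_life(logft=[5.4, 6.1], f=[120.0, 35.0]))
```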
\begin{table}
\begin{tabular}{l c c c c c c c} & Transition & Decay mode & Energy & \multicolumn{2}{c}{\(\log ft\)} & \multicolumn{2}{c}{\(\left[\overline{C(w_{c})}\right]^{1/2}\)} \\ \cline{3-8} Initial (\(J^{x}_{i}\)) & Final (\(J^{x}_{f}\)) & & (keV) & Expt. & SM & Expt. & SM \\ \hline \({}^{202}\)Os(\(0^{+}\)) & \({}^{202}\)Ir(\((1^{-}_{1})\)) & 1st FNU & 0.0 & NA & 6.738 & NA & 13.096 \\ & \({}^{202}\)Ir(\((2^{-}_{1})\)) & 1st FU & 0.0 & NA & 9.387 & NA & 0.620 \\ \({}^{203}\)Ir(\(3/2^{+}\)) & \({}^{203}\)Pt(\(1/2^{-}_{1}\)) & 1st FNU & 0.0 & NA & 7.044 & NA & 9.207 \\ & \({}^{203}\)Pt(\(5/2^{-}_{1}\)) & 1st FNU & 367.0 & NA & 6.306 & NA & 21.537 \\ \({}^{204}\)Pt(\(0^{+}\)) & \({}^{204}\)Au(\((2^{-}_{1})\)) & 1st FU & 0.0 & NA & 10.136 & NA & 0.262 \\ \({}^{205}\)Au(\((3/2^{+})\)) [46] & \({}^{205}\)Hg(\((1/2^{-}_{1})\)) & 1st FNU & 0.0 & 5.79(9) & 6.524 & 38.999 & 16.747 \\ & \({}^{205}\)Hg(\(5/2^{-}_{1}\)) & 1st FNU & 379.16(21) & 6.37(12) & 5.806 & 20.001 & 38.282 \\ & \({}^{205}\)Hg(\(3/2^{-}_{1}\)) & 1st FNU & 467.45(24) & 6.43(11) & 8.070 & 18.666 & 2.825 \\ & \({}^{205}\)Hg(\((1/2^{-}_{2})\)) & 1st FNU & 1280.61(21) & 5.51(12) & 7.160 & 53.834 & 8.059 \\ & \({}^{205}\)Hg(\((3/2^{+}_{1})\)) & 1st FNU & 1280.61(21) & 5.51(12) & 5.205 & 53.834 & 76.524 \\ & \({}^{205}\)Hg(\((3/2^{+}_{1})\)) & Allowed & 1280.61(21) & 5.51(12) & 8.669 & 0.139 & 0.004 \\ & \({}^{205}\)Hg(\((5/2^{-}_{2})\)) & 1st FNU & 1280.61(21) & 5.51(12) & 5.142 & 53.834 & 82.194 \\ & \({}^{205}\)Hg(\((1/2^{-}_{3})\)) & 1st FNU & 1325.08(24) & 5.53(11) & 8.026 & 52.609 & 2.974 \\ & \({}^{205}\)Hg(\((3/2^{+}_{3})\)) & 1st FNU & 1325.08(24) & 5.53(11) & 5.259 & 52.609 & 71.906 \\ & \({}^{205}\)Hg(\((3/2^{-}_{2})\)) & Allowed & 1325.08(24) & 5.53(11) & 9.456 & 0.136 & 0.001 \\ & \({}^{205}\)Hg(\((5/2^{-}_{3})\)) & 1st FNU & 1325.08(24) & 5.53(11) & 6.022 & 52.609 & 29.853 \\ & \({}^{205}\)Hg(\((7/2^{-}_{1})\)) & 1st FU & 1346.1(5) & NA & 9.111 & NA & 0.853 \\ & \({}^{205}\)Hg(\((9/2^{-}_{1})\)) & 3rd FNU & 1395.0(6) & NA & 12.545 & NA & 2438.317 \\ & \({}^{205}\)Hg(\((1/2^{+}_{4})\)) & 1st FNU & 1447.2(4) & 6.29(15) & 6.266 & 21.931 & 22.551 \\ & \({}^{205}\)Hg(\((3/2^{+}_{4})\)) & 1st FNU & 1447.2(4) & 6.29(15) & 6.372 & 21.931 & 19.956 \\ & \({}^{205}\)Hg(\((3/2^{+}_{3})\)) & Allowed & 1447.2(4) & 6.29(15) & 8.018 & 0.057 & 0.008 \\ & \({}^{205}\)Hg(\((5/2^{+}_{4})\)) & 1st FNU & 1447.2(4) & 6.29(15) & 6.294 & 21.931 & 21.835 \\ & \({}^{205}\)Hg(\((5/2^{+}_{1})\)) & Allowed & 1447.2(4) & 6.29(15) & 7.346 & 0.057 & 0.017 \\ \({}^{206}\)Hg(\(0^{+}\)) & \({}^{206}\)Tl(\(0^{-}_{1}\)) & 1st FNU & 0.0 & 5.41(6) & 5.356 & 60.403 & 64.253 \\ & \({}^{206}\)Tl(\(1^{-}_{1}\)) & 1st FNU & 304.896(6) & 5.24(10) & 5.385 & 73.461 & 62.201 \\ & \({}^{206}\)Tl(\(1^{-}_{2}\)) & 1st FNU & 649.42(4) & 5.67(8) & 5.544 & 44.777 & 51.748 \\ \({}^{207}\)Tl(\(1/2^{+}\)) & \({}^{207}\)Pb(\(1/2^{-}_{1}\)) & 1st FNU & 0.0 & 5.108(6) & 5.122 & 85.518 & 84.188 \\ & \({}^{207}\)Pb(\(5/2^{-}_{1}\)) & 1st FU & 569.64(10) & \(>\)10.51\({}^{\it{u}}\) & 13.174\({}^
### Strength Functions
Figures 4, 5 and 6 show plots of the strength function [48], i.e., \(\log(ft)\) versus the corresponding excitation energy (E\({}_{x}\)(MeV)), for Os and Ir isotopes. The Q-value for the corresponding beta decay is also shown in the plots. These plots show the contribution of GT and first forbidden transitions to the total half-life of the nucleus. The GT strengths are mainly due to \(\nu 0h_{9/2}\to\pi 0h_{11/2}\) transitions. In the \({}^{208}\)Pb region, the first forbidden transitions strongly compete with the GT transitions. In these plots, one can clearly see that GT transitions are generally observed at low transition energies and higher excitation energies.
Figure 2: Comparison of calculated and experimental half-life values [15; 24; 27; 45; 46] for (a) Os (b) Ir (c) Pt (d) Au (e) Hg (f) Tl isotopes.
The contribution of GT transitions is suppressed owing to the low transition energy and the partial blocking (filling) of the \(0h_{11/2}\) proton orbital. In the case of the \({}^{197}\)Os and \({}^{202}\)Ir isotopes, the spin parity of the ground state of the parent nucleus is not confirmed, and two spin parity states are tentatively assigned experimentally; thus, we have shown plots for both possibilities. Similarly, for the transitions \({}^{198}\)Os\(\rightarrow^{198}\)Ir and \({}^{202}\)Os\(\rightarrow^{202}\)Ir, the spin parity of the daughter nucleus is not confirmed experimentally, so we have considered both possibilities and shown plots for the two spin parity states separately.
### Neutron Emission Probability
The neutron emission probability [49] is calculated (see Table 4) with the help of partial half-lives of all the allowed and forbidden transitions involved, and it is given by
\[P_{n}=\left(\sum_{E_{k}\geq S_{n}}\frac{1}{t_{1/2}^{(k)}}\right)\bigg{/}\left( \sum_{\text{all k}}\frac{1}{t_{1/2}^{(k)}}\right), \tag{17}\]
where \(S_{n}\) is the one-neutron separation energy, and here it is assumed that all the final \(k\) states above the one-neutron separation energy and below the Q-value undergo neutron emission. In this work, the neutron emission probability has been calculated using both SM and experimental values of the neutron separation energy (\(S_{n}\)); in both cases we have taken experimental Q-values. The necessary condition for neutron emission is that the Q-value of the nucleus should be greater than the neutron separation energy. Since most of the nuclei considered in the present work have a Q-value smaller than the \(S_{n}\) value, we have calculated the neutron emission probability only for the two isotopes with \(Q(\beta^{-})>S_{n}\), namely \({}^{207}\)Hg and \({}^{208}\)Tl. In the case of \({}^{207}\)Hg, \(P(n)\%\) using the experimental value of \(S_{n}\) comes out to be 1.29, whereas it is 35.42 using the SM value of \(S_{n}\). Similarly, in the case of \({}^{208}\)Tl, \(P(n)\%\) using the experimental value of \(S_{n}\) comes out to be 2.53, whereas it is 12.20 using the SM value of \(S_{n}\). Hence, it can be concluded that the shell-model calculations support neutron emission.
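Equation (17) is straightforward to evaluate once the partial half-lives and final-state energies are tabulated. A minimal sketch in Python (the input values are hypothetical placeholders, not data from this work):

```python
import numpy as np

def neutron_emission_probability(E_k, t_half_k, S_n):
    """P_n from Eq. (17): fraction of the total decay rate feeding
    final states above the one-neutron separation energy S_n."""
    E_k = np.asarray(E_k, dtype=float)
    rates = 1.0 / np.asarray(t_half_k, dtype=float)  # partial decay rates
    return rates[E_k >= S_n].sum() / rates.sum()

# hypothetical example: three transitions, one feeding a state above S_n
print(neutron_emission_probability(E_k=[0.0, 1.2, 3.4],
                                   t_half_k=[50.0, 200.0, 800.0],
                                   S_n=3.0))
```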
## IV Conclusion
In this work, systematic shell-model calculations have been performed to evaluate the beta-decay properties of nuclei in the south region of the \({}^{208}\)Pb nucleus with \(76\leq Z\leq 82\). These beta-decay properties include \(\log ft\) values, average shape factor values, half-lives, strength functions and neutron emission probabilities. Four sets of calculations have been performed: full set, truncation I, truncation II and the \(N>126\) set. For all of these sets, the quenching factor is computed using a chi-squared fitting method, such that the effective values of the weak coupling constants are given by \((g_{A}^{eff},g_{V}^{eff})=(0.38,0.60)\) for the full set; \((g_{A}^{eff},g_{V}^{eff})=(0.25,0.50)\) for truncation I and truncation II; and \((g_{A}^{eff},g_{V}^{eff})=(0.38,0.50)\) for \(N>126\). Using these values of the weak coupling constants, the \(\log ft\) and average shape factor values are calculated. The shell-model calculations for \(\log ft\) values exhibit satisfactory agreement with the corresponding experimental data. There are some unconfirmed states in the nuclei at various energy levels for which tentative spin parities are assigned experimentally. Thus, we have compared the shell-model and experimental \(\log ft\) values for these states and suggested spin parity assignments for the corresponding energy levels. In some cases, the shell-model beta-decay properties deviate from the experimental data.
Figure 3: Comparison of calculated and experimental half-life values [45; 47] for \(N=126\) isotones.
This may be due to the use of the same quenching factor for allowed and forbidden transitions; the quenching factor for these transitions should therefore be evaluated separately. Moreover, \(\log ft\) results are also calculated where experimental data are not available. Further, half-life values are calculated and plotted on a logarithmic scale against neutron and proton numbers (\(N=126\)). One can conclude from the half-life plots that the shell-model results improve as one moves towards the shell closures \(Z=82\) and \(N=126\). Further, the variations of the strength functions (\(\log ft\)) for beta-decay transitions against the excitation energy of the daughter nucleus are shown for the Os and Ir isotopes. We have also calculated the neutron emission probability for \({}^{207}\)Hg and \({}^{208}\)Tl.
## Acknowledgement
This work is supported by a research grant from SERB (India), CRG/2022/005167. S. S. would like to thank CSIR-HRDG (India) for the financial support for her Ph.D. thesis work. T. S. acknowledges JSPS (Japan) for grants JSPS KAKENHI Nos. JP19K03855 and JP20K03988. We acknowledge the National Supercomputing Mission (NSM) for providing the computing resources of 'PARAM Ganga' at the Indian Institute of Technology Roorkee. N. S. acknowledges the support of the "Program for promoting researches on the supercomputer Fugaku", MEXT, Japan (JPMXP1020230411), and the MCRP program of the Center for Computational Sciences, University of Tsukuba (NUCLSM).
Figure 5: Strength function (\(\log(ft)\)) vs. energy (E\({}_{x}\) (MeV)) for Os and Ir isotopes.
Figure 6: Strength function (\(\log(ft)\)) vs. energy (E\({}_{x}\) (MeV)) for Os and Ir isotopes.
|
2301.01588 | Power Spectral Density-Based Resting-State EEG Classification of
First-Episode Psychosis | Historically, the analysis of stimulus-dependent time-frequency patterns has
been the cornerstone of most electroencephalography (EEG) studies. The abnormal
oscillations in high-frequency waves associated with psychotic disorders during
sensory and cognitive tasks have been studied many times. However, any
significant dissimilarity in the resting-state low-frequency bands is yet to be
established. Spectral analysis of the alpha and delta band waves shows the
effectiveness of stimulus-independent EEG in identifying the abnormal activity
patterns of pathological brains. A generalized model incorporating multiple
frequency bands should be more efficient in associating potential EEG
biomarkers with First-Episode Psychosis (FEP), leading to an accurate
diagnosis. We explore multiple machine-learning methods, including
random-forest, support vector machine, and Gaussian Process Classifier (GPC),
to demonstrate the practicality of resting-state Power Spectral Density (PSD)
to distinguish patients of FEP from healthy controls. A comprehensive
discussion of our preprocessing methods for PSD analysis and a detailed
comparison of different models are included in this paper. The GPC model
outperforms the other models with a specificity of 95.78% to show that PSD can
be used as an effective feature extraction technique for analyzing and
classifying resting-state EEG signals of psychiatric disorders. | Sadi Md. Redwan, Md Palash Uddin, Anwaar Ulhaq, Muhammad Imran Sharif | 2022-11-23T00:28:41Z | http://arxiv.org/abs/2301.01588v1 | **Power Spectral Density-Based Resting-State EEG Classification of First-Episode Psychosis**
## Abstract
Historically, the analysis of stimulus-dependent time-frequency patterns has been the cornerstone of most electroencephalography (EEG) studies. The abnormal oscillations in high-frequency waves associated with psychotic disorders during sensory and cognitive tasks have been studied many times. However, any significant dissimilarity in the resting-state low-frequency bands is yet to be established. Spectral analysis of the alpha and delta band waves shows the effectiveness of stimulus-independent EEG in identifying the abnormal activity patterns of pathological brains. A generalized model incorporating multiple frequency bands should be more efficient in associating potential EEG biomarkers with First-Episode Psychosis (FEP), leading to an accurate diagnosis. We explore multiple machine-learning methods, including random-forest, support vector machine, and Gaussian Process Classifier (GPC), to demonstrate the practicality of resting-state Power Spectral Density (PSD) to distinguish patients of FEP from healthy controls. A comprehensive discussion of our preprocessing methods for PSD analysis and a detailed comparison of different models are included in this paper. The GPC model outperforms the other models with a specificity of 95.78% to show that PSD can be used as an effective feature extraction technique for analyzing and classifying resting-state EEG signals of psychiatric disorders.
**Keywords**: First-Episode Psychosis, EEG, PSD, GPC, Machine-Learning
## 1 Introduction
Psychosis is a symptom commonly associated with an extended array of neurological and psychiatric disorders, including schizophrenia spectrum (schizophrenia, schizoaffective, and
paranoid schizophrenia). The first episode of psychosis in schizophrenia can be hard to distinguish from other forms of psychosis. An early diagnosis relies heavily on identifying trait markers of schizophrenia in First-Episode Psychosis (FEP/First-Episode Schizophrenia/FESz) patients. Electroencephalography (EEG) has been tremendously successful in the time-frequency analysis of neural activation patterns during different cognitive and behavioral assessments. Recent resting-state studies show that EEG can also be used to decode intrinsic brain activity in a task-negative state. Multiple studies involving spectral analysis support the alterations in resting-state delta/alpha activity in schizophrenia spectrum [1, 2, 36]. Several cortical alpha networks have been shown to be pathological in FEP patients in a recent magnetoencephalography (MEG) study [3]. Power Spectral Density (PSD) has been used in analyzing the alpha band Default Mode Network (DMN) in schizophrenia in another MEG analysis [4]. This raises the question of whether PSD can also be used for EEG analysis to identify FEP patients accurately.
In contemporary EEG and MEG studies, delta and alpha powers have been associated with attention and prolonged focus, signifying spontaneous resting-state brain activity. A more generalized model using multiple robust feature extraction techniques for highly accurate schizophrenia classification has also been proposed recently [5]. Several studies support the use of PSD as an effective EEG feature extraction method for machine-learning classification [37, 38]. In another study, researchers used the PSD of multiple frequency bands along with fuzzy entropy and functional connectivity for Generalized Anxiety Disorder (GAD) classification with 97.83 (\(\pm\)0.4)% accuracy [6]. This signifies the potential utility of combining the spectral features of multiple bands for EEG classification of FEP. The core objective of this work is to combine the PSD of the delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), and sigma (12-16 Hz) bands of resting-state EEG as input features for the machine-learning approaches.
Machine learning models for EEG classification have been popularized with the success of Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and neural networks in multiple EEG paradigms. A random forest classifier has been proposed for the classification and analysis of mental states using single-channel EEG [7]. SVM has been successfully used in multiple sclerosis [8] and epilepsy detection [9]. Gaussian Process Classifier (GPC) has also been proposed for classifying mental states [10] and detecting neonatal seizures [11]. In this work, we analyze the effectiveness of multiple methods, namely random forest, SVM, and GPC, for classifying FEP patients and healthy controls based on the PSD of multiple EEG frequency bands. A medium-sized dataset of 28 controls and 44 patients has been balanced using borderline-SMOTE [12] for this work. With a very small number of parameters, the computationally efficient GPC has performed very well, with an accuracy of 95.51 (\(\pm\)1.74)% and a specificity of 95.78 (\(\pm\)3.3)%. The proposed framework sets a baseline for FEP and control classification using resting-state EEG, and we expect it to be improved upon in the future with more complex neural network models and multiple feature extraction techniques based on time-frequency analysis.
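The class-balancing step mentioned above can be sketched with the imbalanced-learn implementation of borderline-SMOTE; the feature matrix below is a synthetic stand-in, and the hyperparameters (left at library defaults) are assumptions, since the paper does not specify them:

```python
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE

# synthetic stand-in for the PSD feature matrix: 28 controls (0), 44 patients (1)
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 240))            # e.g. 60 channels x 4 bands per segment
y = np.array([0] * 28 + [1] * 44)

X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_bal))                 # both classes now have 44 samples
```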
## 2 Materials and Methods
### Electroencephalography (EEG)
EEG is a waveform representation of the (electrical) brain signals measured by the fluctuations of voltage induced by the neuronal ionic activity [13]. The effectiveness of EEG in decoding neurological and emotional states of the brain is attributed to the high temporal resolution of the
signal [14] and our understanding of which frequency or pattern of the signal relates to a particular task, stimulus, or emotion. Several visual, auditory, and task-based stimuli have been developed over the years by researchers for EEG studies. These studies have eventually built the foundation of modern EEG-based emotion recognition, seizure detection, medical diagnosis, and Brain-Computer Interface (BCI) systems. In particular, EEG is currently established as the primary method for seizure detection [15]. Most publicly available EEG datasets are focused on diverse neural activation events of healthy and occasionally pathological brains. That being said, the publication of resting-state EEG studies and datasets has also increased in the past few years. Major depressive disorder [16], depression [17, 19], cognitive states [18], and multiple other psychiatric disorders [19] have been studied using resting-state EEG as of late, and some of them have been published as datasets. In addition to the MEG study of resting-state cortical alpha networks of FEP/FESz [3], Salisbury _et al._ also published the corresponding EEG datasets in 2022 [20, 21]. For our work, we use the _Resting Task 1_ dataset, excluding the _Resting Task 2_ samples of 10 subjects that are also present in the _Resting Task 1_ dataset. The subject population consists of 72 subjects (28 controls and 44 patients). The demographic information of the subjects is presented in Table 1.
The dataset is obtained from OpenNeuro [22] (accession number: ds003944). It is available under the Creative Commons License (CC0). The phenotypic information is also included in the dataset. The cognitive and socio-economic assessments have been conducted using the MATRICS score and SES score respectively, and the negative effects of FEP are evident in the patient population.
### Preprocessing
The initial step of every EEG study is preprocessing the data to reduce the effects of several unwanted artifacts. The EEG signals used in this work are obtained in a 5-minute period using a low-impedance 10-10 system 60-channel cap. Two additional electrooculogram (EOG) channels and an electrocardiogram (ECG) channel are also included in the data. EOG channels are particularly important as they capture the eye-blink artifacts that are also present in the EEG signals. Much work has been done to establish a correct method for EOG-related artifact removal based on Independent Component Analysis (ICA) and regression [23]. EEG signals also correlate with the ECG signal (heartbeat artifacts), which can be removed using ICA [24] and Signal-Space Projection (SSP).
ICA is a blind source separation (BSS) technique that has revolutionized signal separation from mixed signals and has been used in numerous EEG and fMRI studies over the years. With the success of a fast and efficient ICA implementation, fittingly named FastICA [25], it has become much easier to remove artifacts from EEG signals. In this work, FastICA is used to remove both EOG and ECG artifacts separately. We apply temporal band-pass filtering of 0.5-35 Hz before applying ICA to remove low-frequency drifts and high-frequency components that are not needed for this study.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|}
\hline
Group & N (male, female) & Average age (SD) & Ethnicity – White, Black, Asian, Mixed, Undisclosed \\
\hline
All subjects & 72 (46, 26) & 21.96 (4.66) & 46, 17, 5, 3, 1 \\
\hline
Control & 28 (16, 12) & 21.33 (3.88) & 21, 4, 3, 0, 0 \\
\hline
FEP & 44 (30, 14) & 22.36 (5.06) & 25, 13, 2, 3, 1 \\
\hline
\end{tabular}
\end{table}
Table 1: Demographic information of the subject population.
We extract 20 Independent Components (ICs) from all the channels to determine which components correspond to EOG and ECG artifacts and remove them. The ICs for a sample subject are shown in Figure 1.
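The filtering and ICA-based artifact removal described above can be sketched with MNE-Python as follows; the file name is a hypothetical placeholder (the loader depends on the recording format), and the automatic component selection via correlation with the EOG/ECG channels is an assumed, reasonable workflow rather than a verbatim reproduction of the authors' pipeline:

```python
import mne

# hypothetical file name; the dataset itself is distributed in BIDS format
raw = mne.io.read_raw_fif('sub-01_eeg.fif', preload=True)
raw.filter(l_freq=0.5, h_freq=35.0)        # 0.5-35 Hz band-pass, as in the text

ica = mne.preprocessing.ICA(n_components=20, method='fastica', random_state=0)
ica.fit(raw)

# flag ICs that correlate with the EOG and ECG reference channels
eog_idx, _ = ica.find_bads_eog(raw)
ecg_idx, _ = ica.find_bads_ecg(raw)
ica.exclude = sorted(set(eog_idx) | set(ecg_idx))
raw_clean = ica.apply(raw.copy())          # reconstruct the signal without artifacts
```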
We identify ICs that are related to EOG artifacts by correcting the baseline (0.2 seconds interval) and averaging across the channels, as shown in Figure 2.
Figure 1: All 20 ICs for a subject. From a cursory glance, the IC-001 and IC-002 appear to be related to unwanted artifacts. IC-001 is close to the eyes, which indicates EOG-related potential, and IC-002 appears to be incoherent compared to the other ICs.
Figure 2: The ICs identified to be EOG-related IC (-0.5s–0.5s range, 1000 time points).
The ECG-related ICs are also identified using the same principle. Correlation is also applied to identify the heartbeat artifacts, since these artifacts do not affect each EEG electrode with the same potential due to the temporal properties of the ECG signal. Figure 3 shows the ICs that correlate to the ECG signal, and Figure 4 shows the effect of EOG and ECG-related artifact removal.
upper-bound. The wavelet power spectrum can be defined as
\[\left(WPS\right)_{x}(\tau,s)=\left|\ W_{x}(\tau,s)\right|^{2}, \tag{1}\]
where \(W_{x}\) is the wavelet transform and \(\tau,s\) represent the position of the wavelet in the time and frequency domain, respectively [26]. The Morlet wavelet is given by
\[\psi(x)=exp\left(-\frac{x^{2}}{2}\right)cos(5x). \tag{2}\]
By combining the correlations between the power spectra for each pair of signals, we eventually get a 60\(\times\)60 matrix for all 60 channels. The average CSD matrices for a sample subject across different frequency bands are presented in Figure 5.
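A minimal sketch of this computation with PyWavelets is given below; the sampling rate, the synthetic channel data and the choice to correlate band-limited wavelet power over time are illustrative assumptions:

```python
import numpy as np
import pywt

def wavelet_power(x, fs, freqs):
    """Wavelet power spectrum |W_x(tau, s)|^2 of Eq. (1), Morlet wavelet."""
    scales = pywt.central_frequency('morl') * fs / freqs   # frequency -> scale
    coefs, _ = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)
    return np.abs(coefs) ** 2                              # (n_freqs, n_times)

# hypothetical pair of channels sampled at 500 Hz; one alpha-band CSD entry
fs = 500.0
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=5000), rng.normal(size=5000)
freqs = np.linspace(8.0, 12.0, 9)                          # alpha band
w1, w2 = wavelet_power(x1, fs, freqs), wavelet_power(x2, fs, freqs)
csd_entry = np.corrcoef(w1.ravel(), w2.ravel())[0, 1]
```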
### Power Spectral Density (PSD)
PSD is an effective method to differentiate between noise and features in a signal by making a spectral representation of the power distribution of its frequency components. We use Thomson's multitaper spectral estimation [27] method to compute the PSD. This method starts by calculating a periodogram for each of the first \(K\approx 2NW\) Discrete Prolate Spheroidal Sequences (DPSS/Slepian tapers) [28] and then averaging these periodograms. Figure 6 shows the power spectra of a sample subject's preprocessed EEG data in \(\mu V^{2}/\mathrm{Hz}\) (decibels).
Figure 5: CSD analysis of a single subject. (a) delta, (b) theta, (c) alpha, and (d) sigma CSD matrices denote coherence across channel signals.
We divide the data into 30s segments and compute four PSD bands for each subject. The four bands are then combined for the classification step.
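A minimal NumPy/SciPy sketch of the multitaper band-power features is shown below; the sampling rate, the time-bandwidth product \(NW=4\) and the use of \(K=2NW-1\) tapers are common defaults assumed here, not values reported in the paper:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=4.0):
    """Average of the tapered periodograms over K ~ 2NW DPSS tapers."""
    N = len(x)
    K = int(2 * NW) - 1
    tapers = dpss(N, NW, Kmax=K)                       # (K, N) Slepian tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs            # averaged periodograms

def band_power(freqs, psd, lo, hi):
    sel = (freqs >= lo) & (freqs < hi)
    return psd[sel].mean()

# hypothetical 30 s single-channel segment sampled at 500 Hz
fs = 500.0
x = np.random.default_rng(0).normal(size=int(30 * fs))
freqs, psd = multitaper_psd(x, fs)
bands = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 12), 'sigma': (12, 16)}
features = [band_power(freqs, psd, lo, hi) for lo, hi in bands.values()]
```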
### Random Forest
Random forest is a tree-based ensemble learning technique [29] that has been used many times in different classification tasks. The core idea of a random forest classifier is to combine multiple decision trees using an ensemble (bagging) mechanism. The prediction of the random forest is given by the averaged prediction of the decision trees combined with the extremely-randomized method [30]. A random forest of 200 decision trees with a maximum depth of 30 per tree is used in this work to classify PSD feature vectors. Figure 7 presents a simple diagram of the random forest classifier.
Figure 6: Power spectral representation of EEG data. Each frequency band shows the characteristic PSD of the signal.
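With scikit-learn, the random forest described above (200 trees, maximum depth 30) can be instantiated as in the sketch below; `X_bal` and `y_bal` reuse the balanced features from the resampling sketch earlier, and all unspecified settings are left at library defaults:

```python
from sklearn.ensemble import RandomForestClassifier

# 200 trees, maximum depth 30 per tree, as stated above; for the
# extremely-randomized variant [30], ExtraTreesClassifier is a drop-in swap
rf = RandomForestClassifier(n_estimators=200, max_depth=30, random_state=0)
rf.fit(X_bal, y_bal)
print(rf.predict(X_bal[:5]))
```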
### Gaussian Process Classifier (GPC)
The GPC for binary classification is based on the Laplace approximation [31]. With the Gaussian process prior \(p(f|X)\) over the latent function and the likelihood \(p(y|f)\), where \(y\) denotes the class label, the marginal likelihood \(p(y|X)\) is given by
\[p(y|X)=\int p(y|f)p(f|X)df=\int exp\big{(}\Psi(f)\big{)}df. \tag{3}\]
Using a second-order Taylor expansion of \(\Psi(f)\) around its maximum \(\hat{f}\), the approximation \(q(y|X)\) to the marginal likelihood is derived as follows.

\[p(y|X)\simeq q(y|X)=exp\big{(}\Psi(\hat{f})\big{)}\int exp\left(-\tfrac{1}{2}\big{(}f-\hat{f}\big{)}^{T}A\big{(}f-\hat{f}\big{)}\right)df\,, \tag{4}\]

where \(A=K^{-1}+W\) is the negative Hessian of \(\Psi\) at \(\hat{f}\), with \(W=-\nabla\nabla\log p(y|\hat{f})\).
An approximation to the log marginal likelihood is derived by analyzing this Gaussian integral.
\[logq(y|X,\theta)=-\tfrac{1}{2}\hat{f}^{T}K^{-1}\hat{f}+logp\big{(}y|\hat{f} \big{)}-\tfrac{1}{2}log\big{|}\ B\big{|}\,, \tag{5}\]
where
\[|B|=|K|\,\big{|}K^{-1}+W\big{|}=\big{|}I_{n}+W^{\tfrac{1}{2}}KW^{\tfrac{1}{2}}\big{|}\,, \tag{6}\]
and \(\theta\) is a vector of hyperparameters of the covariance function.
We use a stationary covariance function, the Radial Basis Function (RBF), as the Gaussian process kernel. With \(r=\|x-x_{i}\|\) and a specified shape parameter \(\varepsilon\), the Gaussian RBF is given as follows, while the schematic working procedure of the GPC is illustrated in Figure 8.
\[\varphi(r)=exp\big{(}-(\varepsilon r)^{2}\big{)} \tag{7}\]
Figure 7: Random forest classifier architecture for binary classification.
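scikit-learn's `GaussianProcessClassifier` implements this Laplace-approximated binary GPC and tunes the kernel hyperparameters \(\theta\) by maximizing the approximate log marginal likelihood of Eq. (5). A minimal sketch, reusing the balanced features from above (the unit initial length scale is an assumption):

```python
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# stationary RBF kernel; the length scale plays the role of 1/epsilon in Eq. (7)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0)
gpc.fit(X_bal, y_bal)
print(gpc.predict_proba(X_bal[:5]))
```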
### Support Vector Machine (SVM)
Support vector machines (SVMs) are widely used for classification because they build a linear decision surface in a very large feature space to which input vectors are mapped non-linearly [32]. Based on the properties of the optimal hyperplane (feature map), the SVM algorithm can be categorized into linearly separable, linearly inseparable, and non-linearly separable cases. For non-linear feature mapping, a kernel function is used to map the inputs implicitly. Similar to the GPC, we use the Gaussian RBF as the kernel function for our SVM model. For the Gaussian RBF \(\varphi\), the kernel function can be written as
\[K\big{(}x_{i},x_{j}\big{)}=\varphi(x_{i})\cdot\varphi\big{(}x_{j}\big{)}\,. \tag{8}\]
Then the normal vector to the hyperplane (the weight vector) is given by
\[w=\sum_{i}\alpha_{i}y_{i}\varphi(x_{i}) \tag{9}\]
The SVM classifier separates the input feature vectors by minimizing the following expression, where the parameter \(\lambda>0\) sets the tradeoff between the size and flexibility of the margin; the basic architecture of the non-linear SVM is shown in Figure 9.

\[\Big{[}\frac{1}{n}\sum_{i=1}^{n}max\big{(}0,1-y_{i}(w^{T}x_{i}-b)\big{)}\Big{]}+\lambda\|w\|^{2} \tag{10}\]
Figure 8: GPC architecture for binary classification.
Figure 9: Non-linear SVM with Gaussian RBF kernel for binary classification.
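In scikit-learn this corresponds to `SVC` with an RBF kernel; note that the regularization parameter `C` is inversely related to \(\lambda\) above, and the values below are default assumptions rather than the paper's tuned settings:

```python
from sklearn.svm import SVC

# soft-margin SVM with Gaussian RBF kernel; gamma corresponds to
# epsilon^2 in Eq. (7)
svm = SVC(kernel='rbf', C=1.0, gamma='scale')
svm.fit(X_bal, y_bal)
print(svm.predict(X_bal[:5]))
```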
## 3 Result and Discussion
The experiments were done using MATLAB R2022b and Python 3.10 on the Microsoft Windows 11 (22H2) platform on an AMD Ryzen 7 3750H computer. The performance of each model is evaluated using 5-fold cross-validation. The final confusion matrix for each model is derived by taking the average of all confusion matrices, as shown in Figure 10.
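The evaluation protocol can be sketched as follows; stratified folds are our assumption (the paper only states 5-fold cross-validation), and the model and features reuse the names from the sketches above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def averaged_confusion_matrix(model, X, y, n_splits=5, seed=0):
    """Mean confusion matrix over (stratified) 5-fold cross-validation."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    cms = []
    for train_idx, test_idx in skf.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        cms.append(confusion_matrix(y[test_idx], model.predict(X[test_idx])))
    return np.mean(cms, axis=0)

cm = averaged_confusion_matrix(gpc, X_bal, y_bal)   # e.g. for the GPC model
```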
We use precision, recall, and F1-score to evaluate the classification accuracy for each class. The mathematical expressions for precision, recall, and F1-score are as follows.
\[Precision=\frac{TP}{TP+FP}\,, \tag{11}\]

\[Recall=\frac{TP}{TP+FN}\,, \tag{12}\]
\[F1=\frac{2\times Precision\times Recall}{Precision+Recall}, \tag{13}\]
where \(TP\), \(FP\), and \(FN\) denote true-positive, false-positive, and false-negative predictions respectively. Specificity or true negative rate is defined as the recall of the negative class (control). The accuracy score, precision, recall, and F1 scores for the random forest, GPC, and SVM models are discussed in Table 2, Table 3, and Table 4, respectively.
With an accuracy of 95.51 (\(\pm\)1.74)% and a specificity of 95.78 (\(\pm\)3.3)%, the GPC model has outperformed the other models (\(\uparrow\)9.67% accuracy over random forest and \(\uparrow\)13.26% accuracy over SVM) and is thus selected as the best model for PSD-based classification of FEP vs. control. The proposed GPC model has a comparatively small number of parameters and can be considered a 'shallow' learning model. The high accuracy of the GPC can be attributed to selecting a suitable covariance function for the input features. Other RBF kernels should also be considered for comparison. Deep recurrent neural network (RNN) models trained with time-frequency features, much like the recently proposed models for epilepsy classification, age prediction, and concussion classification [33, 34, 35], can hypothetically outperform this model. Another aspect that requires further analysis is the method for computing the PSD. Future studies should also consider Welch's method for computing the PSD to compare with the results of the DPSS method. Combining the CSD features with the PSD features can also provide insight into which electrode signals have the most significant impact on classification. This work can also be extended further for a spectrum-wide analysis of the schizophrenia spectrum.
## 4 Conclusion
In this study, we have evaluated the use of machine learning methods for the classification of patients with first-episode psychosis (FEP) and healthy controls based on the Power Spectral Density (PSD) of resting-state EEG. We have reviewed various feature engineering techniques and machine learning models to demonstrate that FEP patients can be accurately detected utilizing resting-state EEG. In addition, we have demonstrated that low-to-medium frequency (delta-to-sigma band) waves are pathological in FEP patients and can differentiate patients from healthy persons with the same degree of accuracy as task/event-related high-frequency waves. PSD is shown to be a reliable characteristic for the effective classification of FEP using machine learning. We conclude that resting-state EEG studies can lead to an accurate diagnosis of FEP/FESz and other psychiatric disorders and should be regarded as equally essential as stimulus-based EEG studies.
## Data Availability
The denoised and preprocessed data used in this work is available at [https://zenodo.org/record/7315010](https://zenodo.org/record/7315010) while the original _EEG: First Episode Psychosis vs. Control Resting Task 1_ dataset is available at doi:10.18112/openneuro.ds003944.v1.0.1.
|
2309.13976 | Heat transfer in drop-laden turbulence | Heat transfer by large deformable drops in a turbulent flow is a complex and
rich in physics system, in which drops deformation, breakage and coalescence
influence the transport of heat. We study this problem coupling direct
numerical simulations (DNS) of turbulence, with a phase-field method for the
interface description. Simulations are run at fixed shear Reynolds and Weber
numbers. To evaluate the influence of microscopic flow properties, like
momentum/thermal diffusivity, on macroscopic flow properties, like mean
temperature or heat transfer rates, we consider four different values of the
Prandtl number, which is the momentum to thermal diffusivity ratio: Pr=1, Pr=2,
Pr=4 and Pr=8. The drops volume fraction is Phi=5.4% for all cases. Drops are
initially warmer than the turbulent carrier fluid, and release heat at
different rates, depending on the value of Pr, but also on their size and on
their own dynamics (topology, breakage, drop-drop interaction). Computing the
time behavior of the drops and carrier fluid average temperatures, we clearly
show that an increase of Pr slows down the heat transfer process. We explain
our results by a simplified phenomenological model: we show that the time
behavior of the drops average temperature is self similar, and a universal
behavior can be found upon rescaling by t/Pr^2/3. | Francesca Mangani, Alessio Roccon, Francesco Zonta, Alfredo Soldati | 2023-09-25T09:22:29Z | http://arxiv.org/abs/2309.13976v1 | # Heat transfer in drop-laden turbulence
###### Abstract
Heat transfer by large deformable drops in a turbulent flow is a complex and rich in physics system, in which drops deformation, breakage and coalescence influence the transport of heat. We study this problem coupling direct numerical simulations (DNS) of turbulence, with a phase-field method for the interface description. Simulations are run at fixed shear Reynolds and Weber numbers. To evaluate the influence of microscopic flow properties, like momentum/thermal diffusivity, on macroscopic flow properties, like mean temperature or heat transfer rates, we consider four different values of the Prandtl number, which is the momentum to thermal diffusivity ratio: \(Pr=1\), \(Pr=2\), \(Pr=4\) and \(Pr=8\). The drops volume fraction is \(\Phi\simeq 5.4\%\) for all cases. Drops are initially warmer than the turbulent carrier fluid, and release heat at different rates, depending on the value of \(Pr\), but also on their size and on their own dynamics (topology, breakage, drop-drop interaction). Computing the time behavior of the drops and carrier fluid average temperatures, we clearly show that an increase of \(Pr\) slows down the heat transfer process. We explain our results by a simplified phenomenological model: we show that the time behavior of the drops average temperature is self similar, and a universal behavior can be found upon rescaling by \(t/Pr^{2/3}\). Accordingly, the heat transfer coefficient \(\mathcal{H}\) (resp. its dimensionless counterpart, the Nusselt number \(Nu\)) scales as \(\mathcal{H}\sim Pr^{-2/3}\) (resp. \(Nu\sim Pr^{1/3}\)) at the beginning of the simulation, and tends to \(\mathcal{H}\sim Pr^{-1/2}\) (resp. \(Nu\sim Pr^{1/2}\)) at later times. These different scalings can be explained via the boundary layer theory and are consistent with previous theoretical/numerical predictions.
## 1 Introduction
Transport of passive and active scalars in multiphase turbulence is very important in many industrial processes and natural phenomena, from vaporization of atomized fuel jets (Gorokhovski & Herrmann 2008; Ashgriz 2011; Gao _et al._ 2022; Boyd & Ling 2023), to rain formation and atmosphere-ocean heat/mass exchanges (Duguid & Stampfer Jr 1971; Deike 2022) or even to the uptake of nutrients and other biochemicals by cells in complex flows (Aksnes & Egge 1991; Magar & Pedley 2005). While the mixing of active or passive scalars in turbulent single-phase flows has been extensively analyzed (Antonia & Orlandi 2003; Kasagi _et al._ 1992; Kim & Moin 1989; Warhaft 2000; Pirozzoli _et al._ 2016; Zonta _et al._ 2012\(a\),_b_; Zonta & Soldati 2014), it remains a challenging task in turbulent multiphase flows (Gauding _et al._ 2022; Ni 2024), where most of the available studies considered the case of heat/mass transfer from/to isolated drops and bubbles (Boussinesq 1905; Levich 1962; Bird _et al._ 2002; Bothe _et al._ 2004; Figueroa-Espinoza & Legendre 2010), with some remarkable
The present study has three main objectives. First, we want to investigate the macroscopic dynamic of the drops and of the heat transfer process by analyzing the drop size distribution and the mean temperature behavior of the two phases over time. Second, we want to characterize the influence of the Prandtl number, i.e. of the microscopic flow properties, on the macroscopic flow properties (mean temperature, heat transfer coefficient) and, building on top of the numerical results, we want to develop a physically-based model to explain the observed results. Third, we want to study the influence of the Prandtl number and of drop
size on the temperature distribution inside the drops, so to evaluate the corresponding flow mixing/ homogenization.
The paper is organized as follows. In §2, the governing equations, the numerical method, and the simulation setup are presented. In §3, the simulation results, in terms of drop size distribution, mean temperature of the two phases and heat transfer coefficient, are carefully characterized and discussed. A simplified model is also developed to explain the observed results. The temperature distribution inside the drops is then evaluated at different Prandtl numbers and drop sizes. Finally, conclusions are presented in §4.
## 2 Methodology
We consider a swarm of large and deformable drops injected in a turbulent channel flow. The channel has dimensions \(L_{x}\times L_{y}\times L_{z}=4\pi h\times 2\pi h\times 2h\) along the streamwise (\(x\)), spanwise (\(y\)) and wall-normal direction (\(z\)). To describe the dynamics of the system, we couple direct numerical simulation (DNS) of the Navier-Stokes and energy equations, used to describe the turbulent flow, with a phase-field method (PFM), used to describe the interfacial phenomena. The employed numerical framework is described more in detail in the following.
### Phase-field method
To describe the dynamics of drops and the corresponding topological changes (e.g. coalescence and breakage), we employ an energy-based phase field method (Jacqmin, 1999; Badalassi _et al._, 2003), which is based on the introduction of a scalar quantity, the phase field \(\phi\), required to identify the two phases. The phase field \(\phi\) has a uniform value in the bulk of each phase (\(\phi=+1\) inside the drops; \(\phi=-1\) inside the carrier fluid) and undergoes a smooth change across the thin transition layer that separates the two phases. The transport of the phase field variable is described by a Cahn-Hilliard equation, which in dimensionless form reads as:
\[\frac{\partial\phi}{\partial t}+\mathbf{u}\cdot\nabla\phi=\frac{1}{Pe}\nabla ^{2}\mu_{\phi}\,+f_{p}, \tag{1}\]
where \(\mathbf{u}=(u,v,w)\) is the velocity vector, \(Pe\) is the Peclet number, \(\mu_{\phi}\) is the phase field chemical potential, while \(f_{p}\) is a penalty-flux term which will be further discussed later. The Peclet number is
\[Pe=\frac{u_{\tau}^{*}h^{*}}{\mathcal{M}^{*}\beta^{*}}\,, \tag{2}\]
where \(u_{\tau}^{*}\) is the friction velocity (\(u_{\tau}^{*}=\sqrt{\tau_{w}^{*}/\rho^{*}}\), with \(\tau_{w}^{*}\) the wall-shear stress and \(\rho^{*}=\rho_{c}^{*}=\rho_{d}^{*}\) the density of the fluids), \(h^{*}\) is the channel half-height, \(\mathcal{M}^{*}\) is the mobility and \(\beta^{*}\) is a positive constant (the superscript \({}^{*}\) is used to denote dimensional quantities hereinafter). The chemical potential \(\mu\) is defined as the variational derivative of a Ginzburg-Landau free-energy functional, the expression of which is chosen to represent an immiscible binary mixture of fluids (Soligo _et al._, 2019, 2019). The functional is the sum of two contributions: the first contribution, \(f_{0}\), accounts for the tendency of the system to separate into the two pure stable phases, while the second contribution, \(f_{mix}\), is a mixing term accounting for the energy stored at the interface (i.e. surface tension). The mathematical expression of the functional in dimensionless form is:
\[\mathcal{F}[\phi,\nabla\phi]=\int_{\Omega}\bigg{(}\underbrace{\frac{(\phi^{2} -1)^{2}}{4}}_{f_{0}}+\underbrace{\frac{Ch^{2}}{2}\left|\nabla\phi\right|^{2}}_ {f_{mix}}\bigg{)}\mathrm{d}\Omega\,, \tag{3}\]
where \(\Omega\) is the considered domain and \(Ch\) is the Cahn number, which represents the dimensionless thickness of the thin interfacial layer between the two fluids:
\[Ch=\frac{\xi^{*}}{h^{*}}\,, \tag{4}\]
where \(\xi^{*}\) is clearly the dimensional thickness of the interfacial layer. From equation (3), the expression of the chemical potential can be derived as the functional derivative with respect to the order parameter:
\[\mu_{\phi}=\frac{\delta\mathcal{F}[\phi,\nabla\phi]}{\delta\phi}=\phi^{3}-\phi-Ch^{2}\nabla^{2}\phi\,. \tag{5}\]
At equilibrium, the chemical potential is constant throughout the domain. The equilibrium profile for a flat interface can thus be obtained by solving \(\nabla\mu_{\phi}=\mathbf{0}\), hence:
\[\phi_{eq}=\tanh\left(\frac{s}{\sqrt{2}Ch}\right)\,, \tag{6}\]
where \(s\) is the coordinate normal to the interface. As anticipated before, the last term in the right-hand side of the Cahn-Hilliard equation (equation 1) is a penalty-flux term employed in the profile-corrected formulation of the phase-field method, and is used to overcome some potential drawbacks of the standard formulation of the method, e.g. mass leakages among the phases and misrepresentation of the interfacial profile (Yue _et al._, 2007; Li _et al._, 2016). This penalty flux is defined as:
\[f_{p}=\frac{\lambda}{Pe}\left[\nabla^{2}\phi-\frac{1}{\sqrt{2}Ch}\nabla\cdot \left((1-\phi^{2})\frac{\nabla\phi}{|\nabla\phi|}\right)\right]\,, \tag{7}\]
where \(\lambda=0.0625/Ch\)(Soligo _et al._, 2019_c_).
### Hydrodynamics
To describe the hydrodynamics of the multiphase system, the Cahn-Hilliard equation is coupled with the Navier-Stokes equations. The presence of a deformable interface (and of the corresponding surface tension forces) is accounted for by introducing an interfacial term in the Navier-Stokes equations. Recalling that in the present case we consider two fluids with the same density (\(\rho^{*}=\rho_{c}^{*}=\rho_{d}^{*}\)) and viscosity (\(\mu^{*}=\mu_{c}^{*}=\mu_{d}^{*}\)), continuity and Navier-Stokes equations in dimensionless form read as:
\[\nabla\cdot\mathbf{u}=0\,, \tag{8}\]
\[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}=-\nabla p +\frac{1}{Re_{\tau}}\nabla^{2}\mathbf{u}+\frac{3}{\sqrt{8}}\frac{Ch}{We}\nabla \cdot\mathbf{T_{c}}\,, \tag{9}\]
where \(p\) is the pressure field, while \(\mathbf{T_{c}}\) is the Korteweg tensor (Korteweg, 1901), used to account for the surface tension forces and defined as
\[\mathbf{T_{c}}=|\nabla\phi|^{2}\mathbf{I}-\nabla\phi\otimes\nabla\phi\,, \tag{10}\]
where \(\mathbf{I}\) is the identity matrix and \(\otimes\) represents the dyadic product. This approach is the continuum surface stress approach (Lafaurie _et al._, 1994; Gueyffier _et al._, 1999) applied in the context of PFM, and is analytically equivalent to the chemical potential forcing (Mirjalili _et al._, 2023). The dimensionless groups appearing in the Navier-Stokes equations are the shear Reynolds number \(Re_{\tau}\) (ratio between inertial and viscous forces), and the Weber number
\(We\) (ratio between inertial and surface tension forces), which are defined as:
\[Re_{\tau}=\frac{\rho^{*}u_{\tau}^{*}h^{*}}{\mu^{*}}\,,\qquad We=\frac{\rho^{*}{u_{ \tau}^{*}}^{2}h^{*}}{\sigma^{*}}\,. \tag{11}\]
where \(\sigma^{*}\) is the surface tension. Note that, consistently with the employed adimensionalization, \(We\) is defined using the half-channel height (and not the drop diameter).
### Energy equation
The time evolution of the temperature field is obtained by solving the energy equation using a one-scalar model approach (Zheng _et al._, 2015). To avoid the introduction of further complexity in the system, we consider two fluids with the same thermophysical properties, i.e. same thermal conductivity \(\lambda^{*}\), same specific heat capacity \(c_{p}^{*}\) and therefore same thermal diffusivity \(a^{*}=\lambda^{*}/\rho^{*}c_{p}^{*}\) (since the density of the two phases is also the same). These properties have been evaluated at a reference temperature \(\theta_{r}^{*}=(\theta_{d,0}^{*}+\theta_{c,0}^{*})/2\), i.e. the average between the initial drops temperature and the carrier fluid temperature, and are assumed to be constant and uniform. Within these assumptions, the energy equation written in dimensionless form reads as:
\[\frac{\partial\theta}{\partial t}+\mathbf{u}\cdot\nabla\theta=\frac{1}{Re_{ \tau}Pr}\nabla^{2}\theta\,, \tag{12}\]
where \(Pr\) is the Prandtl number defined as
\[Pr=\frac{\mu^{*}c_{p}^{*}}{\lambda^{*}}=\frac{\nu^{*}}{a^{*}}\,, \tag{13}\]
with \(\nu^{*}=\mu^{*}/\rho^{*}\) the kinematic viscosity (i.e. momentum diffusivity). From a physical viewpoint, \(Pr\) represents the momentum-to-thermal diffusivity ratio.
### Numerical discretization
The governing equations (1), (8), (9) and (12) are solved using a pseudo-spectral method, which uses Fourier series along the periodic directions (streamwise and spanwise) and Chebyshev polynomials along the wall-normal direction. The Navier-Stokes and continuity equations are solved using a wall-normal velocity-vorticity formulation: equation (9) is rewritten as a \(4^{th}\) order equation for the wall-normal component of the velocity \(w\) and a \(2^{nd}\) order equation for the wall-normal component of the vorticity \(\omega_{z}\)(Kim _et al._, 1987; Speziale, 1987). The Cahn-Hilliard equation (1), which in its original form is a \(4^{th}\) order equation, is split into two \(2^{nd}\) order equations using the splitting scheme proposed in Badalassi _et al._ (2003). Using this scheme, the governing equations are recast as a coupled system of Helmholtz equations, which can be readily solved.

The governing equations are advanced in time using an implicit-explicit scheme. For the Navier-Stokes equations, the linear part is integrated using a Crank-Nicolson implicit scheme, while the non-linear part is integrated using an Adams-Bashforth explicit scheme. Similarly, for the Cahn-Hilliard and energy equations, the linear terms are integrated using an implicit Euler scheme, while the non-linear terms are integrated in time using an Adams-Bashforth scheme. The adoption of the implicit Euler scheme for the Cahn-Hilliard equation helps to damp unphysical high-frequency oscillations that could arise from the steep gradients of the phase field.
As the characteristic length scales of the flow and temperature fields, represented by the Kolmogorov scale, \(\eta_{k}^{+}\), and the Batchelor scale, \(\eta_{\theta}^{+}\), are different when non-unitary Prandtl numbers are employed (being these two quantities linked by the following relation
\(\eta_{\theta}^{+}=\eta_{k}^{+}/\sqrt{Pr}\)), a dual grid approach is employed to reduce the computational cost of the simulations and at the same time to fulfill the DNS requirements. In particular, when super-unitary Prandtl numbers are simulated, a finer grid is used to resolve the energy equation. Spectral interpolation is used to upscale/downscale the fields from the coarse to the refined grid and vice-versa when required (e.g. upscaling of the velocity field to compute the advection terms in the energy equation).
This numerical scheme has been implemented in a parallel Fortran 2003 MPI in-house proprietary code. The parallelization strategy is based on a 2D domain decomposition to divide the workload among all the MPI tasks. The solver execution is accelerated using openACC directives and CUDA Fortran instructions, while the Nvidia cuFFT libraries are used to accelerate the execution of the Fourier/Chebyshev transforms. Overall, the adopted computational method allows for the accurate resolution of all the governing equations and achieves excellent parallel efficiency thanks to the fine-grain parallelism offered by the numerical scheme. The equivalent computational cost of the simulations is about 25 million CPU hours, and the resulting dataset has a size of about 16 TB.
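To illustrate the semi-implicit spectral treatment of the stiff Cahn-Hilliard operator described above, the following is a minimal one-dimensional periodic sketch in Python; the parameter values are illustrative, the nonlinear term is advanced with a simple explicit Euler step instead of Adams-Bashforth, and the flow coupling and penalty flux are omitted:

```python
import numpy as np

# 1D periodic Cahn-Hilliard: d(phi)/dt = (1/Pe) * lap(mu), with
# mu = phi^3 - phi - Ch^2 * lap(phi).  The 4th-order (biharmonic) term
# is treated implicitly in Fourier space; the nonlinear term explicitly.
N, L, Ch, Pe, dt = 256, 2.0 * np.pi, 0.05, 50.0, 1e-3
k = np.fft.rfftfreq(N, d=L / N) * 2.0 * np.pi          # wavenumbers
rng = np.random.default_rng(0)
phi = 0.05 * rng.standard_normal(N)                    # small random noise

for _ in range(20000):
    nl_hat = np.fft.rfft(phi**3 - phi)                 # explicit part of mu
    phi_hat = (np.fft.rfft(phi) - dt / Pe * k**2 * nl_hat) \
              / (1.0 + dt / Pe * Ch**2 * k**4)         # implicit biharmonic term
    phi = np.fft.irfft(phi_hat, n=N)

# phi separates into regions near +/-1 joined by tanh-like interfaces (Eq. 6)
print(phi.min(), phi.max())
```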
### Boundary conditions
The system of governing equations is complemented by a set of suitable boundary conditions. For the Navier-Stokes equations, no-slip boundary conditions are enforced at the top and bottom walls (located at \(z=\pm h\)):
\[\mathbf{u}(z=\pm h)=\mathbf{0}\,. \tag{14}\]
For the Cahn-Hilliard equation, no-flux boundary conditions are applied at the two walls, yielding to the following boundary conditions:
\[\frac{\partial\phi}{\partial z}(z=\pm h)=0\,;\qquad\frac{\partial^{3}\phi}{ \partial z^{3}}(z=\pm h)=0\,. \tag{15}\]
Likewise, for the energy equation, no-flux boundary conditions are applied at the two walls (i.e. adiabatic walls).
\[\frac{\partial\theta}{\partial z}(z=\pm h)=0\,. \tag{16}\]
Along the streamwise and spanwise directions (\(x\) and \(y\)), periodic boundary conditions are imposed for all variables (Fourier discretization). The adoption of these boundary conditions leads to the conservation of the phase field and temperature fields over time:
\[\frac{\partial}{\partial t}\int_{\Omega}\phi\mathrm{d}\Omega=0\,;\qquad\frac{ \partial}{\partial t}\int_{\Omega}\theta\mathrm{d}\Omega=0\,. \tag{17}\]
where \(\Omega\) is the computational domain. Regarding the phase field, equation (17) enforces mass conservation of the entire system but does not guarantee the conservation of the mass of each phase (Yue _et al._, 2007; Soligo _et al._, 2019), as some leakages between the phases may occur. This drawback is rooted in the phase-field method, and more specifically in the curvature-driven flux produced by the chemical potential gradients (Kwakkel _et al._, 2020; Mirjalili & Mani, 2021). This issue is successfully mitigated here by the adoption of the profile-corrected formulation, which largely reduces this phenomenon. In the present cases, mass leakage between the phases occurs only in the initial transient, when the phase field is initialized (see the section below for details on the initial condition), and is limited to 2% of the initial mass of the drops. After this initial transient, the mass of each phase remains constant.
### Simulation set-up
The turbulent channel flow, driven by an imposed constant pressure gradient in the streamwise direction, has shear Reynolds number \(Re_{\tau}=300\). The computational domain has dimensions \(L_{x}\times L_{y}\times L_{z}=4\pi h\times 2\pi h\times 2h\), which corresponds to \(L_{x}^{+}\times L_{y}^{+}\times L_{z}^{+}=3770\times 1885\times 600\) wall units. The value of the Weber number is kept constant and is equal to \(We=3.00\), so to be representative of liquid/liquid mixtures (Than _et al._ 1988). To study the influence of the Prandtl number \(Pr\) on the heat transfer process, we consider four different values of \(Pr\): \(Pr=1\), \(Pr=2\), \(Pr=4\) and \(Pr=8\). These values cover a wide range of real-case scenarios: from low-Prandtl number fluids to water-toluene mixtures.
The grid resolution used to resolve the continuity, Navier-Stokes and Cahn-Hilliard equations is equal to \(N_{x}\times N_{y}\times N_{z}=1024\times 512\times 513\) for all the cases considered in this work. For the energy equation, the same grid used for the flow field and phase field is employed at the lower Prandtl numbers (\(Pr=1\) and \(Pr=2\)), while a more refined grid, with \(N_{x}\times N_{y}\times N_{z}=2048\times 1024\times 513\) points, is used when the larger Prandtl numbers are considered (\(Pr=4\) and \(Pr=8\)). The computational grid has uniform spacing in the homogeneous directions, while Chebyshev-Gauss-Lobatto points are used in the wall-normal direction. We refer the reader to Table 1 for an overview of the main physical and computational parameters of the simulations. For the employed grid resolution, the Cahn number is set to \(Ch=0.01\) while, to achieve convergence to the sharp interface limit, the corresponding phase field Peclet number is \(Pe=1/Ch=50\).
All simulations are initialized releasing a regular array of 256 spherical drops with diameter \(d=0.4h\) (corresponding to \(d^{+}=120\)\(w.u.\)) inside a fully-developed turbulent flow field (obtained from a preliminary simulation). To ensure the independence of the results from the initial flow field condition, each case is initialized with a slightly different flow field realization. Naturally, the fields are equivalent in terms of statistics as they are all obtained from a statistically steady turbulent channel flow. The volume fraction of the drops is \(\Phi=V_{d}/(V_{c}+V_{d})=5.4\%\), with \(V_{d}\) and \(V_{c}\) the volume of the drops and carrier fluid, respectively.
The initial condition for the temperature field is such that all drops are initially warm (initial temperature \(\theta_{d,0}=1\)), while the carrier fluid is initially cold (initial temperature \(\theta_{c,0}=0\)). To avoid numerical instabilities that might arise from a discontinuous temperature field, the transition between drops and carrier fluid is initially smoothed using a hyperbolic tangent kernel. Figure 1 (which is an instantaneous snapshot captured at \(t^{+}=1000\), for \(Pr=1\)) shows a volume rendering of the temperature field (blue-cold, red-hot), inside which deformable drops (whose interface, iso-level \(\phi=0\), is shown in white) are transported.
\begin{table}
\begin{tabular}{c c c c c c}
\hline
Case & \(Re_{\tau}\) & \(We\) & \(Pr\) & \(N_{x}\times N_{y}\times N_{z}\) (NS+CH) & \(N_{x}\times N_{y}\times N_{z}\) (Energy) \\
\hline
Single-phase & 300 & - & - & \(512\times 256\times 257\) & - \\
Drop-laden & 300 & 3.0 & 1.0 & \(1024\times 512\times 513\) & \(1024\times 512\times 513\) \\
Drop-laden & 300 & 3.0 & 2.0 & \(1024\times 512\times 513\) & \(1024\times 512\times 513\) \\
Drop-laden & 300 & 3.0 & 4.0 & \(1024\times 512\times 513\) & \(2048\times 1024\times 1025\) \\
Drop-laden & 300 & 3.0 & 8.0 & \(1024\times 512\times 513\) & \(2048\times 1024\times 1025\) \\
\hline
\end{tabular}
\end{table}
Table 1: Overview of the simulation parameters. For a fixed shear Reynolds number \(Re_{\tau}=300\) and Weber number \(We=3\), we consider a single-phase flow case and four non-isothermal drop-laden flows characterised by different Prandtl numbers: from \(Pr=1\) to \(Pr=8\). The grid resolution is modified accordingly so as to satisfy DNS requirements.
## 3 Results
Results obtained from the numerical simulations will first be discussed from a qualitative viewpoint, by looking at instantaneous flow and drop visualizations, and then analyzed from a more quantitative viewpoint, by looking at the drop size distribution (DSD) and at the effect of the Prandtl number \(Pr\) on the average drop and fluid temperatures. To explain the numerical results, and to offer a possible parametrization of the heat transfer process in drop-laden flows, we will also develop a simplified phenomenological model of the system. Finally, we will characterize the temperature distribution inside the drops, elucidating the effects of \(Pr\) and of the drop size on it. Note that, unless otherwise mentioned, results are presented using the wall-unit scaling system, except for the temperature field, which is made dimensionless using the initial temperature difference as a reference scale (a natural choice in the present case).
### Qualitative discussion
The complex dynamics of drops immersed in a non-isothermal turbulent flow is visualized in figure 1, where the drops (identified by isocontour of \(\phi=0\)) are shown together with a volume-rendered distribution of temperature in the carrier fluid. Also shown in figure 1 is a close-up view of the temperature distribution inside the drop.
Once injected into the flow, each drop starts interacting with the flow and with the
Figure 1: Rendering of the computational setup employed for the simulations. A swarm of large and deformable drops is released in a turbulent channel flow. The flow goes from left to right, driven by a constant pressure gradient. The temperature field is volume-rendered (blue-low, red-high) and the drop interfaces are shown in white (iso-level \(\phi=0\)). As can be appreciated from the close-up view (top left, which shows the temperature field in a drop section), drops have a temperature higher than the carrier fluid and, as a result, there is a heat flux from the drops to the carrier fluid. The snapshot refers to \(Pr=1\) and \(t^{+}=1000\).
neighbouring drops. The result of the drop-turbulence and drop-drop interactions is the occurrence of breakage and coalescence events. A breakage event happens when the flow vigorously stretches the drop, leading to the formation of a thin ligament that breaks and generates two child drops. Upon separation, surface tension forces tend to retract the broken filaments and to restore the spherical drop shape. A coalescence event is observed when two drops come close to each other. The small liquid film that separates the drops starts to drain, and a coalescence bridge is formed. Later, surface tension forces enter the picture, reshaping the drop and completing the coalescence process. The dynamic competition between breakage and coalescence events, and their interaction with the turbulent flow, determines the number of drops, their size distribution, and their shape/morphology (i.e., curvature, interfacial area, etc.).
In the present case, drops not only exchange momentum with the flow and with the other drops, but also heat. Starting from an initial condition characterized by warm drops (with uniform temperature) and cold carrier fluid, and because of the imposed adiabatic boundary conditions, the system evolves towards an equilibrium isothermal state. During the transient to attain this thermal equilibrium state, heat is transported by diffusion and advection inside each of the two phases, and across the interface of the drops (see the temperature field inside and outside the drops, figure 1). The picture is then further complicated by the occurrence of breakage and coalescence events. This is represented in figure 2. When a breakage occurs (figure 2, top row), a thin filament is generated (figure 2a-c), which then leads to the formation of a smaller satellite drop (figure 2d-e). The filament and the satellite drop, given the large surface-to-volume ratio, exchange heat very efficiently, and become rapidly colder. In contrast,
Figure 2: Influence of topology changes on heat transfer: time sequence of a breakage event (top row, panels _a-e_) and of a coalescence event (bottom row, panels _f-k_). In the top row, we can appreciate how, during a breakage event, heat is efficiently transferred from the drops to the carrier fluid in the pinch-off region, thanks to the high surface/volume ratio of this region. Likewise, the thermal relaxation time of the small drops generated during the breakage (bottom) is smaller than that of the parent drop (top). In the bottom row, we can appreciate how, during a coalescence event, parcels of fluid with different temperatures mix. The two time sequences refer to the case \(Pr=1\) and snapshots are separated by \(\Delta t^{+}=15\).
Figure 3: Instantaneous visualization of the temperature field (red-hot; blue-cold) on a \(x^{+}-y^{+}\) plane located at the channel center for \(t^{+}=1500\). Drop interfaces (iso-level \(\phi=0\)) are reported using white lines. Each panel refers to a different Prandtl number: \(Pr=1\) (panel \(a\)), \(Pr=2\) (panel \(b\)), \(Pr=4\) (panel \(c\)) and \(Pr=8\) (panel \(d\)). By increasing the Prandtl number (from top to bottom), and thus decreasing the thermal diffusivity, the heat transfer process slows down. As a consequence, the drop temperature is higher when larger Prandtl numbers are considered. The effects of the Prandtl number on the characteristic length scales can also be appreciated: as it increases, temperature structures become smaller.
when a coalescence occurs (figure 2, bottom row), two drops having different temperatures merge together. This induces an efficient mixing process, during which cold parcels of one drop become warmer and, vice versa, warm parcels of the other drop become colder. Overall, breakup and coalescence events induce heat transfer modifications that are in general hard to predict a priori, since they depend on the relative sizes of the parent/child drops involved.
Naturally, the problem of heat transfer in drop-laden turbulence is strongly influenced by the Prandtl number of the flow. This can be appreciated by looking at figure 3, where we show the instantaneous temperature field, together with the shape of the drops, at a certain instant in time (\(t^{+}=1500\)) and at the different Prandtl numbers: \(Pr=1\) (panel \(a\)), \(Pr=2\) (panel \(b\)), \(Pr=4\) (panel \(c\)) and \(Pr=8\) (panel \(d\)). In each panel, the temperature field is shown on a wall-parallel \(x^{+}\)-\(y^{+}\) plane located at the channel center (\(z^{+}=0\)), and is visualized with a blue-red scale (blue-low, red-high). We observe that the temperature field changes significantly with \(Pr\). In particular, we notice an increase in the drop-to-fluid temperature difference for increasing \(Pr\), going from \(Pr=1\) (top panel), where this difference is small, to \(Pr=8\) (bottom panel), where this difference is large. The heat transfer from the drops to the carrier fluid becomes slower as \(Pr\) increases, consistently with a physical situation in which the \(Pr\) number is increased by reducing the thermal diffusivity of the fluid while keeping the momentum diffusivity constant (i.e. constant kinematic viscosity, and hence constant shear Reynolds number). Also, the temperature structures, both inside and outside the drops, become thinner and more convoluted at higher \(Pr\), since their characteristic lengthscale, the Batchelor scale \(\eta_{\theta}^{+}\propto Pr^{-1/2}\), becomes smaller for increasing \(Pr\) (Batchelor, 1959; Batchelor _et al._, 1959). In addition, smaller drops have, on average, a lower temperature compared to larger drops, regardless of the value of \(Pr\). All these aspects will be discussed in more detail in the next sections.
### Drop Size Distribution
To characterize the collective dynamics of the drops, we compute the drop size distribution (DSD) at steady-state conditions, averaging over a time window \(\Delta t^{+}=3000\), from \(t^{+}=3000\) to \(6000\). It is worth mentioning that a quasi-equilibrium DSD, very close to the steady one, is already achieved around \(t^{+}\simeq 750\), and only minor changes occur to the DSD afterwards.
Figure 4 shows the DSD obtained for the different cases considered here: \(Pr=1\) (dark violet), \(Pr=2\) (violet), \(Pr=4\) (pink), and \(Pr=8\) (light pink). The distributions have been computed considering, for each drop, the diameter of the equivalent sphere computed as:
\[d_{eq}^{+}=\left(\frac{6V^{+}}{\pi}\right)^{1/3}\,, \tag{3.1}\]
where \(V^{+}\) is the volume of the drop. Also reported in figure 4 is the Kolmogorov-Hinze scale, \(d_{H}^{+}\), which can be computed as (Perlekar _et al._, 2012; Roccon _et al._, 2017; Soligo _et al._, 2019_a_):
\[d_{H}^{+}=0.725\left(\frac{We}{Re_{\tau}}\right)^{-3/5}|\epsilon_{c}|^{-2/5}\,, \tag{3.2}\]
where \(\epsilon_{c}\) is the turbulent dissipation, here evaluated at the channel center where most of the drops collect because of their deformability (Lu & Tryggvason, 2007; Soligo _et al._, 2020; Mangani _et al._, 2022). The Kolmogorov-Hinze scale identifies the critical diameter below which drop breakage is unlikely to occur (Kolmogorov, 1941; Hinze, 1955). Separated by the Kolmogorov-Hinze scale, we observe the emergence of two different regimes. For drops smaller than the Kolmogorov-Hinze scale, we find the coalescence-dominated regime (left, gray area), in which drops, which are smaller than the critical scale, are generally not prone to break. For drops larger than the Kolmogorov-Hinze scale, we find the breakup-dominated
regime (right, white area), in which drop breakup is likely to happen. Each regime is characterized by a specific scaling law, which describes the behavior of the drop number density as a function of the drop size (Garrett _et al._, 2000; Deane & Stokes, 2002; Chan _et al._, 2021): \(PDF\sim (d_{eq}^{+})^{-3/2}\) below the Kolmogorov-Hinze scale, and \(PDF\sim (d_{eq}^{+})^{-10/3}\) above it. The two scalings are represented by dot-dashed lines in figure 4.
We note that, for equivalent diameters above the Hinze scale, our results follow the theoretical scaling law quite well and match the drop/bubble size distributions obtained in the literature for similar flow configurations (Deike _et al._, 2016; Di Giorgio _et al._, 2022; Soligo _et al._, 2021; Deike, 2022; Crialesi-Esposito _et al._, 2023). Below the Hinze scale, for equivalent diameters in the range \(25<d_{eq}^{+}<d_{H}^{+}\), our results match the theoretical scaling law reasonably well. For equivalent diameters \(d_{eq}^{+}<25\) \(w.u.\), we observe an underestimation of the DSD compared to the proposed scaling. This is linked to the grid resolution, and in particular to the difficulty of describing very small drops (Soligo _et al._, 2021).
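To make the post-processing concrete, the following is a minimal sketch of how the quantities entering the DSD analysis can be computed from a list of drop volumes; the assumption that drop volumes come from a connected-component labeling of the region \(\phi>0\) is ours (the text does not detail this step), while the constant in equation 3.2 and the two scaling exponents are those given above.

```python
import numpy as np

def equivalent_diameter(V_plus):
    """Equivalent-sphere diameter d_eq^+ from the drop volume V^+ (equation 3.1)."""
    return (6.0 * np.asarray(V_plus) / np.pi) ** (1.0 / 3.0)

def kolmogorov_hinze_scale(We, Re_tau, eps_c):
    """Kolmogorov-Hinze scale d_H^+ (equation 3.2); eps_c is the turbulent
    dissipation evaluated at the channel center."""
    return 0.725 * (We / Re_tau) ** (-3.0 / 5.0) * abs(eps_c) ** (-2.0 / 5.0)

def drop_size_distribution(d_eq, n_bins=30):
    """Normalized DSD on a logarithmic grid, to be compared against the
    (d_eq^+)^(-3/2) and (d_eq^+)^(-10/3) scaling laws."""
    d_eq = np.asarray(d_eq)
    bins = np.logspace(np.log10(d_eq.min()), np.log10(d_eq.max()), n_bins)
    pdf, edges = np.histogram(d_eq, bins=bins, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])  # geometric bin centers
    return centers, pdf
```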
### Mean temperature of drops and carrier fluid
We now focus on the average temperature of the drops and of the carrier fluid. We consider the ensemble of all drops as one phase and the carrier fluid as the other phase (using the value of the phase field as a phase discriminator), and we compute the average temperature of each phase. The time evolution of the drop and carrier fluid temperatures, \(\overline{\theta}_{d}\) and \(\overline{\theta}_{c}\) respectively, is shown in figure 5 for the different values of \(Pr\). Together with the results obtained from the present direct numerical simulations (filled symbols), figure 5 also shows the predictions obtained from a simplified phenomenological model (solid lines), the details of which will be described and discussed later (see section 3.4). We start by considering the DNS results only. As expected, we observe that the average temperature of the drops (violet to pink symbols) decreases in time, while the average temperature of the carrier fluid (blue to
Figure 4: Steady-state drop size distributions (DSD) obtained for: \(Pr=1\) (dark violet, circles), \(Pr=2\) (violet, squares), \(Pr=4\) (pink, diamonds) and \(Pr=8\) (light pink, triangles). The Kolmogorov-Hinze (KH) scale \(d_{H}^{+}\) is reported with a vertical dashed line, while the two analytical scaling laws, \((d_{eq}^{+})^{-3/2}\) for the coalescence-dominated regime (small drops, gray region) and \((d_{eq}^{+})^{-10/3}\) for the breakage-dominated regime (larger drops, white region), are reported with dash-dotted lines.
cyan symbols) increases in time, until the thermodynamic equilibrium, at which both phases have the same temperature, is asymptotically reached. For this reason, simulations have been run long enough for the average temperature of both phases to be sufficiently close to the equilibrium temperature. In particular, we stopped the simulations at \(t^{+}\simeq 6000\), when the condition
\[\frac{(\overline{\theta}_{d}-\theta_{eq})}{(\theta_{d,0}-\theta_{eq})}\leqslant 0.05\,, \tag{3.3}\]
with \(\theta_{d,0}\) the initial temperature of the drops, is satisfied by all simulations. The equilibrium temperature, \(\theta_{eq}\), can be easily estimated a priori: since the two walls are adiabatic and the homogeneous directions are periodic, the energy of the system is conserved over time. The energy balance can be written as:
\[m_{d}^{*}c_{p}^{*}\theta_{d,0}^{*}+m_{c}^{*}c_{p}^{*}\theta_{c,0}^{*}=(m_{d}^{*}+m_{c}^{*})c_{p}^{*}\theta_{eq}^{*}\,, \tag{3.4}\]
where \(m_{c}^{*}\) and \(m_{d}^{*}\) are the masses of the carrier fluid and of the drops; \(\theta_{d,0}^{*}\) and \(\theta_{c,0}^{*}\) are the physical values of the initial temperatures of the two phases, and \(c_{p}^{*}\) is the specific heat capacity. Considering that the two phases have equal density and specific heat capacity, we obtain:
\[V_{d}^{*}c_{p}^{*}\theta_{d,0}^{*}+V_{c}^{*}c_{p}^{*}\theta_{c,0}^{*}=(V_{d}^{*}+V_{c}^{*})c_{p}^{*}\theta_{eq}^{*}. \tag{3.5}\]
Recalling the definition of volume fraction, \(\Phi=V_{d}/(V_{d}+V_{c})\), and making the equation dimensionless, we finally get:
\[\theta_{eq}=\theta_{c,0}(1-\Phi)+\theta_{d,0}\Phi\, \tag{3.6}\]
which is represented by the horizontal dashed line in figure 5. Figure 5 also provides a clear indication that the higher the Prandtl number, the longer the time it takes for the system to reach the equilibrium temperature, \(\theta_{eq}\). The trend can be observed for both the drops and the carrier fluid, as the two phases are mutually coupled (the heat released by the drops is absorbed by the carrier fluid). This result confirms our previous qualitative observations, see figure 3 and the discussion therein, that a large \(Pr\) (small thermal diffusivity) reduces the heat released by the drops. It is also interesting to observe that the behavior of the mean temperature of the two phases appears self-similar at the different \(Pr\).
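As a concrete check, for the present parameters (\(\Phi=5.4\%\), \(\theta_{d,0}=1\), \(\theta_{c,0}=0\)), equation 3.6 gives

\[\theta_{eq}=\theta_{c,0}(1-\Phi)+\theta_{d,0}\Phi=0\cdot(1-0.054)+1\cdot 0.054=0.054\,,\]

which is the value of the equilibrium temperature referred to in the following.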
### A phenomenological model for heat transfer rates in droplet laden flows
In an effort to provide a possible interpretation of the previous results (and in particular to explain the average temperature behavior shown in figure 5), we develop a simple, physically-sound model of the heat transfer process in drop-laden turbulence. We start by considering the heat transfer from a single drop of diameter \(d^{*}\) to the surrounding fluid:
\[m_{d}^{*}c_{p}^{*}\frac{\partial\theta_{d}^{*}}{\partial t^{*}}=\mathcal{H}^{ *}A_{d}^{*}\left(\theta_{c}^{*}-\theta_{d}^{*}\right)\, \tag{3.7}\]
where \(m_{d}^{*}\), \(A_{d}^{*}\), and \(c_{p}^{*}\) are the mass, external surface, and specific heat of the drop, \(\mathcal{H}^{*}\) is the heat transfer coefficient, while \(\theta_{d}^{*}\) and \(\theta_{c}^{*}\) are the drop and carrier fluid temperatures. The heat transfer coefficient can be estimated as the ratio between the thermal conductivity of the external fluid, \(\lambda^{*}\), and a reference length scale, here represented by the thermal boundary layer thickness \(\delta_{t}^{*}\):
\[\mathcal{H}^{*}\sim\lambda^{*}/\delta_{t}^{*}. \tag{3.8}\]
With this assumption, and recalling that \(\rho^{*}=\rho_{c}^{*}=\rho_{d}^{*}\), equation 3.7 becomes:
\[\frac{\partial\theta_{d}^{*}}{\partial t^{*}}=\frac{6}{Pr}\frac{\nu^{*}}{d^{*}\delta_{t}^{*}}\left(\theta_{c}^{*}-\theta_{d}^{*}\right). \tag{3.9}\]
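The factor \(6/Pr\) follows from taking the drop as a sphere, for which \(m_{d}^{*}=\rho^{*}\pi d^{*3}/6\) and \(A_{d}^{*}=\pi d^{*2}\):

\[\frac{\mathcal{H}^{*}A_{d}^{*}}{m_{d}^{*}c_{p}^{*}}\sim\frac{\lambda^{*}}{\delta_{t}^{*}}\,\frac{\pi d^{*2}}{\rho^{*}c_{p}^{*}\,\pi d^{*3}/6}=\frac{6\,a^{*}}{d^{*}\delta_{t}^{*}}=\frac{6}{Pr}\frac{\nu^{*}}{d^{*}\delta_{t}^{*}}\,,\]

where \(a^{*}=\lambda^{*}/(\rho^{*}c_{p}^{*})\) is the thermal diffusivity and \(Pr=\nu^{*}/a^{*}\).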
Reportedly (Schlichting & Gersten, 2016, 218), the thermal boundary layer thickness, \(\delta_{t}^{*}\), can be expressed as \(\delta_{t}^{*}=\delta^{*}Pr^{-\alpha}\) where \(\delta^{*}\) is the momentum boundary layer thickness, and \(\alpha\) is an exponent that depends on the flow condition in the proximity of the boundary where the boundary layer evolves. In particular, the exponent \(\alpha\) ranges from \(\alpha=1/3\) for no slip conditions, usually assumed for solid particles, to \(\alpha=1/2\), usually assumed for clean gas bubbles. For an in-depth discussion on the topic, we refer the reader to appendix A. As a consequence, the heat transfer rate observed from drops/bubbles is expected to be larger than that observed from solid particles, since the no-slip boundary condition generally weakens the flow motion near the interface (Levich, 1962; Herlina & Wissink, 2016; Bird _et al._, 2002). We can now rewrite the equation of the model in dimensionless form, using the initial drop-to-carrier fluid temperature difference \(\Delta\theta^{*}=\theta_{d,0}^{*}-\theta_{c,0}^{*}\) as reference temperature, and \(\nu^{*}/u_{\tau}^{*2}\) as reference time:
\[\frac{\partial\theta_{d}}{\partial t^{+}}=6Re_{\delta}^{-1}Pr^{-1+\alpha}\left(d^{+}\right)^{-1}\left(\theta_{c}-\theta_{d}\right)\,, \tag{3.10}\]
where \(d^{+}\) is the drop diameter in wall units, while \(Re_{\delta}=u_{\tau}^{*}\delta^{*}/\nu^{*}\) is the Reynolds number based on the boundary layer thickness (which can be assumed constant among the different cases). Equation 3.10 can be rewritten as:
\[\frac{\partial\theta_{d}}{\partial t^{+}}=\mathcal{C}\,Pr^{-1+\alpha}\left(d^{+}\right)^{-1}\left(\theta_{c}-\theta_{d}\right)\,, \tag{3.11}\]
where \(\mathcal{C}\) is a constant whose value depends only on the flow structure, i.e. on \(Re_{\delta}\). Equation 3.11 describes the heat released by a single drop of dimensionless diameter \(d^{+}\). Assuming now that the turbulent flow is laden with drops of different diameters, the general equation
Figure 5: Time evolution of the mean temperature of drops (violet to pink colors, different symbols) and carrier fluid (blue to cyan colors, different symbols) for the different Prandtl numbers considered. DNS results are reported with full circles while the predictions obtained from the model are reported with continuous lines. The equilibrium temperature of the system, \(\theta_{eq}\), is reported with a horizontal dashed line. In general, it can be observed that by increasing the Prandtl number (corresponding to a decrease of the thermal diffusivity), the heat transfer process between the two phases slows down and more time is required to achieve the equilibrium condition.
describing the heat released by the \(i\)-th drop of diameter \(d_{i}^{+}\) becomes:
\[\frac{\partial\theta_{d,i}}{\partial t^{+}}=\mathcal{C}\,Pr^{-1+\alpha}\,\left(d_{i}^{+}\right)^{-1}\,\left(\theta_{c}-\theta_{d,i}\right)\,=\mathcal{F}_{i}\,, \tag{3.12}\]
where \(\mathcal{F}_{i}\) is the lumped-parameter representation of the right-hand side of the temperature evolution equation for the \(i\)-th drop. As widely observed in the literature (Deane & Stokes 2002; Soligo _et al._ 2019\(c\)), and also confirmed by the present study (figure 4), we can hypothesize an equilibrium drop size distribution (DSD) by which the number density of drops scales as \((d^{+})^{-3/2}\) in the sub-Hinze range of diameters (\(10<d^{+}<110\)), and as \((d^{+})^{-10/3}\) in the super-Hinze range of diameters (\(110<d^{+}<240\)). With this assumption, and considering 7 diameter classes for the sub-Hinze range and 4 classes for the super-Hinze range, we can integrate equation 3.12 to obtain the time evolution of the temperature of each drop:
\[\theta_{d,i}^{n+1}=\theta_{d,i}^{n}+\Delta t^{+}\mathcal{F}_{i}\,. \tag{3.13}\]
From a weighted average of the temperatures (based on the number of drops in each class, as per the theoretical DSD), we obtain the average temperature of the drops, \(\overline{\theta}_{d}\).
To obtain the mean temperature of the carrier fluid, we consider that, owing to the adiabatic condition at the walls, the heat released by the drops is entirely absorbed by the carrier fluid. The heat released by the drops with a certain diameter \(d_{i}^{*}\) can be computed as:
\[Q_{i}^{*}=m_{d}^{*}c_{p}^{*}\,\frac{\partial\theta_{d}}{\partial t^{+}}N_{d}^ {*}(i)\,, \tag{3.14}\]
where \(N_{d}^{*}(i)\) is the number of drops for that specific diameter (as per the DSD). The overall heat released by all drops can be calculated as the summation over all different classes of diameters:
\[Q_{tot}^{*}=\sum_{i=1}^{N_{c}}Q_{i}^{*}\,, \tag{3.15}\]
where \(N_{c}\) is the employed number of classes. Finally, the mean temperature of the carrier fluid is
\[\overline{\theta}_{c}^{*,n+1}=\overline{\theta}_{c}^{*,n}+\Delta t^{+}\frac{Q_{tot}^{*}}{m_{c}^{*}c_{p}^{*}}\,. \tag{3.16}\]
In dimensionless form (dividing by the initial drop-to-carrier fluid temperature difference \(\Delta\theta^{*}\)), equation 3.16 becomes:
\[\overline{\theta}_{c}^{n+1}=\overline{\theta}_{c}^{n}+\Delta t^{+}Q_{tot}\,. \tag{3.17}\]
The results of the model are shown in figure 5.
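For reference, a minimal sketch of the model integration is reported below. The diameter-class centers within the sub- and super-Hinze ranges and the \(O(1)\) value of the calibration constant \(\mathcal{C}\) are illustrative assumptions (the text does not specify them); the class weights follow the two equilibrium DSD scalings, matched at the Hinze scale, and the update is the explicit Euler step of equations 3.12-3.17. Mass (volume) weights are used for the averages here, so that the scheme conserves the total energy by construction.

```python
import numpy as np

def simulate_model(Pr, alpha=1.0 / 3.0, C=1.0, Phi=0.054,
                   theta_d0=1.0, theta_c0=0.0, dt=1.0, t_end=6000.0):
    """Phenomenological model: Euler integration of eqs. 3.12-3.17.
    C (calibration constant) and the class centers are illustrative choices."""
    # 7 sub-Hinze classes (10 < d+ < 110) and 4 super-Hinze classes (110 < d+ < 240)
    d = np.concatenate([np.linspace(15.0, 105.0, 7), np.linspace(125.0, 230.0, 4)])
    # number densities from the equilibrium DSD, matched at d_H+ = 110
    n = np.where(d < 110.0, d ** (-1.5), 110.0 ** (11.0 / 6.0) * d ** (-10.0 / 3.0))
    w = n * d ** 3                       # mass (volume) weight of each class
    w /= w.sum()
    theta_d = np.full_like(d, theta_d0)
    theta_c = theta_c0
    history = []
    for step in range(int(t_end / dt)):
        F = C * Pr ** (-1.0 + alpha) * (theta_c - theta_d) / d   # eq. 3.12
        dQ = -(w * F).sum() * dt         # heat released by the drops this step
        theta_d = theta_d + dt * F       # eq. 3.13
        theta_c = theta_c + dQ * Phi / (1.0 - Phi)  # all heat goes to the carrier
        history.append(((step + 1) * dt, (w * theta_d).sum(), theta_c))
    return np.array(history)             # columns: t+, mean drop T, carrier T
```

With \(\mathcal{C}\) tuned against one DNS case, the same call at different \(Pr\) should reproduce the trend of figure 5; since the scheme conserves energy, both phases asymptote to \(\theta_{eq}=0.054\) by construction.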
Interestingly, under the simplified hypotheses of the model (chiefly, the spherical shape of the drops and a constant drop size distribution evaluated at equilibrium), we observe that the behavior of the mean temperature is very well captured by the model (solid lines in figure 5) when the drop temperature equation is taken as
\[\frac{\partial\theta_{d}}{\partial t^{+}}=CPr^{-2/3}\,\left(d^{+}\right)^{-1} \,\left(\theta_{c}-\theta_{d}\right)\,, \tag{3.18}\]
i.e. when \(\alpha=1/3\), a value typical of boundary layers around solid objects (i.e. solid particles). Reasons for this might be the presence of wake/sheltering effects between drops, but also the fact that drops are strongly advected by the mean flow, so that the flow condition at the drop surface can differ from the slip one and is in general not simple to evaluate. Given the relationship \(\partial\theta_{d}/\partial t\sim Pr^{-2/3}\) postulated by the model (equation 3.18), which provides results in very good agreement with the numerical ones, it seems reasonable to rescale the
time variable as:
\[\tilde{t}^{+}=\frac{t^{+}}{Pr^{(1-\alpha)}}=\frac{t^{+}}{Pr^{(2/3)}}\,. \tag{3.19}\]
A representation of the DNS results in terms of the rescaled time, equation 3.19, is shown in figure 6. We observe a nice collapse of the two sets of curves (drops and carrier fluid) for the different values of \(Pr\), which clearly demonstrates the self-similar behavior of \(\overline{\theta}\). For this reason, the rescaled time \(\tilde{t}^{+}=t^{+}/Pr^{2/3}\) will also be used in the following.
### Heat transfer from particles and drops/bubbles
It is now important to discuss the behavior of the heat transfer coefficient (and its dimensionless counterpart, the Nusselt number \(Nu\)), also in the context of available literature results. Naturally, similar considerations can be made to evaluate the mass transfer coefficient, in particular at liquid/gas interfaces (Levich, 1962; Bird _et al._, 2002).
For solid particles, a balance between the convective time scale near the surface and the diffusion time scale gives a heat transfer coefficient (Krishnamurthy & Subramanian, 2018):
\[\mathcal{H}^{*}\propto Pr^{-2/3}\,, \tag{3.20}\]
and the corresponding Nusselt number:
\[Nu\propto Re^{\beta}Pr^{1/3}\,, \tag{3.21}\]
where \(\beta\) is an exponent that depends on the flow conditions and links the boundary layer thickness to the particle Reynolds number. Usually, \(\beta=1/3\) for small Reynolds numbers
Figure 6: Time evolution of the mean temperature of drops (violet to pink colors) and carrier fluid (blue to cyan colors) for the different Prandtl numbers considered, obtained from DNS and reported against the dimensionless time \(\tilde{t}^{+}=t^{+}/Pr^{2/3}\). The equilibrium temperature of the system, \(\theta_{eq}\), is reported with a horizontal dashed line. The DNS results reported over the new dimensionless time nicely collapse on top of each other, highlighting the self-similarity of the \(\overline{\theta}_{c,d}\) profiles.
(Krishnamurthy & Subramanian 2018) while \(\beta=1/2\) for large Reynolds numbers (Ranz 1952; Whitaker 1972; Michaelides 2003).
Using similar arguments (balance between convective and diffusion time scales), but considering now that at the surface of a drop/bubble a slip velocity, and therefore a certain degree of advection, can be observed (Levich 1962; Bird _et al._ 2002; Herlina & Wissink 2016), the heat transfer coefficient is found to scale as:
\[\mathcal{H}^{*}\propto Pr^{-1/2}\,, \tag{3.22}\]
and the corresponding Nusselt number as:
\[Nu\propto Re^{\beta}Pr^{1/2}\,, \tag{3.23}\]
where also in this case the exponent \(\beta\) depends on the considered Reynolds number. Two regimes are usually defined (Theofanous _et al._ 1976): a low Reynolds number regime, for which \(\beta=1/2\), and a high Reynolds number regime, for which \(\beta=3/4\). An alternative approach, which gives similar predictions, is to use the penetration theory of Higbie (1935), in which turbulent fluctuations are invoked to estimate a flow exposure (or contact) time, which is then used to compute the heat/mass transfer coefficient. Such an approach has been widely used in bubble-laden flows (Colombet _et al._ 2011; Herlina & Wissink 2014, 2016; Farsoiya _et al._ 2021).
We can now evaluate the heat transfer coefficient from our DNS at different \(Pr\), and compare it to the proposed scaling laws. Note that the heat transfer coefficient is obtained as:
\[\mathcal{H}=\frac{(\overline{\theta}_{d}^{n+1}-\overline{\theta}_{d}^{n})}{A \Delta t(\overline{\theta}_{d}^{n+1/2}-\overline{\theta}_{c}^{n+1/2})}\,, \tag{3.24}\]
where the numerator represents the temperature difference of the drops between the time steps \(n\) and \(n+1\), while the denominator represents the temperature difference between the drop and the carrier fluid evaluated halfway in time between step \(n\) and \(n+1\) (i.e. at \(n+1/2\)). The quantity \(A\) is the total interfacial area between drops and carrier fluid, while \(\Delta t\) is the time step used to evaluate the heat transfer. Here, we have evaluated the heat transfer coefficient taking the heat released by the drops as reference; an equivalent result, but with opposite sign, can be obtained using the heat absorbed by the carrier fluid as reference, and taking into account the different volume fraction of the two phases.
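A sketch of this evaluation from time series of the phase-averaged temperatures is given below; using the midpoint value of the interfacial area is our assumption, as the text does not specify how \(A\) is sampled in time.

```python
import numpy as np

def heat_transfer_coefficient(theta_d, theta_c, A, dt):
    """Discrete heat transfer coefficient of equation 3.24 from time series of
    the mean drop/carrier temperatures and of the total interfacial area A."""
    theta_d, theta_c, A = map(np.asarray, (theta_d, theta_c, A))
    d_theta_d = theta_d[1:] - theta_d[:-1]            # numerator: drop T change
    theta_d_mid = 0.5 * (theta_d[1:] + theta_d[:-1])  # values at n + 1/2
    theta_c_mid = 0.5 * (theta_c[1:] + theta_c[:-1])
    A_mid = 0.5 * (A[1:] + A[:-1])
    # with this convention H < 0 while the drops cool; the sign cancels once
    # the result is normalized by its value at Pr = 1, as done in figure 7
    return d_theta_d / (A_mid * dt * (theta_d_mid - theta_c_mid))
```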
The dimensionless heat transfer coefficient, equation 3.24, is shown as a function of \(Pr\), and at different time instants, in figure 7. For a better comparison, the results are normalized by the value of the heat transfer coefficient for \(Pr=1\). The two reference scaling laws, \(\mathcal{H}\sim Pr^{-2/3}\) obtained for \(\alpha=1/3\) and \(\mathcal{H}\sim Pr^{-1/2}\) obtained for \(\alpha=1/2\) are also shown by a dotted and a dashed line. We note that at the beginning of the simulations (see for example \(t^{+}=250\)), the heat transfer coefficient is close to \(\mathcal{H}\sim Pr^{-2/3}\), while at later times it tends towards \(\mathcal{H}\sim Pr^{-1/2}\), hence approaching the scaling law proposed for heat/mass transfer in gas-liquid flows (Levich 1962; Magnaudet & Eames 2000; Bird _et al._ 2002; Herlina & Wissink 2014, 2016; Colombet _et al._ 2018; Farsoiya _et al._ 2021).
A possible explanation is that, as time advances, the shape of the drops becomes more complex, and coalescence/breakup events more frequent, thus inducing an overall surface decrease that is associated with a heat transfer increase. This is reflected in a heat transfer process that is slower at the beginning, \(\mathcal{H}^{*}\sim Pr^{-2/3}\), and faster at later times, \(\mathcal{H}^{*}\sim Pr^{-1/2}\).
### Influence of the drop size on the average drop temperature
In the previous sections, we have studied the behavior of the mean temperature field of the drops and of the carrier fluid, considered as single entities. However, while this description is perfectly reasonable for the carrier fluid, which can be considered a continuum, it can
be questionable for the drops, which are not a continuum phase by nature. We now take the dispersed nature of the drops into account and we evaluate, for each drop, the equivalent diameter and the corresponding mean temperature.
This is shown in figure 8, where the average temperature of each drop (represented by a dot) is reported as a function of its equivalent diameter, at different time instants (between \(t^{+}=1050\) and \(t^{+}=2400\)). Each panel refers to a different Prandtl number. Note that, at \(t^{+}=2400\), the case \(Pr=1\) has almost reached the thermodynamic equilibrium (figure 5). It is clearly visible that, regardless of the considered time, small drops have an average temperature close to the equilibrium one. This is particularly visible at smaller Prandtl numbers, i.e. when heat transport is faster, but it can be observed also at larger \(Pr\). In contrast, the average temperature of larger drops is higher. Hence, the average temperature of the drops seems directly proportional to the drop size, as can be argued considering that the heat released by the drop, and hence its temperature reduction, is \(\partial\theta_{d}/\partial t\propto d^{-1}\) (equation 3.12). It is therefore not surprising that the scatter plot at a given time instant is characterized by dots distributing in a stripe-like fashion, with a slope that decreases with time. This behavior is observed at all \(Pr\), although the range of drop temperatures (y axis) at small \(Pr\) is definitely narrower (because of their larger heat loss) compared to that at large \(Pr\). It is also interesting to note, in particular at \(Pr=4\) and \(Pr=8\) (panels \(c\) and \(d\)), the presence of drops with a temperature smaller than the equilibrium one (dots falling below the horizontal line that marks the equilibrium temperature). We can link this behavior to the small relaxation time of small drops, which therefore adapt quickly to the local temperature of the carrier fluid; this can be smaller than the equilibrium one for two main reasons. First, at the early stages of the simulations, and at high Prandtl numbers, the temperature of the carrier fluid is lower than the equilibrium one. Second, temperature fluctuations (of both negative and positive sign) are present also in the carrier fluid. These fluctuations, in the form of hot/cold striations, are more likely observed at large \(Pr\) (see the striation-like structures at \(Pr=8\) in figure 3\(d\)).
Figure 7: Time behavior of the dimensionless heat transfer coefficient for the different Prandtl numbers considered. Heat transfer coefficients are reported normalized by the value of the heat transfer coefficient obtained for \(Pr=1\) (at the same time instant). In this way, results obtained at different time instants can be conveniently compared. The two scaling laws that refer to \(\alpha=1/3\) and \(\alpha=1/2\) are also reported as references.
### Temperature distribution inside the drops
In many applications, in particular to evaluate mixing efficiency and flow homogeneity, not only the average temperature of the drops is important, but also its space and time distribution inside the drops. To characterize this, we now look at the PDF of the temperature fluctuations inside the drops,
\[\theta_{d}^{\prime}=\theta_{d}-\overline{\theta}_{d}\,, \tag{3.25}\]
where \(\theta_{d}\) is the local temperature inside the drop, and \(\overline{\theta}_{d}\) is the average temperature of all drops at a certain time (as per figure 5). Results are shown in figure 9. The first row of figure 9 shows the probability density function of \(\theta_{d}^{\prime}\) at different \(Pr\), and at two different time instants: \(t^{+}=600\) (panel \(a\), left) and \(t^{+}=1500\) (panel \(b\), right). The second row of figure 9 shows the PDFs obtained at two different rescaled time instants, \(\tilde{t}^{+}=t^{+}/Pr^{2/3}\): \(\tilde{t}^{+}=600\) (panel \(c\), left) and \(\tilde{t}^{+}=1500\) (panel \(d\), right).
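A minimal sketch of this diagnostic is given below, assuming the phase field is positive inside the drops (the text only specifies the interface as the iso-level \(\phi=0\), so the sign convention is our assumption):

```python
import numpy as np

def drop_temperature_fluctuation_pdf(theta, phi, bins=100):
    """PDF of theta'_d = theta_d - mean(theta_d) (equation 3.25), computed over
    all grid points lying inside the drops (phi > 0 assumed)."""
    inside = np.asarray(phi) > 0.0            # phase field as discriminator
    theta_in = np.asarray(theta)[inside]
    fluct = theta_in - theta_in.mean()        # fluctuation about the drop mean
    pdf, edges = np.histogram(fluct, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, pdf
```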
Considering first figure 9\(a\) (\(t^{+}=600\)), we notice that all PDFs have a rather regular shape, characterized by the presence of both positive and negative fluctuations (with respect to the average temperature), with a slight asymmetry towards the positive ones (positive fluctuations are more likely observed). A comparison between the curves obtained at different \(Pr\) shows
Figure 8: Scatter plot of the drop equivalent diameter \(d_{eq}^{+}\) against the drop average temperature \(\overline{\theta}_{d}\). Each dot represents a different drop while its color (black to gray colormap) identifies different times, from \(t^{+}=1050\) (black) up to \(t^{+}=2400\) (gray). Each panel refers to a different Prandtl number. A sketch showing drops of different equivalent diameters is reported in the upper part of panel \(a\). As can be appreciated, larger drops have larger relaxation times, and thus a longer transient is required to reach thermal equilibrium. This effect becomes more pronounced as the Prandtl number is increased and thus the overall heat transfer process slows down (smaller heat transfer coefficient).
that the range of temperature fluctuations is wider at larger \(Pr\). This is due to the small thermal diffusivity at large \(Pr\), which allows temperature fluctuations in the drop to survive much longer before they are damped and spread by diffusion. Naturally, at later times (figure 9\(b\), \(t^{+}=1500\)), the range of temperature fluctuations reduces. Indeed, as heat is transferred from the drops to the carrier fluid, the maximum temperature of the drops decreases, and so does the range of temperature fluctuations inside the drops. This trend is more pronounced for negative fluctuations, as the minimum temperature inside the drops is somehow bounded by the temperature of the carrier fluid (which increases only a little, from \(\overline{\theta}_{c,0}=0\) to \(\theta_{eq}=0.054\), during the simulation). This latter observation is visible in the shape of the PDFs at \(Pr=1,2\) and \(4\), since the system is closer to the thermal equilibrium at this time instant (the thermal equilibrium is identified in panel \(b\) by a vertical dashed line and marked with a label, \(\theta_{eq}^{Pr}\)): a sharp drop of the PDF, which does not significantly trespass the \(\theta_{eq}^{Pr}\) limit, is observed. In contrast, positive temperature fluctuations are subject to relatively weaker constraints (they are only bounded by the maximum initial temperature of the drops). This results in a PDF that becomes asymmetric and positively skewed. It is also interesting to observe the development of a pronounced peak about the equilibrium temperature \(\theta_{eq}^{Pr}\), which corresponds to the presence
Figure 9: Probability density function (PDF) of the temperature fluctuations, \(\theta_{d}^{\prime}=\theta_{d}-\overline{\theta}_{d}\) inside the drops. Each case is reported with a different color (violet to light pink) depending on the Prandtl number. The first row shows the PDFs obtained at two different time instants: \(t^{+}=600\) (panel \(a\)) and \(t^{+}=1500\) (panel \(b\)). The second row shows the PDFs obtained at two rescaled time instants \(\tilde{t}^{+}=600\) (panel \(c\)) and \(\tilde{t}^{+}=1500\) (panel \(d\)), where the rescaled time is computed as \(\tilde{t}^{+}=t^{+}/Pr^{2/3}\). For panels \(c\)-\(d\), the corresponding \(t^{+}\) is reported between brackets.
of small drops (generated by breakage events) that, given their small thermal relaxation time and heat capacity, almost immediately adapt to the equilibrium temperature (see also figure 2\(d\),\(f\)).
However, a discussion of the temperature fluctuations, captured from flows at different \(Pr\) after the same time \(t^{+}\) from the initial condition, could be misleading, because it compares flows at different thermal states (i.e. different average temperatures and different temperature gradients, see figure 5). To filter out this effect, we compute the PDFs of the temperature fluctuations at the same rescaled time instants \(\tilde{t}^{+}=t^{+}/Pr^{2/3}\). By doing this, all cases can be considered at similar thermal conditions (see also figure 6). The resulting PDFs, at \(\tilde{t}^{+}=600\) and \(\tilde{t}^{+}=1500\), are shown in figure 9\(c\)-\(d\). Note that, for the sake of clarity, the corresponding \(t^{+}\), which is different from case to case, is reported between brackets in the legend. In the rescaled time units, the different curves collapse quite nicely. The slight differences between the curves are due to the fact that, although the system is at the same thermal state (same \(\tilde{t}^{+}\)), it is at a different flow state (different \(t^{+}\)), i.e. the instantaneous drop size distributions are different. This gives the slightly larger negative fluctuations at larger \(Pr\) (which, being at a later flow state, is characterized by the presence of smaller and colder drops), and the slightly larger positive fluctuations at smaller \(Pr\) (which, being at an earlier flow state, is characterized by the presence of larger and warmer drops).
From a closer look at figure 9\(d\) (\(\tilde{t}^{+}=1500\)), we note very clearly the constraint set by the thermal equilibrium condition: the PDF cannot significantly trespass the limit represented by \(\theta_{eq}\) (vertical dashed line), which is very similar for all \(Pr\), given the similar thermal state. Also visible is the peak, already discussed in figure 9\(b\), that emerges very close to the equilibrium temperature \(\theta_{eq}\), and which is due to the presence of small drops that adapt quickly to the local temperature of the carrier fluid. As previously noticed in figure 9\(c\), the higher probability of finding small drops at lower \(Pr\) is also responsible for the narrowing of the PDF (reduction of positive temperature fluctuations).
## 4 Conclusions
In this work, we studied heat transfer in a turbulent channel flow laden with large and deformable drops. The drops are initially warmer than the carrier fluid and, as the simulations advance, heat is transferred from the drops to the carrier fluid. Simulations considered a fixed value of the Reynolds number, \(Re_{\tau}=300\), and of the Weber number, \(We=3\), and analyzed different Prandtl numbers, from \(Pr=1\) to \(Pr=8\). The Prandtl number was varied by changing the thermal diffusivity. The investigation is based on direct numerical simulation of turbulent heat transfer, coupled with a phase-field method used to describe interfacial phenomena. First, we focused on the drop dynamics, observing that after an initial transient (up to \(t^{+}=1000\)), the drop size distribution (DSD) reaches a quasi-equilibrium condition in which it follows the scaling \((d_{eq}^{+})^{-3/2}\) in the coalescence-dominated regime and \((d_{eq}^{+})^{-10/3}\) in the breakage-dominated regime. The threshold between the coalescence-dominated and the breakage-dominated regimes is represented by the Kolmogorov-Hinze scale. Then, we characterized the behavior of the average temperature of the drops and of the carrier fluid: as expected, the average temperature of the drops decreases in time, while the average temperature of the carrier fluid increases in time, until the equilibrium condition of uniform temperature in the whole system is reached. We clearly observed that the higher the Prandtl number, the longer the time it takes for the system to reach the equilibrium temperature. Interestingly, the time behavior of the temperature profiles of both drops and carrier fluid is self-similar. Building on top of these numerical results, we developed a phenomenological model that can accurately reproduce the time evolution of the mean temperatures at all Prandtl numbers considered here. This model gave us the opportunity to introduce a new self-similarity
variable (time, \(\tilde{t}^{+}\)) that accounts for the Prandtl number effect, and by which all results collapse on a single curve. In addition, we also computed the heat transfer coefficient \(\mathcal{H}\) (and its dimensionless counterpart, the Nusselt number \(Nu\)) and showed that it scales as \(\mathcal{H}\sim Pr^{-2/3}\) (which corresponds to a Nusselt number scaling \(Nu\sim Pr^{1/3}\)) at the beginning of the simulation, and tends to \(\mathcal{H}\sim Pr^{-1/2}\) (or, alternatively, \(Nu\sim Pr^{1/2}\)) at later times. These different scalings are consistent with previous literature predictions, and can be explained via boundary layer theory (appendix A). The effects of the Prandtl number on the temperature distribution inside the drops have also been investigated. We observe that, by increasing the Prandtl number, the PDFs become wider and thus large temperature fluctuations are more likely to be observed. Interestingly, when the PDFs are compared at the same rescaled time \(\tilde{t}^{+}\) (i.e. accounting for the Prandtl number effect), all curves collapse on top of each other, with only minor differences possibly due to the different instantaneous drop size distributions. The effect of the drop size was also discussed: small drops adapt faster to the equilibrium temperature, thanks to their small heat capacity, compared to larger drops. Finally, it must be pointed out that, since the different phases of a multiphase flow can have different thermophysical properties, the Prandtl numbers can also differ from phase to phase. This aspect, which was not considered in the present work, will be the topic of a future study. In addition, in the present work we have assumed a constant and uniform surface tension. However, in many circumstances, surface tension does depend on temperature, therefore inducing thermocapillary effects. This will also be the subject of a future investigation.
###### Acknowledgements.
We acknowledge EURO-HPC JU for awarding us access to Discoverer@Sofiatech, Bulgaria (Project ID: EHPC-REG-2022R01-048) and LUMI-C@LUMI, Finland (Project ID: EHPC-EXT-2022E01-003), and ISCRA for awarding us access to Leonardo (Project ID: HP10BUJEO5). FM gratefully acknowledges funding from the MSCA-ITN-EID project COMETE (Project No. 813948) and AR gratefully acknowledges funding from PRIN 2017 - Advanced Computations & Experiments for anisotropic particle transport in turbulent flows (ACE).
###### Declaration of interests.
The authors report no conflict of interest.
###### Data availability statement.
Data available on request from the authors.
###### Author ORCIDs.
Francesca Mangani, [https://orcid.org/0000-0001-7777-6665](https://orcid.org/0000-0001-7777-6665); Alessio Roccon, [https://orcid.org/0000-0001-7618-7797](https://orcid.org/0000-0001-7618-7797); Francesco Zonta, [https://orcid.org/0000-0002-3849-315X](https://orcid.org/0000-0002-3849-315X); Alfredo Soldati, [https://orcid.org/0000-0002-7515-7147](https://orcid.org/0000-0002-7515-7147)
###### Author contributions.
FM performed the simulations. FM and AR analysed the data. FZ developed the model. All authors contributed equally to writing the paper.
## Appendix A Effects of slip condition on the velocity and thermal boundary layer evolution
In this section, we derive and solve the equations that describe the evolution of the boundary layer on a heated flat plate that is parallel to a constant unidirectional flow.
In addition to the standard description of the boundary layer, where no-slip conditions on the plate are considered (Prandtl, 1905; Blasius, 1908), here we consider also the effect of a slip velocity on the velocity and thermal boundary layers (Martin & Boyd, 2006; Bhattacharyya _et al._, 2011; Aziz _et al._, 2014). Following the standard approach (Schlichting & Gersten, 2016), the continuity, Navier-Stokes and energy equations in 2D are:
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0\,, \tag{A1}\]
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=-\frac{1}{\rho}\frac{ \partial p}{\partial x}+v\frac{\partial^{2}u}{\partial y^{2}}\,,\] (A2)
\[u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}=a\frac{\partial^ {2}T}{\partial y^{2}}\,,\] (A3)
where \(x\) is the direction parallel to the wall, and \(y\) the direction normal to the wall, see figure 10. The boundary conditions, accounting also for the slip velocity, read as:
\[u(x,y=0)=k\frac{\partial u}{\partial y}(x,y=0)\,,\] (A4)
\[v(x,y=0)=0\,,\] (A5)
\[u(x,y\rightarrow+\infty)=u_{\infty}\,,\] (A6)
\[T(x,y=0)=T_{w}\,,\] (A7)
\[T(x,y\rightarrow+\infty)=T_{\infty}\,,\] (A8)
where \(k\) is a parameter that controls the amount of slip at the wall (no-slip for \(k=0\), up to free-slip for \(k\rightarrow+\infty\)), \(u_{\infty}\) and \(T_{\infty}\) are the free stream velocity and temperature, and \(T_{w}\) is the constant temperature of the flat plate.
To solve the system of equations, we use the method of similarity transformation. First, we consider the continuity and Navier-Stokes equations. Following Blasius (1908), we introduce the following similarity transformation:
\[\eta=y\sqrt{\frac{u_{\infty}}{\nu x}}\,.\] (A9)
We can define a dimensionless stream function, \(f(\eta)\), which depends only on the variable \(\eta\),
\[f(\eta)=\frac{\psi(x,y)}{\sqrt{u_{\infty}\nu x}}\,,\] (A10)
Figure 10: Sketch of the momentum and thermal boundary layer dynamics on a flat plate characterized by a uniform temperature, \(T_{w}\), larger than the free stream temperature, \(T_{\infty}\). In panel \(a\) no-slip conditions are enforced at the wall (corresponding to a slip parameter \(k=0\)) while in panel \(b\) partial slip is allowed at the wall. The qualitative behavior of the momentum and thermal boundary layer thickness is also shown for the two cases. Both panels refer to a super-unitary Prandtl number.
from which we can express the two dimensionless velocity components:
\[\frac{u}{u_{\infty}}=f^{\prime};\qquad\frac{v}{u_{\infty}}=\frac{1}{2}\sqrt{\frac{\nu}{u_{\infty}x}}(\eta f^{\prime}-f)\,, \tag{A11}\]
where \(f^{\prime}\) denotes the first derivative with respect to \(\eta\) (same notation is used for higher order derivatives). Upon substitution of these variables in the continuity and Navier-Stokes equations, we obtain the governing equation for the dimensionless stream function \(f(\eta)\):
\[f^{\prime\prime\prime}+\frac{1}{2}ff^{\prime\prime}=0\,, \tag{A12}\]
together with the boundary conditions:
\[f^{\prime}(\eta=0)=kf^{\prime\prime}(\eta=0)\,, \tag{A13}\]
\[f(\eta=0)=0\,, \tag{A14}\]
\[f^{\prime}(\eta\rightarrow+\infty)=1\,. \tag{A15}\]
Considering now the energy equation for the dimensionless temperature \(\theta\)
\[\theta=\frac{T-T_{\infty}}{T_{w}-T_{\infty}}\,, \tag{A16}\]
and using the similarity transformation, the governing equation for the dimensionless temperature becomes:
\[\theta^{\prime\prime}+\frac{1}{2}Prf\theta^{\prime}=0\,, \tag{A17}\]
where \(Pr=\nu/a\) is the Prandtl number, and the following boundary conditions are applied:
\[\theta(\eta=0)=1\,, \tag{A18}\]
\[\theta(\eta\rightarrow+\infty)=0\,. \tag{A19}\]
The governing equations A12 and A17, which constitute a boundary value problem, are solved numerically via a shooting method which, avoiding the direct imposition of the far-field boundary condition A6, stabilizes the computation over a wider range of \(\eta\). The equations are solved for different values of \(k\), from \(k=0\) (no-slip) up to \(k=5\), at which the velocity at the wall (\(\eta=0\)) is \(\simeq 70\%\) of the free stream velocity. The resulting velocity profiles (rotated by 90 degrees to be consistent with the sketch of figure 10) are shown in figure 11 for different values of \(k\). Panel \(a\) shows the effect of \(k\) on the streamwise component of the velocity, while panel \(b\) shows the effect of \(k\) on the temperature profile. All the results refer to \(Pr=1\), for which the temperature solution can be obtained as \(\theta=1-f^{\prime}\). For the no-slip case (\(k=0\)), the Blasius solution (velocity and temperature, shown by the red circles) is recovered. As expected, by increasing \(k\), the amount of slip at the plate increases. As a consequence, the temperature profiles are also modified, generating larger temperature gradients at the plate. This corresponds to a heat transfer increase, as also observed in previous studies (Martin & Boyd, 2006; Aziz _et al._, 2014).
Of specific importance in the context of the model developed in the present paper is the evaluation, as a function of the slip parameter \(k\) and for different values of \(Pr\), of the ratio between the velocity and the thermal boundary layer thicknesses, respectively defined as (Martin & Boyd, 2006):
\[\delta=\int_{0}^{+\infty}(1-f^{\prime})d\eta\,,\qquad\text{and}\qquad\delta_{t}=\int_{0}^{+\infty}\theta\, d\eta\,. \tag{A20}\]
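For completeness, a sketch of the numerical solution is given below; it uses a collocation boundary-value solver (rather than the shooting method adopted in the text) for brevity, with the domain truncation \(\eta_{max}\) and the initial guess as implementation choices of ours.

```python
import numpy as np
from scipy.integrate import solve_bvp, trapezoid

def slip_boundary_layer(k=0.0, Pr=1.0, eta_max=15.0):
    """Solve f''' + f f''/2 = 0 and theta'' + Pr f theta'/2 = 0 (eqs. A12, A17)
    with slip parameter k; returns (eta, f', theta, delta, delta_t)."""
    def odes(eta, y):
        f, fp, fpp, th, thp = y
        return np.vstack([fp, fpp, -0.5 * f * fpp, thp, -0.5 * Pr * f * thp])

    def bcs(y0, yinf):
        f0, fp0, fpp0, th0, _ = y0
        return np.array([f0,                 # f(0) = 0        (A14)
                         fp0 - k * fpp0,     # slip condition  (A13)
                         yinf[1] - 1.0,      # f'(inf) = 1     (A15)
                         th0 - 1.0,          # theta(0) = 1    (A18)
                         yinf[3]])           # theta(inf) = 0  (A19)

    eta = np.linspace(0.0, eta_max, 400)
    y_guess = np.zeros((5, eta.size))
    y_guess[0] = eta - (1.0 - np.exp(-eta))  # crude but consistent guess for f
    y_guess[1] = 1.0 - np.exp(-eta)          # guess for f'
    y_guess[2] = np.exp(-eta)                # guess for f''
    y_guess[3] = np.exp(-eta)                # guess for theta
    y_guess[4] = -np.exp(-eta)               # guess for theta'
    sol = solve_bvp(odes, bcs, eta, y_guess, max_nodes=10000)
    fp, th = sol.sol(eta)[1], sol.sol(eta)[3]
    delta = trapezoid(1.0 - fp, eta)         # momentum thickness (eq. A20)
    delta_t = trapezoid(th, eta)             # thermal thickness  (eq. A20)
    return eta, fp, th, delta, delta_t
```

For \(k=0\) and \(Pr=1\) this recovers the Blasius solution with \(\theta=1-f^{\prime}\), and sweeping \(k\) and \(Pr\) should reproduce the \(Pr^{-1/3}\) to \(Pr^{-1/2}\) transition of the ratio \(\delta_{t}/\delta\) shown in figure 12.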
The ratio \(\delta_{t}/\delta\) is shown in figure 12 as a function of \(Pr\) and for different values of the slip parameter \(k\) (different symbols). We notice that, when the no-slip condition is enforced (\(k=0\)), the ratio \(\delta_{t}/\delta\sim Pr^{-1/3}\), in agreement with the thermal boundary layer theory on flat plates (Schlichting & Gersten, 2016). However, when a slip condition is introduced at the wall (\(k>0\)), the ratio \(\delta_{t}/\delta\) relaxes onto the scaling \(\delta_{t}/\delta\sim Pr^{-1/2}\). This indicates that, at a given \(Pr\), the thermal boundary layer for the slip case becomes thinner compared to the no-slip case, and the heat transfer increases. In other words, heat transfer coefficients for drops/bubbles (slip surfaces) can be higher compared to the corresponding values for solid particles (no-slip surfaces) (Herlina & Wissink, 2016). In particular, based on the previous
Figure 11: Streamwise velocity (panel \(a\)) and temperature profiles (panel \(b\)) obtained for different values of the slip parameter \(k\). Results are reported rotated by \(90^{\circ}\) for the sake of better interpretation and are obtained considering \(Pr=1\). For the no-slip case (\(k=0\)), the classical Blasius solution available in the archival literature for the velocity, \(f^{\prime}\), and temperature, \(\theta=1-f^{\prime}\), is reported with red dots. By increasing the slip parameter \(k\), the velocity at the wall location \(\eta=0\) increases and larger temperature gradients are observed.
Figure 12: Ratio between the thermal and momentum boundary layer thickness as a function of the Prandtl number and the slip parameter \(k\). The scaling laws \(Pr^{-1/3}\) and \(Pr^{-1/2}\) are reported as reference. Moving from \(k=0\) (no-slip) to \(k=5\) (slip), for a given value of the Prandtl number, the thermal boundary layer becomes thinner thus leading to an increase of the heat transferred from the wall.
observations, and on the model developed in Sec. 3.4, we can obtain the following scalings for the heat transfer coefficients:
\[\mathcal{H}^{*}\propto Pr^{-2/3}\qquad\text{for no-slip}, \tag{A21}\] \[\mathcal{H}^{*}\propto Pr^{-1/2}\qquad\text{for free-slip}. \tag{A22}\]
|
2309.13962 | Egocentric RGB+Depth Action Recognition in Industry-Like Settings | Action recognition from an egocentric viewpoint is a crucial perception task
in robotics and enables a wide range of human-robot interactions. While most
computer vision approaches prioritize the RGB camera, the Depth modality -
which can further amplify the subtleties of actions from an egocentric
perspective - remains underexplored. Our work focuses on recognizing actions
from egocentric RGB and Depth modalities in an industry-like environment. To
study this problem, we consider the recent MECCANO dataset, which provides a
wide range of assembling actions. Our framework is based on the 3D Video SWIN
Transformer to encode both RGB and Depth modalities effectively. To address the
inherent skewness in real-world multimodal action occurrences, we propose a
training strategy using an exponentially decaying variant of the focal loss
modulating factor. Additionally, to leverage the information in both RGB and
Depth modalities, we opt for late fusion to combine the predictions from each
modality. We thoroughly evaluate our method on the action recognition task of
the MECCANO dataset, and it significantly outperforms the prior work. Notably,
our method also secured first place at the multimodal action recognition
challenge at ICIAP 2023. | Jyoti Kini, Sarah Fleischer, Ishan Dave, Mubarak Shah | 2023-09-25T08:56:22Z | http://arxiv.org/abs/2309.13962v1 | # Egocentric RGB+Depth Action Recognition in Industry-Like Settings
###### Abstract
Action recognition from an egocentric viewpoint is a crucial perception task in robotics and enables a wide range of human-robot interactions. While most computer vision approaches prioritize the RGB camera, the Depth modality--which can further amplify the subtleties of actions from an egocentric perspective--remains underexplored. Our work focuses on recognizing actions from egocentric RGB and Depth modalities in an industry-like environment. To study this problem, we consider the recent MECCANO dataset, which provides a wide range of assembling actions. Our framework is based on the 3D Video SWIN Transformer to encode both RGB and Depth modalities effectively. To address the inherent skewness in real-world multimodal action occurrences, we propose a training strategy using an exponentially decaying variant of the focal loss modulating factor. Additionally, to leverage the information in both RGB and Depth modalities, we opt for late fusion to combine the predictions from each modality. We thoroughly evaluate our method on the action recognition task of the MECCANO dataset, and it significantly outperforms the prior work. Notably, our method also secured first place at the multimodal action recognition challenge at ICIAP 2023.
## I Introduction
Recent advancements in action recognition have paved the way for numerous practical applications, ranging from behavioral studies [1] and sports analytics [2, 3] to visual security systems [4, 5, 6], and systems designed to detect falls in elderly individuals [7, 8]. In the realm of robotics, the capability to detect and interpret human actions and gestures is a crucial perception task. This is especially true when robots are expected to engage with humans and execute tasks across various sectors, including manufacturing, healthcare, and service robotics. Actions like pointing, reaching, or grasping become especially pivotal since they often relay valuable insights about the user's requirements and intentions.
While traditional video analysis captures the bulk of human behavior, it occasionally overlooks subtle nuances. This is where egocentric cameras prove beneficial. Offering a unique first-person perspective, these cameras provide a more intimate vantage point of human-object interactions and movements. This detailed view is essential in settings where robots need to work closely with humans and need to understand both their actions and the reasons behind them.
A common challenge in egocentric action recognition is the heavy focus on RGB data. Though RGB yields rich visual details, it falls short in conveying depth or spatial relationships. Integrating Depth with RGB fills this gap, providing a more holistic visual understanding. Depth modality offers information about the distance and relation between objects, which is very useful when interpreting actions from an egocentric point of view.
The combination of RGB and Depth data is showcased in the recent MECCANO dataset [9], which captures these elements in industry-like settings. The dataset captures various complex assembly actions of a toy-motorbike, as shown in Fig. 1. By using this dataset, our research aims to explore how these two modality of data can improve egocentric action recognition.
Prior work, such as UMDR [10], addresses the RGB+Depth action recognition challenge using a video data augmentation strategy. They built upon existing MixUp augmentations to provide temporal regularization. Although this method performs well on third-person datasets, its heavy reliance on augmentation limits generalization to real-world egocentric datasets.
For our approach, we first utilize a recent Swin3D [11] video encoder to capture spatio-temporal features from RGB and Depth modalities. We note that real-world multimodal data, associated with action occurrences, exhibit
Fig. 1: **Sample actions from the MECCANO dataset [9]. In both examples (a) and (b), the top row shows video frames from RGB modality and the bottom row shows corresponding frames from Depth modality. MECCANO dataset provides various actions from assembling the toy-motorbike in an industry-like setting.**
an inherent skewness, leading to a long-tailed action recognition scenario. In such cases, some action classes are prevalent and well-represented in the training data (e.g., check_booklet, align_screwdriver_to_screw, plug_rod, align_objects actions), while others are scarce (e.g., fit_rim_tire, put_nut, put_wheels_axle actions), leading to significant data imbalance. The inherent complexity of multimodal data, combined with such data imbalance, presents a formidable challenge for learning approaches. Based on the underlying principle of focal loss, which captures the relationship between tail (scarce) classes and their prediction difficulties, we propose an exponentially decaying variant of the focal loss modulating factor for the current task. It initially emphasizes learning from the hard misclassified examples and gradually adapts to the entire range of examples in the dataset. This annealing process encourages the model to strike a balance between focusing on the sparse set of hard samples and leveraging the information provided by the easier ones. Additionally, we opt for a late fusion strategy to combine the resultant probability distributions from the RGB and Depth modalities for the final action prediction.
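To make the training strategy concrete, a minimal PyTorch sketch is given below; the initial exponent gamma0 and the decay rate of the schedule are hypothetical placeholders (the paper's exact annealing schedule is not reproduced here), and averaging is one simple choice for fusing the two per-modality distributions.

```python
import torch
import torch.nn.functional as F

def decaying_focal_loss(logits, targets, epoch, num_epochs, gamma0=2.0, rate=3.0):
    """Focal loss whose modulating exponent gamma decays exponentially over
    training: strong focus on hard examples early, near cross-entropy late.
    gamma0 and rate are hypothetical values, not taken from the paper."""
    gamma = gamma0 * torch.exp(torch.tensor(-rate * epoch / num_epochs))
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

def late_fusion(probs_rgb, probs_depth):
    """Combine the per-modality class distributions by simple averaging."""
    return 0.5 * (probs_rgb + probs_depth)
```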
Our method is evaluated on the action recognition task of the MECCANO dataset, where it outperforms prior baselines by significant margins and establishes a new benchmark. Our method also secured first place in the multimodal action recognition challenge of the _22nd International Conference on Image Analysis and Processing_ (ICIAP) 2023.
To summarize, our primary contributions are:
* We propose a training framework for multimodal action recognition from an egocentric camera with RGB and Depth modalities.
* We introduce an exponentially decaying variant of focal loss modulating factor to address the challenges of inherent skewness of the multimodal data.
* Our method sets the benchmark with state-of-the-art results on the MECCANO dataset and has secured first place in the multimodal action recognition challenge at ICIAP 2023.
## II Related Work
Video understanding aims to extract meaningful spatio-temporal features from videos. This domain encompasses a wide range of problems such as video object detection, object tracking, action recognition, temporal action localization, action anticipation, repetition counting, and more.
### _Action Recognition_
Developments in action recognition have largely been driven by architectural changes and novel training paradigms. Among the many architectures that have emerged, convolution-based models like C3D [12], R3D [13], R2plus1D [14], and X3D [15] set the early benchmarks. Following this, transformer-based models such as TimeSformer [16], VTN [17], and VideoSWIN [11] came into the spotlight, offering innovative ways to process video data. Concurrently, the focus on data efficiency led to the exploration of paradigms like self-supervised learning [18, 19, 20, 21], semi-supervised learning [22, 23, 24], and few-shot recognition [25].
Diverse datasets have significantly propelled the field forward. Examples include Kinetics [26], HVU [27], HACS [28], MMiT [29], MultiSports [30], UCF101 [31], HMDB51 [32], and APPROVE [33]. It's worth noting that these datasets predominantly focus on the third-person view.
### _Egocentric Action Recognition_
With the increasing interest in a more intimate and user-centric perspective, several egocentric datasets have been introduced. EPIC-KITCHENS [34], for instance, offers a comprehensive view of daily kitchen activities over multiple days. Charades-Ego [35] takes a different approach, focusing on the joint modeling of first and third-person videos. The datasets Something-Something [36] and 100DOH [37] delve into the realm of hand-centric activities, offering insights into human hand interactions. HOMAGE [38] introduces the use of synchronized multi-view cameras, providing an enriched perspective that includes an egocentric view. Furthermore, Ego4D [39] captures daily-life activities from various global locations, adding a layer of diversity to the data.
### _Multimodal RGB+Depth Based Action Recognition_
While many datasets focus on learning from the visual (RGB) modality, depth cameras bring enhanced spatial recognition, the ability to discern intricate human-object interactions, and the capacity to capture subtleties often missed by RGB cameras. There is a growing body of work exploring action recognition combining RGB and Depth data. Pioneering datasets in this area include NTU RGB-D, a large-scale human interaction dataset [40], NvGesture, concentrating on touchless driver controls [41], and Chalearn IsoGD, focusing on gesture videos [42]. Methods leveraging both RGB and Depth cues have also been proposed [43, 44, 45, 46]. However, most of these resources are not egocentric and don't replicate industrial settings. This observation led us to focus on the MECCANO dataset [9], which emulates industry-like actions by showcasing the intricate assembly processes of a toy motorbike. It offers RGB, Depth, and an added Gaze modality for human eye tracking. In our study, we restrict our scope to the RGB and Depth modalities.
UMDR [10] emerges as a particularly relevant approach to our RGB+Depth based egocentric action recognition goal and has been submitted to the MECCANO challenge [47] (Table I). This method enhances the existing MixUp augmentation for videos by adding motion regularization. Our approach deviates significantly from UMDR. Rather than relying on video augmentations, we prioritize hard-to-classify tail classes before the head classes, addressing the skewness inherent in multimodal egocentric action occurrences.
## III Approach
### _Cross-Modal Fusion_
Fig. 2 provides comprehensive details of our proposed approach. Given a set of spatiotemporally aligned RGB and
Depth sequences that extend over \([t_{s},t_{e}]\), where \(t_{s}\) and \(t_{e}\) are the start and end times of the sequence, our goal is to predict the action class from \(\mathcal{O}=\{o_{1}\), \(o_{2}\),.., \(o_{K}\}\) associated with the sequence. In order to achieve this, we adopt an ensemble architecture comprising two dedicated Video Swin Transformer [11] backbones to process the RGB clip \(\mathcal{A}=\{A_{i}\), \(A_{i+1}\),...,\(A_{T}\}\) and the Depth clip \(\mathcal{B}=\{B_{i}\), \(B_{i+1}\),..,\(B_{T}\}\) independently. Here, \(i\) corresponds to a random index spanning between \(t_{s}\) and \(t_{e}\). The input video for each modality, of size \(T\times H\times W\times 3\), results in token embeddings of dimension \(\frac{T}{2}\times H_{d}\times W_{d}\times 8C\). We pass this representation, retrieved from _stage-4_ of the base feature network, to our newly added fully connected layer and fine-tune the overall network. The final prediction is derived by averaging the two probability distributions obtained as output from the RGB and Depth pathways, as sketched below.
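The following is a minimal sketch (our illustration, not the authors' released code) of this late-fusion step. The backbone objects `f_rgb` and `f_depth` are hypothetical stand-ins for the two fine-tuned Swin3D-B encoders, each mapping a clip of shape `(N, C, T, H, W)` to class logits of shape `(N, K)`:

```python
import torch

@torch.no_grad()
def late_fusion_predict(f_rgb, f_depth, rgb_clip, depth_clip):
    # Softmax each modality's logits into class probability distributions,
    # then average them (late fusion) and take the argmax as the action class.
    p_rgb = torch.softmax(f_rgb(rgb_clip), dim=1)
    p_depth = torch.softmax(f_depth(depth_clip), dim=1)
    p_fused = 0.5 * (p_rgb + p_depth)
    return p_fused.argmax(dim=1), p_fused
```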
_Video Swin Transformer._ We utilize the Video Swin Transformer [11] backbone to extract spatiotemporal features from input sequences. The input sequence, with dimensions \(T\times H\times W\times 3\), is divided into non-overlapping 3D patches/tokens of size \(2\times 4\times 4\times 3\). Next, the 3D Patch Embedding layer generates \(\frac{T}{2}\times\frac{H}{4}\times\frac{W}{4}\) tokens, each with a 96-dimensional representation. A Linear Embedding layer is subsequently applied to project these features into a \(C\)-dimensional space. Thereafter, no subsampling takes place along the temporal dimension. However, in the patch merging layer, features from adjacent patches, organized into \(2\times 2\) spatial groups, are fused together. This fusion is followed by a linear layer that reduces the dimension of the concatenated features by half. Within the Swin3D block, a specific attention mechanism known as the 3D shifted window-based multi-head self-attention module is employed. This is followed by a 2-layer MLP, with GELU [48] non-linearity in between. The 3D shifted window-based multi-head self-attention module enables attention computation within each 3D window and facilitates cross-window connections, while efficiently calculating self-attention based on non-overlapping windows. The hierarchical processing of patches and the self-attention mechanisms provided by the Video Swin Transformer enable effective capture of complex spatiotemporal patterns.
### _Focal Loss: Exponentially Decaying Modulating Factor_
Focal loss [49] is a variant of cross-entropy loss with a modulating factor that down-weights the impact of easy examples and focuses on the hard ones. It therefore tends to prevent bias towards data-rich classes and improves the performance on scarce categories.
Multi-classification cross-entropy (CE) loss is given by:
\[L_{CE}=-\sum_{j=1}^{K}y_{j}\log(p_{j}) \tag{1}\]
where \(K\) is the number of action classes, and \(y_{j}\) and \(p_{j}\) denote the ground-truth label and predicted probability, respectively, for the \(j^{th}\) class.
On the other hand, the key objective of focal loss [49] is defined as:
\[L_{Focal}=-\sum_{j=1}^{K}{y_{j}(1-p_{j})^{\gamma}\log{p_{j}}} \tag{2}\]
In our work, we use focal loss \(L_{Focal}\), and exponentially decay the modulating factor \(\gamma\) from \(\gamma_{init}=2\) to \(\gamma_{fin}=0.1\) over the entire training duration.
The exponential interpolation/decay of \(\gamma\) over total epochs \(Z\) is defined by:
\[\gamma_{curr}=\gamma_{init}*(\gamma_{fin}/\gamma_{init})^{(z_{curr}/Z)} \tag{3}\]
Fig. 2: **Overview of our framework.** The RGB frames \(\{A_{i}\), \(A_{i+1}\),..,\(A_{T}\}\) and Depth frames \(\{B_{i}\), \(B_{i+1}\),..,\(B_{T}\}\) are passed through two independently trained Swin3D-B [11] encoders \(f_{\theta_{1}}\) and \(f_{\theta_{2}}\) respectively to generate feature tokens. The resultant class probabilities, obtained from each pathway, are averaged to subsequently yield action classes. An exponentially decaying focal loss \(L_{Focal}\) is leveraged to deal with the long-tailed distribution exhibited by the data.
where \(\gamma_{curr}\) refers to the current value of the modulating factor at a specific epoch \(z_{curr}\).
When \(\gamma=0\), the objective function is equivalent to cross-entropy loss. Our proposed annealing process for \(\gamma\) allows the model to focus on the sparse set of hard examples in the early stage of training and gradually shift its focus towards easy examples. This configuration is essential to ensure that the model learns meaningful representations and generalized decision boundaries.
_Note:_ For ablation purposes, we also use a linear interpolation of \(\gamma\), given by:
\[\gamma_{curr}=\gamma_{init}+(\gamma_{fin}-\gamma_{init})*(z_{curr}/Z) \tag{4}\]
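Below is a minimal PyTorch sketch (our illustration, not the authors' released code) of Eqs. (2)-(4): the focal loss evaluated at the ground-truth class, together with the exponential and linear schedules for \(\gamma\). All function and variable names are hypothetical:

```python
import torch
import torch.nn.functional as F

def gamma_exponential(z_curr: int, Z: int, g_init: float = 2.0, g_fin: float = 0.1) -> float:
    # Eq. (3): exponential decay of the modulating factor over Z total epochs.
    return g_init * (g_fin / g_init) ** (z_curr / Z)

def gamma_linear(z_curr: int, Z: int, g_init: float = 2.0, g_fin: float = 0.1) -> float:
    # Eq. (4): linear interpolation, used only for the ablation study.
    return g_init + (g_fin - g_init) * (z_curr / Z)

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float) -> torch.Tensor:
    # Eq. (2): -(1 - p_t)^gamma * log(p_t), where p_t is the predicted
    # probability of the ground-truth class; averaged over the batch.
    log_p = F.log_softmax(logits, dim=1)                        # (N, K)
    log_p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # (N,)
    p_t = log_p_t.exp()
    return ((1.0 - p_t) ** gamma * (-log_p_t)).mean()

# Typical use inside the training loop (epoch index z, total epochs Z):
#   gamma = gamma_exponential(z, Z)
#   loss = focal_loss(model(clip), labels, gamma)
```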
## IV Experimental Setup
### _Dataset_
We utilize the MECCANO dataset, which comprises egocentric human-object interaction videos in an industrial setting. This recently introduced dataset, enriched with First Person Vision (FPV) information, holds a wide range of practical applications. It includes Gaze signals, Depth maps, and RGB videos, offering 20 sequences collected from 20 different participants, with 299,376 annotated frames covering 61 Action classes and 20 Object classes. For our work, we only utilize the action labels with the RGB and Depth modalities, and we use the standard train-val-test split of 55%-10%-35%.
### _Data-preprocessing_
We resize the frames to a width of \(256\), preserving the aspect ratio of the original image, followed by a random crop of \(224\times 224\). We build a clip of 16 consecutive frames and apply the random cropping consistently across the clip. In the case of shorter sequences, we pad the sequence with the last frame; a sketch of this pipeline is given below.
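The following is a minimal sketch of this preprocessing, assuming frames arrive as a list of PIL images; note that `torchvision`'s integer `resize` scales the shorter edge, which we use here as an approximation of the width-based resize described above:

```python
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def build_clip(frames, clip_len=16, crop_size=224, resize_to=256):
    """frames: list of PIL images for one action segment."""
    # Pad short sequences by repeating the last frame.
    if len(frames) < clip_len:
        frames = frames + [frames[-1]] * (clip_len - len(frames))
    frames = frames[:clip_len]
    # Resize with aspect ratio preserved.
    frames = [TF.resize(f, resize_to) for f in frames]
    # Sample one random crop and apply it consistently across the whole clip.
    i, j, h, w = T.RandomCrop.get_params(frames[0], (crop_size, crop_size))
    frames = [TF.crop(f, i, j, h, w) for f in frames]
    # Stack into a (C, T, H, W) tensor as expected by a video backbone.
    return torch.stack([TF.to_tensor(f) for f in frames], dim=1)
```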
### _Training_
We use the Swin3D-B [11] backbone, which is pre-trained on the Something-Something v2 [50] dataset. We adopt focal loss [49] and modify it with the exponentially decaying modulating factor \(\gamma\) for training the classification model. For optimization, we use the AdamW optimizer with a learning rate of \(3\times 10^{-4}\). Our model converges in just 20 epochs on the MECCANO dataset.
### _Evaluation Metrics_
Following the standard performance metrics of the MECCANO dataset [9], we report the _Top-1_ and _Top-5_ classification accuracy as our evaluation metrics. Additionally, to demonstrate the effectiveness of employing our focal loss variant for this task, we present the class-weighted performance measures across classes: _Precision_, _Recall_, and _F1-score_. In our case, we calculate the _F1-score_ for each class individually and then compute a weighted average based on class prevalence. This weighted _F1-score_ accounts for class imbalance, ensuring appropriate importance is given to each class's performance relative to its representation in the data. More details of the weighted F1-score can be found in [9].
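These class-weighted metrics can be computed with scikit-learn; a minimal sketch (our illustration; the dataset's official evaluation code may differ), assuming `y_true` and `y_pred` are arrays of ground-truth and predicted class indices:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([0, 2, 1, 2, 0])   # hypothetical ground-truth class indices
y_pred = np.array([0, 2, 2, 2, 1])   # hypothetical predictions

# average="weighted" weights each class's score by its support (prevalence),
# which accounts for the class imbalance discussed above.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
```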
### _Comparison To State-Of-The-Art_
Table I presents a comprehensive comparison of the performance of various methods on the MECCANO dataset, a benchmark dataset for industrial-like scenarios in multimodal action recognition. Notably, our method stands out with a _Top-1_ accuracy of 52.82% and a _Top-5_ accuracy of 83.85%, suggesting its superior performance compared to the state-of-the-art methods listed in the table. The ICIAP 2023 multimodal action recognition challenge leaderboard can be found online at [https://iplab.dmi.unict.it/MECCANO/challenge.html](https://iplab.dmi.unict.it/MECCANO/challenge.html).
### _Ablation Study_
**Cross-Entropy loss v/s Focal loss with exponentially decaying modulating factor:** Table II presents our results on the MECCANO test set. Applying cross-entropy loss to fine-tune our model, pre-trained on Something-Something v2, gives us an initial baseline accuracy of 50.94% on our multimodal setup. Introducing focal loss with exponential decay in the modulating factor \(\gamma\) boosts the overall accuracy by approximately **2%** on all performance measures. Furthermore, combining the train and validation data gives the best _Top-1_ accuracy of 55.37%.
Apart from the overall improved performance, when we look at the class-wise performance in Fig. 4, our proposed loss function consistently improves the tail-class performance; in some action classes (_screw_screw_with_screwdriver_, _take_screwdriver_, and _put_red_4_perforated_junction_bar_) where cross-entropy loss failed to produce even a single correct prediction, our loss is able to predict samples correctly.
**Design choice of focal loss modulating factor:** Table III and Fig. 3 provide a detailed analysis of the impact of the focal loss modulating factor \(\gamma\) on top-1 accuracy in the context of
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Team**} & \multirow{2}{*}{**Modality**} & \multicolumn{2}{c}{**Accuracy**} \\ \cline{3-4} & & _Top-1_ & _Top-5_ \\ \hline UNICT [51] & RGB+Depth & 49.49 & 77.61 \\ TORONTO [52] & - & 49.52 & 74.21 \\ UNICT [51] & RGB+Depth+Gaze & 49.66 & 77.82 \\ MACAN/ \(\backslash\) UMDR [53, 10] & RGB+Depth & 50.30 & 78.46 \\ LUBECK [54] & - & 51.82 & 83.35 \\ UNIBZ [55] & - & 52.57 & 81.53 \\ UCF (**Ours**) & RGB+Depth & **52.82** & **83.85** \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Comparison with state-of-the-art results on the MECCANO dataset.** Our work, declared as the _challenge_ winner, ranks first on the leaderboard for Multimodal Action Recognition on the MECCANO dataset. The best method is shown in Red and the second best method is shown in Blue. _Note: Information about the modalities used in some competing works, marked with '-', is not available at the time of paper submission._
the long-tailed MECCANO dataset. Here, we meticulously examine four distinct profiles of \(\gamma\):
_Linear growth_: linear increase from 0.1 to 2
_Linear decay_: linear decrease from 2 to 0.1
_Exponential growth_: exponential increase from 0.1 to 2
_Exponential decay_: exponential decrease from 2 to 0.1
Each profile represents a unique strategy for changing \(\gamma\) as the training epochs progress.
The exponential decay strategy, which initially emphasizes hard samples (scarce tail classes) and then transitions to easy ones (dominant classes), showcases the most substantial improvements in action recognition accuracy. Strikingly, this approach leads to the highest _Top-1_ accuracy of 50.80% for the RGB modality. By starting with hard samples, the model is compelled to learn from the most challenging examples in the dataset. This process can lead to the development of robust and discriminative features that are crucial for accurately classifying difficult instances. Consequently, the model becomes more resilient to challenging cases, which can be particularly useful in imbalanced datasets where the minority class often consists of hard-to-distinguish examples. Once the model has effectively tackled these challenging instances, it gradually shifts its focus towards easier samples, drawing upon the knowledge acquired from handling the more challenging cases. This gradual transition serves as a preventive measure against overfitting to the minority class.
In contrast, the exponential growth approach, which initially addresses easy samples before shifting to more challenging ones, demonstrates the lowest _Top-1_ accuracy of 46.76%. While it focuses on simpler instances for an extended period of time, its ability to effectively handle difficult cases seems to be comparatively restricted. It is also likely that by starting with easy samples, the model prematurely saturates on the majority class, leading to a suboptimal solution that overlooks the minority class. These findings underscore the importance of a well-balanced training strategy for the MECCANO dataset.
**Impact of fine-tuning:** In this context, we evaluate the impact of freezing or fine-tuning pre-trained weights from Something-Something v2 on our baseline Swin3D-B model using the standard CE loss (see Table IV). Our experiments indicate improvements when fine-tuning the model using cross-entropy loss from pre-trained weights of Something-
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{**Modality**} & \multirow{2}{*}{**Loss**} & \multicolumn{2}{c}{**Accuracy**} & \multicolumn{2}{c}{**AVG Class**} & \multicolumn{1}{c}{**AVG _F1-score_**} \\ \cline{3-6} & & _Top-1_ & _Top-5_ & _Precision_ & _Recall_ & \\ \hline RGB & _CE_ & 48.35 & 80.91 & 45.52 & 48.35 & 46.22 \\ Depth & _CE_ & 43.32 & 75.38 & 41.79 & 43.32 & 41.88 \\ RGB+Depth & _CE_ & 50.94 & 81.79 & 47.28 & 50.94 & 48.08 \\ \hline RGB & Focal & 50.80 & 82.36 & 47.17 & 50.80 & 47.95 \\ Depth & Focal & 45.52 & 78.07 & 43.74 & 45.52 & 43.41 \\ RGB+Depth & Focal & **52.82** & **83.85** & **49.97** & **52.82** & **49.41** \\ \hline RGB\({}^{*}\) & Focal & 53.03 & 85.37 & 50.46 & 53.03 & 50.39 \\ Depth\({}^{*}\) & Focal & 48.39 & 80.55 & 46.43 & 48.39 & 46.35 \\ RGB+Depth\({}^{*}\) & Focal & **55.37** & **85.58** & **52.41** & **55.37** & **52.28** \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Ablation on Cross-Entropy (_CE_) loss v/s Focal loss with exponentially decaying \(\gamma\).** Results demonstrating the effectiveness of our focal loss variant with an exponentially decaying modulating factor for the action recognition task on the MECCANO test dataset. * refers to model trained using both train\(+\)validation set.
\begin{table}
\begin{tabular}{l l} \hline \hline \(\mathbf{\gamma}\) & _Top-1_ **Accuracy** \\ \hline CE baseline & 48.35 \\ \hline Linear growth & 48.03 \\ Linear decay & 48.46 \\ Exponential growth & 46.76 \\ Exponential decay & 50.80 (+2%) \\ \hline \hline \end{tabular}
\end{table} TABLE III: **Impact of focal loss modulating factor \(\gamma\).** Analysis depicting the significance of the introduced exponentially decaying variant of focal loss on action recognition in the long-tailed MECCANO dataset.
Fig. 3: **Graphical representation of variations in the focal loss modulating factor \(\gamma\).** Illustration depicts the effect of different values of \(\gamma\) on the _Top-1_ action recognition accuracy across the MECCANO test set.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Modality** & **Pre-trained backbone** & _Top-1_ **Accuracy** \\ \hline RGB & Freeze & 45.90 \\ RGB & Fine-tune & 48.35 (+2\%) \\ \hline Depth & Freeze & 35.92 \\ Depth & Fine-tune & 43.32 (+7\%) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: **Effect of freezing pre-trained weights across different modalities.** Experiments indicating improvements in MECCANO dataset results when fine-tuning the pre-trained weights from Something-Something v2.
Something v2. The Depth modality, in particular, exhibits a significant performance boost of **7%** upon fine-tuning the pre-trained weights, owing to the domain shift.
**Backbone model capacity and pre-training data:** As observed in Table V, the Swin3D-B model consistently outperforms the Swin3D-T model in both RGB and Depth modalities. This implies that a deeper model, characterized by a higher number of learnable parameters, proves to be more effective in capturing the nuances of the data, consequently leading to improved accuracy. Furthermore, the choice of pre-training data significantly influences model performance on the MECCANO dataset. The model pre-trained on Something-Something v2 consistently outperforms its counterpart pre-trained on Kinetics-400, showcasing the importance of domain-specific pre-training. The Something-Something v2 dataset covers hand-object interactions from a first-person view, making it similar to the industry-like actions in the MECCANO dataset. On the other hand, the Kinetics dataset, which focuses on third-person views of human actions, isn't as closely aligned with actions that take place in an industry-like setting.
This finding aligns with the MECCANO dataset's dedicated focus on egocentric activities.
**Runtime computation:** In order to benchmark the speed of our method, we compute its runtime on a Tesla V100 GPU. We consider both modalities (RGB + Depth) combined for inference and report the frames per second (FPS) in Fig. 5 with varying input batch sizes. Our method attains **138 FPS** for the combined RGB and Depth modalities, even with a single batch of data, demonstrating its potential for use in real-time systems that typically require 30 FPS.
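A minimal sketch of how such an FPS figure can be measured, assuming `model` is a hypothetical callable that runs the fused RGB+Depth inference on a CUDA device and the clips follow the `(N, C, T, H, W)` layout with \(T=16\); counting \(N\times T\) frames per forward pass is one plausible convention:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, rgb, depth, n_iters=50, warmup=5):
    for _ in range(warmup):          # warm-up iterations (caches, lazy init)
        model(rgb, depth)
    torch.cuda.synchronize()         # ensure queued GPU work is finished
    start = time.time()
    for _ in range(n_iters):
        model(rgb, depth)
    torch.cuda.synchronize()
    elapsed = time.time() - start
    n_frames = n_iters * rgb.shape[0] * rgb.shape[2]  # batch size N times clip length T
    return n_frames / elapsed        # frames per second
```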
## V Conclusion
In this paper, we proposed a framework to recognize actions from the RGB and Depth modalities of an egocentric camera. In order to handle the long-tailed distribution of the action classes, we proposed an effective training strategy using focal loss with an exponentially decaying modulating factor. Our method has set a new state of the art on the industry-like MECCANO dataset and secured first place in the ICIAP 2023 multimodal action recognition challenge.
We are planning to open-source our code, which can be utilized as a strong baseline for research in video understanding with multimodal egocentric cameras, especially in industry-like settings. One promising direction for future research lies in integrating the Gaze modality available in datasets like MECCANO, which captures the movement of the human eye while performing the actions. Utilizing Gaze data, alongside the hand-object interactions captured by the RGB and Depth modalities, can offer an even deeper understanding of human behavior, providing richer context and enhancing the accuracy of action recognition systems.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Modality** & **Model** & **Pre-training data** & _Top-1_ **Accuracy** \\ \hline RGB & Swin3D-T & Kinetics-400 & 36.52 \\ RGB & Swin3D-B & Kinetics-400 & 40.17 \\ RGB & Swin3D-B & Something-Something v2 & 48.35 \\ \hline Depth & Swin3D-T & Kinetics-400 & 35.92 \\ Depth & Swin3D-B & Kinetics-400 & 36.63 \\ Depth & Swin3D-B & Something-Something v2 & 43.32 \\ \hline \hline \end{tabular}
\end{table} TABLE V: **Influence of backbone model size and pre-trained weights.** Introducing a backbone with higher capacity has a positive impact on action recognition accuracy, as does using pre-training weights from the egocentric action recognition dataset, namely Something-Something v2.
Fig. 4: **Class-wise _F1-score_ on MECCANO dataset.** We compare class-wise _F1-score_ for standard cross-entropy loss vs. our proposed exponentially decaying focal loss. +number shows the relative improvement of our proposed loss over the cross-entropy loss. Our proposed loss consistently improves the performance of the tail-classes.
Fig. 5: **Runtime.** Frames per second at varying batch-size. |
2306.17359 | Higher Hölder regularity for nonlocal parabolic equations with
irregular kernels | We study a nonlocal parabolic equation with an irregular kernel coefficient
to establish higher H\"older regularity under an appropriate higher
integrablilty on the nonhomogeneous terms and a minimal regularity assumption
on the kernel coefficient. | Sun-Sig Byun, Hyojin Kim, Kyeongbae Kim | 2023-06-30T01:34:58Z | http://arxiv.org/abs/2306.17359v1 | # Higher Holder regularity for nonlocal parabolic equations with irregular kernels
###### Abstract.
We study a nonlocal parabolic equation with an irregular kernel coefficient to establish higher Hölder regularity under an appropriate higher integrability on the nonhomogeneous terms and a minimal regularity assumption on the kernel coefficient.
Key words and phrases: Nonlocal; Hölder regularity; Nonlinear. 2020 Mathematics Subject Classification: 35A01, 35B65, 35D30, 35R05, 47G20. S. Byun was supported by NRF-2021R1A4A1027378. H. Kim was supported by NRF-2020R1C1C1A01009760. K. Kim was supported by NRF-2022R1A2C1009312.
The aim of this paper is two-fold. One is to establish the local boundedness of a weak solution to (1.1) when the nonhomogeneous term \(f\) satisfies (1.4). The other is to obtain the higher Hölder regularity of a weak solution to (1.1) under a suitable regularity assumption on the kernel coefficient \(A\). As usual, a solution to (1.1) is defined in the weak sense as below. See the next section for a precise description of the related function spaces.
**Definition**.: (Local weak solution) Let \(f\in L^{q,r}_{\mathrm{loc}}(\Omega_{T})=L^{r}_{\mathrm{loc}}\left(0,T;L^{q}_{ \mathrm{loc}}(\Omega)\right)\), where \(q\) and \(r\) are any positive numbers such that \(q,r\geq 1\) and \(\frac{n}{2qs}+\frac{1}{r}\leq 1+\frac{n}{4s}\). We say that
\[u\in L^{2}_{\mathrm{loc}}\left(0,T;W^{s,2}_{\mathrm{loc}}(\Omega)\right)\cap L ^{\infty}_{\mathrm{loc}}\left(0,T;L^{1}_{2s}(\mathbb{R}^{n})\right)\cap C_{ \mathrm{loc}}\left(0,T;L^{2}_{\mathrm{loc}}(\Omega)\right)\]
is a local weak subsolution (supersolution) to (1.1) if
\[\int_{t_{1}}^{t_{2}}\int_{\Omega}-u\partial_{t}\phi\,dx\,dt+\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\Phi(u(x,t)-u(y,t))(\phi(x,t)-\phi(y,t))\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\\ \leq(\geq)\int_{t_{1}}^{t_{2}}\int_{\Omega}f\phi\,dx\,dt-\int_{\Omega}u\phi\,dx\bigg{|}_{t=t_{1}}^{t=t_{2}}\]
for all nonnegative functions \(\phi\in L^{2}(I;W^{s,2}(\Omega))\cap W^{1,2}\left(I;L^{2}(\Omega)\right)\) with spatial support compactly embedded in \(\Omega\) and \(I\coloneqq[t_{1},t_{2}]\Subset(0,T)\). In particular, we say that \(u\) is a local weak solution to (1.1) if \(u\) is both a local subsolution and a local supersolution to (1.1).
Note that \(L^{2}([t_{1},t_{2}];W^{s,2}(\Omega))\cap W^{1,2}\left([t_{1},t_{2}];L^{2}(\Omega)\right)\) is embedded in \(L^{\hat{r}}([t_{1},t_{2}];L^{\hat{q}}(\Omega))\) for some positive numbers \(\hat{q}\) and \(\hat{r}\) such that \(\hat{q},\hat{r}\geq 1\) and \(\frac{n}{2\hat{q}s}+\frac{1}{\hat{r}}=\frac{n}{4s}\), see Lemma 2.5, to find
\[\int_{t_{1}}^{t_{2}}\int_{\Omega}|f\phi|<\infty.\]
Existence and uniqueness of a weak solution to (1.1) with appropriate initial and boundary conditions will be discussed in Appendix A.
Now we introduce our main results. The first one is the local boundedness for a local weak subsolution to (1.1) under the following assumption on \(f\):
\[f\in L^{q,r}_{\mathrm{loc}}(\Omega_{T})\quad\text{with}\quad\frac{n}{2qs}+ \frac{1}{r}<1. \tag{1.4}\]
**Theorem 1.1**.: _(Local boundedness) Suppose that \(u\) is a local weak subsolution to (1.1) with (1.4). Then there is a constant \(c\equiv c(n,s,q,r,\lambda)\) such that_
\[\sup_{z\in Q_{\rho_{0}/2}(z_{0})}u(z)\leq c\Bigg{[} \left(\fint_{Q_{\rho_{0}}(z_{0})}u^{2}(x,t)\,dx\,dt\right)^{ \frac{1}{2}}+\mathrm{Tail}_{\infty}\left(u,z_{0},\rho_{0}/2,\rho_{0}^{2s}\right)\] \[+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r }(Q_{\rho_{0}}(z_{0}))}\Bigg{]},\]
_whenever \(Q_{\rho_{0}}(z_{0})\equiv B_{\rho_{0}}(x_{0})\times(t_{0}-\rho_{0}^{2s},t_{0} ]\Subset\Omega_{T}\)._
_Remark 1_.: We first notice that (1.4) is an appropriate condition for obtaining local boundedness (see [30]). In addition, we observe that in the limiting case \(s\to 1\), (1.4) reduces to a suitable condition for local boundedness in the local case \(s=1\) (see [29, Chapter 3]).
The second one is the higher Hölder regularity. Here, we introduce a new kernel class for the desired result. We say that \(\tilde{A}\in\mathcal{L}_{1}(\lambda;\Omega\times\Omega\times(0,T))\) if \(\tilde{A}\in\mathcal{L}_{0}(\lambda;\Omega\times\Omega\times(0,T))\) and there is a measurable function \(a:\mathbb{R}^{n}\times(0,T)\to\mathbb{R}\) such that
\[\tilde{A}(x,y,t)=a(x-y,t),\quad(x,y,t)\in\Omega\times\Omega\times(0,T).\]
**Theorem 1.2**.: _Let \(u\) be a local weak solution to (1.1) with (1.4) and let \(\alpha\) be a positive number such that_
\[\alpha<\min\left\{2s-\left(\frac{n}{q}+\frac{2s}{r}\right),1\right\}. \tag{1.5}\]
_Then there is a constant \(\delta=\delta(n,s,q,r,\lambda,\alpha)>0\) such that if for any \(\tilde{z}\in\Omega_{T}\), there exist sufficiently small \(\rho_{\tilde{z}}>0\) and \(\tilde{A}_{\tilde{z}}\in\mathcal{L}_{1}\left(\lambda;B_{\rho_{\tilde{z}}}(\tilde {x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde{z}}^{2 s},\tilde{t}]\right)\) with_
\[\|\tilde{A}_{\tilde{z}}-A\|_{L^{\infty}\left(B_{\rho_{\tilde{z}}}(\tilde{x}) \times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde{z}}^{2s}, \tilde{t}]\right)}\leq\delta, \tag{1.6}\]
_then we have \(u\in C^{\alpha,\frac{\alpha}{2s}}_{\mathrm{loc}}(\Omega_{T})\). In particular, for any \(Q_{\rho_{0}}(z_{0})\in\Omega_{T}\), we have_
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\rho_{0}/2}(z_{0}))} \leq c\Bigg{(}\rho_{0}^{-\frac{n+2s}{2}}\|u\|_{L^{2}(Q_{\rho_{0}}( z_{0}))}+\mathrm{Tail}_{\infty}\left(u,z_{0},\rho_{0}/2,\rho_{0}^{2s}\right)\] \[\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{ L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)},\]
_for some constant \(c\equiv c\left(n,s,q,r,\lambda,\alpha,\{\rho_{z}\}_{z\in Q_{\rho_{0}/2}(z_{0}) }\right)\)._
_Remark 2_.: The condition (1.5) is natural to obtain Hölder continuity. When \(s=1\), (1.5) is exactly the same as the condition to get the Hölder continuity of weak solutions to local parabolic problems. See [16, Chapter 2] for more details. In addition, if \(f\) is autonomous, then (1.5) is sharp. In this case, (1.5) becomes \(f\in L^{q}_{\mathrm{loc}}(\Omega)\) with \(q>\frac{n}{2s}\), and a particular class of solutions is given by stationary solutions to the corresponding elliptic problem of (1.1). From [27, Corollary 1.2], we deduce that if \(q\leq\frac{n}{2s}\), then the continuity criterion fails. Hence, this verifies that (1.5) is an optimal condition to obtain the Hölder regularity of a weak solution to (1.1) when the right-hand side is given by an autonomous function.
_Remark 3_.: We give some comments on the assumption (1.6) on the kernel coefficient \(A\).
1. We clearly point out that any continuous kernel coefficient \(A\) satisfies (1.6). To be specific, we consider \[A_{1}(x,y,t)=K_{1,1}(x,y,t)+\mathcal{X}_{|x-y|\geq\epsilon}K_{1,2}(x,y,t)\] (1.7) for \(\epsilon>0\), where \(K_{1,1}\in\mathcal{L}_{0}(\lambda/2)\) is continuous, \(K_{1,2}\in\mathcal{L}_{0}(\lambda/2)\) is merely measurable for some \(\lambda\geq 2\). Let us fix \(\delta>0\). Then for any \(\tilde{z}=(\tilde{x},\tilde{t})\in\Omega_{T}\), there is a sufficiently small \(\rho_{\tilde{z}}\in(0,\epsilon)\) such that \[\|A_{1}(x,y,t)-A_{1}(\tilde{x},\tilde{x},\tilde{t})\|_{L^{\infty}\left(B_{\rho_{\tilde{z}}}(\tilde{x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde{z}}^{2s},\tilde{t}]\right)}\leq\delta.\] Taking \[\tilde{A}_{1,\tilde{z}}(x,y,t)=A_{1}(\tilde{x},\tilde{x},\tilde{t})\quad\text{for }(x,y,t)\in B_{\rho_{\tilde{z}}}(\tilde{x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde{z}}^{2s},\tilde{t}],\] we see that \(\tilde{A}_{1,\tilde{z}}\in\mathcal{L}_{1}\left(\lambda;B_{\rho_{\tilde{z}}}(\tilde{x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde{z}}^{2s},\tilde{t}]\right)\) and \(A_{1}\) satisfies (1.6). Furthermore, we observe that any continuous function satisfies the assumption by taking \(K_{1,2}=0\) in (1.7). Generally, if the kernel coefficient \(A_{1}\) is continuous only near the diagonal, then the assumption holds. Moreover, if \(A_{1}\) is Hölder continuous, then we can remove the dependence on \(\{\rho_{z}\}_{z\in Q_{\rho_{0}/2}(z_{0})}\) of a universal constant \(c\) determined in Theorem 1.2. See Corollary 5.3.
2. We note that the kernel coefficient \(A\) need not be continuous near the diagonal. Let us consider a function \(A_{2}(x,y,t)=K_{2,1}(x,y,t)K_{2,2}(x,y,t)\), where \(K_{2,1}(x,y,t)\in\mathcal{L}_{0}(\sqrt{\lambda})\) is continuous near the diagonal and \(K_{2,2}(x,y,t)\in\mathcal{L}_{1}(\sqrt{\lambda};\Omega\times\Omega\times(0,T)) \cap\mathcal{L}_{0}(\sqrt{\lambda})\). Let us fix \(\delta>0\). Then for any \(\tilde{z}=(\tilde{x},\tilde{t})\in\Omega_{T}\), there is a sufficiently small \(\rho_{\tilde{z}}\in(0,\epsilon)\) such that \[\|A_{2}(x,y,t)-A_{2}(\tilde{x},\tilde{x},\tilde{t})\|_{L^{\infty}\left(B_{\rho _{\tilde{z}}}(\tilde{x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}- \rho_{\tilde{z}}^{2s},\tilde{t}]\right)}\leq\frac{\delta}{\sqrt{\lambda}}.\] Similarly, we take \[\tilde{A}_{2,\tilde{z}}(x,y,t)=K_{2,1}(\tilde{x},\tilde{x},\tilde{t})K_{2,2}(x,y,t)\quad\text{for }(x,y,t)\in B_{\rho_{\tilde{z}}}(\tilde{x})\times B_{\rho_{ \tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde{z}}^{2s},\tilde{t}]\] to find that \(\tilde{A}_{2,\tilde{z}}\in\mathcal{L}_{1}\left(\lambda;B_{\rho_{\tilde{z}}}( \tilde{x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde {z}}^{2s},\tilde{t}]\right)\) and \(A_{2}\) satisfies (1.6).
3. The assumption (1.6) naturally appears in the literature when a perturbation argument is used, as follows from [33].
_Remark 4_.: Here we compare the nonlocal case with the local case. For the local parabolic problem
\[\partial_{t}v-\operatorname{div}\left(B(x,t)Dv\right)=f\text{ in }Q_{1}\]
when \(B(x,t)\) is a continuous coefficient and \(f\in L^{q,r}(Q_{1})\) with \(\frac{n}{2q}+\frac{1}{r}<1\), we note that \(v\in C^{\beta,\frac{\beta}{2}}_{\operatorname{loc}}\) for every \(\beta\in\left(0,\min\left\{2-\left(\frac{n}{q}+\frac{2}{r}\right),1\right\}\right)\). See [43] for the case \(B\equiv 1\). For a general case \(B\not\equiv 1\), a freezing argument as in [23, Theorem 3.8] leads to the desired result. Based on this observation for the local case, it might be expected for the nonlocal case that a weak solution to (1.1) has at most \(C^{\alpha,\frac{\alpha}{2s}}_{\operatorname{loc}}\) regularity for every \(\alpha\in\left(0,\min\left\{2s-\left(\frac{n}{q}+\frac{2s}{r}\right),s\right\}\right)\). However, we observe that the regularity of \(u\) is better than this expectation. Indeed, Theorem 1.2 asserts that it exceeds \(C^{s,\frac{1}{2}}_{\operatorname{loc}}\) regularity when \(2s-\left(\frac{n}{q}+\frac{2s}{r}\right)>s\). This phenomenon is first observed in [4] for the elliptic case.
### Plan of the paper
Our approach to proving the local boundedness is based on [29, Chapter 2], alongside suitable modifications for the fractional Sobolev space and the nonzero tail term. For the higher Hölder regularity, we use a notion of discrete fractional derivatives as in [4, 5, 33] to prove the almost Lipschitz regularity of a weak solution to the homogeneous equation (4.2) below, and an approximation technique as in [9, 33] to transfer its regularity to a weak solution of (1.1) with (1.4) and (1.6).
The paper is organized as follows. Section 2 deals with the notation, embeddings between function spaces, a technical lemma, and Hölder regularity for the homogeneous problem. In Section 3, we prove the Caccioppoli-type inequality of Lemma 3.2 in order to obtain the local boundedness and Hölder regularity for (1.1) with (1.4). In Section 4, we improve the Hölder regularity for the homogeneous problem with a kernel coefficient which is invariant under translation in the spatial direction only. Finally, in Section 5, we obtain the higher Hölder regularity for (1.1) with (1.6). In Appendix A, we establish existence for the initial and boundary value problem with the inhomogeneous term \(f\in L^{q,r}\), where \(\frac{n}{2sq}+\frac{1}{r}\leq 1+\frac{n}{4s}\).
## 2. Preliminaries and Notations
Throughout the paper, we write \(c\) for a general positive constant, possibly changing from line to line. In particular, we indicate the relevant dependencies on parameters using parentheses, i.e., \(c\equiv c(n,s,\alpha)\). We now introduce some notation for geometric quantities and functions.
1. The standard Lebesgue measures in \(\mathbb{R}^{n}\) and \(\mathbb{R}\) are denoted by \(\,dx\) and \(\,dt\). A usual point in \(\mathbb{R}^{n}\times\mathbb{R}\) is \(z=(x,t)\).
2. We denote the open ball in \(\mathbb{R}^{n}\) with center \(x_{0}\) and radius \(\rho>0\) by \(B_{\rho}(x_{0})\). In particular if the center is obvious, we omit the center.
3. We shall use \(Q_{\rho,\tau}(z_{0})\equiv B_{\rho}(x_{0})\times(t_{0}-\tau,t_{0}]\) for \(z_{0}=(x_{0},t_{0})\in\mathbb{R}^{n}\times\mathbb{R}\) and \(\tau>0\). Also we write \(Q_{\rho}(z_{0})\equiv Q_{\rho,\rho^{2s}}(z_{0})\).
4. For any \(Q_{\rho,\tau}(z_{0})\), we define a parabolic boundary of \(Q_{\rho,\tau}(z_{0})\) by \[\partial_{P}Q_{\rho,\tau}(z_{0})=B_{\rho}(x_{0})\times\{t=t_{0}-\tau\}\cup \partial B_{\rho}(x_{0})\times[t_{0}-\tau,t_{0}].\]
5. Given a measurable set \(K\subset\mathbb{R}^{n}\), \(|K|\) means the volume of \(K\) with respect to the Lebesgue measure in \(\mathbb{R}^{n}\).
6. Given a measurable set \(K\subset\mathbb{R}^{n+1}\) and a measurable function \(f:K\to\mathbb{R}\), \[\overline{f}_{K}=\frac{1}{|K|}\iint_{K}f\,dx\,dt=\fint_{K}f\,dx\,dt\] is the average of \(f\) over \(K\). In particular, we denote \[\overline{f}_{B}(t)=\fint_{B}f(x,t)\,dx,\quad t\in\mathbb{R},\] where \(B\subset\mathbb{R}^{n}\).
7. Given a measurable function \(f:\mathbb{R}^{n+1}\to\mathbb{R}\) and \(h\in\mathbb{R}^{n}\), we define \[f_{h}(x,t)=f(x+h,t),\quad\delta_{h}f(x,t)=f_{h}(x,t)-f(x,t)\quad\text{and}\quad \delta_{h}^{2}f=\delta_{h}(\delta_{h}f).\]
8. \(p^{\prime}\) denotes the Hölder conjugate of \(p\in[1,\infty]\).
We now describe relevant function spaces. Let \(u:\Omega\to\mathbb{R}\) be a measurable function. For \(s\in(0,1)\) and \(p\in[1,\infty)\), we define a seminorm
\[[u]_{W^{s,p}(\Omega)}=\left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p}}{ |x-y|^{n+sp}}\,dx\,dy\right)^{\frac{1}{p}}\]
and the fractional Sobolev space
\[W^{s,p}(\Omega)=\{u\in L^{p}(\Omega)\,;\,[u]_{W^{s,p}(\Omega)}<\infty\}.\]
In particular, we say that \(u\) is in \(W^{s,p}_{0}(\Omega)\) if \(u\) is in \(W^{s,p}(\mathbb{R}^{n})\) and \(u\equiv 0\) a.e. in \(\mathbb{R}^{n}\setminus\Omega\).
Next we introduce a nonlocal tail space. We write \(L^{1}_{2s}(\mathbb{R}^{n})\) as
\[L^{1}_{2s}(\mathbb{R}^{n})=\left\{u\in L^{1}_{loc}(\mathbb{R}^{n})\,;\int_{ \mathbb{R}^{n}}\frac{|u(y)|}{(1+|y|)^{n+2s}}\,dy<\infty\right\}.\]
Then we give a suitable nonlocal tail for a parabolic case. Given \(u\in L^{\infty}\left(I;L^{1}_{2s}(\mathbb{R}^{n})\right)\), we define
\[\operatorname{Tail}_{\infty}(u,z_{0},\rho,\tau)=\operatorname*{ess\,sup}_{t \in[t_{0}-\tau,t_{0}]}\rho^{2s}\int_{\mathbb{R}^{n}\setminus B_{\rho}(x_{0})} \frac{|u(y,t)|}{|y-x_{0}|^{n+2s}}\,dy,\]
for any \(B_{\rho}(x_{0})\subset\mathbb{R}^{n}\) and \([t_{0}-\tau,t_{0}]\subset I\) where \(I\) is a time interval in \(\mathbb{R}\). In particular, we write
\[\operatorname{Tail}_{\infty}(u;z_{0},\rho)\coloneqq\operatorname*{ess\,sup}_{ t\in[t_{0}-\rho^{2s},t_{0}]}\rho^{2s}\int_{\mathbb{R}^{n}\setminus B_{\rho}(x_{0})} \frac{|u(y,t)|}{|y-x_{0}|^{n+2s}}\,dy.\]
Note that a simple calculation gives that \(u\in L^{\infty}(I;L^{1}_{2s}(\mathbb{R}^{n}))\) if and only if
\[\operatorname{Tail}_{\infty}(u,x_{0},\rho,I)<\infty\]
for any \(B_{\rho}(x_{0})\subset\mathbb{R}^{n}\). We next introduce Besov-type spaces to get higher regularity as follows.
**Definition**.: (Besov-type spaces) Let \(p\in[1,\infty)\) and let \(u:\mathbb{R}^{n}\to\mathbb{R}\) be in \(L^{p}(\mathbb{R}^{n})\). Define two seminorms by
\[[u]_{\mathcal{N}^{\beta,p}_{\infty}(\mathbb{R}^{n})}=\sup_{|h|>0}\left\|\frac{ \delta_{h}u}{|h|^{\beta}}\right\|_{L^{p}(\mathbb{R}^{n})}\text{ for }\beta\in(0,1]\]
and
\[[u]_{\mathcal{B}^{\beta,p}_{\infty}(\mathbb{R}^{n})}=\sup_{|h|>0}\left\|\frac{ \delta_{h}^{2}u}{|h|^{\beta}}\right\|_{L^{p}(\mathbb{R}^{n})}\text{ for }\beta\in(0,2).\]
Then we define two Besov-type spaces.
\[\mathcal{N}^{\beta,p}_{\infty}(\mathbb{R}^{n})=\left\{u\in L^{p}(\mathbb{R}^{ n})\,;[u]_{\mathcal{N}^{\beta,p}_{\infty}(\mathbb{R}^{n})}<\infty\right\}\text{ for }\beta\in(0,1]\]
and
\[\mathcal{B}^{\beta,p}_{\infty}(\mathbb{R}^{n})=\left\{u\in L^{p}(\mathbb{R}^{ n})\,;[u]_{\mathcal{B}^{\beta,p}_{\infty}(\mathbb{R}^{n})}<\infty\right\}\text{ for }\beta\in(0,2).\]
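As a side remark (our addition, immediate from the definitions): since \(\delta_{h}^{2}u(x)=\delta_{h}u(x+h)-\delta_{h}u(x)\) and Lebesgue norms on \(\mathbb{R}^{n}\) are translation invariant, the triangle inequality gives
\[[u]_{\mathcal{B}^{\beta,p}_{\infty}(\mathbb{R}^{n})}\leq 2[u]_{\mathcal{N}^{\beta,p}_{\infty}(\mathbb{R}^{n})}\quad\text{for }\beta\in(0,1],\]
so \(\mathcal{N}^{\beta,p}_{\infty}(\mathbb{R}^{n})\subset\mathcal{B}^{\beta,p}_{\infty}(\mathbb{R}^{n})\); a local converse for \(\beta\in(0,1)\) is the content of Lemma 2.3 below.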
Now we present several lemmas that will be used in the remainder of the paper. The first five results are related to embeddings. We start with a fractional version of Campanato's embedding for \(s\in(0,1)\).
We refer to [11, 13, 22] for a local case.
**Lemma 2.1** (Campanato's embedding).: _Let \(p>1\) and \(s\in(0,1)\). Let \(u\in L^{p}(Q_{2R_{0}}(z_{0}))\). Suppose that there is a \(\alpha\in(0,1)\) such that for any \(z\in Q_{R_{0}}(z_{0})\) and \(\rho>0\) with \(Q_{\rho}(z)\subset Q_{2R_{0}}(z_{0})\),_
\[\iint_{Q_{\rho}(z)}|u-\overline{u}_{Q_{\rho}(z)}|^{p}\,dx\,dt\leq M^{p}\rho^{ \rho\alpha},\]
_for some constant \(M>0\). Then we have \(u\in C^{\alpha,\frac{\alpha}{2s}}(Q_{R_{0}}(z_{0}))\). In particular, there is a constant \(c\equiv c(n,p,s,\alpha,R_{0})\) such that_
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{R_{0}}(z_{0}))}\leq c\left(M+\|u\|_{L^{ \infty}(Q_{R_{0}}(z_{0}))}\right),\]
_where_
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q)}\coloneqq\sup_{\begin{subarray}{c}(x,t),(x^{\prime},t^{\prime})\in Q\\ (x,t)\neq(x^{\prime},t^{\prime})\end{subarray}}\frac{|u(x,t)-u(x^{\prime},t^{\prime})|}{|x-x^{\prime}|^{\alpha}+|t-t^{\prime}|^{\frac{\alpha}{2s}}}\quad\text{for any }Q\subset Q_{2R_{0}}(z_{0}).\]
Proof.: Take \(z_{1}\in Q_{R_{0}}(z_{0})\) and \(0<\rho_{1}<\rho_{2}\leq\min\left\{\left(2^{2s}-1\right)^{\frac{1}{2s}}R_{0}, R_{0}\right\}\) so that \(Q_{\rho_{2}}(z_{1})\subset Q_{2R_{0}}(z_{0})\). Then we observe that
\[|\overline{u}_{Q_{\rho_{1}}(z_{1})}-\overline{u}_{Q_{\rho_{2}}(z_{1})}|^{p} \leq cM^{p}\left(\rho_{1}^{p\alpha}+\rho_{2}^{p\alpha}\left(\frac{\rho_{2}}{ \rho_{1}}\right)^{n+2s}\right).\]
For any \(0<R\leq\min\left\{\left(2^{2s}-1\right)^{\frac{1}{2s}}R_{0},R_{0}\right\}\) with \(\rho_{1}=2^{-i-1}R\) and \(\rho_{2}=2^{-i}R\), we see that
\[|\overline{u}_{Q_{2^{-i-1}R}(z_{1})}-\overline{u}_{Q_{2^{-i}R}(z_{1})}|\leq c 2^{-\alpha(i+1)}MR^{\alpha}.\]
With the standard argument as in [23, Theorem 3.1], we have \(\|u\|_{L^{\infty}(Q_{R_{0}}(z_{0}))}<\infty\) and
\[|u(z_{1})-\overline{u}_{Q\rho(z_{1})}|\leq cM\rho^{\alpha}\quad\text{for any }\rho\leq\min\left\{\left(2^{2s}-1\right)^{\frac{1}{2s}}R_{0},R_{0}\right\}.\]
Fix \(z_{1},z_{2}\in Q_{R_{0}}(z_{0})\) with \(R=\max\left\{|x_{1}-x_{2}|,|t_{1}-t_{2}|^{\frac{1}{2s}}\right\}<\min\left\{ \left(2^{2s}-1\right)^{\frac{1}{2s}}\frac{R_{0}}{2},\frac{R_{0}}{2}\right\}\), then we get
\[Q_{2R}(z_{i})\subset Q_{2R_{0}}(z_{0})\text{ for }i=1,2.\]
As in [23, Theorem 3.1], we have the following
\[|u(z_{1})-u(z_{2})|\leq cMR^{\alpha}\leq cM\left(|x_{1}-x_{2}|^{\alpha}+|t_{1} -t_{2}|^{\frac{\alpha}{2s}}\right). \tag{2.1}\]
On the other hand, we deduce
\[|u(z_{1})-u(z_{2})|\leq c\|u\|_{L^{\infty}(Q_{R_{0}}(z_{0}))}\left(|x_{1}-x_{2 }|^{\alpha}+|t_{1}-t_{2}|^{\frac{\alpha}{2s}}\right), \tag{2.2}\]
whenever \(z_{1},z_{2}\in Q_{R_{0}}(z_{0})\) with \(\max\left\{|x_{1}-x_{2}|,|t_{1}-t_{2}|^{\frac{1}{2s}}\right\}>\min\left\{\left( 2^{2s}-1\right)^{\frac{1}{2s}}\frac{R_{0}}{2},\frac{R_{0}}{2}\right\}\). We combine (2.1) and (2.2) to complete the proof.
We write \(V_{s}^{2}(B_{R}\times I)\equiv L^{2}(I;W^{s,2}(B_{R}))\cap L^{\infty}(I;L^{2} (B_{R}))\) and its norm is given by
\[\|u\|_{V_{s}^{2}(B_{R}\times I)}=\left(\int_{t_{0}-\tau}^{t_{0}}[u(\cdot,t)]_{ W^{s,2}(B_{R})}^{2}\,dt\right)^{\frac{1}{2}}+\operatorname*{ess\,sup}_{t\in I}\|u( \cdot,t)\|_{L^{2}(B_{R})}.\]
Then we prove the following embedding, which is an essential tool to deal with the nonhomogeneous term in (1.1). We refer to [16, Chapter 1, Proposition 3.3] for the local case \(s=1\).
**Lemma 2.2**.: _Let \(f\in V_{s}^{2}(B_{R}\times I)\). Suppose that_
\[\begin{cases}\hat{q}\in\left[2,\frac{2n}{n-2s}\right]&\text{and}\quad\hat{r} \in[2,\infty]\quad\text{if }2s<n,\\ \hat{q}\in\left[2,\infty\right)&\text{and}\quad\hat{r}\in(4s,\infty]\quad\text{ if }2s=n,\\ \hat{q}\in\left[2,\infty\right]&\text{and}\quad\hat{r}\in[4s,\infty]\quad\text{ if }n<2s.\end{cases}\]
_satisfy_
\[\frac{n}{2\hat{q}s}+\frac{1}{\hat{r}}=\frac{n}{4s}.\]
_Then there is a constant \(c\equiv c(n,s,\hat{q},\hat{r})\) such that_
\[\|f\|_{L^{\hat{q},\hat{r}}(B_{R}\times I)}\leq cR^{-s}\|f\|_{L^{2}(I;L^{2}(B_{R} ))}+c\|f\|_{V_{s}^{2}(B_{R}\times I)}.\]
_Moreover if \(f\in L^{2}(I;W^{s,2}_{0}(B_{R}))\cap L^{\infty}(I;L^{2}(B_{R}))\), we also have_
\[\|f\|_{L^{\hat{q},\hat{r}}(B_{R}\times I)}\leq c\left(\left(\int_{t_{0}-\tau}^{t_ {0}}[f(\cdot,t)]^{2}_{W^{s,2}(\mathbb{R}^{n})}\right)^{\frac{1}{2}}+\operatorname {ess\,sup}_{t\in I}\|f(\cdot,t)\|_{L^{2}(B_{R})}\right) \tag{2.3}\]
_for some constant \(c\equiv c(n,s,\hat{q},\hat{r})\)._
Proof.: If \((\hat{q},\hat{r})=(2,\infty)\) or \((\hat{q},\hat{r})=\left(\frac{2n}{n-2s},2\right)\), then we can check the above results directly. Therefore we may assume that \(2<\hat{r}<\infty\). We observe that for \(\alpha=\frac{2}{\hat{r}}\in(0,1)\),
\[\alpha\frac{n-2s}{2n}+(1-\alpha)\frac{1}{2}=\frac{1}{\hat{q}}.\]
Applying scaled version of [17, Lemma 2.1] with \(p=p_{2}=2\), \(p_{1}=\hat{q}\) and \(\theta=\alpha\), we have
\[\|f(\cdot,t)\|_{L^{\hat{q}}(B_{R})}\leq c\left([f(\cdot,t)]_{W^{s,2}(B_{R})}+ R^{-s}\|f(\cdot,t)\|_{L^{2}(B_{R})}\right)^{\alpha}\|f(\cdot,t)\|_{L^{2}(B_{R})}^{ 1-\alpha}\quad\text{a.e. }t\in I, \tag{2.4}\]
where \(c=c(n,s,\hat{q})\). Using (2.4) and Young's inequality, we get that
\[\left(\int_{I}\|f(\cdot,t)\|_{L^{\hat{q}}(B_{R})}^{\hat{r}}\,dt \right)^{\frac{1}{\hat{r}}} \leq c\left(\left(\int_{I}[f(\cdot,t)]^{2}_{W^{s,2}(B_{R})}\,dt \right)^{\frac{1}{2}}+R^{-s}\|f\|_{L^{2}(I;L^{2}(B_{R}))}\right)^{\frac{2}{ \hat{r}}}\] \[\quad\times\operatorname{ess\,sup}_{t\in I}\|f(\cdot,t)\|_{L^{2} (B_{R})}^{1-\frac{2}{\hat{r}}}\] \[\leq cR^{-s}\|f\|_{L^{2}(I;L^{2}(B_{R}))}+c\|f\|_{V^{s}_{s}(B_{R} \times I)}\]
for some constant \(c=c(n,s,\hat{q},\hat{r})\). In particular, (2.3) is a direct consequence of [15, Theorem 6.5].
Next, we state three Besov-type embeddings without proof.
**Lemma 2.3**.: _[_4_, Lemma 2.6.]_ _Let \(0<\beta<1\) and \(1\leq p<\infty\). Let \(u\) be in \(L^{p}(B_{2R_{0}})\) for some \(R_{0}>0\). Suppose for some \(0<h_{0}<\frac{R_{0}}{4}\), we have_
\[\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u}{|h|^{\beta}}\right\|_{L^{p}(B_ {R_{0}+h_{0}})}<\infty.\]
_Then we obtain_
\[\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}u}{|h|^{\beta}}\right\|_{L^{p}(B_{R_{ 0}})}\leq c\left(\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u}{|h|^{\beta}} \right\|_{L^{p}(B_{R_{0}+h_{0}})}+\|u\|_{L^{p}(B_{R_{0}+h_{0}})}^{p}\right)\]
_for some \(c\equiv c(n,p,\beta,h_{0})\)._
**Lemma 2.4**.: _[_4_, Lemma 2.8.]_ _Let \(0<\beta<1\) and \(1\leq p<\infty\) with \(p\beta>n\). If \(u\in\mathcal{N}_{\infty}^{\beta,p}(\mathbb{R}^{n})\), then \(u\in C_{\mathrm{loc}}^{\alpha}(\mathbb{R}^{n})\) for any \(0<\alpha<\beta-\frac{n}{p}\) with_
\[\frac{|u(x)-u(y)|}{|x-y|^{\alpha}}\leq c\left([u]_{\mathcal{N}_{\infty}^{ \beta,p}(\mathbb{R}^{n})}\right)^{\frac{p\alpha+n}{p\beta}}\left(\|u\|_{L^{p}( \mathbb{R}^{n})}\right)^{1-\frac{p\alpha+n}{p\beta}}\text{ for any }x\neq y \text{ in }\mathbb{R}^{n}\]
_where \(c\equiv c(n,p,\alpha,\beta)\). In particular, if \(u\in L^{p}(B_{R+2h_{0}})\) with \(R>0\) and \(h_{0}>0\) then for any \(\alpha\in(0,\beta-\frac{n}{p})\), there is a constant \(c\equiv c(n,p,\alpha,\beta,M,N,R,h_{0})\) such that_
\[\frac{|u(x)-u(y)|}{|x-y|^{\alpha}}\leq c,\quad\text{for any }x\neq y\in B_{ \frac{R}{2}},\]
_provided that_
\[\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}u}{|h|^{\beta}}\right\|_{L^{p}(B_{R+h _{0}})}\leq M\text{ and }\|u\|_{L^{p}(B_{R+h_{0}})}\leq N,\quad\text{for }M\text{ and }N>0.\]
_Remark 5_.: Using a cutoff function together with the first statement, we can prove the second statement. See [4, Theorem 4.2].
**Lemma 2.5**.: _[_3_, Proposition 2.6.]_ _Let \(s\in(0,1)\). We have two embeddings._
1. _Suppose_ \(u\in W^{s,2}(\mathbb{R}^{n})\)_. Then we get_ \[\sup_{|h|>0}\left\|\frac{\delta_{h}u}{|h|^{s}}\right\|_{L^{2}(\mathbb{R}^{n})} ^{2}\leq c(n,s)[u]_{W^{s,2}(\mathbb{R}^{n})}^{2}\]
2. _Suppose_ \(u\in W^{s,2}_{\mathrm{loc}}(\Omega)\) _where_ \(\Omega\subset\mathbb{R}^{n}\) _is open set and_ \(B_{R}\Subset\Omega\) _with_ \(h_{0}<\frac{\mathrm{dist}(B_{R},\partial\Omega)}{2}\)_. Then we deduce_ \[\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}u}{|h|^{s}}\right\|_{L^{2}(B_{R})}^{ 2}\leq c(s,n,R,h_{0})\left([u]_{W^{s,2}(B_{R+h_{0}})}^{2}+\|u\|_{L^{2}(B_{R+h_{ 0}})}^{2}\right)\]
For the local boundedness of a weak solution to (1.1) with (1.4), we need the following technical lemma.
**Lemma 2.6**.: _[_16_, Chapter 1, Lemma 4.2]_ _Let \(M,b>1\) and \(\kappa,\delta>0\) be given. For each \(h\in\mathbb{N}\),_
\[Y_{h+1} \leq Mb^{h}(Y_{h}^{1+\delta}+Z_{h}^{1+\kappa}Y_{h}^{\delta})\] \[Z_{h+1} \leq Mb^{h}(Y_{h}+Z_{h}^{1+\kappa}).\]
_If \(Y_{0}+Z_{0}^{1+\kappa}\leq(2M)^{-\frac{1+\kappa}{\sigma}}b^{-\frac{1+\kappa}{ \sigma^{2}}}\) where \(\sigma=\min\{\kappa,\delta\}\), then_
\[\lim_{h\to\infty}Y_{h}=\lim_{h\to\infty}Z_{h}=0.\]
Before ending this section, we give the Hölder regularity of a homogeneous equation in order to transfer the regularity to a weak solution to (1.1) with (1.4). In [17, 31, 1], the Hölder continuity is proved when \(\Phi(t)=t\). In a similar way, we can prove the Hölder continuity when \(\Phi\) satisfies (1.3), with some minor modifications. Therefore we get the following Hölder estimate.
**Lemma 2.7**.: _Suppose that_
\[u\in L^{2}\left((-1,0];W^{s,2}(B_{1})\right)\cap L^{\infty}\left((-1,0];L^{1} _{2s}(\mathbb{R}^{n})\right)\cap C\left((-1,0];L^{2}(B_{1})\right)\]
_is a local weak solution to_
\[\partial_{t}u+\mathcal{L}^{\Phi}_{A}u=0\text{ in }Q_{1}.\]
_Then we have that \(u\) is in \(C^{\beta,\frac{\beta}{2s}}_{\mathrm{loc}}(Q_{1})\) for some \(\beta=\beta(n,s,\lambda)\). In particular, we obtain for any \(0<\rho\leq\frac{R}{2}\) with \(Q_{R}(z_{0})\Subset Q_{1}\)_
\[\operatorname*{osc}_{Q_{\rho}(z_{0})}u\leq c\left(\frac{\rho}{R}\right)^{ \beta}\left[\left(\fint_{Q_{R}}|u|^{2}\,dx\,dt\right)^{\frac{1}{2}}+\mathrm{ Tail}_{\infty}\left(u;x_{0},\frac{R}{2},t_{0}-R^{2s},t_{0}\right)\right]\]
_and_
\[\fint_{Q_{\rho}(z_{0})}|u-\overline{u}_{Q_{\rho}(z_{0})}|^{2}\,dx\,dt\leq c \left(\frac{\rho}{R}\right)^{2\beta}\left[\left(\fint_{Q_{R}}|u|^{2}\,dx\,dt \right)+\mathrm{Tail}_{\infty}^{2}\left(u;x_{0},\frac{R}{2},t_{0}-R^{2s},t_{0 }\right)\right],\]
_where \(c\equiv c(n,s,\lambda)\)._
## 3. Local boundedness and Hölder regularity for the inhomogeneous equation
Let \(\zeta:\mathbb{R}\to\mathbb{R}\) be a smooth even function with \(\operatorname*{supp}\zeta\subset(-\frac{1}{2},\frac{1}{2})\) and \(\int_{\mathbb{R}}\zeta=1\). For any locally integrable function \(u:\Omega\times(0,T)\to\mathbb{R}\), we define
\[u^{\epsilon}(x,t)\coloneqq\fint_{t-\frac{\epsilon}{2}}^{t+\frac{\epsilon}{2}}\zeta\left(\frac{t-\sigma}{\epsilon}\right)u(x,\sigma)\,d\sigma=\fint_{-\frac{1}{2}}^{\frac{1}{2}}\zeta(\sigma)u(x,t-\epsilon\sigma)\,d\sigma,\quad(x,t)\in\Omega_{T}\]
for \(0<\epsilon<\min\{t,T-t\}\). We check the following elementary lemma.
**Lemma 3.1**.: _(Property of mollification) Let \(X\) be a separable Banach space and \(1\leq p<\infty\)._
1. _Suppose_ \(u\in C([0,T];X)\)_. Then for each_ \(t\in(0,T)\)_,_ \[u^{\epsilon}(\cdot,t)\to u(\cdot,t)\text{ in }X\] _as_ \(\epsilon\) _tends to 0._
2. _Suppose_ \(u\in L^{p}(0,T;X)\)_. Then_ \[u^{\epsilon}\to u\text{ in }L^{p}_{loc}(0,T;X)\] _as_ \(\epsilon\) _tends to 0._
First, we give a Caccioppoli-type estimate, Lemma 3.2, for (1.1) with (1.4). Let us consider a parameter \(\kappa\in\left(0,\frac{2s}{n}\right)\) so that
\[0<\frac{n}{2qs}+\frac{1}{r}=1-\frac{n}{2s}\kappa<1. \tag{3.1}\]
Let us also consider two parameters \(\hat{q}\coloneqq 2(1+\kappa)q^{\prime}\) and \(\hat{r}\coloneqq 2(1+\kappa)r^{\prime}\) to see that
\[\frac{n}{2\hat{q}s}+\frac{1}{\hat{r}}=\frac{n}{4s},\quad\hat{q}\in\left[2, \frac{2n}{n-2s}\right]\quad\text{and}\quad\hat{r}\in[2,\infty]. \tag{3.2}\]
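For the reader's convenience, we record the short computation (our addition) that verifies the first identity in (3.2) from (3.1):
\[\frac{n}{2\hat{q}s}+\frac{1}{\hat{r}}=\frac{1}{2(1+\kappa)}\left(\frac{n}{2q^{\prime}s}+\frac{1}{r^{\prime}}\right)=\frac{1}{2(1+\kappa)}\left(\frac{n}{2s}+1-\frac{n}{2qs}-\frac{1}{r}\right)=\frac{1}{2(1+\kappa)}\cdot\frac{n}{2s}(1+\kappa)=\frac{n}{4s},\]
where we used \(\frac{1}{q^{\prime}}=1-\frac{1}{q}\), \(\frac{1}{r^{\prime}}=1-\frac{1}{r}\), and \(\frac{n}{2qs}+\frac{1}{r}=1-\frac{n}{2s}\kappa\) from (3.1).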
If a function \(u\) belongs to \(L^{2}(0,T;W^{s,2}(\Omega))\), then for \(x_{0}\in\Omega\), \(\rho>0\), \(k\geq 0\), and \(t\in(0,T)\), we define
\[E(t;x_{0},\rho,k)=\{x\in B_{\rho}(x_{0})\ ;u(x,t)>k\}\quad\text{and}\quad w_{+} =(u-k)_{+}, \tag{3.3}\]
where \(B_{\rho}(x_{0})\subset\Omega\).
**Lemma 3.2**.: _Suppose that \(u\) is a local weak subsolution to (1.1) with (1.4). Then for any \(Q_{R,T_{2}}(z_{0})\Subset\Omega_{T}\) with \(0<\rho<R\) and \(0<T_{1}<T_{2}<R^{2s}\), there exists a constant \(c\equiv c(n,s,q,r,\lambda)\) such that_
\[\int_{t_{0}-T_{1}}^{t_{0}}\int_{B_{\rho}(x_{0})}\int_{B_{\rho}(x_{0})}\frac{|w_{+}(x,t)-w_{+}(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt+\operatorname{ess\,sup}_{t\in[t_{0}-T_{1},t_{0}]}\int_{B_{\rho}(x_{0})}w_{+}^{2}(x,t)\,dx\] \[\quad\leq c\left(\frac{R^{2(1-s)}}{(R-\rho)^{2}}+\frac{1}{T_{2}-T_{1}}\right)\int_{t_{0}-T_{2}}^{t_{0}}\int_{B_{R}(x_{0})}w_{+}^{2}\,dx\,dt\] \[\quad\quad+c\left(\frac{R^{n+2s}}{(R-\rho)^{n+2s}}\operatorname{ess\,sup}_{t\in[t_{0}-T_{2},t_{0}]}\int_{\mathbb{R}^{n}\setminus B_{\rho}(x_{0})}\frac{w_{+}(y,t)}{|x_{0}-y|^{n+2s}}\,dy\right)\times\|w_{+}\|_{L^{1}(Q_{R,T_{2}}(z_{0}))}\] \[\quad\quad+ck^{2}R^{-n\kappa}\left(\int_{t_{0}-T_{2}}^{t_{0}}|E(t;x_{0},R,k)|^{\frac{\hat{r}}{\hat{q}}}\,dt\right)^{\frac{2(1+\kappa)}{\hat{r}}},\]
_whenever \(k\geq\|f\|_{L^{q,r}(Q_{R,T_{2}}(z_{0}))}R^{n\kappa}\)._
Proof.: Let \(\epsilon>0\) be a sufficiently small number so that \(J\coloneqq[t_{0}-T_{2}-\epsilon,t_{0}+\epsilon]\Subset(0,T)\) and \(t_{0}-T_{2}+\epsilon<t_{0}-\frac{T_{1}+T_{2}}{2}-\epsilon\). We take a nonnegative function \(\eta=\eta(t)\in C^{\infty}(\mathbb{R})\) satisfying
\[\eta(t)\equiv 1\quad\text{in }t\geq t_{0}-T_{1},\quad\eta(t)\equiv 0\quad\text{in }t \leq t_{0}-\left(\frac{T_{1}+T_{2}}{2}\right),\quad\|\eta^{\prime}(t)\|_{L^{ \infty}(\mathbb{R})}\leq\frac{4}{T_{2}-T_{1}}, \tag{3.4}\]
and choose a cutoff function \(\psi=\psi(x)\in C^{\infty}_{c}(B_{(R+\rho)/2}(x_{0}))\) such that
\[\psi\equiv 1\quad\text{in }B_{\rho}(x_{0})\quad\text{and}\quad\|\nabla\psi\|_{L^{ \infty}(\mathbb{R}^{n})}\leq\frac{4}{R-\rho}. \tag{3.5}\]
Define
\[\phi_{\epsilon}(x,t)=(w_{+}^{\epsilon}\psi^{2}\eta^{2})^{\epsilon}(x,t),\quad (x,t)\in B_{R}(x_{0})\times J,\]
where \(w_{+}^{\epsilon}\) is the convolution of \(w_{+}\) with respect to time variable. Note that
\[\phi_{\epsilon}\to\phi\coloneqq w_{+}\psi^{2}\eta^{2}\text{ in }L^{2}\big{(}J;W^{s,2}(B_{R}(x_{0}))\big{)}\text{ as }\epsilon\to 0, \tag{3.6}\]
by Lemma 3.1. Take \(\phi_{\epsilon}\) as a test function to find
\[I_{1}^{\epsilon}+I_{2}^{\epsilon} \coloneqq\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}-u\partial_{t }\phi_{\epsilon}\,dx\,dt\] \[\quad+\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}}\Phi(u(x,t)-u(y,t))(\phi_{\epsilon}(x,t)-\phi_{\epsilon}(y,t)) \frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\]
\[\leq\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}f\phi_{\epsilon}\,dx\,dt-\int_{B_{ R}(x_{0})}u\phi_{\epsilon}\,dx\Bigg{|}_{t=t_{0}-T_{2}}^{t=\tau}\eqqcolon I_{3}^{ \epsilon}+I_{4}^{\epsilon},\]
for any \(\tau\in\big{[}t_{0}-\big{(}\frac{T_{1}+T_{2}}{2}\big{)}\,,t_{0}\big{]}\). Now we find the limits of \(I_{1}^{\epsilon},I_{2}^{\epsilon},I_{3}^{\epsilon}\), and \(I_{4}^{\epsilon}\).
**Estimate of \(I_{1}^{\epsilon}\).** Using Fubini's theorem, we obtain
\[I_{1}^{\epsilon} =-\int_{B_{R}(x_{0})}\fint_{t_{0}-T_{2}-\frac{\epsilon}{2}}^{t_{0}-T_{2}+\frac{\epsilon}{2}}\int_{t_{0}-T_{2}}^{\sigma+\frac{\epsilon}{2}}\frac{1}{\epsilon}u(x,t)\zeta^{\prime}\left(\frac{t-\sigma}{\epsilon}\right)\big{(}w_{+}^{\epsilon}\psi^{2}\eta^{2}\big{)}(x,\sigma)\,dt\,d\sigma\,dx\] \[\quad-\int_{B_{R}(x_{0})}\fint_{\tau-\frac{\epsilon}{2}}^{\tau+\frac{\epsilon}{2}}\int_{\sigma-\frac{\epsilon}{2}}^{\tau}\frac{1}{\epsilon}u(x,t)\zeta^{\prime}\left(\frac{t-\sigma}{\epsilon}\right)\big{(}w_{+}^{\epsilon}\psi^{2}\eta^{2}\big{)}(x,\sigma)\,dt\,d\sigma\,dx\] \[\quad+\int_{B_{R}(x_{0})}\int_{t_{0}-T_{2}+\frac{\epsilon}{2}}^{\tau-\frac{\epsilon}{2}}\partial_{t}u^{\epsilon}(x,t)\big{(}w_{+}^{\epsilon}\psi^{2}\eta^{2}\big{)}(x,t)\,dt\,dx\eqqcolon-I_{1,1}^{\epsilon}-I_{1,2}^{\epsilon}+I_{1,3}^{\epsilon}.\]
By using a suitable change of variables, we rewrite \(I_{1,1}^{\epsilon}\) as follows:
\[I_{1,1}^{\epsilon}=\int_{B_{R}(x_{0})}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{ 0}^{\sigma+\frac{1}{2}}u(x,\epsilon t+t_{0}-T_{2})\zeta^{\prime}(t-\sigma) \big{(}w_{+}^{\epsilon}\psi^{2}\eta^{2}\big{)}(x,\epsilon\sigma+t_{0}-T_{2}) \,dt\,d\sigma\,dx.\]
Thus, we have
\[\lim_{\epsilon\to 0}I_{1,1}^{\epsilon} =\int_{B_{R}(x_{0})}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{0}^{ \sigma+\frac{1}{2}}u(x,t_{0}-T_{2})\phi(x,t_{0}-T_{2})\zeta^{\prime}(t-\sigma) \,dt\,d\sigma\,dx\] \[=-\int_{B_{R}(x_{0})}u(x,t_{0}-T_{2})\phi(x,t_{0}-T_{2})\,dx\]
by the fact that \(u,w_{+}\in C(J;L^{2}(\Omega))\) with Lemma 3.1. Similarly, we get
\[\lim_{\epsilon\to 0}I_{1,2}^{\epsilon}=\int_{B_{R}(x_{0})}u(x,\tau)\phi(x, \tau)\,dx.\]
For \(I_{1,3}^{\epsilon}\), we use an integration by parts, which gives
\[\begin{split}I_{1,3}^{\epsilon}&=\int_{B_{R}(x_{0})}\int_{t_{0}-T_{2}+\frac{\epsilon}{2}}^{\tau-\frac{\epsilon}{2}}\partial_{t}w_{+}^{\epsilon}\big{(}w_{+}^{\epsilon}\psi^{2}\eta^{2}\big{)}\,dt\,dx\\&=\int_{B_{R}(x_{0})}\int_{t_{0}-T_{2}+\frac{\epsilon}{2}}^{\tau-\frac{\epsilon}{2}}\partial_{t}\left(\frac{w_{+}^{\epsilon}(x,t)^{2}}{2}\right)\psi^{2}(x)\eta^{2}(t)\,dt\,dx\\&=-\int_{B_{R}(x_{0})}\int_{t_{0}-T_{2}+\frac{\epsilon}{2}}^{\tau-\frac{\epsilon}{2}}\frac{w_{+}^{\epsilon}(x,t)^{2}}{2}\psi^{2}(x)\left(\eta^{2}(t)\right)^{\prime}\,dt\,dx\\&\quad+\int_{B_{R}(x_{0})}\frac{w_{+}^{\epsilon}(x,t)^{2}}{2}\psi^{2}(x)\eta^{2}(t)\,dx\Bigg{|}_{t=t_{0}-T_{2}+\frac{\epsilon}{2}}^{t=\tau-\frac{\epsilon}{2}}.\end{split}\]
As in \(I_{1,1}^{\epsilon}\), we observe that
\[\lim_{\epsilon\to 0}I_{1,3}^{\epsilon}=-\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{ +}^{2}(x,t)\psi^{2}(x)\eta(t)\eta^{\prime}(t)\,dt\,dx+\int_{B_{R}(x_{0})} \frac{w_{+}^{2}(x,t)}{2}\psi^{2}(x)\eta^{2}(t)\,dx\Bigg{|}_{t=t_{0}-T_{2}}^{t= \tau}.\]
Combining all the limits of \(I_{1,1}^{\epsilon}\), \(I_{1,2}^{\epsilon}\), and \(I_{1,3}^{\epsilon}\), we find that
\[\begin{split}\lim_{\epsilon\to 0}I_{1}^{\epsilon}&=-\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{+}^{2}(x,t)\psi^{2}(x)\eta(t)\eta^{\prime}(t)\,dt\,dx+\int_{B_{R}(x_{0})}\frac{w_{+}^{2}(x,t)}{2}\psi^{2}(x)\eta^{2}(t)\,dx\Bigg{|}_{t=t_{0}-T_{2}}^{t=\tau}\\&\quad-\int_{B_{R}(x_{0})}u(x,t)\phi(x,t)\,dx\Bigg{|}_{t=t_{0}-T_{2}}^{t=\tau}.\end{split}\]
**Estimate of \(I_{2}^{\epsilon}\).** We write \(I_{2}^{\epsilon}\) as follows:
\[\begin{split} I_{2}^{\epsilon}&=\int_{t_{0}-T_{2}}^{ \tau}\int_{B_{R}(x_{0})}\int_{B_{R}(x_{0})}\Phi(u(x,t)-u(y,t))(\phi_{\epsilon}(x,t)-\phi_{\epsilon}(y,t))\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\\ &\quad+\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}\setminus B_{ R}(x_{0})}\int_{B_{R}(x_{0})}\Phi(u(x,t)-u(y,t))\phi_{\epsilon}(x,t)\frac{A(x,y,t)}{|x-y |^{n+2s}}\,dx\,dy\,dt\\ &\quad-\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}\int_{\mathbb{ R}^{n}\setminus B_{R}(x_{0})}\Phi(u(x,t)-u(y,t))\phi_{\epsilon}(y,t)\frac{A(x,y,t)}{|x-y |^{n+2s}}\,dx\,dy\,dt\\ &\quad=:I_{2,1}^{\epsilon}+I_{2,2}^{\epsilon}-I_{2,3}^{\epsilon }.\end{split} \tag{3.7}\]
Using Hölder's inequality, we observe that
\[\begin{split}&\left|I_{2,1}^{\epsilon}-\int_{t_{0}-T_{2}}^{ \tau}\int_{B_{R}(x_{0})}\int_{B_{R}(x_{0})}\Phi(u(x,t)-u(y,t))(\phi(x,t)-\phi(y, t))\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\right|\\ &\quad\leq\lambda^{2}\|u\|_{L^{2}\big{(}J;W^{s,2}(B_{R}(x_{0})) \big{)}}\|\phi_{\epsilon}-\phi\|_{L^{2}\big{(}J;W^{s,2}(B_{R}(x_{0}))\big{)}} \,.\end{split} \tag{3.8}\]
We next focus on the estimate of \(I_{2,2}^{\epsilon}\). Due to the fact that
\[|y-x|\geq|y-x_{0}|-|x-x_{0}|\geq\frac{(R-\rho)}{2R}|y-x_{0}|,\quad x\in\operatorname{supp}\psi\subset B_{(R+\rho)/2}(x_{0}),\,y\in\mathbb{R}^{n}\setminus B_{R}(x_{0}) \tag{3.9}\]
and Hölder's inequality, we have
\[\begin{split}&\left|I_{2,2}^{\epsilon}-\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\int_{B_{R}(x_{0})}\Phi(u(x,t)-u(y,t))\phi(x,t)\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\right|\\&\quad\leq\lambda^{2}\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\int_{B_{R}(x_{0})}\frac{|u(x,t)|+|u(y,t)|}{|x-y|^{n+2s}}|\phi_{\epsilon}(x,t)-\phi(x,t)|\,dx\,dy\,dt\\&\quad\leq c\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\int_{B_{R}(x_{0})}\frac{|u(x,t)|}{|x_{0}-y|^{n+2s}}|\phi_{\epsilon}(x,t)-\phi(x,t)|\,dx\,dy\,dt\\&\quad\quad+c\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\int_{B_{R}(x_{0})}\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}|\phi_{\epsilon}(x,t)-\phi(x,t)|\,dx\,dy\,dt\\&\quad\leq c(\rho,R)\|u\|_{L^{2}\big{(}J;L^{2}(B_{R}(x_{0}))\big{)}}\|\phi_{\epsilon}-\phi\|_{L^{2}\big{(}J;L^{2}(B_{R}(x_{0}))\big{)}}\\&\quad\quad+c(\rho,R)\operatorname{Tail}_{\infty}(u;x_{0},R,J)\|\phi_{\epsilon}-\phi\|_{L^{2}\big{(}J;L^{2}(B_{R}(x_{0}))\big{)}}\,.\end{split} \tag{3.10}\]
Similarly, we deduce
\[\begin{split}&\left|I_{2,3}^{\epsilon}-\int_{t_{0}-T_{2}}^{\tau} \int_{B_{R}(x_{0})}\int_{\mathbb{R}^{n}\setminus B_{R}(x_{0})}\Phi(u(x,t)-u(y, t))\phi(y,t)\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\right|\\ &\quad\leq c(\rho,R)\|u\|_{L^{2}\big{(}J;L^{2}(B_{R}(x_{0})) \big{)}}\|\phi_{\epsilon}-\phi\|_{L^{2}\big{(}J;L^{2}(B_{R}(x_{0}))\big{)}}\\ &\quad+c(\rho,R)\operatorname{Tail}_{\infty}(u;x_{0},R,J)\|\phi_{ \epsilon}-\phi\|_{L^{2}\big{(}J;L^{2}(B_{R}(x_{0}))\big{)}}\,.\end{split} \tag{3.11}\]
Combining the above estimates (3.8), (3.10), (3.11) and using the fact (3.6), we discover
\[\lim_{\epsilon\to 0}I_{2}^{\epsilon}=\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n} }\int_{\mathbb{R}^{n}}\Phi(u(x,t)-u(y,t))(\phi(x,t)-\phi(y,t))\frac{A(x,y,t)}{| x-y|^{n+2s}}\,dx\,dy\,dt.\]
**Estimate of \(I_{3}^{\epsilon}\).** From Hölder's inequality and Lemma 2.2, we get
\[\begin{split}&\left|I_{3}^{\epsilon}-\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}f(x,t)\phi(x,t)\,dx\,dt\right|\\&\quad\leq\left\|f\right\|_{L^{r}\big{(}J;L^{q}(B_{R}(x_{0}))\big{)}}\left\|\phi_{\epsilon}-\phi\right\|_{L^{r^{\prime}}\big{(}J;L^{q^{\prime}}(B_{R}(x_{0}))\big{)}}\\&\quad\leq c\|f\|_{L^{r}\big{(}J;L^{q}(B_{R}(x_{0}))\big{)}}\left\|\phi_{\epsilon}-\phi\right\|_{V_{2}^{2}(B_{R}(x_{0})\times J)}\end{split}\]
\[+c(R)\|f\|_{L^{r}\big{(}J;L^{q}(B_{R}(x_{0}))\big{)}}\,\|\phi_{\epsilon}-\phi\|_{L^ {2}\big{(}J;L^{2}(B_{R}(x_{0}))\big{)}},\]
and this estimate with (3.6) yields
\[\lim_{\epsilon\to 0}I_{3}^{\epsilon}=\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}f(x,t)\phi(x,t)\,dx\,dt.\]
**Estimate of \(I_{4}^{\epsilon}\).** Since \(u,\phi\in C(J;L^{2}(B_{R}(x_{0})))\), we deduce
\[\lim_{\epsilon\to 0}I_{4}^{\epsilon}=-\int_{B_{R}(x_{0})}u(x,t)\phi(x,t)\,dx \bigg{|}_{t=t_{0}-T_{2}}^{t=\tau}.\]
We combine all the estimates of \(I_{1}^{\epsilon},I_{2}^{\epsilon},I_{3}^{\epsilon}\), and \(I_{4}^{\epsilon}\) to see the following
\[\int_{B_{R}(x_{0})}\frac{w_{+}^{2}(x,t)}{2}\psi^{2}(x)\eta^{2}(t )\,dx\bigg{|}_{t=t_{0}-T_{2}}^{t=\tau}\] \[+\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{ n}}\Phi(u(x,t)-u(y,t))(\phi(x,t)-\phi(y,t))\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\, dy\,dt\] \[\qquad\leq\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}f(x,t)\phi( x,t)\,dx\,dt+\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{+}^{2}(x,t)\psi^{2}(x) \eta(t)\eta^{\prime}(t)\,dt\,dx.\]
From the Caccioppoli-type estimate as in [17, Lemma 3.3], we further estimate the second term on the left-hand side, so that we have
\[J_{0} \coloneqq\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}\int_{B_{R}( x_{0})}\frac{|w_{+}(x,t)\psi(x)-w_{+}(y,t)\psi(y)|^{2}}{|x-y|^{n+2s}}\eta^{2}(t )\,dx\,dy\,dt\] \[\quad+\int_{B_{R}(x_{0})}\frac{(w_{+}(x,t)\psi(x)\eta(t))^{2}}{2} \,dx\bigg{|}_{t=t_{0}-T_{2}}^{t=\tau}\] \[\leq c\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}\int_{B_{R}(x_{0} )}\frac{\max\{w_{+}(x,t),w_{+}(y,t)\}^{2}|\psi(x)-\psi(y)|^{2}\eta^{2}(t)}{|x- y|^{n+2s}}\,dx\,dy\,dt\] \[\quad+c\int_{t_{0}-T_{2}}^{\tau}\int_{\mathbb{R}^{n}\setminus B_{ R}(x_{0})}\int_{B_{R}(x_{0})}\frac{w_{+}(y,t)}{|x-y|^{n+2s}}\phi(x,t)\,dx\,dy\,dt\] \[\quad+c\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}f(x,t)\phi(x,t) \,dx\,dt\] \[\quad+c\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{+}^{2}(x,t) \psi^{2}(x)\eta(t)\eta^{\prime}(t)\,dx\,dt=J_{1}+J_{2}+J_{3}+J_{4}.\]
Now we estimate \(J_{i}\) for each \(i=0,\ldots,4\). Note that \(J_{0},J_{1},J_{2}\) and \(J_{4}\) can be estimated with the help of (3.4), (3.5) and (3.9) as follows:
**Estimate of \(J_{0}\).**
\[J_{0}\geq\int_{t_{0}-T_{1}}^{\tau}\int_{B_{\rho}(x_{0})}\int_{B_{\rho}(x_{0})} \frac{|w_{+}(x,t)-w_{+}(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt+\operatorname*{ ess\,sup}_{t\in[t_{0}-T_{1},\tau]}\int_{B_{\rho}(x_{0})}\frac{w_{+}^{2}(x,t)}{2}\,dx\]
**Estimate of \(J_{1}\).**
\[J_{1}\leq c\frac{R^{2(1-s)}}{(R-\rho)^{2}}\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}( x_{0})}w_{+}^{2}(x,t)\,dx\,dt.\]
**Estimate of \(J_{2}\).**
\[J_{2}\leq\frac{R^{n+2s}}{(R-\rho)^{n+2s}}\left(\operatorname*{ess\,sup}_{t\in[t_{0}-T_{2},t_{0}]}\int_{\mathbb{R}^{n}\setminus B_{\rho}(x_{0})}\frac{w_{+}(y,t)}{|x_{0}-y|^{n+2s}}\,dy\right)\|w_{+}\|_{L^{1}(Q_{R,T_{2}}(z_{0}))}.\]
**Estimate of \(J_{4}\).**
\[J_{4}\leq\frac{c}{T_{2}-T_{1}}\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{+}^{2}(x,t)\,dx\,dt.\]
In the case of \(J_{3}\), applying Hölder's inequality, we have
\[\begin{split}J_{3}&\leq\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}|f(x,t)|w_{+}(x,t)\psi(x)\eta(t)\,dx\,dt\\&\leq\|f\|_{L^{q,r}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}\left\|\chi_{\{u\geq k\}}\right\|_{L^{\tilde{q},\tilde{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}\|w_{+}\psi\eta\|_{L^{\hat{q},\hat{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}\\&\leq kR^{-n\kappa}\left\|\chi_{\{u\geq k\}}\right\|_{L^{\tilde{q},\tilde{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}\|w_{+}\psi\eta\|_{L^{\hat{q},\hat{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))},\end{split} \tag{3.11}\]
where \(\tilde{q}=\frac{\hat{q}}{2\kappa+1}\) and \(\tilde{r}=\frac{\hat{r}}{2\kappa+1}\). From (3.1) and (3.2), we obtain
\[\left\|\chi_{\{u\geq k\}}\right\|_{L^{\tilde{q},\tilde{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}=\left(\int_{t_{0}-T_{2}}^{\tau}|E(t;x_{0},R,k)|^{\frac{\tilde{r}}{\tilde{q}}}\,dt\right)^{\frac{1}{\tilde{r}}}=\left(\int_{t_{0}-T_{2}}^{\tau}|E(t;x_{0},R,k)|^{\frac{\hat{r}}{\hat{q}}}\,dt\right)^{\frac{2\kappa+1}{\hat{r}}}, \tag{3.12}\]
and
\[\left(\int_{t_{0}-T_{2}}^{\tau}|E(t;x_{0},R,k)|^{\frac{\hat{r}}{\hat{q}}}\,dt \right)^{\frac{2\kappa}{\hat{r}}}\leq c\left(\int_{t_{0}-T_{2}}^{\tau}R^{ \frac{n\hat{r}}{\hat{q}}}\,dt\right)^{\frac{2\kappa}{\hat{r}}}\leq cR^{\left(2 s+\frac{n\hat{r}}{\hat{q}}\right)\frac{2\kappa}{\hat{r}}}=cR^{n\kappa}. \tag{3.13}\]
Then we use Cauchy's inequality, (3.12), (3.13), and Lemma 2.2 to get
\[\begin{split}J_{3}&\leq\frac{k^{2}}{4\epsilon}R^{-2n\kappa}\left\|\chi_{\{u\geq k\}}\right\|_{L^{\tilde{q},\tilde{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}^{2}+\epsilon\|w_{+}\psi\eta\|_{L^{\hat{q},\hat{r}}(B_{R}(x_{0})\times(t_{0}-T_{2},\tau))}^{2}\\&\leq\frac{k^{2}}{4\epsilon}R^{-n\kappa}\left(\int_{t_{0}-T_{2}}^{\tau}|E(t;x_{0},R,k)|^{\frac{\hat{r}}{\hat{q}}}\,dt\right)^{\frac{2(1+\kappa)}{\hat{r}}}+c\epsilon\left(J_{0}+R^{-2s}\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{+}^{2}(x,t)\,dx\,dt\right)\\&\leq ck^{2}R^{-n\kappa}\left(\int_{t_{0}-T_{2}}^{\tau}|E(t;x_{0},R,k)|^{\frac{\hat{r}}{\hat{q}}}\,dt\right)^{\frac{2(1+\kappa)}{\hat{r}}}+\frac{1}{4}\left(J_{0}+R^{-2s}\int_{t_{0}-T_{2}}^{\tau}\int_{B_{R}(x_{0})}w_{+}^{2}(x,t)\,dx\,dt\right),\end{split}\]
by taking \(\epsilon=\frac{1}{4c}\). Since \(\tau\in\left[t_{0}-\left(\frac{T_{1}+T_{2}}{2}\right),t_{0}\right]\) is arbitrary, we can combine the estimates of \(J_{0},J_{1},J_{2},J_{3}\) and \(J_{4}\) to complete the proof.
Now, using Lemma 3.2, we prove the local boundedness of a local weak subsolution to (1.1) with (1.4).
**Proof of Theorem 1.1.****Step 1: Normalization.** Define
\[\tilde{u}(x,t)=u\left(\rho_{0}x+x_{0},\rho_{0}^{2s}t+t_{0}\right),\quad(x,t) \in Q_{1},\]
\[\tilde{A}(x,y,t)=A\left(\rho_{0}x+x_{0},\rho_{0}y+x_{0},\rho_{0}^{2s}t+t_{0} \right),\quad(x,y,t)\in\mathbb{R}^{2n}\times\mathbb{R},\]
and
\[\tilde{f}(x,t)=\rho_{0}^{2s}f\left(\rho_{0}x+x_{0},\rho_{0}^{2s}t+t_{0}\right),\quad(x,t)\in Q_{1}.\]
Then
\[\tilde{u}\in L^{2}_{\mathrm{loc}}\left(-1,0;W^{s,2}_{\mathrm{loc}}(B_{1}) \right)\cap L^{\infty}_{\mathrm{loc}}\left(-1,0;L^{1}_{2s}(\mathbb{R}^{n}) \right)\cap C_{\mathrm{loc}}\left(-1,0;L^{2}_{\mathrm{loc}}(B_{1})\right)\]
is a local weak subsolution to
\[\partial_{t}\tilde{u}+\mathcal{L}^{\Phi}_{\tilde{A}}\tilde{u}=\tilde{f}\text{ in }Q_{1}.\]
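The power \(\rho_{0}^{2s}\) in the definition of \(\tilde{f}\) records exactly how the equation scales. As a quick check (written formally here, with the operator expressed as a principal-value integral; the weak formulation transforms in the same way), substituting \(x^{\prime}=\rho_{0}x+x_{0}\), \(y^{\prime}=\rho_{0}y+x_{0}\) and \(t^{\prime}=\rho_{0}^{2s}t+t_{0}\) gives
\[\mathcal{L}^{\Phi}_{\tilde{A}}\tilde{u}(x,t)=\mathrm{p.v.}\int_{\mathbb{R}^{n}}\Phi\big{(}u(x^{\prime},t^{\prime})-u(y^{\prime},t^{\prime})\big{)}\frac{A(x^{\prime},y^{\prime},t^{\prime})}{\rho_{0}^{-(n+2s)}|x^{\prime}-y^{\prime}|^{n+2s}}\,\rho_{0}^{-n}\,dy^{\prime}=\rho_{0}^{2s}\big{(}\mathcal{L}^{\Phi}_{A}u\big{)}(x^{\prime},t^{\prime}),\]
while \(\partial_{t}\tilde{u}(x,t)=\rho_{0}^{2s}(\partial_{t}u)(x^{\prime},t^{\prime})\), so that \(\partial_{t}\tilde{u}+\mathcal{L}^{\Phi}_{\tilde{A}}\tilde{u}=\rho_{0}^{2s}f(x^{\prime},t^{\prime})=\tilde{f}\).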
Now take \(k>0\) such that
\[k>\|\tilde{f}\|_{L^{q,r}(Q_{1})}+\operatorname*{ess\,sup}_{t\in[-1,0]}\int_{ \mathbb{R}^{n}\setminus B_{\frac{1}{2}}}\frac{|\tilde{u}(y,t)|}{|y|^{n+2s}}\,dy.\]
From Lemma 3.2, for any \(\frac{1}{2}<\rho<R<1\) and \(\frac{1}{2^{2s}}<T_{1}<T_{2}\leq R^{2s}\), we have
\[\begin{split}&\|(\tilde{u}-k)_{+}\|_{V_{2}^{2}(Q_{\rho,T_{1}})}^{2}\\&\quad\leq c\left(\frac{1}{(R-\rho)^{2}}+\frac{1}{T_{2}-T_{1}}\right)\int_{-T_{2}}^{0}\int_{B_{R}}(\tilde{u}-k)_{+}^{2}\,dx\,dt\\&\quad\quad+c\left(\frac{1}{(R-\rho)^{n+2s}}\operatorname*{ess\,sup}_{t\in[-T_{2},0]}\int_{\mathbb{R}^{n}\setminus B_{\rho}}\frac{(\tilde{u}-k)_{+}(y,t)}{|y|^{n+2s}}\,dy\right)\times\|(\tilde{u}-k)_{+}\|_{L^{1}(Q_{R,T_{2}})}\\&\quad\quad+ck^{2}R^{-n\kappa}\left(\int_{-T_{2}}^{0}|\tilde{E}(t;0,R,k)|^{\frac{\tilde{r}}{\tilde{q}}}\,dt\right)^{\frac{2(1+\kappa)}{\tilde{r}}},\end{split} \tag{3.13}\]
where \(\tilde{E}\) is the level set of \(\tilde{u}\) as in (3.3). We write the second term on the right-hand side as \(I\) for simplicity. Then, using Cauchy's inequality and Hölder's inequality, we have
\[\begin{split}I&\leq\frac{c}{(R-\rho)^{n+2s}}\iint_{Q_{R,T_{2}}}k(\tilde{u}-k)_{+}\,dx\,dt\\&\leq\frac{c}{(R-\rho)^{n+2s}}\left(\iint_{Q_{R,T_{2}}}k^{2}\chi_{\{\tilde{u}(x,t)>k\}}\,dx\,dt+\iint_{Q_{R,T_{2}}}(\tilde{u}-k)_{+}^{2}\,dx\,dt\right)\\&\leq\frac{c}{(R-\rho)^{n+2s}}k^{2}\left(\int_{-T_{2}}^{0}|\tilde{E}(t;0,R,k)|^{\frac{\tilde{r}}{\tilde{q}}}\,dt\right)^{\frac{2(1+\kappa)}{\tilde{r}}}+\frac{c}{(R-\rho)^{n+2s}}\iint_{Q_{R,T_{2}}}(\tilde{u}-k)_{+}^{2}\,dx\,dt.\end{split}\]
Combining (3.13) with the estimate of \(I\), we get
\[\begin{split}\|(\tilde{u}-k)_{+}\|_{V_{2}^{2}(Q_{\rho,T_{1}})}^{2}\leq c\Bigg{(}&\left(\frac{1}{(R-\rho)^{n+2s}}+\frac{1}{T_{2}-T_{1}}\right)\|(\tilde{u}-k)_{+}\|_{L^{2}(Q_{R,T_{2}})}^{2}\\&\quad+\frac{1}{(R-\rho)^{n+2s}}k^{2}\left(\int_{-T_{2}}^{0}|\tilde{E}(t;0,R,k)|^{\frac{\tilde{r}}{\tilde{q}}}\,dt\right)^{\frac{2(1+\kappa)}{\tilde{r}}}\Bigg{)}.\end{split} \tag{3.14}\]
**Step 2: Iteration.** For any nonnegative integer \(h\), we write
\[\rho_{h}=\frac{1}{2}+\frac{1}{2^{h+2}},\quad\tau_{h}=\rho_{h}^{2s},\quad \overline{\rho}_{h}=\frac{\rho_{h}+\rho_{h+1}}{2}\quad\text{and}\quad Q_{h}=Q _{\rho_{h},\tau_{h}}\]
Moreover, take
\[\begin{split}&k_{h}=N+N\left(1-\frac{1}{2^{h}}\right),\\&\zeta_{h}\in C_{c}^{\infty}(B_{\overline{\rho}_{h}}),\quad 0\leq\zeta_{h}\leq 1,\quad\zeta_{h}\equiv 1\text{ on }B_{\rho_{h+1}},\quad\|D\zeta_{h}\|_{\infty}\leq 2^{h+5},\\&y_{h}=N^{-2}\int_{-\tau_{h}}^{0}\int_{B_{\rho_{h}}}(\tilde{u}-k_{h})_{+}^{2}\,dx\,dt,\quad z_{h}=\left(\int_{-\tau_{h}}^{0}|\tilde{E}(t;0,\rho_{h},k_{h})|^{\frac{\tilde{r}}{\tilde{q}}}\,dt\right)^{\frac{2}{\tilde{r}}},\\&\Lambda_{h}=\int_{-\tau_{h+1}}^{0}|\tilde{E}(t;0,\overline{\rho}_{h},k_{h+1})|\,dt,\end{split}\]
where \(N>\|\tilde{f}\|_{L^{q,r}(Q_{1})}+\operatorname*{ess\,sup}_{t\in[-1,0]}\int_{\mathbb{R}^{n}\setminus B_{\frac{1}{2}}}\frac{|\tilde{u}(y,t)|}{|y|^{n+2s}}\,dy\) will be determined later in (3.19). Note that for any nonnegative integer \(h\),
\[\begin{split}&\tau_{h}-\tau_{h+1}=2s\int_{\rho_{h+1}}^{\rho_{h}}t ^{2s-1}\,dt\geq s(\rho_{h}-\rho_{h+1}),\\ &\Lambda_{h}\leq(k_{h+1}-k_{h})^{-2}N^{2}y_{h}\leq 2^{2(h+4)}y_{h}. \end{split} \tag{3.15}\]
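The second line of (3.15) is a Chebyshev-type bound, which we record for the reader's convenience: on the level set \(\tilde{E}(t;0,\overline{\rho}_{h},k_{h+1})\) one has \((\tilde{u}-k_{h})_{+}\geq k_{h+1}-k_{h}=N2^{-(h+1)}\), so that
\[\Lambda_{h}\leq(k_{h+1}-k_{h})^{-2}\int_{-\tau_{h+1}}^{0}\int_{B_{\overline{\rho}_{h}}}(\tilde{u}-k_{h})_{+}^{2}\,dx\,dt\leq 2^{2(h+1)}y_{h}\leq 2^{2(h+4)}y_{h},\]
where we also used \(\tau_{h+1}<\tau_{h}\) and \(\overline{\rho}_{h}<\rho_{h}\).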
Now we obtain an iterative inequality for \(y_{h}\) and \(z_{h}\). Using Hölder's inequality and Lemma 2.2, we have
\[\begin{split}y_{h+1}&\leq N^{-2}\int_{-\tau_{h+1}}^{0}\int_{B_{\overline{\rho}_{h}}}(\tilde{u}-k_{h+1})_{+}^{2}\zeta_{h}^{2}\,dx\,dt\\&\leq N^{-2}\|(\tilde{u}-k_{h+1})_{+}\zeta_{h}\|_{L^{2}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}\\&\leq N^{-2}\Lambda_{h}^{\frac{2s}{n+2s}}\|(\tilde{u}-k_{h+1})_{+}\zeta_{h}\|_{L^{2\left(1+\frac{2s}{n}\right)}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}\\&\leq cN^{-2}\Lambda_{h}^{\frac{2s}{n+2s}}\Big{(}\|(\tilde{u}-k_{h+1})_{+}\zeta_{h}\|_{V_{2}^{2}\left(Q_{\overline{\rho}_{h},\tau_{h+1}}\right)}^{2}+\overline{\rho}_{h}^{-2s}\|(\tilde{u}-k_{h+1})_{+}\zeta_{h}\|_{L^{2}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}\Big{)}\\&\leq cN^{-2}\Lambda_{h}^{\delta}\left(\|(\tilde{u}-k_{h+1})_{+}\|_{V_{2}^{2}\left(Q_{\overline{\rho}_{h},\tau_{h+1}}\right)}^{2}+N^{2}y_{h}\right),\end{split} \tag{3.16}\]
where \(\delta=\frac{2s}{n+2s}\). By applying \(\rho=\overline{\rho}_{h}\), \(R=\rho_{h}\), \(T_{1}=\tau_{h+1}\), \(T_{2}=\tau_{h}\) and \(k=k_{h+1}\) to (3.14) and using (3.15) and Lemma 2.2, we observe the following:
\[\begin{split}y_{h+1}&\leq cN^{-2}\left(2^{2(h+4)}y_{h}\right)^{\delta}\left(\left(2^{h(n+2s)}+2^{h}\right)\|(\tilde{u}-k_{h+1})_{+}\|_{L^{2}(Q_{\rho_{h},\tau_{h}})}^{2}+2^{h(n+2s)}k_{h+1}^{2}z_{h}^{1+\kappa}\right)\\&\leq cN^{-2}\left(2^{2(h+4)}y_{h}\right)^{\delta}\left(\left(2^{h(n+2s)}+2^{h}\right)y_{h}N^{2}+2^{h(n+2s)}k_{h+1}^{2}z_{h}^{1+\kappa}\right)+c2^{2(h+4)}y_{h}^{1+\delta}\\&\leq c\left(\left(2^{h(n+4)}+2^{h+2\delta h}\right)y_{h}^{1+\delta}+2^{h(n+4)}z_{h}^{1+\kappa}y_{h}^{\delta}\right),\end{split} \tag{3.17}\]
and
\[\begin{split}(k_{h+1}-k_{h})^{2}z_{h+1}&=(k_{h+1}-k_{h})^{2}\left(\int_{-\tau_{h+1}}^{0}|\tilde{E}(t;0,\rho_{h+1},k_{h+1})|^{\frac{\tilde{r}}{\tilde{q}}}\,dt\right)^{\frac{2}{\tilde{r}}}\\&\leq c\|(\tilde{u}-k_{h})_{+}\|_{L^{\hat{q},\hat{r}}(Q_{\rho_{h+1},\tau_{h+1}})}^{2}\\&\leq c\|(\tilde{u}-k_{h})_{+}\zeta_{h}\|_{L^{\hat{q},\hat{r}}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}\\&\leq c\left(\|(\tilde{u}-k_{h})_{+}\zeta_{h}\|_{V_{2}^{2}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}+\|(\tilde{u}-k_{h})_{+}\zeta_{h}\|_{L^{2}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}\right)\\&\leq c\left(\left(2^{h(n+2)}+2^{h}\right)\|(\tilde{u}-k_{h})_{+}\|_{L^{2}(Q_{\overline{\rho}_{h},\tau_{h+1}})}^{2}+2^{h(n+2s)}k_{h}^{2}z_{h}^{1+\kappa}\right)\\&\leq c2^{h(n+2)}N^{2}y_{h}+c2^{h(n+2)}N^{2}z_{h}^{1+\kappa}.\end{split} \tag{3.18}\]
From (3.17) and (3.18), we observe that
\[y_{h+1} \leq c\left(2^{n+4}\right)^{h}(y_{h}^{1+\delta}+z_{h}^{1+\kappa}y_ {h}^{\delta}),\] \[z_{h+1} \leq c\left(2^{n+4}\right)^{h}(y_{h}+z_{h}^{1+\kappa}).\]
In addition, in light of (3.17) and (3.18) with \(k_{0}=N\), \(k_{-1}=\|\tilde{f}\|_{L^{q,r}(Q_{1})}+\operatorname{ess\,sup}_{t\in[-1,0]}\int_{\mathbb{R}^{n}\setminus B_{\frac{1}{2}}}\frac{|\tilde{u}(y,t)|}{|y|^{n+2s}}\,dy\), \(\rho_{-1}=1\) and \(\tau_{-1}=1\), we have
\[y_{0}\leq\frac{1}{N^{2}}\iint_{Q_{1}}(\tilde{u}-N)_{+}^{2}\,dx\,dt\leq\frac{1}{N^{2}}\|\tilde{u}\|_{L^{2}(Q_{1})}^{2},\qquad z_{0}\leq\frac{c}{(N-k_{-1})^{2}}\left(\|(\tilde{u}-k_{-1})_{+}\|_{L^{2}(Q_{1})}^{2}+k_{-1}^{2}\right).\]
Choose a sufficiently large \(N\) such that
\[N\geq c^{\frac{1+\kappa}{\sigma}}\left(\|\tilde{u}\|_{L^{2}(Q_{1})}+\|\tilde{f}\|_{L^{q,r}(Q_{1})}+\operatorname*{ess\,sup}_{t\in[-1,0]}\int_{\mathbb{R}^{n}\setminus B_{\frac{1}{2}}}\frac{|\tilde{u}(y,t)|}{|y|^{n+2s}}\,dy\right),\quad\sigma=\min\{\delta,\kappa\}, \tag{3.19}\]
which implies
\[y_{0} \leq(2c)^{-\frac{1+\kappa}{\sigma}}\times\left(2^{n+4}\right)^{- \frac{1+\kappa}{\sigma^{2}}}\] \[z_{0} \leq(2c)^{-\frac{1+\kappa}{\sigma}}\times\left(2^{n+4}\right)^{- \frac{1+\kappa}{\sigma^{2}}}.\]
Applying Lemma 2.6, we have
\[\sup_{Q_{1/2}}|\tilde{u}(x,t)|\leq c\left(\|\tilde{u}\|_{L^{2}(Q_{1})}+\|\tilde{f}\|_{L^{q,r}(Q_{1})}+\operatorname*{ess\,sup}_{t\in[-1,0]}\int_{\mathbb{R}^{n}\setminus B_{\frac{1}{2}}}\frac{|\tilde{u}(y,t)|}{|y|^{n+2s}}\,dy\right).\]
By scaling back, we see that
\[\begin{split}\sup_{z\in Q_{\rho_{0}/2}(z_{0})}|u(z)|&\leq c\Bigg{(}\left(\fint_{Q_{\rho_{0}}(z_{0})}u^{2}(x,t)\,dx\,dt\right)^{\frac{1}{2}}+\operatorname{Tail}_{\infty}\left(u,z_{0},\rho_{0}/2,\rho_{0}^{2s}\right)\\&\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}.\end{split}\]
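The power of \(\rho_{0}\) in front of the \(f\)-term is simply the scaling of the mixed norm from Step 1: with the convention \(\|f\|_{L^{q,r}}=\big{\|}\|f\|_{L^{q}_{x}}\big{\|}_{L^{r}_{t}}\) used throughout, a change of variables gives
\[\|\tilde{f}\|_{L^{q,r}(Q_{1})}=\left(\int_{-1}^{0}\left(\int_{B_{1}}\rho_{0}^{2sq}\,|f(\rho_{0}x+x_{0},\rho_{0}^{2s}t+t_{0})|^{q}\,dx\right)^{\frac{r}{q}}dt\right)^{\frac{1}{r}}=\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}.\]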
We next assert the Hölder regularity of a local weak solution to (1.1) with (1.4), by comparing the solution with that of the homogeneous equation appearing in Lemma 3.3 below. More precisely, we assert that there is a constant \(\tilde{\beta}\equiv\tilde{\beta}(n,s,q,r,\beta)\) such that \(u\in C^{\tilde{\beta},\frac{\tilde{\beta}}{2s}}(Q_{R_{0}}(z_{0}))\) for every \(Q_{2R_{0}}(z_{0})\Subset\Omega_{T}\).
Now, we fix any \(\rho>0\) and \(R>0\) so that
\[0<\rho<R<\min\left\{1,\left(2^{2s}-1\right)^{\frac{1}{2s}}\frac{R_{0}}{4}, \frac{R_{0}}{4}\right\} \tag{3.19}\]
and choose a point \(\tilde{z}\in Q_{R_{0}}(z_{0})\). Note that \(Q_{4R}(\tilde{z})\subset Q_{2R_{0}}(z_{0})\Subset\Omega_{T}\). Throughout this section, we set
\[\gamma=2n\kappa,\]
which is the positive number from (1.4).
**Lemma 3.3**.: _Let \(u\) be a local weak solution to (1.1) with (1.4) and (3.19). Suppose that_
\[v\in L^{2}\left(\tilde{t}-(3R)^{2s},\tilde{t};W^{s,2}(B_{3R}( \tilde{x}))\right) \cap L^{\infty}\left(\tilde{t}-(3R)^{2s},\tilde{t};L^{1}_{2s}( \mathbb{R}^{n})\right) \tag{3.20}\] \[\cap C\left(\left[\tilde{t}-(3R)^{2s},\tilde{t}\right];L^{2}(B_{ 3R}(\tilde{x}))\right)\]
_is the local weak solution to_
\[\begin{cases}\partial_{t}v+\mathcal{L}^{\Phi}_{A}v=0&\text{ in }Q_{3R}(\tilde{z})\\ v=u&\text{ on }\partial_{P}Q_{3R}(\tilde{z})\cup\left((\mathbb{R}^{n}\setminus B _{3R}(\tilde{x}))\times\left[\tilde{t}-(3R)^{2s},\tilde{t}\right]\right).\end{cases}\]
_Then we have the estimates_
\[\int_{\tilde{t}-(3R)^{2s}}^{\tilde{t}}[(u-v)(\cdot,t)]^{2}_{W^{s,2}(\mathbb{R} ^{n})}\,dt+\sup_{t\in[\tilde{t}-(3R)^{2s},\tilde{t}]}\int_{B_{3R}(\tilde{x})} (u-v)^{2}(x,t)\,dx\leq cR^{\gamma+n}\|f\|^{2}_{L^{q,r}(Q_{3R}(\tilde{z}))}\]
_and_
\[\fint_{Q_{3R}(\tilde{z})}(u-v)^{2}\,dx\,dt\leq cR^{\gamma}\|f\|^{2}_{L^{q,r}(Q_{3R}(\tilde{z}))},\]
_for some constant \(c\equiv c(n,s,q,r,\lambda)\)._
_Remark 6_.: The existence of \(v\) in (3.20) follows from Appendix A.
Proof.: Let \(w=u-v\) to find that
\[w\in L^{2}\left(\tilde{t}-(3R)^{2s},\tilde{t};W^{s,2}_{0}(B_{3R}(\tilde{x})) \right)\cap C\left(\left[\tilde{t}-(3R)^{2s},\tilde{t}\right];L^{2}(B_{3R}( \tilde{x}))\right)\]
and
\[\partial_{t}w+\mathcal{L}^{\Phi}_{A}u-\mathcal{L}^{\Phi}_{A}v=f\text{ in }Q_{3R}( \tilde{z}).\]
By an approximation argument with the mollification in time, we find that
\[\int_{B_{3R}(\tilde{x})}\frac{w^{2}(x,t_{1})}{2}\,dx+\int_{\tilde{t}- (3R)^{2s}}^{t_{1}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}A(x,y,t)\frac{\Phi(u( x,t)-u(y,t))}{|x-y|^{n+2s}}(w(x,t)-w(y,t))\,dx\,dy\,dt\] \[-\int_{\tilde{t}-(3R)^{2s}}^{t_{1}}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}}A(x,y,t)\frac{\Phi(v(x,t)-v(y,t))}{|x-y|^{n+2s}}(w(x,t)-w(y,t)) \,dx\,dy\,dt\] \[\quad=\int_{\tilde{t}-(3R)^{2s}}^{t_{1}}\int_{B_{3R}(\tilde{x})}fw \,dx\,dt,\]
where \(t_{1}\in\left[\tilde{t}-(3R)^{2s},\tilde{t}\right]\). From (1.3), we have the estimate
\[\int_{\tilde{t}-(3R)^{2s}}^{t_{1}}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}}A(x,y,t)\frac{\Phi(u(x,t)-u(y,t))-\Phi(v(x,t)-v(y,t))}{|x-y|^{n +2s}}(w(x,t)-w(y,t))\,dx\,dy\,dt\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\geq\lambda^{-2}\int_{\tilde{t}-(3R)^{2s}}^{t_{1}}\int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}}\frac{|w(x,t)-w(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt,\]
which implies
\[\begin{split}\sup_{t\in[\tilde{t}-(3R)^{2s},\tilde{t}]}\int_{B_{3R}(\tilde{x})}\frac{w^{2}(x,t)}{2}\,dx&+\lambda^{-2}\int_{\tilde{t}-(3R)^{2s}}^{\tilde{t}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|w(x,t)-w(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt\\&\leq\iint_{Q_{3R}(\tilde{z})}fw\,dx\,dt.\end{split}\]
Using Hölder's inequality, Lemma 2.2, and Cauchy's inequality, we estimate the last term as follows:
\[\begin{split}&\iint_{Q_{3R}(\tilde{z})}fw\,dx\,dt\\&\quad\leq cR^{\frac{n+\gamma}{2}}\|f\|_{L^{q,r}(Q_{3R}(\tilde{z}))}\Bigg{[}\left(\int_{\tilde{t}-(3R)^{2s}}^{\tilde{t}}[w(\cdot,t)]_{W^{s,2}(\mathbb{R}^{n})}^{2}\,dt+\sup_{t\in[\tilde{t}-(3R)^{2s},\tilde{t}]}\int_{B_{3R}(\tilde{x})}w^{2}(x,t)\,dx\right)^{\frac{1}{2}}\Bigg{]}\\&\quad\leq cR^{n+\gamma}\|f\|_{L^{q,r}(Q_{3R}(\tilde{z}))}^{2}\\&\quad\quad+\frac{\lambda^{-2}}{4}\int_{\tilde{t}-(3R)^{2s}}^{\tilde{t}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|w(x,t)-w(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt+\frac{\lambda^{-2}}{4}\sup_{t\in[\tilde{t}-(3R)^{2s},\tilde{t}]}\int_{B_{3R}(\tilde{x})}w^{2}(x,t)\,dx.\end{split}\]
Combining all the estimates together with Lemma 2.2, we get
\[\int_{\tilde{t}-(3R)^{2s}}^{\tilde{t}}[(u-v)(\cdot,t)]_{W^{s,2}(\mathbb{R}^{n})}^{2}\,dt+\sup_{t\in[\tilde{t}-(3R)^{2s},\tilde{t}]}\int_{B_{3R}(\tilde{x})}(u-v)^{2}(x,t)\,dx\leq cR^{n+\gamma}\|f\|_{L^{q,r}(Q_{3R}(\tilde{z}))}^{2} \tag{3.21}\]
and
\[\fint_{Q_{3R}(\tilde{z})}(u-v)^{2}\,dx\,dt\leq cR^{\gamma}\|f\|_{L^{q,r}(Q_{3R}(\tilde{z}))}^{2}.\]
Using the local boundedness of \(u\) (Theorem 1.1) and Lemma 3.3, we have the following estimate.
**Lemma 3.4**.: _(Decay transfer) Under the same conditions as in Lemma 3.3, there holds_
\[\begin{split}\fint_{Q_{\rho}(\tilde{z})}|u-\overline{u}_{Q_{\rho}(\tilde{z})}|^{2}\,dx\,dt&\leq c\left(\frac{R}{\rho}\right)^{n+2s}R^{\gamma}\|f\|_{L^{q,r}(Q_{4R}(\tilde{z}))}^{2}\\&\quad+c\left(\frac{\rho}{R}\right)^{2\beta}\left(R^{\gamma}\|f\|_{L^{q,r}(Q_{4R}(\tilde{z}))}^{2}+\|u\|_{L^{\infty}(Q_{4R}(\tilde{z}))}^{2}+\operatorname{Tail}_{\infty}^{2}(u,\tilde{z},4R)\right),\end{split}\]
_where \(\beta\) is the exponent from Lemma 2.7._
We notice that
\[\begin{split}\operatorname{Tail}_{\infty}\left(u,\tilde{z},4R\right)&\leq(4R)^{2s}\int_{B_{2R_{0}}(x_{0})\setminus B_{4R}(\tilde{x})}\frac{\|u\|_{L^{\infty}(Q_{2R_{0}}(z_{0}))}}{|y-\tilde{x}|^{n+2s}}\,dy\\&\quad+(4R)^{2s}\operatorname*{ess\,sup}_{t\in[t_{0}-(2R_{0})^{2s},t_{0}]}\int_{\mathbb{R}^{n}\setminus B_{2R_{0}}(x_{0})}\frac{|u(y,t)|}{|y-x_{0}|^{n+2s}}\,dy\\&\leq c\|u\|_{L^{\infty}(Q_{2R_{0}}(z_{0}))}+c\operatorname{Tail}_{\infty}\left(u,z_{0},2R_{0}\right),\end{split}\]
where we have used the fact that
\[|y-\tilde{x}|\geq|y-x_{0}|-|\tilde{x}-x_{0}|\geq\frac{|y-x_{0}|}{2}\text{ for any }y\in\mathbb{R}^{n}\setminus B_{2R_{0}}(x_{0}).\]
Thus
\[\begin{split}\fint_{Q_{\rho}(\tilde{z})}|u-\overline{u}_{Q_{\rho}(\tilde{z})}|^{2}\,dx\,dt&\leq c\left(\frac{R}{\rho}\right)^{n+2s}R^{\gamma}\|f\|_{L^{q,r}(Q_{2R_{0}}(z_{0}))}^{2}\\&\quad+c\left(\frac{\rho}{R}\right)^{2\beta}\Big{(}(R_{0})^{\gamma}\|f\|_{L^{q,r}(Q_{2R_{0}}(z_{0}))}^{2}+\|u\|_{L^{\infty}(Q_{2R_{0}}(z_{0}))}^{2}+\mathrm{Tail}_{\infty}^{2}(u;z_{0},2R_{0})\Big{)}\,.\end{split}\]
Now take \(\rho=R^{\frac{n+2\beta+2s+\gamma}{n+2\beta+2s}}<R\) to discover that
\[\frac{1}{\rho^{2\tilde{\beta}}}\fint_{Q_{\rho}(\tilde{z})}|u-\overline{u}_{Q_{\rho}(\tilde{z})}|^{2}\,dx\,dt\leq c\Big{(}\|f\|_{L^{q,r}(Q_{2R_{0}}(z_{0}))}^{2}+\|u\|_{L^{\infty}(Q_{2R_{0}}(z_{0}))}^{2}+\mathrm{Tail}_{\infty}^{2}(u;z_{0},2R_{0})\Big{)},\]
for \(\tilde{\beta}=\frac{\beta\gamma}{n+2s+2\beta+\gamma}\) where \(\beta\) is as in Lemma 2.7. The conclusion follows from Lemma 2.1.
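The exponent \(\tilde{\beta}\) can be checked by matching powers of \(R\): with \(\rho=R^{\frac{n+2\beta+2s+\gamma}{n+2\beta+2s}}\) one computes
\[\left(\frac{R}{\rho}\right)^{n+2s}R^{\gamma}=\left(\frac{\rho}{R}\right)^{2\beta}=R^{\frac{2\beta\gamma}{n+2\beta+2s}}=\rho^{2\tilde{\beta}},\qquad\tilde{\beta}=\frac{\beta\gamma}{n+2s+2\beta+\gamma},\]
so both terms on the right-hand side of the previous estimate are bounded by \(\rho^{2\tilde{\beta}}\) times quantities independent of \(\rho\) and \(R\).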
## 4. Improved Hölder regularity for the homogeneous equation with kernel coefficient invariant under translation in the spatial direction
In the previous section, we proved Hölder regularity for (1.1) with (1.4) with some small exponent. From now on, we assume that the kernel coefficient satisfies
\[\tilde{A}\in\mathcal{L}_{1}(\lambda;B_{1}\times B_{1}\times(-1,0))\cap\mathcal{ L}_{0}(\lambda), \tag{4.1}\]
in order to obtain higher Hölder regularity. We focus on a weak solution
\[u\in L^{2}\big{(}(-2^{2s},0];W^{s,2}(B_{2})\big{)}\cap L^{\infty}\big{(}(-2^ {2s},0];L^{1}_{2s}(\mathbb{R}^{n})\big{)}\cap C\big{(}(-2^{2s},0];L^{2}(B_{2}) \big{)}\]
to the locally normalized problem
\[\partial_{t}u+\mathcal{L}_{A}^{\Phi}u=0\text{ in }Q_{2}. \tag{4.2}\]
Many of the results and computations in this section are based on those in [4, 5].
First, we start with the following discrete differentiation of the equation, which is an extension of the one described in [5, Lemma 3.3] as well as a parabolic analogue of the elliptic one in [33, Proposition 3.1].
**Lemma 4.1**.: _Assume that \(u\) is a local weak solution to (4.2) with (4.1). Let \(\psi(x)\in C_{c}^{\infty}(B_{R})\) be a nonnegative function and \(\eta(t)\in C^{\infty}(\mathbb{R})\) a nonnegative function with_
\[\eta(t)=0\quad\text{for }t\leq t_{1}\quad\text{and}\quad\eta(t)=1\quad\text{for }t\geq t_{2},\]
_where \(-1<t_{1}<t_{2}<0\) and \(R<1\). Then for any locally Lipschitz function \(g:\mathbb{R}\to\mathbb{R}\) and \(h\in\mathbb{R}^{n}\) such that \(0<|h|<\frac{1-R}{4}\), we have_
\[\begin{split}&\int_{t_{1}}^{t_{2}}\int_{B_{R}}\int_{B_{R}}\big{(}\Phi(u_{h}(x,t)-u_{h}(y,t))-\Phi(u(x,t)-u(y,t))\big{)}\\&\quad\times\big{(}g(u_{h}(x,t)-u(x,t))\psi(x)^{2}-g(u_{h}(y,t)-u(y,t))\psi(y)^{2}\big{)}\eta(t)\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\\&\quad=-\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}\setminus B_{R}}\int_{B_{R}}\frac{\big{(}A_{h}(x,y,t)\Phi(u_{h}(x,t)-u_{h}(y,t))-A(x,y,t)\Phi(u(x,t)-u(y,t))\big{)}}{|x-y|^{n+2s}}\\&\qquad\times(g(u_{h}(x,t)-u(x,t))\psi(x)^{2}\eta(t))\,dx\,dy\,dt\\&\qquad-\int_{t_{1}}^{t_{2}}\int_{B_{R}}\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{\big{(}A_{h}(x,y,t)\Phi(u_{h}(x,t)-u_{h}(y,t))-A(x,y,t)\Phi(u(x,t)-u(y,t))\big{)}}{|x-y|^{n+2s}}\\&\qquad\times(g(u_{h}(y,t)-u(y,t))\psi(y)^{2}\eta(t))\,dx\,dy\,dt\\&\quad-\int_{B_{R}}G(\delta_{h}u(x,t_{2}))\psi^{2}(x)\,dx+\int_{t_{1}}^{t_{2}}\int_{B_{R}}G(\delta_{h}u(x,t))\psi^{2}(x)\eta^{\prime}(t)\,dx\,dt,\end{split} \tag{4.3}\]
_where \(G(t)=\int_{0}^{t}g(s)\,ds\) for any \(t\in\mathbb{R}\) and \(A_{h}(x,y,t)=A(x+h,y+h,t)\) for \((x,y,t)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}\)._
Proof.: For convenience, we denote the first three terms in the above equation by \(J_{1}\), \(J_{2}\), and \(J_{3}\). Take
\[h_{0}=\operatorname{dist}(\operatorname{supp}\psi,\partial B_{R})\quad\text{ and}\quad\epsilon_{0}=\frac{1}{2}\min\{-t_{2},t_{1}+1,t_{2}-t_{1}\},\]
and for any \(0<\epsilon<\epsilon_{0}\), define
\[\phi_{\epsilon}(x,t)\coloneqq\big{(}g(u_{h}^{\epsilon}-u^{\epsilon})\psi^{2} \eta_{\epsilon}\big{)}^{\epsilon}\left(x,t\right)=\big{(}g\left(\delta_{h}u^{ \epsilon}\right)\psi^{2}\eta_{\epsilon}\big{)}^{\epsilon}\left(x,t\right), \quad(x,t)\in Q_{1},\]
where
\[\eta_{\epsilon}(t)=\eta\left(\frac{t_{2}-t_{1}}{t_{2}-t_{1}-\epsilon}\left(t- t_{2}+\frac{\epsilon}{2}\right)+t_{2}\right),\quad t\in\mathbb{R}.\]
Testing (4.2) with \(\phi_{\epsilon}\), we get
\[\begin{split}&\int_{t_{1}}^{t_{2}}\int_{B_{R}}-u\partial_{t}\phi_{ \epsilon}\,dx\,dt\\ &\quad+\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R} ^{n}}\Phi(u(x,t)-u(y,t))(\phi_{\epsilon}(x,t)-\phi_{\epsilon}(y,t))\frac{A(x, y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\\ &=-\int_{B_{R}}u(x,t)\phi_{\epsilon}(x,t)\,dx\bigg{|}_{t=t_{1}}^ {t=t_{2}}.\end{split} \tag{4.4}\]
On the other hand, we take the test function \((\phi_{\epsilon})_{-h}(x,t)\coloneqq\phi_{\epsilon}(x-h,t)\) for \(0<|h|<\frac{h_{0}}{4}\) and, using a change of variables, we have
\[\begin{split}&\int_{t_{1}}^{t_{2}}\int_{B_{R}}-u_{h}\partial_{t} \phi_{\epsilon}\,dx\,dt\\ &\quad+\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R} ^{n}}\Phi(u_{h}(x,t)-u_{h}(y,t))(\phi_{\epsilon}(x,t)-\phi_{\epsilon}(y,t)) \frac{A_{h}(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\\ &=-\int_{B_{R}}u_{h}(x,t)\phi_{\epsilon}(x,t)\,dx\bigg{|}_{t=t_{1}} ^{t=t_{2}}.\end{split} \tag{4.5}\]
By subtracting (4.5) from (4.4), we observe that
\[\begin{split}I^{\epsilon}&\coloneqq\int_{t_{1}}^{t_{2}}\int_{B_{R}}(-u_{h}+u)\partial_{t}\phi_{\epsilon}\,dx\,dt+\int_{B_{R}}(u_{h}(x,t)-u(x,t))\phi_{\epsilon}(x,t)\,dx\Bigg{|}_{t=t_{1}}^{t=t_{2}}\\&=-\int_{t_{1}}^{t_{2}}\int_{B_{R}}\int_{B_{R}}\big{(}\Phi(u(x,t)-u(y,t))-\Phi(u_{h}(x,t)-u_{h}(y,t))\big{)}(\phi_{\epsilon}(x,t)-\phi_{\epsilon}(y,t))\frac{A(x,y,t)}{|x-y|^{n+2s}}\,dx\,dy\,dt\\&\quad-\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}\setminus B_{R}}\int_{B_{R}}\frac{\big{(}A(x,y,t)\Phi(u(x,t)-u(y,t))-A_{h}(x,y,t)\Phi(u_{h}(x,t)-u_{h}(y,t))\big{)}}{|x-y|^{n+2s}}\phi_{\epsilon}(x,t)\,dx\,dy\,dt\\&\quad-\int_{t_{1}}^{t_{2}}\int_{B_{R}}\int_{\mathbb{R}^{n}\setminus B_{R}}\frac{\big{(}A(x,y,t)\Phi(u(x,t)-u(y,t))-A_{h}(x,y,t)\Phi(u_{h}(x,t)-u_{h}(y,t))\big{)}}{|x-y|^{n+2s}}\phi_{\epsilon}(y,t)\,dx\,dy\,dt.\end{split}\]
Letting \(\epsilon\to 0\) on both sides, arguing as in the proof of Lemma 3.2, yields (4.3).
Proof.: We first assume \(t_{2}<0\) in order to use Lemma 4.1. The case \(t_{2}=0\) will be considered in Step 4.
**Step 1: Discrete differentiation of the equation.** Set \(r=R-4h_{0}\) and fix \(h\in\mathbb{R}^{n}\) such that \(0<|h|<h_{0}\). Let \(\psi(x)\in C_{c}^{\infty}(B_{R})\) be a cut-off function satisfying
\[0\leq\psi\leq 1,\quad\psi\equiv 1\text{ in }B_{r}\quad\text{and}\quad\|\nabla\psi\|_{L^{ \infty}(B_{R})}\leq\frac{2}{R-r}=\frac{1}{2h_{0}}.\]
Moreover, let \(\eta(t)\in C^{\infty}(\mathbb{R})\) be a nonnegative function such that
\[\eta(t)\equiv 0\text{ in }(-\infty,t_{1}],\quad\eta(t)\equiv 1\text{ in }[t_{1}+\tau, \infty)\quad\text{and}\quad\|\eta^{\prime}\|_{L^{\infty}(\mathbb{R})}\leq \frac{2}{\tau}. \tag{4.6}\]
Set \(g(t)\coloneqq|t|^{p-1}t\) and \(G(t)\coloneqq\frac{1}{p+1}|t|^{p+1}\) for \(t\in\mathbb{R}\). By using (4.3) in Lemma 4.1 and dividing both sides by \(|h|^{1+\theta p}\), we have
\[I :=\int_{t_{1}}^{t_{2}}\int_{B_{R}}\int_{B_{R}}\frac{A(x,y,t)\big{(} \Phi(u_{h}(x,t)-u_{h}(y,t))-\Phi(u(x,t)-u(y,t))\big{)}}{|h|^{1+\theta p}|x-y|^ {n+2s}}\] \[\qquad\times\big{[}g(u_{h}(x,t)-u(x,t))\psi^{2}(x)-g(u_{h}(y,t)-u (y,t))\psi^{2}(y)\big{]}\eta(t)\,dx\,dy\,dt\] \[=-\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}\setminus B_{R}}\int_{B _{R}}\frac{A_{h}(x,y,t)\big{(}\Phi(u_{h}(x,t)-u_{h}(y,t))-\Phi(u(x,t)-u(y,t)) \big{)}}{|h|^{1+\theta p}|x-y|^{n+2s}}\] \[\qquad\times g(u_{h}(x,t)-u(x,t))\psi^{2}(x)\eta(t)\,dx\,dy\,dt\] \[-\int_{t_{1}}^{t_{2}}\int_{B_{R}}\int_{\mathbb{R}^{n}\setminus B_ {R}}\frac{A_{h}(x,y,t)\big{(}\Phi(u_{h}(x,t)-u_{h}(y,t))-\Phi(u(x,t)-u(y,t)) \big{)}}{|h|^{1+\theta p}|x-y|^{n+2s}}\] \[\qquad\times g(u_{h}(y,t)-u(y,t))\psi^{2}(y)\eta(t)\,dx\,dy\,dt\] \[-\int_{B_{R}}\frac{G(\delta_{h}u(x,t_{2}))}{|h|^{1+\theta p}}\psi ^{2}(x)\,dx+\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{G(\delta_{h}u(x,t))}{|h|^{1+ \theta p}}\psi^{2}(x)\eta^{\prime}(t)\,dx\,dt\] \[=:-II-III-IV+V.\]
**Step 2: Estimates of \(I\) through \(V\).** First, we estimate \(I\), \(II\) and \(III\). Let \(F_{1}(t)\), \(F_{2}(t)\) and \(F_{3}(t)\) be the integrands of \(I\), \(II\) and \(III\) with respect to the measure \(\eta(t)\,dt\), respectively. Then for fixed \(t\in[t_{1},t_{2}]\), calculations similar to those in [33, Proposition 3.1, Steps 2–4] lead to the following estimates:
\[F_{1}(t)\geq\frac{1}{c}\left[\frac{|\delta_{h}u(\cdot,t)|^{\frac{p-1}{2}}\delta _{h}u(\cdot,t)}{|h|^{\frac{1+\theta p}{2}}}\psi(\cdot)\right]_{W^{s,2}(B_{R})} ^{2}-c\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{p+1}}{|h|^{1+\theta p}}\,dx,\]
and
\[|F_{2}(t)|+|F_{3}(t)|\leq c\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{p}}{|h|^{1+\theta p}}\,dx,\]
where \(c\) depends only on \(n,s,\lambda\) and \(h_{0}\), but is independent of \(t\). It follows that
\[I\geq\frac{1}{c}\int_{t_{1}}^{t_{2}}\left[\frac{|\delta_{h}u(\cdot,t)|^{\frac{ p-1}{2}}\delta_{h}u(\cdot,t)}{|h|^{\frac{1+\theta p}{2}}}\psi(\cdot)\right]_{W^{s,2}(B_{R}) }^{2}\eta(t)\,dt-c\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{p} }{|h|^{1+\theta p}}\,dx\,dt,\]
and
\[|II|+|III|\leq c\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{p}}{| h|^{1+\theta p}}\,dx\,dt,\]
where we have used \(\eta\leq 1\). Next, we will derive estimates of \(IV\) and \(V\). From the definition of \(G\) and \(\psi\), we get
\[IV\geq\frac{1}{c}\int_{B_{r}}\frac{|\delta_{h}u(x,t_{2})|^{p+1}}{|h|^{1+\theta p }}\,dx.\]
Since \(\|u_{h}\|_{L^{\infty}(B_{R}\times[t_{1},t_{2}])}\leq\|u\|_{L^{\infty}(B_{1} \times[-1,0])}\leq 1\) and (4.6), we deduce that
\[|V|\leq\frac{c}{\tau}\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{ p}}{|h|^{1+\theta p}}\,dx\,dt.\]
Combining all the above estimates, we have
\[\int_{t_{1}}^{t_{2}}\left[\frac{|\delta_{h}u(\cdot,t)|^{p-1\over 2} \delta_{h}u(\cdot,t)}{|h|^{1+\theta p\over 2}}\psi(\cdot)\right]_{W^{s,2}(B_{R})}^{2} \eta(t)\,dt +\int_{B_{r}}\frac{|\delta_{h}u(x,t_{2})|^{p+1}}{|h|^{1+\theta p}}\,dx\] \[\leq c(\tau)\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{|\delta_{h}u(x, t)|^{p}}{|h|^{1+\theta p}}\,dx\,dt. \tag{4.7}\]
Arguing as in [33, Proposition 3.1, Step 5] and using (4.6), we further estimate
\[\begin{split}\int_{t_{1}+\tau}^{t_{2}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)}{|h|^{\frac{1+2s+\theta p}{p+1}}}\right\|_{L^{p+1}(B_{r})}^{p+1}dt&\leq c\int_{t_{1}}^{t_{2}}\left[\frac{|\delta_{h}u(\cdot,t)|^{\frac{p-1}{2}}\delta_{h}u(\cdot,t)}{|h|^{\frac{1+\theta p}{2}}}\psi(\cdot)\right]_{W^{s,2}(B_{R})}^{2}\eta(t)\,dt\\&\quad+c\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{p}}{|h|^{1+\theta p}}\,dx\,dt.\end{split} \tag{4.8}\]
Moreover, Lemma 2.3 and \(\|u\|_{L^{\infty}(Q_{1})}\leq 1\) imply
\[\begin{split}\int_{t_{1}}^{t_{2}}\int_{B_{R}}\frac{|\delta_{h}u(x,t)|^{p}}{|h|^{1+\theta p}}\,dx\,dt&\leq c\left(\int_{t_{1}}^{t_{2}}\int_{B_{R+4h_{0}}}\frac{|\delta_{h}^{2}u(x,t)|^{p}}{|h|^{1+\theta p}}\,dx\,dt+\int_{t_{1}}^{t_{2}}\|u(\cdot,t)\|_{L^{p}(B_{R+4h_{0}})}^{p}\,dt\right)\\&\leq c\left(\int_{t_{1}}^{t_{2}}\int_{B_{R+4h_{0}}}\frac{|\delta_{h}^{2}u(x,t)|^{p}}{|h|^{1+\theta p}}\,dx\,dt+1\right).\end{split} \tag{4.9}\]
Gathering together the estimates (4.7), (4.8) and (4.9) gives
\[\begin{split}\int_{t_{1}+\tau}^{t_{2}}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)}{|h|^{\frac{1+2s+\theta p}{p+1}}}\right\|_{L^{p+1}(B_{R-4h_{0}})}^{p+1}dt&+\int_{B_{r}}\frac{|\delta_{h}u(x,t_{2})|^{p+1}}{|h|^{1+\theta p}}\,dx\\&\leq c\left(\int_{t_{1}}^{t_{2}}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)}{|h|^{\frac{1+\theta p}{p}}}\right\|_{L^{p}(B_{R+4h_{0}})}^{p}dt+1\right).\end{split} \tag{4.10}\]
**Step 4: Case for \(t_{2}=0\).** Note that \(c\) in (4.10) is independent of \(t_{2}\). By the monotone convergence theorem, we get
\[\lim_{t_{2}\to 0}\int_{t_{1}+\tau}^{t_{2}}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)}{|h|^{\frac{1+2s+\theta p}{p+1}}}\right\|_{L^{p+1}(B_{R-4h_{0}})}^{p+1}dt=\int_{t_{1}+\tau}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)}{|h|^{\frac{1+2s+\theta p}{p+1}}}\right\|_{L^{p+1}(B_{R-4h_{0}})}^{p+1}dt. \tag{4.11}\]
Since for any \(0<|h|<h_{0}\), \(\delta_{h}u(x,t)\) is in \(C([-1,0];L^{2}(B_{1}))\), we have
\[\lim_{t_{2}\to 0}\|\delta_{h}u(\cdot,t_{2})-\delta_{h}u(\cdot,0)\|_{L^{2}(B_{R})}=0,\]
which, after passing to a subsequence, implies
\[\lim_{t_{2}\to 0}\delta_{h}u(x,t_{2})=\delta_{h}u(x,0)\quad\text{a.e. in }B_{R}.\]
Therefore Fatou's lemma yields
\[\left\|\frac{\delta_{h}u(\cdot,0)}{|h|^{1+\theta p\over p+1}}\right\|_{L^{p+1} (B_{R-4h_{0}})}^{p+1}\leq\liminf_{t_{2}\to 0}\left\|\frac{\delta_{h}u(\cdot,t_{2})}{|h|^{1+ \theta p\over p+1}}\right\|_{L^{p+1}(B_{R-4h_{0}})}^{p+1},\quad 0<|h|<h_{0}. \tag{4.12}\]
Taking into account (4.10), (4.11) and (4.12), we conclude
\[\int_{t_{1}+\tau}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)}{|h|^{\frac{1+2s+\theta p}{p+1}}}\right\|_{L^{p+1}(B_{R-4h_{0}})}^{p+1}dt+\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}u(\cdot,0)}{|h|^{\frac{1+\theta p}{p+1}}}\right\|_{L^{p+1}(B_{R-4h_{0}})}^{p+1}\]
\[\leq c\left(\int_{t_{1}}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}u(\cdot,t)} {|h|^{\frac{1+\theta_{p}}{p}}}\right\|_{L^{p}(B_{R+4h_{0}})}^{p}dt+1\right).\]
Now we prove higher Hölder regularity for (4.2) with respect to the spatial variables.
**Lemma 4.3**.: _Let \(u\) be a local weak solution to (4.2) with (4.1). Then for any \(0<\alpha<\min\{2s,1\}\),_
\[\operatorname*{ess\,sup}_{t\in[-(1/2)^{2s},0]}[u(\cdot,t)]_{C^{\alpha}(B_{1/2})}\leq c\Bigg{(}\|u\|_{L^{\infty}(Q_{1})}+\left(\int_{-1}^{0}[u(\cdot,t)]_{W^{s,2}(B_{1})}^{2}\,dt\right)^{\frac{1}{2}}+\operatorname{Tail}_{\infty}(u;0,1)\Bigg{)},\]
_where \(c\equiv c(n,s,\alpha,\lambda)\)._
Proof.: **Step 1: Normalization.** Let
\[M\coloneqq\|u\|_{L^{\infty}(Q_{1})}+\left(\int_{-1}^{0}[u(\cdot,t)]_{W^{s,2}(B_{1})}^{2}\,dt\right)^{\frac{1}{2}}+\operatorname{Tail}_{\infty}(u;0,1)\]
and set
\[\tilde{u}(x,t)\coloneqq\frac{1}{M}u(x,t),\quad(x,t)\in Q_{2}\quad\text{and} \quad\tilde{\Phi}(\xi)\coloneqq\frac{1}{M}\Phi(M\xi),\quad\xi\in\mathbb{R}.\]
Then \(\tilde{u}\) is a local weak solution to \(\partial_{t}\tilde{u}+\mathcal{L}_{\tilde{A}}^{\tilde{\Phi}}\tilde{u}=0\) in \(Q_{2}\). In particular, \(\tilde{u}\) satisfies
\[\|\tilde{u}\|_{L^{\infty}(Q_{1})}\leq 1,\ \int_{-1}^{0}[\tilde{u}(\cdot,t)]_{W^{s,2}(B_{1})}^{2}\,dt\leq 1,\ \text{and}\ \operatorname{Tail}_{\infty}(\tilde{u};0,1)\leq 1. \tag{4.13}\]
**Step 2: Iteration.** Let \(q_{i}\coloneqq i+2\) and \(\theta_{i}\coloneqq\frac{2s(i+1)-1}{i+2}\) for \(i\geq 0\). Then \(\frac{1+\theta_{i}q_{i}}{q_{i}}\) is an increasing sequence satisfying
\[\frac{1+2s+\theta_{i}q_{i}}{q_{i}+1}=\frac{1+\theta_{i+1}q_{i+1}}{q_{i+1}}, \quad s=\frac{1+\theta_{0}q_{0}}{q_{0}}\leq\frac{1+\theta_{i}q_{i}}{q_{i}}<2s,\quad\text{and}\quad\lim_{i\to\infty}\frac{1+\theta_{i}q_{i}}{q_{i}}=2s.\]
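These relations follow by direct substitution: since \(\theta_{i}q_{i}=2s(i+1)-1\), we have
\[\frac{1+\theta_{i}q_{i}}{q_{i}}=\frac{2s(i+1)}{i+2}\quad\text{and}\quad\frac{1+2s+\theta_{i}q_{i}}{q_{i}+1}=\frac{2s(i+2)}{i+3}=\frac{1+\theta_{i+1}q_{i+1}}{q_{i+1}},\]
and the sequence \(\frac{2s(i+1)}{i+2}\) is increasing, equals \(s\) at \(i=0\), and tends to \(2s\) as \(i\to\infty\).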
Now fix \(\alpha\in(0,\min\{2s,1\})\). We consider the following two cases.
**(Case 1 : \(s\leq\frac{1}{2}\)).** In this case, we can choose \(i_{\alpha}\in\mathbb{N}\) such that
\[\alpha<\frac{1+\theta_{i_{\alpha}}q_{i_{\alpha}}-n}{q_{i_{\alpha}}}<2s\leq 1. \tag{4.14}\]
Let
\[h_{0}=\frac{1}{64i_{\alpha}} \tag{4.15}\]
and set for any \(i\geq 0\),
\[R_{i}=\frac{7}{8}-4h_{0}(2i+1)\quad\text{and}\quad T_{i}=-\left(\frac{7}{8} \right)^{2s}+32h_{0}\left(\left(\frac{7}{8}\right)^{2s}-\left(\frac{3}{4} \right)^{2s}\right)i. \tag{4.16}\]
Then \(R_{i}\) and \(T_{i}\) have the following relations.
\[R_{i}-4h_{0}=R_{i+1}+4h_{0},\quad T_{i+1}=T_{i}+32h_{0}\left(\left(\frac{7}{8} \right)^{2s}-\left(\frac{3}{4}\right)^{2s}\right),\]
\[R_{0}+4h_{0}=\frac{7}{8},\quad R_{i_{\alpha}}+4h_{0}=\frac{3}{4}\quad\text{and }\quad T_{i_{\alpha}}=-\frac{1}{2}\left(\left(\frac{7}{8}\right)^{2s}+\left( \frac{3}{4}\right)^{2s}\right).\]
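With the choice (4.15) of \(h_{0}\), the endpoint values above can be verified directly:
\[R_{i_{\alpha}}+4h_{0}=\frac{7}{8}-8h_{0}i_{\alpha}=\frac{7}{8}-\frac{1}{8}=\frac{3}{4},\qquad T_{i_{\alpha}}=-\left(\frac{7}{8}\right)^{2s}+\frac{1}{2}\left(\left(\frac{7}{8}\right)^{2s}-\left(\frac{3}{4}\right)^{2s}\right)=-\frac{1}{2}\left(\left(\frac{7}{8}\right)^{2s}+\left(\frac{3}{4}\right)^{2s}\right).\]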
Noting that \(\frac{7}{8}+4h_{0}\leq 1\) and that \(\delta_{h}^{2}\tilde{u}=\delta_{2h}\tilde{u}-2\delta_{h}\tilde{u}\), we find
\[\int_{T_{0}}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\frac{1+\theta_{0}q_{0}}{q_{0}}}}\right\|_{L^{q_{0}}(B_{R_{0}+4h_{0}})}^{q_{0}}dt=\int_{-\left(\frac{7}{8}\right)^{2s}}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{s}}\right\|_{L^{2}(B_{7/8})}^{2}dt\]
\[\leq c\left(\int_{-1}^{0}[\tilde{u}(\cdot,t)]_{W^{s,2}(B_{1})}^{2}\,dt+\int_{-1}^ {0}\|\tilde{u}(\cdot,t)\|_{L^{2}(B_{1})}^{2}\,dt\right)\leq c,\]
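Here the identity \(\delta_{h}^{2}\tilde{u}=\delta_{2h}\tilde{u}-2\delta_{h}\tilde{u}\) is elementary; with the convention \(\delta_{h}v(x)=v(x+h)-v(x)\) and \(\delta_{h}^{2}=\delta_{h}\circ\delta_{h}\),
\[\delta_{h}^{2}v(x)=v(x+2h)-2v(x+h)+v(x)=\big{(}v(x+2h)-v(x)\big{)}-2\big{(}v(x+h)-v(x)\big{)}=\delta_{2h}v(x)-2\delta_{h}v(x).\]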
where we have also used Lemma 2.5 and (4.13). Thus Lemma 4.2 implies
\[\begin{split}&\int_{T_{i+1}}^{T}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\frac{1+\theta_{i+1}q_{i+1}}{q_{i+1}}}}\right\|_{L^{q_{i+1}}(B_{R_{i}-4h_{0}})}^{q_{i+1}}dt+\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}\tilde{u}(\cdot,T)}{|h|^{\frac{1+\theta_{i+1}q_{i+1}}{q_{i+1}}}}\right\|_{L^{q_{i+1}}(B_{R_{i}-4h_{0}})}^{q_{i+1}}\\&\quad\leq c\left(\int_{T_{i}}^{T}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\frac{1+\theta_{i}q_{i}}{q_{i}}}}\right\|_{L^{q_{i}}(B_{R_{i}+4h_{0}})}^{q_{i}}dt+1\right),\end{split}\]
for all \(i=0,\ldots,i_{\alpha}-1\) and \(-\left(\frac{3}{4}\right)^{2s}\leq T\leq 0\). After finitely many steps, we obtain
\[\operatorname*{ess\,sup}_{T\in[-(3/4)^{2s},0]}\sup_{0<|h|<h_{0}}\left\|\frac{ \delta_{h}\tilde{u}(\cdot,T)}{|h|^{\frac{1+\theta_{i_{\alpha}}q_{i_{\alpha}}} {q_{i_{\alpha}}}}}\right\|_{L^{q_{i_{\alpha}}}(B_{3/4})}^{q_{i_{\alpha}}}\leq c (n,s,\lambda,\alpha).\]
Thus Lemma 2.4 with (4.14) yields
\[\operatorname*{ess\,sup}_{t\in[-(1/2)^{2s},0]}[\tilde{u}(\cdot,t)]_{C^{\alpha }(B_{1/2})}\leq c(n,s,\lambda,\alpha).\]
**(Case 2 : \(s>\frac{1}{2}\)).** On the other hand, when \(2s>1\), there exists some \(i_{\alpha}\in\mathbb{N}\) such that
\[\frac{1+\theta_{i_{\alpha}-1}q_{i_{\alpha}-1}}{q_{i_{\alpha}-1}}<1\leq\frac{1 +\theta_{i_{\alpha}}q_{i_{\alpha}}}{q_{i_{\alpha}}}.\]
Now take \(\gamma\in(\alpha,1)\). Then there exists some \(j_{\alpha}\in\mathbb{N}\) such that
\[\alpha<\gamma-\frac{n}{q_{i_{\alpha}+j_{\alpha}}}. \tag{4.17}\]
Take
\[h_{0}=\frac{1}{64(i_{\alpha}+j_{\alpha})}, \tag{4.18}\]
and let \(R_{i}\) and \(T_{i}\) be as in (4.16), but we use (4.18) instead of (4.15). Then
\[R_{0}+4h_{0}=\frac{7}{8},\quad R_{i_{\alpha}+j_{\alpha}}+4h_{0}=\frac{3}{4}, \quad\text{and}\quad T_{i_{\alpha}+j_{\alpha}}=-\frac{1}{2}\left(\left(\frac{ 7}{8}\right)^{2s}+\left(\frac{3}{4}\right)^{2s}\right).\]
A calculation similar to that in Case 1 shows that
\[\int_{T_{i_{\alpha}}}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\gamma}}\right\|_{L^{q_{i_{\alpha}}}(B_{R_{i_{\alpha}}+4h_{0}})}^{q_{i_{\alpha}}}dt\leq\int_{T_{i_{\alpha}}}^{0}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\frac{1+\theta_{i_{\alpha}}q_{i_{\alpha}}}{q_{i_{\alpha}}}}}\right\|_{L^{q_{i_{\alpha}}}(B_{R_{i_{\alpha}}+4h_{0}})}^{q_{i_{\alpha}}}dt\leq c,\]
where we have also used \(\gamma<1\leq\frac{1+\theta_{i_{\alpha}}q_{i_{\alpha}}}{q_{i_{\alpha}}}\). Take \(\tilde{\theta}_{i}=\gamma-\frac{1}{q_{i}}\), which implies \(\frac{1+\tilde{\theta}_{i}q_{i}}{q_{i}}=\gamma\in(0,1)\). Then since \(s>\frac{1}{2}\), we discover
\[\gamma<1+\frac{q_{i}(\gamma-1)}{q_{i}+1}=\frac{2+\tilde{\theta}_{i}q_{i}}{q_{i} +1}<\frac{1+2s+\tilde{\theta}_{i}q_{i}}{q_{i}+1}.\]
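Both the identity and the inequalities here are immediate from \(\tilde{\theta}_{i}q_{i}=\gamma q_{i}-1\) together with \(\gamma<1<2s\):
\[\frac{2+\tilde{\theta}_{i}q_{i}}{q_{i}+1}-\gamma=\frac{1+\gamma q_{i}-\gamma(q_{i}+1)}{q_{i}+1}=\frac{1-\gamma}{q_{i}+1}>0,\qquad\frac{1+2s+\tilde{\theta}_{i}q_{i}}{q_{i}+1}-\frac{2+\tilde{\theta}_{i}q_{i}}{q_{i}+1}=\frac{2s-1}{q_{i}+1}>0.\]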
Therefore by applying \(\tilde{\theta}_{i}\) instead of \(\theta_{i}\) to Lemma 4.2, we have
\[\int_{T_{i+1}}^{T}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\gamma}}\right\|_{L^{q_{i+1}}(B_{R_{i}-4h_{0}})}^{q_{i+1}}dt\leq\int_{T_{i+1}}^{T}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\frac{1+2s+\tilde{\theta}_{i}q_{i}}{q_{i}+1}}}\right\|_{L^{q_{i+1}}(B_{R_{i}-4h_{0}})}^{q_{i+1}}dt\]
\[\leq c\left(\int_{T_{i}}^{T}\sup_{0<|h|<h_{0}}\left\|\frac{\delta_{h}^{2}\tilde{u}(\cdot,t)}{|h|^{\frac{1+\tilde{\theta}_{i}q_{i}}{q_{i}}}}\right\|_{L^{q_{i}}(B_{R_{i}+4h_{0}})}^{q_{i}}dt+1\right),\]
for each \(i=i_{\alpha},\ldots,i_{\alpha}+j_{\alpha}-1\) and for any \(T\in[-\left(\frac{3}{4}\right)^{2s},0]\). By Lemma 2.4 and (4.17), we conclude
\[\operatorname*{ess\,sup}_{t\in[-(1/2)^{2s},0]}[\tilde{u}(\cdot,t)]_{C^{\alpha }(B_{1/2})}\leq c.\]
By scaling back, we complete the proof.
Next, we prove higher Hölder regularity for a solution to (4.2) with respect to the time variable. To this end, we first introduce two lemmas. One is a generalized Poincaré type inequality from [5, Lemma 6.1].
**Lemma 4.4** (Generalized Poincaré type inequality).: _Let \(0<s<1\) and \(1\leq p<\infty\). For any \(u\in W^{s,p}(B_{r})\) and \(\psi\in C_{c}^{\infty}(B_{r})\) with \(\overline{\psi}_{B_{r}}=1\), there holds_
\[\int_{B_{r}}|u-\overline{(u\psi)}_{B_{r}}|^{p}\,dx\leq cr^{sp}\|\psi\|_{L^{ \infty}(B_{r})}^{p}[u]_{W^{s,p}(B_{r})}^{p},\]
_where \(c\equiv c(n,s,p)\)._
The other is a gluing lemma concerning Hölder regularity with respect to the time variable.
**Lemma 4.5** (Gluing lemma).: _Let \(u\) be a local weak solution to (1.1) in \(Q_{1}\) and let \(z_{0}\in Q_{1}\) and \(Q_{\rho,\theta}(z_{0})\Subset Q_{1}\). Let \(B_{\rho}\equiv B_{\rho}(x_{0})\) and let \(\psi(x)\in C_{c}^{\infty}(B_{3\rho/4})\) be a nonnegative function such that_
\[\overline{\psi}_{B_{\rho}}=1,\ \psi\equiv\|\psi\|_{L^{\infty}(B_{\rho})}\text{ in }B_{ \frac{\rho}{2}}\text{ and }\|\nabla\psi\|_{L^{\infty}(\mathbb{R}^{n})}\leq\frac{c}{\rho}. \tag{4.19}\]
_Then we have_
\[\begin{split}\left|\overline{(u\psi)}_{B_{\rho}}(T_{1})-\overline{(u\psi)}_{B_{\rho}}(T_{0})\right|&\leq c\frac{\theta}{\rho}\fint_{t_{0}-\theta}^{t_{0}}\int_{B_{\rho}}\fint_{B_{\rho}}\frac{|u(x,t)-u(y,t)|}{|x-y|^{n+2s-1}}\,dx\,dy\,dt\\&\quad+c\theta\fint_{t_{0}-\theta}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{\rho}}\fint_{B_{3\rho/4}}\frac{|u(x,t)-u(y,t)|}{|x_{0}-y|^{n+2s}}\,dx\,dy\,dt,\end{split} \tag{4.20}\]
_where \(c\equiv c(n,s,\lambda)\) and \(t_{0}-\theta<T_{0}<T_{1}<t_{0}\)._
Proof.: For a sufficiently small \(0<\epsilon<\min\{\frac{T_{1}-T_{0}}{4},\frac{1+T_{0}}{2},-\frac{T_{1}}{2}\}\), we define
\[\eta_{\epsilon}(t)=\begin{cases}0&\text{in }t<T_{0}\\ \frac{t-T_{0}}{\epsilon}&\text{in }T_{0}\leq t<T_{0}+\epsilon\\ 1&\text{in }T_{0}+\epsilon\leq t<T_{1}-\epsilon\\ \frac{T_{1}-t}{\epsilon}&\text{in }T_{1}-\epsilon\leq t<T_{1}\\ 0&\text{in }t\geq T_{1}.\end{cases}\]
By the condition of (4.19), we obtain
\[\|\psi\|_{L^{\infty}(\mathbb{R}^{n})}=\overline{\psi}_{B_{\frac{\rho}{2}}}\leq c \overline{\psi}_{B_{\rho}}=c.\]
Then we can use \(\psi(x)\eta_{\epsilon}(t)\) as a test function. By the definition of local weak solution and the conditions (1.3), we see that
\[\left|\fint_{T_{1}-\epsilon}^{T_{1}}\int_{B_{\rho}}u\psi\,dxdt-\fint _{T_{0}}^{T_{0}+\epsilon}\int_{B_{\rho}}u\psi\,dxdt\right|\] \[=\left|\int_{t_{0}-\theta}^{t_{0}}\int_{B_{\rho}}u\psi\eta_{ \epsilon}^{\prime}\,dxdt\right|\] \[\leq c\int_{t_{0}-\theta}^{t_{0}}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}}\frac{|u(x,t)-u(y,t)|}{|x-y|^{n+2s}}|\psi(x)-\psi(y)|\,dxdydt\] \[\leq\frac{c}{\rho}\int_{t_{0}-\theta}^{t_{0}}\int_{B_{\rho}}\int_ {B_{\rho}}\frac{|u(x,t)-u(y,t)|}{|x-y|^{n+2s-1}}\,dxdydt+c\int_{t_{0}-\theta} ^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{\rho}}\int_{B_{\rho}}\frac{|u(x,t)-u (y,t)|}{|x-y|^{n+2s}}\psi(x)\,dxdydt, \tag{4.21}\]
where we used the fact that \(|\psi(x)-\psi(y)|\leq\frac{c}{\rho}|x-y|\). Furthermore, the second term on the right hand side can be estimated by
\[\int_{t_{0}-\theta}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{\rho}}\int_{B_{ \rho}}\frac{|u(x,t)-u(y,t)|}{|x-y|^{n+2s}}\psi(x)\,dxdydt\leq c\int_{t_{0}- \theta}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{\rho}}\int_{B_{3\rho/4}}\frac {|u(x,t)-u(y,t)|}{|x_{0}-y|^{n+2s}}\,dxdydt, \tag{4.22}\]
where we have used
\[\frac{|x_{0}-y|}{|x-y|}\leq 1+\frac{|x-x_{0}|}{|x-y|}\leq 4\quad\text{for }x\in B_{3\rho/4},\ y\in\mathbb{R}^{n}\setminus B_{\rho}.\]
Since \(u\psi\) is in \(C\left([t_{0}-\theta,t_{0}];L^{2}(B_{\rho})\right)\),
\[\left|\overline{(u\psi)}_{B_{\rho}}(T_{1})-\overline{(u\psi)}_{B_{\rho}}(T_{0 })\right|=\lim_{\epsilon\to 0}\left|\fint_{T_{1}-\epsilon}^{T_{1}}\fint_{B_{ \rho}}u\psi\,dx\,dt-\fint_{T_{0}}^{T_{0}+\epsilon}\fint_{B_{\rho}}u\psi\,dx\, dt\right|. \tag{4.23}\]
Therefore, we combine (4.21), (4.22) and (4.23) to discover (4.20).
With Lemma 4.4 and Lemma 4.5, we are now ready to prove the following Hölder regularity in time.
**Lemma 4.6** (Holder regularity with respect to time).: _Let \(u\) be a local weak solution to (4.2). Assume_
\[\|u\|_{L^{\infty}(Q_{1})}\leq 1,\ \text{\rm Tail}_{\infty}(u;0,1)\leq 1,\ \text{\rm and}\ \sup_{t\in[-(1/2)^{2s},0]}[u(\cdot,t)]_{C^{\beta}(B_{1/2})}\leq K_{\beta}, \tag{4.24}\]
_for some \(0<\beta<\min\{2s,1\}\) and \(K_{\beta}>0\). Then there exists \(c\equiv c(n,s,\lambda,\beta,K_{\beta})\) such that_
\[|u(x,t)-u(x,\tau)|\leq c|t-\tau|^{\frac{\beta}{2s}}\ \text{\rm for any}\ (x,t),(x,\tau)\in Q_{\frac{1}{4}}.\]
Proof.: Let \(z_{0}=(x_{0},t_{0})\in Q_{\frac{1}{4}}\). Then \(Q_{\rho}(z_{0})\subset Q_{\frac{1}{2}}\) for \(0<\rho<\min\left\{\left(\frac{2^{2s}-1}{4^{2s}}\right)^{\frac{1}{2s}},\frac{1 }{8}\right\}\). Notice that
\[\frac{|y|}{|y-x_{0}|}\leq 1+\frac{|x_{0}|}{|y-x_{0}|}\leq 2\quad\text{\rm for }y\in B _{1},\]
to see
\[\operatorname*{ess\,sup}_{t\in[-(1/2)^{2s},0]}\int_{\mathbb{R}^{n} \setminus B_{1/2}(x_{0})}\frac{|u(y,t)|}{|y-x_{0}|^{n+2s}}\,dy\] \[\quad\leq\operatorname*{ess\,sup}_{t\in[-1,0]}\int_{\mathbb{R}^{n} \setminus B_{1}}\frac{|u(y,t)|}{|y-x_{0}|^{n+2s}}\,dy+\operatorname*{ess\, sup}_{t\in[-1,0]}\int_{B_{1}\setminus B_{1/2}(x_{0})}\frac{|u(y,t)|}{|y-x_{0}|^{n+2s}}\,dy \tag{4.25}\] \[\quad\leq c\operatorname{Tail}_{\infty}(u;0,1)+2^{n+2s}|B_{1}|\leq c.\]
Take a nonnegative cutoff function \(\psi\in C_{c}^{\infty}(B_{3\rho/4}(x_{0}))\) as in Lemma 4.5. Then
\[\begin{split}\fint_{Q_{\rho}(z_{0})}|u-\overline{u}_{Q_{\rho}(z_{0})}|\,dx\,dt&\leq\fint_{Q_{\rho}(z_{0})}|u-\overline{(u\psi)}_{B_{\rho}(x_{0})}(t)|\,dx\,dt\\&\quad+\fint_{Q_{\rho}(z_{0})}|\overline{(u\psi)}_{B_{\rho}(x_{0})}(t)-\overline{(u\psi)}_{Q_{\rho}(z_{0})}|\,dx\,dt\\&\quad+\fint_{Q_{\rho}(z_{0})}|\overline{(u\psi)}_{Q_{\rho}(z_{0})}-\overline{u}_{Q_{\rho}(z_{0})}|\,dx\,dt\\&\eqqcolon A_{1}+A_{2}+A_{3}.\end{split}\]
We observe that
\[\begin{split}A_{3}&=|\overline{(u\psi)}_{Q_{\rho}(z_{0})}-\overline{u}_{Q_{\rho}(z_{0})}|\leq\fint_{Q_{\rho}(z_{0})}|u-\overline{(u\psi)}_{Q_{\rho}(z_{0})}|\,dx\,dt\\&\leq\fint_{Q_{\rho}(z_{0})}|u-\overline{(u\psi)}_{B_{\rho}(x_{0})}(t)|\,dx\,dt+\fint_{Q_{\rho}(z_{0})}|\overline{(u\psi)}_{B_{\rho}(x_{0})}(t)-\overline{(u\psi)}_{Q_{\rho}(z_{0})}|\,dx\,dt\\&=A_{1}+A_{2}.\end{split}\]
Therefore it is sufficient to estimate \(A_{1}\) and \(A_{2}\).
**Estimate \(A_{1}\).** Using Hölder's inequality, Lemma 4.4 and (4.24), we have
\[\begin{split}A_{1}&\leq c\left(\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\fint_{B_{\rho}(x_{0})}\int_{B_{\rho}(x_{0})}\frac{|u(x,t)-u(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt\right)^{\frac{1}{2}}\\&\leq c\left(\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\fint_{B_{\rho}(x_{0})}\int_{B_{\rho}(x_{0})}\frac{K_{\beta}^{2}}{|x-y|^{n+2s-2\beta}}\,dx\,dy\,dt\right)^{\frac{1}{2}}\leq c\rho^{\beta}.\end{split}\]
**Estimate \(A_{2}\).** From Lemma 4.5, we deduce
\[\begin{split}A_{2}&\leq c\rho^{2s-1}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{B_{\rho}(x_{0})}\fint_{B_{\rho}(x_{0})}\frac{|u(x,t)-u(y,t)|}{|x-y|^{n+2s-1}}\,dx\,dy\,dt\\&\quad+c\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{\rho}(x_{0})}\fint_{B_{\rho}(x_{0})}\frac{|u(x,t)-u(y,t)|}{|x_{0}-y|^{n+2s}}\,dx\,dy\,dt\\&\eqqcolon A_{2,1}+A_{2,2}.\end{split}\]
By (4.24), we see the following
\[A_{2,1}\leq c\rho^{2s-1}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{B_{\rho}(x_{0})}\fint_{B_{\rho}(x_{0})}\frac{K_{\beta}}{|x-y|^{n+2s-1-\beta}}\,dx\,dy\,dt\leq c(n,s,\beta,\lambda)K_{\beta}\rho^{\beta}.\]
We use (4.24) and (4.25) to discover
\[\begin{split}A_{2,2}&\leq c\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{1}}\fint_{B_{\rho}(x_{0})}\frac{|u(x,t)-u(y,t)|}{|x_{0}-y|^{n+2s}}\,dx\,dy\,dt\\&\quad+c\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{B_{1}\setminus B_{\rho}(x_{0})}\fint_{B_{\rho}(x_{0})}\frac{|u(x,t)-u(y,t)|}{|x_{0}-y|^{n+2s}}\,dx\,dy\,dt\\&\leq c\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{1}}\fint_{B_{\rho}(x_{0})}\frac{|u(x,t)|}{|x_{0}-y|^{n+2s}}\,dx\,dy\,dt\\&\quad+c\rho^{2s}\fint_{t_{0}-\rho^{2s}}^{t_{0}}\int_{\mathbb{R}^{n}\setminus B_{1}}\fint_{B_{\rho}(x_{0})}\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}\,dx\,dy\,dt+cK_{\beta}\rho^{\beta}\leq c\rho^{\beta}.\end{split}\]
Combining the estimates \(A_{1}\) and \(A_{2}\) yields
\[\fint_{Q_{\rho}(z_{0})}|u-\overline{u}_{Q_{\rho}(z_{0})}|\,dx\,dt\leq c\rho^{\beta}.\]
With the help of Lemma 2.1, we have
\[|u(x,t)-u(x,\tau)|\leq c|t-\tau|^{\frac{\beta}{2s}}\text{ for any }(x,t),(x,\tau) \in Q_{\frac{1}{4}}.\]
## 5. Higher Hölder regularity by approximation
This section is devoted to the proof of the main result, Theorem 1.2. We will use an approximation argument based on the comparison estimate with (4.1) to obtain higher Hölder regularity for the inhomogeneous problem (1.1). Throughout this section, we assume that \(q,r>1\) satisfy
\[\frac{n}{2qs}+\frac{1}{r}<1. \tag{5.1}\]
**Lemma 5.1**.: _For any \(\epsilon>0\), there exists a small \(\delta\equiv\delta(n,s,q,r,\lambda,\epsilon)>0\) such that for any local weak solution \(u\) to (1.1) in \(Q_{4}\) with_
\[\sup_{Q_{4}}|u|\leq 1\quad\text{and}\quad\mathrm{Tail}_{\infty}(u;0,4)\leq 1,\]
_if there hold_
\[\|A-\tilde{A}\|_{L^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n}\times[-4^{2s}, 0])}\leq\delta\quad\text{and}\quad\|f\|_{L^{q,r}(Q_{4})}\leq\delta,\]
_where \(\tilde{A}\in\mathcal{L}_{0}(\lambda)\), then there is a local weak solution \(v\) to_
\[\left\{\begin{aligned} \partial_{t}v+\mathcal{L}_{\tilde{A}}^{\Phi}v&=0&&\text{ in }Q_{2}\\ v&=u&&\text{ on }\partial_{P}Q_{2}\cup\left((\mathbb{R}^{n}\setminus B_{2})\times[-2^{2s},0]\right)\end{aligned}\right.\]
_such that_
\[\|u-v\|_{L^{\infty}(Q_{1})}\leq\epsilon.\]
Proof.: We prove this lemma by contradiction. Suppose there exist some \(\epsilon_{0}>0\), \(\{A_{k}\}_{k=1}^{\infty}\subset\mathcal{L}_{0}(\lambda)\), \(\{\tilde{A}_{k}\}_{k=1}^{\infty}\subset\mathcal{L}_{0}(\lambda)\), \(\{\Phi_{k}\}_{k=1}^{\infty}\) with (1.3), \(\{f_{k}\}_{k=1}^{\infty}\) with (5.1) and \(\{u_{k}\}_{k=1}^{\infty}\) such that \(u_{k}\) is a local weak solution to
\[\partial_{t}u_{k}+\mathcal{L}_{A_{k}}^{\Phi_{k}}u_{k}=f_{k}\text{ in }Q_{4}\]
with
\[\sup_{Q_{4}}|u_{k}|\leq 1,\quad\mathrm{Tail}_{\infty}(u_{k};0,4)\leq 1, \tag{5.2}\]
\[\|A_{k}-\tilde{A}_{k}\|_{L^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n}\times[- 4^{2s},0])}\leq\frac{1}{k}\quad\text{and}\quad\|f_{k}\|_{L^{q,r}(Q_{4})}\leq \frac{1}{k}, \tag{5.3}\]
but
\[\|u_{k}-v_{k}\|_{L^{\infty}(Q_{1})}\geq\epsilon_{0} \tag{5.4}\]
for any local weak solution \(v_{k}\) to
\[\left\{\begin{aligned} \partial_{t}v_{k}+\mathcal{L}_{A_{k}}^{ \Phi_{k}}v_{k}&=0&&\text{ in }Q_{2}\\ v_{k}&=u_{k}&&\text{ on }\partial_{P}Q_{2}\cup \left((\mathbb{R}^{n}\setminus B_{2})\times[-2^{2s},0]\right).\end{aligned}\right.\]
Then \(w_{k}\coloneqq u_{k}-v_{k}\in L^{2}\left(\left(-2^{2s},0\right];W_{0}^{s,2} \left(B_{2}\right)\right)\cap C\left(\left[-2^{2s},0\right];L^{2}\left(B_{2} \right)\right)\) solves
\[\partial_{t}w_{k}+\mathcal{L}_{A_{k}}^{\Phi_{k}}u_{k}-\mathcal{L}_{A_{k}}^{ \Phi_{k}}v_{k}=f_{k}\text{ in }Q_{2}. \tag{5.5}\]
For convenience, we write \(I=\left[-2^{2s},0\right]\). By an approximation argument, we take \(w_{k}\) as a test function for (5.5) to get
\[\begin{split}&\int_{B_{2}}\frac{w_{k}^{2}(x,t)}{2}\,dx\\ &\quad+\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{ \Phi_{k}(u_{k}(x,t)-u_{k}(y,t))}{|x-y|^{n+2s}}(w_{k}(x,t)-w_{k}(y,t))A_{k}(x,y, t)\,dx\,dy\,dt\\ &\quad-\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{ \Phi_{k}(v_{k}(x,t)-v_{k}(y,t))}{|x-y|^{n+2s}}(w_{k}(x,t)-w_{k}(y,t))\tilde{A}_{ k}(x,y,t)\,dx\,dy\,dt\\ &=\int_{I}\int_{B_{2}}f_{k}w_{k}\,dx\,dt,\end{split} \tag{5.6}\]
where we have used the fact that \(w_{k}(x,-2^{2s})=0\) for \(x\in B_{2}\). Now let us handle the second and the third terms on the left hand side. Set
\[E_{1}=\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\Phi_{k}(u_{k}(x,t)-u_{k}(y,t))}{|x-y|^{n+2s}}(w_{k}(x,t)-w_{k}(y,t))A_{k}(x,y,t)\,dx\,dy\,dt,\]
and
\[E_{2}=\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\Phi_{k}(v_{k}(x,t)-v_{k}(y,t))}{|x-y|^{n+2s}}(w_{k}(x,t)-w_{k}(y,t))\tilde{A}_{k}(x,y,t)\,dx\,dy\,dt.\]
Note that (1.3) implies
\[[\Phi_{k}(u_{k}(x,t)-u_{k}(y,t))-\Phi_{k}(v_{k}(x,t)-v_{k}(y,t))](w_{k}(x,t)- w_{k}(y,t))\geq\frac{1}{c}|w_{k}(x,t)-w_{k}(y,t)|^{2}.\]
By using the above inequality and (1.3), we get
\[\begin{split}& E_{1}-E_{2}\\ &=\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\Phi_{k}(u_{k}(x,t)-u_{k}(y,t))-\Phi_{k}(v_{k}(x,t)-v_{k}(y,t))}{|x-y|^{n+2s}}(w_{k}(x,t)-w_{k}(y,t))\tilde{A}_{k}\,dx\,dy\,dt\\ &\quad+\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\Phi_{k}(u_{k}(x,t)-u_{k}(y,t))}{|x-y|^{n+2s}}(w_{k}(x,t)-w_{k}(y,t))(A_{k}-\tilde{A}_{k})\,dx\,dy\,dt\\ &\geq\frac{1}{c}\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|w_{k}(x,t)-w_{k}(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt\\ &\quad-\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{|u_{k}(x,t)-u_{k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)-w_{k}(y,t)||A_{k}-\tilde{A}_{k}|\,dx\,dy\,dt.\end{split}\]
Putting this inequality into (5.6) and taking the essential supremum over \(t\in I\), we have
\[\begin{split} J\coloneqq&\operatorname*{ess\,sup}_{t \in I}\int_{B_{2}}w_{k}^{2}(x,t)\,dx+\int_{I}\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}^{n}}\frac{|w_{k}(x,t)-w_{k}(y,t)|^{2}}{|x-y|^{n+2s}}\,dx\,dy\,dt\\ &\leq c\int_{I}\int_{B_{2}}f_{k}w_{k}\,dx\,dt\\ &\quad+c\int_{I}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{ |u_{k}(x,t)-u_{k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)-w_{k}(y,t)||A_{k}-\tilde{A}_{ k}|\,dx\,dy\,dt\\ &\quad=:cJ_{1}+cJ_{2}.\end{split}\]
**Estimate of \(J_{1}\).** Using Lemma 2.2, Hölder's inequality, and Cauchy's inequality, we have
\[J_{1}\leq c\|f_{k}\|_{L^{q,r}(Q_{2})}^{2}+\frac{1}{8}J.\]
**Estimate of \(J_{2}\).** From (5.3),
\[J_{2}\leq\frac{c}{k}\Bigg{[}\int_{I}\int_{B_{3}}\int_{B_{3}}\frac{|u_{k}(x,t)- u_{k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)-w_{k}(y,t)|\,dx\,dy\,dt\]
\[\quad+\int_{I}\int_{\mathbb{R}^{n}\setminus B_{3}}\int_{B_{3}}\frac{|u_ {k}(x,t)-u_{k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)-w_{k}(y,t)|\,dx\,dy\,dt\bigg{]}\] \[\quad=:\frac{c}{k}(J_{2,1}+J_{2,2}).\]
To estimate \(J_{2,1}\), we use Hölder's inequality and Cauchy's inequality to obtain
\[J_{2,1} \leq c\left(\int_{I}[u_{k}(\cdot,t)]_{W^{s,2}(B_{3})}^{2}\,dt \right)^{\frac{1}{2}}\left(\int_{I}[w_{k}(\cdot,t)]_{W^{s,2}(B_{3})}^{2}\,dt \right)^{\frac{1}{2}}\] \[\leq c\int_{I}[u_{k}(\cdot,t)]_{W^{s,2}(B_{3})}^{2}\,dt+\frac{J}{8}.\]
Moreover, Lemma 3.2 and (5.2) imply \(\int_{I}[u_{k}(\cdot,t)]_{W^{s,2}(B_{3})}^{2}\,dt\leq c\). Thus \(J_{2,1}\leq c+J/8\). Next, we estimate \(J_{2,2}\). In light of the fact that \(w_{k}=0\) a.e. in \((\mathbb{R}^{n}\setminus B_{2})\times I\), we observe
\[J_{2,2} \leq\int_{I}\int_{\mathbb{R}^{n}\setminus B_{3}}\int_{B_{2}}\frac {|u_{k}(x,t)|+|u_{k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)|\,dx\,dy\,dt\] \[\leq\int_{I}\int_{\mathbb{R}^{n}\setminus B_{3}}\int_{B_{2}}\frac {|u_{k}(x,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)|\,dx\,dy\,dt\] \[\quad+\int_{I}\int_{\mathbb{R}^{n}\setminus B_{3}}\int_{B_{2}} \frac{|u_{k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)|\,dx\,dy\,dt\] \[=:J_{2,2,1}+J_{2,2,2}.\]
To estimate \(J_{2,2,1}\), we use the assumption \(\sup_{Q_{4}}|u_{k}|\leq 1\) in (5.2) to see
\[\int_{I}\int_{\mathbb{R}^{n}\setminus B_{3}}\int_{B_{2}}\frac{|u_ {k}(x,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)|\,dx\,dy\,dt \leq\int_{I}\int_{B_{2}}\int_{\mathbb{R}^{n}\setminus B_{3}}\frac {|w_{k}(x,t)|}{|x-y|^{n+2s}}\,dy\,dx\,dt\] \[\leq c\int_{I}\int_{B_{2}}|w_{k}(x,t)|\,dx\,dt,\]
where we have also used the fact that \(\frac{|y|}{|x-y|}\) is bounded above for all \(x\in B_{2}\) and \(y\in\mathbb{R}^{n}\setminus B_{3}\). Furthermore, to estimate \(J_{2,2,2}\), we use the bound \(\mathrm{Tail}_{\infty}(u_{k};0,3)\leq c\), which follows from (5.2), as follows.
\[\int_{I}\int_{\mathbb{R}^{n}\setminus B_{3}}\int_{B_{2}}\frac{|u_ {k}(y,t)|}{|x-y|^{n+2s}}|w_{k}(x,t)|\,dx\,dy\,dt \leq c\int_{I}\int_{B_{2}}\int_{\mathbb{R}^{n}\setminus B_{3}}\frac {|u_{k}(y,t)|}{|y|^{n+2s}}|w_{k}(x,t)|\,dy\,dx\,dt\] \[\leq c\,\mathrm{Tail}_{\infty}(u_{k};0,3)\int_{I}\int_{B_{2}}|w_ {k}(x,t)|\,dx\,dt\] \[\leq c\int_{I}\int_{B_{2}}|w_{k}(x,t)|\,dx\,dt.\]
Thus
\[J_{2,2}\leq c\int_{I}\int_{B_{2}}|w_{k}(x,t)|\,dx\,dt\leq c\left(\int_{I}\int_{ B_{2}}|w_{k}(x,t)|^{2}\,dx\,dt\right)^{\frac{1}{2}}.\]
Combining all the above estimates, we have
\[J=\int_{I}\int_{B_{2}}\int_{B_{2}}\frac{|w_{k}(x,t)-w_{k}(y,t)|^{2}}{|x-y|^{n+2 s}}\,dx\,dy\,dt+\mathrm{ess}\sup_{t\in I}\int_{B_{2}}w_{k}^{2}(x,t)\,dx\leq\frac{c}{k}. \tag{5.7}\]
Therefore, we have
\[\lim_{k\to\infty}\operatorname*{ess}\sup_{t\in I}\int_{B_{2}}w_{k}^{2}(x,t)\, dx=0. \tag{5.8}\]
By Lemma 3.5, \(u_{k}\) and \(v_{k}\) are Hölder continuous functions in \(\overline{Q_{1}}\). In particular, there is a \(\rho=\rho(s)>0\) such that for any \(z_{0}\in\overline{Q_{1}}\), \(Q_{2\rho}(z_{0})\subset Q_{2}\). From Theorem 1.1, (5.2), and (5.7), we have
\[\|v_{k}\|_{L^{\infty}(Q_{\rho}(z_{0}))} \leq c\left(\left(\iint_{Q_{2\rho}(z_{0})}v_{k}^{2}\,dx\,dt\right)^{\frac{1}{2}}+\text{Tail}_{\infty}(v_{k};x_{0},\rho/2,t_{0}-\rho^{2s},t_{0})\right)\] \[\leq c\left(\left(\iint_{Q_{2\rho}(z_{0})}u_{k}^{2}\,dx\,dt\right)^{\frac{1}{2}}+\left(\iint_{Q_{2\rho}(z_{0})}w_{k}^{2}\,dx\,dt\right)^{\frac{1}{2}}\right)\] \[\quad+c\left(\text{Tail}_{\infty}(u_{k};x_{0},\rho/2,t_{0}-\rho^{2s},t_{0})+\text{Tail}_{\infty}(w_{k};x_{0},\rho/2,t_{0}-\rho^{2s},t_{0})\right)\leq c.\]
Similarly, using Lemma 2.7 and Lemma 3.5, we see that there are constants \(\beta=\beta(n,s,q,r,\lambda)\) and \(c\equiv c(n,s,q,r,\lambda)\) which are independent of \(k\) such that
\[\sup_{\overline{Q_{1}}}|v_{k}(x,t)|+[v_{k}]_{C^{\beta,\frac{\beta}{2s}}( \overline{Q_{1}})}\leq c,\]
and
\[\sup_{\overline{Q_{1}}}|u_{k}(x,t)|+[u_{k}]_{C^{\beta,\frac{\beta}{2s}}( \overline{Q_{1}})}\leq c.\]
By the Arzelà–Ascoli theorem, there exist a subsequence \(\{w_{k_{j}}\}\) of \(\{w_{k}\}\) and a continuous function \(w\) in \(\overline{Q_{1}}\) such that \(w_{k_{j}}\to w\) uniformly in \(\overline{Q_{1}}\). From (5.8) and the uniqueness of the limit, we have
\[\lim_{j\to\infty}\|w_{k_{j}}\|_{L^{\infty}(\overline{Q_{1}})}=0,\]
which contradicts (5.4). This completes the proof.
Using Lemma 5.1, we now obtain higher Hölder regularity provided that the kernel coefficient \(A\) is sufficiently close to a translation-invariant kernel coefficient \(\tilde{A}\) and the inhomogeneous term \(f\) is sufficiently small in \(L^{q,r}\).
**Lemma 5.2**.: _For any \(0<\alpha<\min\left\{2s-(\frac{n}{q}+\frac{2s}{r}),1\right\}\), there is a positive \(\delta\equiv\delta(n,s,q,r,\lambda,\alpha)<1\) such that for any local weak solution \(u\) to (1.1) in \(Q_{4}\subset Q_{5}\) with_
\[\sup_{Q_{4}}|u|\leq 1\quad\text{and}\quad\text{Tail}_{\infty}(u;0,4)\leq 1, \tag{5.9}\]
_and any \(\tilde{A}\in\mathcal{L}_{1}(\lambda;B_{4}\times B_{4}\times[-4^{2s},0])\), if_
\[\|A-\tilde{A}\|_{L^{\infty}(B_{4}\times B_{4}\times[-4^{2s},0])}\leq\delta\text { and }\|f\|_{L^{q,r}(Q_{4})}\leq\delta, \tag{5.10}\]
_then we have \(u\in C^{\alpha,\frac{\alpha}{2s}}(Q_{1})\) with the estimate_
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{1})}\leq c(n,s,q,r,\lambda,\alpha).\]
Proof.: **Step 1: Regularity at the origin.** We want to show that there is a sufficiently small \(\delta>0\) such that, under the assumptions in (5.10), there exist \(a\equiv a(n,s,q,r,\lambda,\alpha)\), \(c\equiv c(n,s,q,r,\lambda,\alpha)\) and a small \(\rho<\frac{1}{12}\) satisfying
\[\|u(x,t)-a\|_{L^{\infty}(Q_{\rho^{k}})}\leq c\rho^{\alpha k},\quad k\geq 0.\]
To prove the statement, it is sufficient to show the following.
**Claim:** There exist a positive \(\rho\equiv\rho(n,s,q,r,\lambda,\alpha)<\frac{1}{12}\), \(c_{1}\equiv c_{1}(n,s,q,r,\lambda,\alpha)\) and \(\{a_{i}\}_{i=-1}^{\infty}\) with \(a_{-1}=0\) such that
\[\sup_{Q_{4}}|u(\rho^{i}x,\rho^{2si}t)-a_{i}|\leq\rho^{\alpha i},\ |a_{i}-a_{i-1}|\leq c_{1}\rho^{\alpha i},\quad i\geq 0, \tag{5.11}\]
and
\[\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{\mathbb{R}^{n}\setminus B_{4} }\frac{|u\left(\rho^{i}y,\rho^{2si}t\right)-a_{i}|}{\rho^{\alpha i}|y|^{n+2s }}\,dy\leq 1,\quad i\geq 0. \tag{5.12}\]
**Proof of the claim.** Suppose a constant \(c_{1}>0\) is given, which is to be determined later. Let \(\tilde{\alpha}=\frac{\alpha+\min\{2s,1\}}{2}\) and take a small \(\rho\equiv\rho(n,s,\alpha,c_{1})<\frac{1}{12}\) such that
\[(4\rho)^{2s}+1<2^{2s},\ \rho^{2s-\alpha}\left(1+(2c_{1}+1)\frac{\omega_{n}}{2s} \right)\leq\frac{1}{8}\ \ \text{and}\ \ c_{1}\rho^{\tilde{\alpha}}\leq(s-\tilde{\alpha}/2)\frac{\rho^{\alpha}}{8(1 +\omega_{n})}, \tag{5.13}\]
where \(\omega_{n}\) denotes the surface area of the \(n\)-dimensional unit sphere. Take \(\epsilon=\frac{s}{2(1+\omega_{n})}\rho^{\alpha}\); then we find a suitable \(\delta=\delta(n,s,q,r,\lambda,\alpha)\) as in Lemma 5.1. Now we extend \(\tilde{A}\) by \(A\) outside \(B_{4}\times B_{4}\times[-4^{2s},0]\). Then we get \(\tilde{A}\in\mathcal{L}_{1}(\lambda;B_{4}\times B_{4}\times(-4^{2s},0))\cap\mathcal{L}_{0}(\lambda)\) and
\[\|A-\tilde{A}\|_{L^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n}\times[-4^{2s}, 0])}\leq\delta.\]
Now we construct \(a_{i}\), \(i\geq 0\), as follows. For \(i=0\), set \(a_{0}=0\). Then (5.9) directly implies (5.11) and (5.12). Now suppose that there is \(a_{i}\) satisfying (5.11) and (5.12) for \(i=0,1,\ldots,k\). Set
\[u_{k}(x,t)\coloneqq\frac{u\left(\rho^{k}x,\rho^{2sk}t\right)-a_{k}}{\rho^{ \alpha k}},\ f_{k}(x,t)\coloneqq\rho^{(2s-\alpha)k}f\left(\rho^{k}x,\rho^{2 sk}t\right),\quad(x,t)\in\mathbb{R}^{n}\times(-4^{2s},0],\]
\[A_{k}\coloneqq A\left(\rho^{k}x,\rho^{k}y,\rho^{2sk}t\right),\ \tilde{A}_{k}\coloneqq\tilde{A}\left(\rho^{k}x,\rho^{k}y,\rho^{2sk}t\right), \quad(x,y,t)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R},\]
and
\[\Phi_{k}(\xi)\coloneqq\frac{1}{\rho^{\alpha k}}\Phi\left(\rho^{\alpha k}\xi \right),\quad\xi\in\mathbb{R}.\]
Then \(u_{k}\) is a local weak solution to
\[\partial_{t}u_{k}+\mathcal{L}_{A_{k}}^{\Phi_{k}}u_{k}=f_{k}\ \text{in}\ Q_{4}\]
such that
\[\|u_{k}\|_{L^{\infty}(Q_{4})}\leq 1\ \text{and}\ \operatorname{Tail}_{ \infty}(u_{k};0,4)\leq 1. \tag{5.14}\]
Moreover, \(A_{k},\tilde{A}_{k}\in\mathcal{L}_{0}(\lambda)\) and \(f_{k}\) satisfy
\[\|A_{k}-\tilde{A}_{k}\|_{L^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n}\times[- 4^{2s},0])}\leq\delta,\]
and
\[\|f_{k}\|_{L^{q,r}(Q_{4})}\leq\rho^{(2s-\alpha)k-\left(\frac{n}{q}+\frac{2s}{r}\right)k}\|f\|_{L^{q,r}(Q_{4\rho^{k}})}\leq\delta.\]
Here we used \(2s-\left(\frac{n}{q}+\frac{2s}{r}\right)>\alpha\) and \(\rho<1\). By Lemma 5.1 with \(\epsilon=\frac{s}{2(1+\omega_{n})}\rho^{\alpha}\), there exists a weak solution \(v\) to
\[\left\{\begin{aligned} \partial_{t}v+\mathcal{L}_{A_{k}}^{\Phi_{k}}v& =0&&\text{in}\ Q_{2}\\ v&=u_{k}&&\text{on}\ \partial_{P}Q_{2}\cup\left((\mathbb{R}^{n} \setminus B_{2})\times[-2^{2s},0]\right)\end{aligned}\right.\]
with
\[\|u_{k}-v\|_{L^{\infty}(Q_{1})}\leq\frac{s}{2(1+\omega_{n})}\rho^{\alpha}. \tag{5.15}\]
A calculation similar to the one leading to (5.7) shows
\[\iint_{Q_{2}}v^{2}\,dx\,dt\leq c\left[\iint_{Q_{2}}(u_{k}-v)^{2}\,dx\,dt+ \iint_{Q_{2}}u_{k}^{2}\,dx\,dt\right]\leq c,\]
where \(c\equiv c(n,s,q,r,\lambda)\) is independent of \(k\). Moreover, (5.14) implies
\[\operatorname{Tail}_{\infty}(v;0,1,[-2^{2s},0])\leq\operatorname{Tail}_{ \infty}(u_{k};0,1,[-2^{2s},0])+\operatorname{Tail}_{\infty}(u_{k}-v;0,1,[-2^{2s },0])\leq c,\]
where \(c\equiv c(n,s,q,r,\lambda)\) is also independent of \(k\). Then, in light of Theorem 1.1, Lemma 4.3 and Lemma 4.6, we can take \(c_{1}=c_{1}(n,s,q,r,\lambda,\alpha)\), independent of \(k\), so that
\[|v(0,0)|\leq c_{1}\ \text{and}\ [v]_{C^{\tilde{\alpha},\frac{\tilde{\alpha}}{2s}}(\overline{Q_{1}})}\leq c_{1}. \tag{5.16}\]
We then use
\[\|v(x,t)-v(0,0)\|_{L^{\infty}(Q_{4\rho})}\leq c_{1}\rho^{\tilde{\alpha}}\]
to discover
\[\|u_{k}(x,t)-v(0,0)\|_{L^{\infty}(Q_{4\rho})}\leq\|u_{k}(x,t)-v(x,t)\|_{L^{ \infty}(Q_{4\rho})}+\|v(x,t)-v(0,0)\|_{L^{\infty}(Q_{4\rho})}\leq c\rho^{ \alpha},\]
where we have used (5.13) and (5.15). Take \(a_{k+1}=a_{k}+v(0,0)\rho^{\alpha k}\). Then (5.11) also holds for \(i=k+1\). Furthermore, we estimate
\[\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{\mathbb{R}^{n}\setminus B_{4}}\frac{\left|u\left(\rho^{k+1}y,\rho^{2s(k+1)}t\right)-a_{k+1}\right|}{\rho^{\alpha(k+1)}|y|^{n+2s}}\,dy\] \[\leq\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{\mathbb{R}^{n}\setminus B_{\frac{4}{\rho}}}\frac{\left|u\left(\rho^{k+1}y,\rho^{2s(k+1)}t\right)-a_{k+1}\right|}{\rho^{\alpha(k+1)}|y|^{n+2s}}\,dy\] \[\quad+\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{B_{\frac{4}{\rho}}\setminus B_{\frac{1}{\rho}}}\frac{\left|u\left(\rho^{k+1}y,\rho^{2s(k+1)}t\right)-a_{k+1}\right|}{\rho^{\alpha(k+1)}|y|^{n+2s}}\,dy\] \[\quad+\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{B_{\frac{1}{\rho}}\setminus B_{4}}\frac{\left|u\left(\rho^{k+1}y,\rho^{2s(k+1)}t\right)-a_{k+1}\right|}{\rho^{\alpha(k+1)}|y|^{n+2s}}\,dy\eqqcolon I+II+III.\]
Then using (5.11) for \(i=k+1\) and (5.12) for \(i=k\), we have
\[I =\rho^{2s-\alpha}\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{ \mathbb{R}^{n}\setminus B_{4}}\frac{\left|u\left(\rho^{k}y,\rho^{2sk}t\right)- a_{k+1}\right|}{\rho^{\alpha k}|y|^{n+2s}}\,dy\] \[\leq\rho^{2s-\alpha}\left(\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{\mathbb{R}^{n}\setminus B_{4}}\frac{\left|u\left(\rho^{k}y,\rho^{2sk }t\right)-a_{k}\right|}{\rho^{\alpha k}|y|^{n+2s}}\,dy+\operatorname*{ess\,sup }_{t\in[-4^{2s},0]}\int_{\mathbb{R}^{n}\setminus B_{4}}\frac{|a_{k}-a_{k+1}|}{ \rho^{\alpha k}|y|^{n+2s}}\,dy\right)\] \[\leq\rho^{2s-\alpha}\left(1+c_{1}\frac{\omega_{n}}{2s}\right),\]
and
\[II \leq\rho^{2s-\alpha}\left(\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{B_{4}\setminus B_{1}}\frac{\left|u\left(\rho^{k}y,\rho^{2sk}t\right) -a_{k}\right|}{\rho^{\alpha k}|y|^{n+2s}}\,dy+\operatorname*{ess\,sup}_{t\in [-4^{2s},0]}\int_{B_{4}\setminus B_{1}}\frac{|a_{k}-a_{k+1}|}{\rho^{\alpha k} |y|^{n+2s}}\,dy\right)\] \[\leq\rho^{2s-\alpha}(c_{1}+1)\frac{\omega_{n}}{2s}.\]
In addition, using (5.13), (5.15) and (5.16), we obtain
\[III \leq\rho^{2s-\alpha}\left(\operatorname*{ess\,sup}_{t\in[-(4\rho)^{2s},0]}\int_{B_{1}\setminus B_{4\rho}}\frac{\left|u\left(\rho^{k}y,\rho^{2sk}t\right)-\left(a_{k}+\rho^{\alpha k}v(y,t)\right)\right|}{\rho^{\alpha k}|y|^{n+2s}}\,dy\right)\] \[\quad+\rho^{2s-\alpha}\left(\operatorname*{ess\,sup}_{t\in[-(4\rho)^{2s},0]}\int_{B_{1}\setminus B_{4\rho}}\frac{|\rho^{\alpha k}v(y,t)-\rho^{\alpha k}v(0,0)|}{\rho^{\alpha k}|y|^{n+2s}}\,dy\right)\] \[\leq\rho^{2s-\alpha}\left(\operatorname*{ess\,sup}_{t\in[-(4\rho)^{2s},0]}\int_{B_{1}\setminus B_{4\rho}}\frac{|u_{k}(y,t)-v(y,t)|}{|y|^{n+2s}}\,dy+\operatorname*{ess\,sup}_{t\in[-(4\rho)^{2s},0]}\int_{B_{1}\setminus B_{4\rho}}\frac{c_{1}}{|y|^{n+2s-\tilde{\alpha}}}\,dy\right)\] \[\leq\frac{\omega_{n}}{4^{2s+1}(1+\omega_{n})}+c_{1}\rho^{\tilde{\alpha}-\alpha}\frac{\omega_{n}}{2s-\tilde{\alpha}}\leq\frac{3}{8}.\]
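The reduction from scale \(k+1\) to scale \(k\) in the estimates of \(I\), \(II\) and \(III\) rests on the substitution \(z=\rho y\); for instance, for each fixed \(t\),
\[\int_{\mathbb{R}^{n}\setminus B_{\frac{4}{\rho}}}\frac{\left|u\left(\rho^{k+1}y,\rho^{2s(k+1)}t\right)-a_{k+1}\right|}{\rho^{\alpha(k+1)}|y|^{n+2s}}\,dy=\rho^{2s-\alpha}\int_{\mathbb{R}^{n}\setminus B_{4}}\frac{\left|u\left(\rho^{k}z,\rho^{2sk}(\rho^{2s}t)\right)-a_{k+1}\right|}{\rho^{\alpha k}|z|^{n+2s}}\,dz,\]
since \(dy=\rho^{-n}\,dz\) and \(|y|^{-(n+2s)}=\rho^{n+2s}|z|^{-(n+2s)}\); as \(\rho<1\), the rescaled times \(\rho^{2s}t\) remain in \([-4^{2s},0]\), so the essential suprema above are controlled as displayed.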
We combine the estimates \(I\), \(II\), \(III\) and (5.13) to find
\[\operatorname*{ess\,sup}_{t\in[-4^{2s},0]}\int_{\mathbb{R}^{n} \setminus B_{4}}\frac{\left|u\left(\rho^{k+1}y,\rho^{2s(k+1)}t\right)-a_{k+1} \right|}{\rho^{(k+1)\alpha}|y|^{n+2s}}\,dy\leq\rho^{2s-\alpha}\left(1+(2c_{1} +1)\frac{\omega_{n}}{2s}\right)+\frac{3}{8}\leq 1.\]
This completes the proof of the claim.
By (5.11), \(a_{i}\) converges to some \(a\in\mathbb{R}\) and
\[\|u(x,t)-a\|_{L^{\infty}(Q_{\rho^{i}})}\leq c\rho^{\alpha i},\quad i\geq 0.\]
This finishes Step 1.
**Step 2: Regularity in \(Q_{1}\).** Let \(z_{0}\in Q_{1}\). Then \(Q_{4\rho}(z_{0})\Subset Q_{2}\) from (5.13). Now set
\[\tilde{u}(x,t)\coloneqq u\left(\rho x+x_{0},\rho^{2s}t+t_{0}\right),\quad f_{1}(x,t)\coloneqq\rho^{2s}f\left(\rho x+x_{0},\rho^{2s}t+t_{0}\right),\quad(x,t)\in Q_{4},\]
\[A_{1}(x,y,t):=A\left(\rho x+x_{0},\rho y+x_{0},\rho^{2s}t+t_{0}\right),\quad(x,y,t)\in\mathbb{R}^{2n}\times\mathbb{R}.\]
Then \(\tilde{u}\) is a local weak solution to
\[\partial_{t}\tilde{u}+\mathcal{L}_{A_{1}}^{\Phi}\tilde{u}=f_{1}\text{ in }Q_{4}.\]
Moreover, (5.13) implies
\[\sup_{Q_{4}}|\tilde{u}|\leq 1\text{ and }\operatorname{Tail}_{\infty}(\tilde{u};0,4 )\leq 1.\]
From the result of Step 1, there is some constant \(a\in\mathbb{R}\) such that
\[\|\tilde{u}(x,t)-a\|_{L^{\infty}(Q_{\rho^{k}})}\leq c\rho^{\alpha k},\quad k \geq 0,\]
which implies
\[\|u(x,t)-a\|_{L^{\infty}(Q_{\rho^{k+1}}(z_{0}))}\leq c\rho^{\alpha(k+1)},\quad k \geq 0. \tag{5.17}\]
Let \((x_{0},t_{0}),(x_{1},t_{1})\in Q_{1}\). We may assume \(t_{0}<t_{1}\). Then there is a nonnegative integer \(k\) such that \((x_{0},t_{0})\in Q_{\rho^{k}}(x_{1},t_{1})\setminus Q_{\rho^{k+1}}(x_{1},t_{1})\), which implies that
\[\rho^{k+1}<|x_{0}-x_{1}|+|t_{0}-t_{1}|^{\frac{1}{2s}}\leq 2\rho^{k}. \tag{5.18}\]
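Indeed, with the parabolic cylinder convention \(Q_{\rho}(x_{1},t_{1})=B_{\rho}(x_{1})\times(t_{1}-\rho^{2s},t_{1}]\), membership in \(Q_{\rho^{k}}(x_{1},t_{1})\) gives \(|x_{0}-x_{1}|<\rho^{k}\) and \(|t_{0}-t_{1}|^{\frac{1}{2s}}<\rho^{k}\), while exclusion from \(Q_{\rho^{k+1}}(x_{1},t_{1})\) forces
\[|x_{0}-x_{1}|+|t_{0}-t_{1}|^{\frac{1}{2s}}\geq\max\left\{|x_{0}-x_{1}|,|t_{0}-t_{1}|^{\frac{1}{2s}}\right\}\geq\rho^{k+1},\]
which together yield (5.18).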
From (5.17), there is a constant \(a_{1}\in\mathbb{R}\) such that
\[\|u(x,t)-a_{1}\|_{L^{\infty}(Q_{\rho^{k}}(x_{1},t_{1}))}\leq c\rho^{\alpha k}.\]
Therefore, we have
\[|u(x_{0},t_{0})-u(x_{1},t_{1})|\leq|u(x_{0},t_{0})-a_{1}|+|a_{1}-u(x_{1},t_{1} )|\leq 2\|u(x,t)-a_{1}\|_{L^{\infty}(Q_{\rho^{k}}(x_{1},t_{1}))}\leq c\rho^{ \alpha k}.\]
We combine the above inequality and (5.18) to see that
\[|u(x_{0},t_{0})-u(x_{1},t_{1})|\leq c\left(|x_{0}-x_{1}|+|t_{0}-t_{1}|^{\frac{ 1}{2s}}\right)^{\alpha}. \tag{5.19}\]
Since we have proved (5.19) for any \((x_{0},t_{0})\) and \((x_{1},t_{1})\in Q_{1}\), we conclude that \(u\in C^{\alpha,\frac{\alpha}{2s}}(Q_{1})\) with the estimate
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{1})}\leq c.\]
Now we are ready to prove the main result.
**Proof of Theorem 1.2.** Let \(\delta>0\) be a small number to be determined later, depending on \(\alpha\). Let \(Q_{\rho_{0}}(z_{0})\Subset\Omega_{T}\). Set
\[\rho=\min\left\{\frac{\rho_{0}}{32},\frac{1}{8}\left(\left(\frac{3}{4}\right)^ {2s}-\left(\frac{1}{2}\right)^{2s}\right)^{\frac{1}{2s}}\rho_{0}\right\} \tag{5.20}\]
so that \(Q_{4\rho}(\tilde{z})\Subset Q_{\frac{3\rho_{0}}{4}}(z_{0})\) for any \(\tilde{z}\in Q_{\frac{\rho_{0}}{2}}(z_{0})\). Fix \(\tilde{z}\in Q_{\frac{\rho_{0}}{2}}(z_{0})\). According to the assumption (1.6), there exist \(0<\tilde{\rho}_{\tilde{z}}\leq\min\{\rho,\frac{\rho_{\tilde{z}}}{4}\}\) and \(\tilde{A}_{\tilde{z}}\in\mathcal{L}_{1}\left(\lambda;B_{\rho_{\tilde{z}}}( \tilde{x})\times B_{\rho_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-\rho_{\tilde {z}}^{2s},\tilde{t}]\right)\) such that
\[\|\tilde{A}_{\tilde{z}}-A\|_{L^{\infty}\left(B_{4\tilde{\rho}_{\tilde{z}}}( \tilde{x})\times B_{4\tilde{\rho}_{\tilde{z}}}(\tilde{x})\times[\tilde{t}-(4 \tilde{\rho}_{\tilde{z}})^{2s},\tilde{t}]\right)}\leq\delta.\]
We write
\[M_{0}=\|u\|_{L^{\infty}(Q_{4\tilde{\rho}_{\tilde{z}}}(\tilde{z}))}+\operatorname {Tail}_{\infty}\left(u;\tilde{z},4\tilde{\rho}_{\tilde{z}}\right)+\frac{\left(4 \tilde{\rho}_{\tilde{z}}\right)^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}}{ \delta}\|f\|_{L^{q,r}(Q_{4\tilde{\rho}_{\tilde{z}}}(\tilde{z}))}\]
to define
\[\tilde{u}(x,t)=\frac{u\left(\tilde{\rho}_{\tilde{z}}x+\tilde{x}, \left(\tilde{\rho}_{\tilde{z}}\right)^{2s}t+\tilde{t}\right)}{M_{0}},\quad f_{ 1}(x,t)=\frac{\left(\tilde{\rho}_{\tilde{z}}\right)^{2s}f\left(\tilde{\rho}_{ \tilde{z}}x+\tilde{x},\left(\tilde{\rho}_{\tilde{z}}\right)^{2s}t+\tilde{t} \right)}{M_{0}},\quad(x,t)\in Q_{4},\] \[\qquad\qquad A_{1}(x,y,t)=A\left(\tilde{\rho}_{\tilde{z}}x+ \tilde{x},\tilde{\rho}_{\tilde{z}}y+\tilde{x},\left(\tilde{\rho}_{\tilde{z}} \right)^{2s}t+\tilde{t}\right),\quad(x,y,t)\in\mathbb{R}^{2n}\times\mathbb{R},\]
\[\tilde{A}_{\tilde{z},1}(x,y,t)=\tilde{A}\left(\tilde{\rho}_{\tilde{z}}x+\tilde{x}, \tilde{\rho}_{\tilde{z}}y+\tilde{x},(\tilde{\rho}_{\tilde{z}})^{2s}\,t+\tilde{t }\right),\quad(x,y,t)\in\mathbb{R}^{2n}\times\mathbb{R},\]
and
\[\Phi_{1}(\xi)=\frac{\Phi(M_{0}\xi)}{M_{0}},\quad\xi\in\mathbb{R}.\]
Then \(\tilde{u}\) is a local weak solution to
\[\partial_{t}\tilde{u}+\mathcal{L}_{A_{1}}^{\Phi_{1}}\tilde{u}=f_{1}\text{ in }Q_{4}\]
with
\[\sup_{Q_{4}}|\tilde{u}|\leq 1\text{ and }\operatorname{Tail}_{\infty}(\tilde{u};0,4) \leq 1,\]
where \(\Phi_{1}\) satisfies (1.3). Also we directly check
\[\|\tilde{A}_{\tilde{z},1}-A_{1}\|_{L^{\infty}(B_{4}\times B_{4}\times[-4^{2s},0])}\leq\delta\text{ and }\|f_{1}\|_{L^{q,r}(Q_{4})}\leq\delta.\]
We are now in the setting of Lemma 5.2; taking \(\delta=\delta(n,s,q,r,\lambda,\alpha)\) as determined there, we obtain
\[[\tilde{u}]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{1})}\leq c(n,s,q,r,\lambda, \alpha).\]
Therefore, scaling back, we have
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\tilde{\rho}_{\tilde{z}}}(\tilde{z}))} \leq\frac{c}{\tilde{\rho}_{\tilde{z}}^{\alpha}}\Bigg{(}\|u\|_{L^{ \infty}(Q_{4\tilde{\rho}_{\tilde{z}}}(\tilde{z}))}+\operatorname{Tail}_{ \infty}(u;\tilde{z},4\tilde{\rho}_{\tilde{z}})+(\tilde{\rho}_{\tilde{z}})^{2s- \left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{4\tilde{\rho}_{\tilde {z}}}(\tilde{z}))}\Bigg{)}.\]
Let us estimate the second term on the right-hand side. For fixed \(t\in\big{[}t_{0}-(\frac{3}{4}\rho_{0})^{2s},t_{0}\big{]}\),
\[\int_{\mathbb{R}^{n}\setminus B_{4\tilde{\rho}_{\tilde{z}}}( \tilde{x})}\frac{|u(y,t)|}{|\tilde{x}-y|^{n+2s}}\,dy \leq\int_{\mathbb{R}^{n}\setminus B_{\frac{3\rho_{0}}{4}}(x_{0}) }\frac{|u(y,t)|}{|\tilde{x}-y|^{n+2s}}\,dy+\int_{B_{\frac{3\rho_{0}}{4}}(x_{0} )\setminus B_{4\tilde{\rho}_{\tilde{z}}}(\tilde{x})}\frac{|u(y,t)|}{|\tilde{x }-y|^{n+2s}}\,dy\] \[\leq\int_{\mathbb{R}^{n}\setminus B_{\frac{3\rho_{0}}{4}}(x_{0}) }\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}\,dy+c\|u\|_{L^{\infty}(Q_{\frac{3\rho_{0}} {4}}(z_{0}))}\tilde{\rho}_{\tilde{z}}^{-2s},\]
where we have used
\[|\tilde{x}-y|\geq|x_{0}-y|-|\tilde{x}-x_{0}|\geq c|x_{0}-y|,\quad y\in\mathbb{R}^{n}\setminus B_{\frac{3\rho_{0}}{4}}(x_{0}).\]
Thus
\[\operatorname{Tail}_{\infty}(u;\tilde{z},4\tilde{\rho}_{\tilde{z}})\leq c\|u\|_{L^{\infty}(Q_{\frac{3\rho_{0}}{4}}(z_{0}))}+c\rho_{0}^{2s}\operatorname*{ess\,sup}_{t\in\big{[}t_{0}-(\frac{3}{4}\rho_{0})^{2s},t_{0}\big{]}}\int_{\mathbb{R}^{n}\setminus B_{\frac{3\rho_{0}}{4}}(x_{0})}\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}\,dy. \tag{5.21}\]
In turn, we use (5.21), \(Q_{4\tilde{\rho}_{\tilde{z}}}(\tilde{z})\Subset Q_{\frac{3\rho_{0}}{4}}(z_{0})\) and Theorem 1.1 with a simple modification to derive
\[[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\tilde{\rho}_{\tilde{z}}}(\tilde{z}))} \leq\frac{c}{\tilde{\rho}_{\tilde{z}}^{\alpha}}\Bigg{(}\|u\|_{L^{\infty}(Q_{\frac{3\rho_{0}}{4}}(z_{0}))}+\rho_{0}^{2s}\operatorname*{ess\,sup}_{t\in\big{[}t_{0}-(\frac{3}{4}\rho_{0})^{2s},t_{0}\big{]}}\int_{\mathbb{R}^{n}\setminus B_{\frac{3\rho_{0}}{4}}(x_{0})}\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}\,dy\] \[\qquad+\tilde{\rho}_{\tilde{z}}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\frac{3}{4}\rho_{0}}(\tilde{z}))}\Bigg{)}\] \[\leq\frac{c}{\tilde{\rho}_{\tilde{z}}^{\alpha}}\Bigg{(}\left(\iint_{Q_{\rho_{0}}(z_{0})}u^{2}(x,t)\,dx\,dt\right)^{\frac{1}{2}}+\rho_{0}^{2s}\operatorname*{ess\,sup}_{t\in[t_{0}-(\rho_{0})^{2s},t_{0}]}\int_{\mathbb{R}^{n}\setminus B_{\frac{\rho_{0}}{2}}(x_{0})}\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}\,dy\] \[\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}. \tag{5.22}\]
Since \(Q_{\frac{\rho_{0}}{2}}(z_{0})\) is compact, there is a finite subcover \(\left\{Q_{\frac{\tilde{\rho}_{\tilde{z}_{i}}}{4}}(\tilde{z}_{i})\right\}_{i=1}^{N}\) of \(Q_{\frac{\rho_{0}}{2}}(z_{0})\) and we choose
\[\rho_{\min}:=\min_{i=1,2,\dots,N}\tilde{\rho}_{\tilde{z}_{i}}>0. \tag{5.23}\]
Fix \(z_{1}=(x_{1},t_{1}),z_{2}=(x_{2},t_{2})\in Q_{\frac{\rho_{0}}{2}}(z_{0})\). Let us assume
\[\max\left\{4|x_{1}-x_{2}|,\left(\frac{4^{2s}}{2^{2s}-1}|t_{1}-t_{2}|\right)^{ \frac{1}{2s}}\right\}<\rho_{\min}. \tag{5.24}\]
We note that there exists \(i\) such that \(z_{1}\in Q_{\frac{\tilde{\rho}_{\tilde{z}_{i}}}{4}}(\tilde{z}_{i})\). We are going to show \(z_{2}\in Q_{\frac{\tilde{\rho}_{\tilde{z}_{i}}}{2}}(\tilde{z}_{i})\). By (5.24) and (5.23), we have
\[|x_{2}-\tilde{x}_{i}|\leq|x_{2}-x_{1}|+|x_{1}-\tilde{x}_{i}|<\frac{\rho_{\min} }{4}+\frac{\tilde{\rho}_{\tilde{z}_{i}}}{4}\leq\frac{\tilde{\rho}_{\tilde{z} _{i}}}{2}\]
and
\[|t_{2}-\tilde{t}_{i}|\leq|t_{2}-t_{1}|+|t_{1}-\tilde{t}_{i}|<\frac{(2^{2s}-1) \rho_{\min}^{2s}}{4^{2s}}+\left(\frac{\tilde{\rho}_{\tilde{z}_{i}}}{4}\right) ^{2s}\leq\left(\frac{\tilde{\rho}_{\tilde{z}_{i}}}{2}\right)^{2s},\]
which implies \(z_{2}\in Q_{\frac{\tilde{\rho}_{\tilde{z}_{i}}}{2}}(\tilde{z}_{i})\). Thus (5.22) yields
\[\begin{split}\frac{|u(z_{1})-u(z_{2})|}{|x_{1}-x_{2}|^{\alpha}+|t_{1}-t_{2}|^{\frac{\alpha}{2s}}}\leq c&\Bigg{(}\left(\iint_{Q_{\rho_{0}}(z_{0})}u^{2}(x,t)\,dx\,dt\right)^{\frac{1}{2}}+\text{Tail}_{\infty}(u;z_{0},\rho_{0}/2,\rho_{0}^{2s})\\ &\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}.\end{split} \tag{5.25}\]
On the other hand, if \(\max\left\{4|x_{1}-x_{2}|,\left(\frac{4^{2s}}{2^{2s}-1}|t_{1}-t_{2}|\right)^{ \frac{1}{2s}}\right\}\geq\rho_{\min}\), then we deduce that
\[\begin{split}\frac{|u(z_{1})-u(z_{2})|}{|x_{1}-x_{2}|^{\alpha}+|t_{1}-t_{2}|^{\frac{\alpha}{2s}}}&\leq c\|u\|_{L^{\infty}(Q_{\frac{\rho_{0}}{2}}(z_{0}))}\\ &\leq c\Bigg{(}\left(\iint_{Q_{\rho_{0}}(z_{0})}u^{2}(x,t)\,dx\,dt\right)^{\frac{1}{2}}+\text{Tail}_{\infty}(u;z_{0},\rho_{0}/2,\rho_{0}^{2s})\\ &\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}.\end{split} \tag{5.26}\]
From (5.25) and (5.26), we see that
\[\begin{split}[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\frac{\rho_{0}}{2}}(z_{0}))}\leq c&\Bigg{(}\rho_{0}^{-\frac{n+2s}{2}}\|u\|_{L^{2}(Q_{\rho_{0}}(z_{0}))}+\text{Tail}_{\infty}(u;z_{0},\rho_{0}/2,\rho_{0}^{2s})\\ &\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}.\end{split}\]
Since \(Q_{\rho_{0}}(z_{0})\) was chosen arbitrarily, we have \(u\in C^{\alpha,\frac{\alpha}{2s}}_{\mathrm{loc}}(\Omega_{T})\).
In particular, we obtain the following higher Hölder regularity result, with the more refined estimate (5.28), when the kernel coefficient \(A\) is Hölder continuous.
**Corollary 5.3**.: _Let \(u\) be a local weak solution to (1.1) with (1.4). Suppose that a kernel coefficient \(A\) satisfies_
\[\frac{|A(x,y,t)-A(x^{\prime},y^{\prime},t^{\prime})|}{\left(|(x,y)-(x^{\prime}, y^{\prime})|+|t-t^{\prime}|^{\frac{1}{2s}}\right)^{\beta}}\leq L,\quad x,x^{\prime},y,y^{ \prime}\in\Omega\text{ and }t,t^{\prime}\in(0,T), \tag{5.27}\]
_for some constants \(\beta\in(0,1)\) and \(L>0\). Then \(u\in C^{\alpha,\frac{\alpha}{2s}}_{\mathrm{loc}}(\Omega_{T})\) for any \(\alpha\) satisfying (1.5). In particular, there is a sufficiently small \(\rho_{\beta}=\rho_{\beta}(n,s,q,r,\lambda,\alpha,\beta,L)\) such that for any \(Q_{\rho_{0}}(z_{0})\Subset\Omega_{T}\) with \(\rho_{0}\leq\rho_{\beta}\), we have_
\[\begin{split}[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\rho_{0}/2}(z_{ 0}))}&\leq\frac{c}{\rho_{0}^{\alpha}}\Bigg{(}\rho_{0}^{-\frac{n+2 s}{2}}\|u\|_{L^{2}(Q_{\rho_{0}}(z_{0}))}+\mathrm{Tail}_{\infty}(u;z_{0},\rho_{0}/2, \rho_{0}^{2s})\\ &\qquad\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)} \|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)},\end{split} \tag{5.28}\]
_where \(c\equiv c(n,s,q,r,\lambda,\alpha)\)._
Proof.: Fix \(\alpha\in\left(0,\min\left\{2s-\left(\frac{n}{q}+\frac{2s}{r}\right),1\right\}\right)\). Let \(\delta=\delta(n,s,q,r,\lambda,\alpha)>0\) be determined in Lemma 5.2. By (5.27), there is a sufficiently small \(\rho_{\beta}=\rho_{\beta}(n,s,q,r,\lambda,\alpha,\beta,L)>0\) such that if \((x,y,t),(x^{\prime},y^{\prime},t^{\prime})\in\Omega\times\Omega\times(0,T)\) satisfy
\[\left(|(x,y)-(x^{\prime},y^{\prime})|+|t-t^{\prime}|^{\frac{1}{2s}}\right) \leq\rho_{\beta},\]
then
\[|A(x,y,t)-A(x^{\prime},y^{\prime},t^{\prime})|\leq\delta.\]
We now take \(Q_{\rho_{0}}(z_{0})\Subset\Omega_{T}\) with \(\rho_{0}\leq\rho_{\beta}\). Then for any \(\tilde{z}\in Q_{\rho_{0}}(z_{0})\), we see that
\[\|\tilde{A}_{\tilde{z}}-A\|_{L^{\infty}\left(B_{4\rho}(\tilde{x})\times B_{4 \rho}(\tilde{x})\times\left[\tilde{t}-(4\rho)^{2s},\tilde{t}\right]\right)} \leq\delta,\]
where the constant \(\rho>0\) is given in (5.20) and we take
\[\tilde{A}_{\tilde{z}}(x,y,t)=A(\tilde{x},\tilde{x},\tilde{t}),\quad(x,y,t)\in B _{4\rho}(\tilde{x})\times B_{4\rho}(\tilde{x})\times\left[\tilde{t}-(4\rho)^ {2s},\tilde{t}\right]\]
as in (1) of Remark 3. Thus, following the same lines as in the proof of (5.22), with \(\tilde{\rho}_{\tilde{z}}\) there replaced by \(\rho\), we have
\[\begin{split}[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\rho}(\tilde{z}))}&\leq\frac{c}{\rho^{\alpha}}\Bigg{(}\left(\iint_{Q_{\rho_{0}}(z_{0})}u^{2}\,dx\,dt\right)^{\frac{1}{2}}+\rho_{0}^{2s}\operatorname*{ess\,sup}_{t\in[t_{0}-(\rho_{0})^{2s},t_{0}]}\int_{\mathbb{R}^{n}\setminus B_{\frac{\rho_{0}}{2}}(x_{0})}\frac{|u(y,t)|}{|x_{0}-y|^{n+2s}}\,dy\\ &\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_{L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}\end{split}\]
for some constant \(c=c(n,s,q,r,\lambda,\alpha)\). Using the definition of \(\rho\) given in (5.20), we get
\[\begin{split}[u]_{C^{\alpha,\frac{\alpha}{2s}}(Q_{\rho}(\tilde{z} ))}&\leq\frac{c}{\rho_{0}^{\alpha}}\Bigg{(}\left(\iint_{Q_{\rho_{0} }(z_{0})}u^{2}\,dx\,dt\right)^{\frac{1}{2}}+\mathrm{Tail}_{\infty}(u;z_{0},\rho _{0}/2,\rho_{0}^{2s})\\ &\qquad+\rho_{0}^{2s-\left(\frac{n}{q}+\frac{2s}{r}\right)}\|f\|_ {L^{q,r}(Q_{\rho_{0}}(z_{0}))}\Bigg{)}\end{split}\]
for some constant \(c=c(n,s,q,r,\lambda,\alpha)\). We now use the standard covering argument to conclude (5.28).
## Appendix A Existence and uniqueness of an initial and boundary value problem
In this section, we prove the existence and uniqueness of a weak solution to (1.1) with boundary conditions. For \(r\geq 2\), this is proved in [5, Appendix A]. Here we extend the argument to the case \(r>1\) by regularizing the inhomogeneous term. To this end, let us introduce the appropriate function spaces. Let \(\Omega\) and \(\Omega^{\prime}\) be bounded open sets with \(\Omega\Subset\Omega^{\prime}\subset\mathbb{R}^{n}\). As in [26], we introduce the space
\[X_{\phi}^{s,2}(\Omega,\Omega^{\prime})=\big{\{}v\in W^{s,2}(\Omega^{\prime}) \cap L^{1}_{2s}(\mathbb{R}^{n})\ ;\ v=\phi\ \text{on}\ \mathbb{R}^{n}\setminus\Omega\big{\}}\quad\text{ for }\phi\in L^{1}_{2s}(\mathbb{R}^{n}).\]
Assume that the functions \(u_{0}\), \(f\) and \(\zeta\) satisfy the following:
\[u_{0}\in L^{2}(\Omega),\] (A.1) \[f\in L^{q,r}(\Omega_{T})\text{ for }\frac{n}{2qs}+\frac{1}{r}\leq 1+ \frac{n}{4s},\] \[\zeta\in L^{2}(0,T;W^{s,2}(\Omega^{\prime}))\cap L^{2}(0,T;L^{1}_ {2s}(\mathbb{R}^{n}))\text{ and }\partial_{t}\zeta\in\left(L^{2}(0,T;W^{s,2}(\Omega^{ \prime})\right)^{*}.\]
We say that \(u\in L^{2}(0,T;W^{s,2}(\Omega^{\prime}))\cap L^{2}(0,T;L^{1}_{2s}(\mathbb{R}^{ n}))\cap C([0,T];L^{2}(\Omega))\) is a weak solution to
\[\begin{cases}\partial_{t}u+\mathcal{L}_{A}^{\Phi}u=f&\text{ in }\Omega\times(0,T)\\ \qquad\qquad u=\zeta&\text{ on }\mathbb{R}^{n}\setminus\Omega\times[0,T]\\ \qquad\qquad u(\cdot,0)=u_{0}&\text{ on }\Omega,\end{cases}\] (A.2)
if it satisfies the following three conditions:
1. \(u(\cdot,t)\in X^{s,2}_{\zeta(\cdot,t)}(\Omega,\Omega^{\prime})\) for almost every \(t\in(0,T)\).
2. \(\lim\limits_{t\to 0}\|u(\cdot,t)-u_{0}\|_{L^{2}(\Omega)}=0\).
3. For every \(\phi\in L^{2}(t_{1},t_{2};X^{s,2}_{0}(\Omega,\Omega^{\prime}))\cap C^{1}([t_{1},t_{2}];L^{2}(\Omega))\), we have \[-\int_{t_{1}}^{t_{2}}\int_{\Omega}u(x,t)\partial_{t}\phi(x,t)\,dx\,dt+\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{\Phi(u(x,t)-u(y,t))}{|x-y|^{n+2s}}(\phi(x,t)-\phi(y,t))A(x,y,t)\,dx\,dy\,dt\] \[=\int_{t_{1}}^{t_{2}}\int_{\Omega}f(x,t)\phi(x,t)\,dx\,dt-\int_{\Omega}u(x,t)\phi(x,t)\,dx\bigg{|}_{t=t_{1}}^{t=t_{2}}\] whenever \([t_{1},t_{2}]\Subset(0,T)\).
**Lemma A.1**.: _Under the assumptions (A.1), there is a unique weak solution_
\[u\in L^{2}(0,T;W^{s,2}(\Omega^{\prime}))\cap L^{2}(0,T;L^{1}_{2s}(\mathbb{R}^ {n}))\cap C([0,T];L^{2}(\Omega))\]
_to (A.2). In particular, if \(\zeta\in L^{\infty}(0,T;L^{1}_{2s}(\mathbb{R}^{n}))\), then \(u\in L^{\infty}(0,T;L^{1}_{2s}(\mathbb{R}^{n}))\)._
Proof.: Take a sequence \(\{f_{n}\}_{n\in\mathbb{N}}\subset C([0,T];L^{q}(\Omega))\) such that \(\lim_{n\to\infty}\|f_{n}-f\|_{L^{q,r}(\Omega_{T})}=0\). Since \(\{f_{n}\}_{n\in\mathbb{N}}\subset L^{2}(0,T;L^{q}(\Omega))\subset L^{2}\left( 0,T;\left(X^{s,2}_{0}(\Omega,\Omega^{\prime})\right)^{*}\right)\), we can use [5, Theorem A.3] and [35, Proposition 4.1] with simple modifications to handle \(\Phi(\xi)\). Thus we have a unique weak solution
\[u_{n}\in L^{2}(0,T;W^{s,2}(\Omega^{\prime}))\cap L^{2}(0,T;L^{1}_{2s}(\mathbb{ R}^{n}))\cap C([0,T];L^{2}(\Omega))\]
to
\[\begin{cases}\partial_{t}u_{n}+\mathcal{L}_{A}^{\Phi}u_{n}=f_{n}&\text{ in } \Omega\times(0,T)\\ \qquad\qquad u_{n}=\zeta&\text{ on }(\mathbb{R}^{n}\setminus\Omega)\times(0,T)\\ \qquad\qquad u_{n}(\cdot,0)=u_{0}&\text{ on }\Omega.\end{cases}\]
We next show that \(\{u_{n}-\zeta\}_{n\in\mathbb{N}}\) is a Cauchy sequence in \(L^{2}(0,T;W^{s,2}(\mathbb{R}^{n}))\cap C([0,T];L^{2}(\Omega))\). Note that \(u_{n}-u_{m}\in L^{2}(0,T;W^{s,2}_{0}(\Omega))\cap C([0,T];L^{2}(\Omega))\) and
\[\partial_{t}(u_{n}-u_{m})+\mathcal{L}_{A}^{\Phi}u_{n}-\mathcal{L}_{A}^{\Phi}u_ {m}=f_{n}-f_{m}\quad\text{ in }\Omega_{T}.\] (A.3)
Using \(u_{n}-u_{m}\) as a test function to (A.3), we deduce that
\[\sup_{t\in[0,T]}\int_{\Omega}(u_{n}-u_{m})^{2}(x,t)\,dx+\int_{0}^{T}[(u_{n}-u_ {m})(\cdot,t)]^{2}_{W^{s,2}(\mathbb{R}^{n})}\,dt\] \[\qquad\leq c\int_{0}^{T}\int_{\Omega}(f_{n}-f_{m})(u_{n}-u_{m}) \,dx\,dt\]
for some constant \(c\equiv c(\lambda)\). Then using Lemma 2.2 and Young's inequality, we have
\[\sup_{t\in[0,T]}\int_{\Omega}(u_{n}-u_{m})^{2}(x,t)\,dx+\int_{0}^{T}[(u_{n}-u_ {m})(\cdot,t)]^{2}_{W^{s,2}(\mathbb{R}^{n})}\,dt\leq c\|f_{n}-f_{m}\|^{2}_{L^{q,r }(\Omega_{T})}.\]
This implies that \(\{u_{n}-\zeta\}_{n\in\mathbb{N}}\) is a Cauchy sequence in \(L^{2}(0,T;W^{s,2}(\mathbb{R}^{n}))\cap C([0,T];L^{2}(\Omega))\). Then we have
\[\tilde{u}=\lim_{n\to\infty}(u_{n}-\zeta)\in L^{2}(0,T;W^{s,2}(\mathbb{R}^{n}))\cap C([0,T];L^{2}(\Omega)).\]
Since \(\lim_{n\to\infty}u_{n}=\tilde{u}+\zeta\in L^{2}(0,T;W^{s,2}(\Omega^{\prime})) \cap L^{2}(0,T;L^{1}_{2s}(\mathbb{R}^{n}))\) and \(\tilde{u}+\zeta=\zeta\) on \((\mathbb{R}^{n}\setminus\Omega)\times(0,T)\), we have
\[u=\tilde{u}+\zeta\in L^{2}(0,T;W^{s,2}(\Omega^{\prime}))\cap L^{2}(0,T;L^{1}_ {2s}(\mathbb{R}^{n}))\cap C([0,T];L^{2}(\Omega))\]
is a weak solution to (A.2). For uniqueness, let \(u\) and \(v\) be weak solutions to (A.2). Then \(u-v\in L^{2}(0,T;W^{s,2}_{0}(\Omega))\cap C([0,T];L^{2}(\Omega))\) and it satisfies
\[\partial_{t}(u-v)+\mathcal{L}_{A}^{\Phi}u-\mathcal{L}_{A}^{\Phi}v=0\quad\text { in }\Omega_{T}.\]
Taking \(u-v\) as a test function, we get
\[\sup_{t\in[0,T]}\int_{\Omega}(u-v)^{2}(x,t)\,dx+\int_{0}^{T}[(u-v)(\cdot,t)]_{ W^{s,2}(\mathbb{R}^{n})}^{2}\,dt\leq 0,\]
which implies that \(u\) and \(v\) coincide in \(L^{2}(0,T;W^{s,2}_{0}(\Omega))\cap C([0,T];L^{2}(\Omega))\). In addition, if \(\zeta\in L^{\infty}(0,T;L^{1}_{2s}(\mathbb{R}^{n}))\), we discover that
\[\operatorname*{ess\,sup}_{t\in(0,T)}\int_{\mathbb{R}^{n}}\frac{|u(y,t)|}{1+|y|^{n+2s}}\,dy\leq\operatorname*{ess\,sup}_{t\in(0,T)}\int_{\Omega}\frac{|u(y,t)|}{1+|y|^{n+2s}}\,dy+\operatorname*{ess\,sup}_{t\in(0,T)}\int_{\mathbb{R}^{n}\setminus\Omega}\frac{|\zeta(y,t)|}{1+|y|^{n+2s}}\,dy<\infty,\]
which yields that
\[u\in L^{\infty}(0,T;L^{1}_{2s}(\mathbb{R}^{n})).\]
**Data Availability** All data generated or analysed during this study are included in this published article.
**Conflict of Interest** There is no conflict of interest.
**Acknowledgement** We would like to thank the referee for carefully reading an early version of this paper and for valuable comments and constructive suggestions.
|
2309.05866 | ESG-coherent risk measures for sustainable investing | The growing interest in sustainable investing calls for an axiomatic approach
to measures of risk and reward that focus not only on financial returns, but
also on measures of environmental and social sustainability, i.e.
environmental, social, and governance (ESG) scores. We propose definitions for
ESG-coherent risk measures and ESG reward-risk ratios based on functions of
bivariate random variables that are applied to financial returns and ESG
scores, extending the traditional univariate measures to the ESG case. We
provide examples and present an empirical analysis in which the ESG-coherent
risk measures and ESG reward-risk ratios are used to rank stocks. | Gabriele Torri, Rosella Giacometti, Darinka Dentcheva, Svetlozar T. Rachev, W. Brent Lindquist | 2023-09-11T23:11:19Z | http://arxiv.org/abs/2309.05866v1 | # ESG-coherent risk measures for sustainable investing
###### Abstract
The growing interest in sustainable investing calls for an axiomatic approach to measures of risk and reward that focus not only on financial returns, but also on measures of environmental and social sustainability, i.e. environmental, social, and governance (ESG) scores. We propose definitions for _ESG-coherent risk measures_ and _ESG reward-risk ratios_ based on functions of bivariate random variables that are applied to financial returns and ESG scores, extending the traditional univariate measures to the ESG case. We provide examples and present an empirical analysis in which the ESG-coherent risk measures and ESG reward-risk ratios are used to rank stocks.
## 1 Introduction
ESG investing refers to the integration of environmental, social, and governance considerations into the asset allocation process. It is one of the most significant trends in the asset
management industry, due to an increased focus on sustainability and to the increasing availability of information related to non-financial impacts. Governments and transnational organizations are acknowledging the importance of ESG by promoting specific legislation. Under general voluntary commitments, such as the UN Principles for Responsible Investment (UN-PRI) initiative launched in 2006, regulators are beginning to require financial institutions to comply with specific standards.
ESG investing encompasses a broad array of approaches, and its market practices are very heterogeneous, with different terminologies, definitions, and strategies. These practices vary due to the cultural and ideological diversity of investors (Sandberg et al., 2009; Widyawati, 2020). According to Amel-Zadeh and Serafeim (2018), the most significant motivation for incorporating ESG factors is related to financial performance, as sustainability factors are perceived as relevant to investment returns. That is, investors believe that ESG data can be used to identify potential risks and opportunities, and that such information is not yet fully incorporated into market prices. Hence, ESG information should help investors to control risk better and improve their financial performance. In line with Schanzenbach and Sitkoff (2020) we employ the expression _risk-return ESG_ to refer to investment strategies that use ESG factors to improve returns while lessening risk. Academic evidence on the role of ESG in enhancing performance is inconclusive. The meta-analysis conducted by Revelli and Viviani (2015) shows that sustainable and responsible investing (SRI) is neither a weakness nor a strength compared with conventional investing. Similar conclusions have been drawn by Hornuf et al. (2022). From a theoretical point of view, risk-return ESG does not pose any specific challenge, as ESG is treated as any other information (e.g. balance sheet data, macroeconomic indicators, sentiment analysis, etc.) and is integrated in the investment process without affecting the main goal of improving the risk-adjusted performance.1
Footnote 1: We note that, if ESG information is already incorporated in market prices, any strategy that restricts investment based on ESG criteria would result in sub-optimal allocation in terms of monetary performance. Moreover we underline that risk-return ESG strategies are not necessarily more sustainable than traditional ones. Indeed, an investor may implement a contrarian-ESG approach, investing in less sustainable assets if they are expected to outperform the market.
A second motivation that guides ESG strategies is the desire to improve the sustainable profile of the portfolios for ethical reasons or to improve the investors' green image (Amel-Zadeh and Serafeim, 2018).2 Sustainability then becomes part of the investment goals, alongside
monetary performance and the riskiness of the position. Schanzenbach and Sitkoff (2020) refer to investment strategies that incorporate ESG screenings for moral or ethical reasons as _collateral benefit ESG_, as they aim to provide benefits to a third party, rather than to improve risk-adjusted returns. We refer to investors who include sustainability considerations for ethical reasons as _ESG-oriented investors_, to distinguish them from _risk-return investors_ who care exclusively about the financial risks and returns of a position.
The employment of ethical motivations challenges many of the traditional assumptions adopted in finance theory, as ESG-oriented investors consider monetary risks and rewards in their investment process as well as a set of other determinants that cannot be reduced to the financial performance alone. To show how the ethical motivation breaks common investing principles, consider two stocks with the same reward-risk profile, but with different ESG scores. They may be equivalent for a risk-return investor but not for an ESG-oriented investor, as one of the two companies may be more environmentally sustainable or may adopt stricter human right policies. An ESG-oriented investor may decide to deliberately worsen their risk-adjusted performance to comply with non-negotiable principles, for instance by excluding certain sectors or companies, thus reducing the diversification of their portfolio.3
The dichotomy between ethical and financial motivations of ESG strategies is also connected to country-specific legal frameworks. As mentioned, ESG investing is now high on the agenda of regulators as a consequence of interest from market participants and pressure from initiatives such as UN-PRI. Due to the highly regulated nature of the asset management industry, the growth of ESG investing is tightly related to national and international regulatory frameworks, and views on this topic differ substantially across jurisdictions. In particular, in the United States the debate focuses on investments subject to the Employee Retirement Income Security Act of 1974 (ERISA): Schanzenbach and Sitkoff (2020) argue that, in general under American trust fiduciary law, the use of ESG is only permissible to improve financial performance (i.e. risk-return ESG), as the trustee of a fund must act in the sole interest of the beneficiaries, and the inclusion of ethical considerations in the investment process may breach this principle
(collateral benefit ESG may still be possible under direct authorization by the beneficiary or for charity endowments under specific conditions). Other jurisdictions more extensively support ESG investing in all its forms. As an example, in Europe the Commission Delegated Regulation (EU) 2021/1253 of 21 April 20214 amends the Markets in Financial Instruments Directive II (MiFID II) and imposes on financial advisors and portfolio managers the requirement to carry out an assessment of the _sustainability preferences_ of their clients (and potential clients) in order to offer products that are in line with client preferences.
Footnote 4: [https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32021R1253](https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32021R1253)
It is clear that the growing interest in, and the controversy surrounding, ESG investing require the development of a solid theoretical framework to support the decision-making process of ESG-oriented investors.
Currently, the most common ESG portfolio strategies involve negative screenings (i.e., the exclusion of specific sectors or companies) or a pre-selection of best-in-class companies according to ESG scores. It is only recently that more formal techniques for the construction of a portfolio suitable for an ESG-oriented investor have been discussed in the literature. One approach to combining financial and sustainability goals is the introduction of optimal asset allocations that include ESG constraints. This stream of the literature extends the classical reward-risk analysis by adding a third dimension: the sustainability, represented by ESG scores. An investor would then be able to assess a portfolio in three dimensions: return, risk, and ESG. Such an approach has been pursued by, among others, Utz et al. (2015) and Cesarone et al. (2022) who studied the efficient frontier of non-dominated portfolios in the reward-risk-ESG space. This approach extends the traditional optimal portfolio literature that started with Markowitz (1952) and which aims to find portfolios with the highest expected return for a given level of risk, where risk is measured using the portfolio variance (Markowitz, 1952), a coherent risk measure such as the average value at risk (AVaR)5(Rockafellar et al., 2000), or an asymmetric deviation measure (Giacometti et al., 2021).6 One drawback of this modeling framework is that it fails to take into account the stochasticity of ESG scores: it either uses only the expected values of the scores or assumes that ESG scores can be treated as constants for the duration of the investment period.
Footnote 5: The AVaR is also known in the literature as Conditional Value at Risk (CVaR) or Expected Shortfall (ES).
Footnote 6: An alternative approach for including ESG in the frontier analysis was proposed by Pedersen et al. (2021), who identified a two-dimensional ESG efficient frontier by considering the Sharpe ratio and the ESG score.
Here, we aim to address this limitation and, more generally, introduce a way to measure the
risk associated with an ESG-oriented position. We propose an axiomatic approach based on the idea that risk comes from two drivers: monetary performance (i.e., the financial returns of a position) and sustainability (represented by the ESG score of the company).7 These quantities are random and not necessarily independent, and we can represent them as a bivariate random variable. Hence, it is natural to refer to the rich literature on multivariate risk measures (Jouini et al., 2004; Hamel, 2009; Wei and Hu, 2014; Ekeland et al., 2012). Such measures have been developed to study portfolios of non-perfectly fungible assets (e.g., assets valued in multiple currencies) or assets that are difficult to price. We propose using a bivariate risk measure to deal with a single asset that can be evaluated over two dimensions: the monetary returns and the sustainability (represented by ESG scores). We then define _ESG-coherent risk measures_ as an extension of the _coherent risk measures_ introduced by Artzner et al. (1999).
Footnote 7: For convenience, in the rest of this paper, we refer to these two dimensions as the _monetary_ and _ESG_ components of risk.
Since different investors may have different attitudes towards ESG, the proposed measures are parametrized using a value \(\lambda\in[0,1]\) to explicitly take into account the subjective trade-off between sustainability and financial performance: when \(\lambda=0\) an investor cares exclusively about financial risk, when \(\lambda=1\) the investor cares only about ESG. In addition to ESG-coherent risk measures, we define _ESG-coherent reward measures_ and _ESG reward-risk ratios_.
Section 2 discusses our interpretation of ESG scores and outlines how we propose to use them. Section 3 introduces ESG-coherent risk measures and provides examples, while Section 4 introduces ESG reward-risk ratios. Section 5 presents an empirical example using real data. Section 6 highlights some conclusions and perspectives for future research.
## 2 Measuring ESG and financial performance
The first step in defining ESG risk measures is to clarify how we characterize and measure sustainability.
In general, following an approach common in the literature, we use the ESG scores of a company as a proxy for sustainability (see, e.g., Pedersen et al., 2021; Utz et al., 2015; Cesarone et al., 2022). Here, we clarify how to treat such a variable from both practical and theoretical points of view, establishing the basis for measuring ESG scores in a way that allows us to
consistently use them alongside monetary values for risk measurement. In practice, to measure risk, we define a bivariate random variable \(X=[r\text{ ESG}]^{\prime}\), where the first component measures the monetary risk and the second measures the ESG risk. We emphasize that, at this stage, we keep the discussion on an abstract level, without discussing the composition and estimation of ESG scores. This allows us not to be limited by the current state-of-the-art market ESG scores, which are still far from being standardized and comparable across data providers (see, e.g., Berg et al., 2019; Billio et al., 2021).
Several approaches appear in the literature for modeling the monetary component. Artzner et al. (1999) define coherent risk measures for a random variable that represents net worth by following the principle that "bygones are bygones," meaning that future net worth is the only thing that matters. In an alternative approach, the random variable used to compute monetary risk represents the return of a financial position. This last approach, which is often used in practical applications for the measurement of the risk of equity portfolios, introduces some differences in the interpretation of the axioms (see Rachev et al., 2011, Chapter 6).8
Footnote 8: Alternatively, some authors prefer to define risk measures computed on a variable that represents losses, thus assuming that lower values of the variable are preferred by an investor. Such an assumption does not significantly alter the analysis: it simply changes some signs in the definition of risk (see, e.g., Rockafellar and Uryasev, 2013).
Similarly, we need to establish the quantity measured by the random variable ESG and the sign convention used. It is critical to identify the nature of the ESG variable; in particular, one must consider whether it is a _stock variable_ (measured at a specific point in time) or a _flow variable_ (measured over an interval of time, as some sort of _sustainability return_). Based on the way they are computed, we argue that ESG scores belong to the second category: indeed, they represent the current level of sustainability of the production and commercial practices of a company, which directly affects the current impact of that company on the world.9 We can think of ESG as a broad measure of externalities (positive or negative) that are generated over time. In this sense, the ESG score of a company is the rate at which it accumulates non-monetary "satisfaction" for the investor. The total non-monetary satisfaction for an investor is proportional to the holding time of the investment.
Footnote 9: The scores are typically computed as functions of several indicators related to the production methods, the supply chain management, the industry in which the company operates, the transparency of its governance, the presence of specific policies on human rights violations, etc.
Our approach is to regard ESG as a _sustainability flow_. This quantity does not necessarily need to be expressed in monetary terms, and the value of this component may differ for each
investor.10 Assuming that the sustainability flow is proportional to the initial investment, we can rescale the ESG rating at a time \(t\) to obtain \(\mathrm{esg}_{t}\), which represents the instantaneous sustainability flow of an asset, and it can be modeled as a stochastic process. For a given time horizon, the satisfaction of the investor depends on the amount of sustainability accumulated over time, which, for the interval of time from \(0\) to \(T\), is defined by
Footnote 10: A monetary market price of sustainability may exist, but each investor may assign a subjective value to it.
\[\mathrm{ESG}_{T}:=\int_{0}^{T}\mathrm{esg}_{s}ds. \tag{1}\]
Under this approach, ESG scores currently available from the market are interpreted as the _accumulated sustainability_ over one year, and could be used to calibrate the process \(\mathrm{esg}_{t}\). In the simple case of a constant instantaneous flow \(\mathrm{esg}\), we have \(\mathrm{ESG}_{T}=T\cdot\mathrm{esg}\).
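To illustrate the calibration idea numerically, the following minimal sketch simulates \(\mathrm{esg}_{t}\) and approximates the integral in (1) with the trapezoidal rule. The mean-reverting (Ornstein–Uhlenbeck) dynamics and all parameter values are illustrative assumptions of ours; the modeling of \(\mathrm{esg}_{t}\) is deliberately left open in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (hypothetical) Ornstein-Uhlenbeck dynamics for the instantaneous
# sustainability flow esg_t; kappa, theta, sigma are illustrative parameters.
kappa, theta, sigma = 2.0, 0.6, 0.15
T, n_steps = 1.0, 252            # one-year horizon on a daily grid
dt = T / n_steps

esg = np.empty(n_steps + 1)
esg[0] = theta
for i in range(n_steps):
    esg[i + 1] = (esg[i] + kappa * (theta - esg[i]) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal())

# Accumulated sustainability over [0, T]: trapezoidal approximation of (1).
ESG_T = 0.5 * dt * (esg[:-1] + esg[1:]).sum()

print(f"accumulated sustainability ESG_T  = {ESG_T:.4f}")
print(f"constant-flow benchmark T * theta = {T * theta:.4f}")
```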
The stochasticity of the process \(\mathrm{esg}_{t}\) can be traced to two sources: the first is the uncertainty of the measurement of its value, and the second is the uncertainty in the evolution of the sustainability policies within the company, which is driven by the choices made by the management, by market conditions, and by the regulatory framework. We also expect a correlation structure between ESG scores and financial returns due to the presence of common driving factors related, for instance, to sector-wide or country-wide dynamics.
With regard to a sign convention, it is natural to express ESG scores in such a way that higher scores are preferable. We use this convention in the rest of this paper. For consistency, we also assume a preference for higher values for the monetary component: that is, we consider a preference for a gain in future net worth.
This interpretation of ESG as a sustainability flow pairs well with the use of returns, rather than the final net worth, to model the monetary component. As with returns, the accumulated sustainability for the unit investment is expressed as a number which depends on the length of the studied period of time.
Unless stated otherwise, in the rest of this paper we equate the monetary component of the investment with the rate of return \(r_{T}\) and the sustainability component with the accumulated sustainability \(\mathrm{ESG}_{T}\). Dropping the time dependence from the notation to improve readability,
the bivariate random variable used to compute the ESG-coherent risk measures is therefore
\[X=\begin{bmatrix}r\\ \text{ESG}\end{bmatrix}. \tag{2}\]
With reference to the empirical implementation of this framework, we note that the standardization of ESG scores across data providers is currently a major issue, and the procedures used to estimate such scores are under continuous development. In particular, there is a dearth of forward-looking information regarding the sustainability of a company. ESG scores are typically based on the current or past business practices of the firm, while measures of the ESG risk (and reward) should focus on the sustainability of the firm over, and beyond, any investment horizon. Our procedure is general enough to be adapted to improved estimators of company sustainability, when they are developed. Further research should focus on the modeling and estimation of \(\text{esg}_{t}\) and the computation of the ESG score.
## 3 ESG-coherent risk measures
We begin with the axioms that define a coherent risk measure (Artzner et al., 1999). These axioms make it possible to identify measures with desirable properties, assisting both investors and regulators. Consider a convex set \(\mathcal{X}\subseteq\mathcal{L}_{p}(\Omega,\mathcal{F},\mathbb{P})\) of real-valued random variables \(X\) which are defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\), have finite \(p\)-moments (\(p\geq 1\)), and are indistinguishable up to a set of \(P\)-measure zero. We assume that the random variables represent the returns or the payoff of an asset. The functional \(\rho(X):\mathcal{X}\to\mathbb{R}\cup\{+\infty\}\) is a coherent risk measure if it satisfies the following properties:
* (SUB) sub-additivity: if \(X_{1},X_{2}\in\mathcal{X}\), then \(\rho(X_{1}+X_{2})\leq\rho(X_{1})+\rho(X_{2})\);
* (PH) positive homogeneity: if \(\alpha\in\mathbb{R}_{+}\) and \(X\in\mathcal{X}\), then \(\rho(\alpha X)=\alpha\rho(X)\);
* (TI) translation invariance: if \(a\) is deterministic, \(\rho(X+a)=\rho(X)-a\);
* (MO) monotonicity: if \(X_{1},X_{2}\in\mathcal{X}\) and \(X_{1}\leq X_{2}\) a.s., then \(\rho(X_{1})\geq\rho(X_{2})\).
Examples of coherent risk measures are AVaR and expectile; but VaR and standard deviation are not coherent risk measures.11
Footnote 11: Alternative axiomatizations of risk measures, such as convex risk measures (Föllmer and Schied, 2002) and regular risk measures (Rockafellar and Uryasev, 2013), have been proposed since the work of Artzner et al. (1999).
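Since AVaR is used repeatedly below, a small empirical implementation may be helpful. The sketch below (ours, on an arbitrary simulated sample) estimates AVaR as the negated mean of the worst \((1-\tau)\) fraction of outcomes and spot-checks sub-additivity, positive homogeneity, and translation invariance.

```python
import numpy as np

def avar(y: np.ndarray, tau: float) -> float:
    """Empirical average value at risk of a return sample: minus the mean
    of the worst (1 - tau) fraction of outcomes."""
    k = max(1, int(np.ceil((1.0 - tau) * y.size)))
    return -np.sort(y)[:k].mean()

# Monte Carlo spot-check of the coherence axioms (illustrative sample, ours).
rng = np.random.default_rng(1)
x1 = rng.normal(0.05, 0.20, 100_000)
x2 = rng.normal(0.03, 0.15, 100_000)
tau = 0.95

assert avar(x1 + x2, tau) <= avar(x1, tau) + avar(x2, tau) + 1e-12  # SUB
assert np.isclose(avar(2.0 * x1, tau), 2.0 * avar(x1, tau))         # PH
assert np.isclose(avar(x1 + 0.01, tau), avar(x1, tau) - 0.01)       # TI
print(f"AVaR_0.95(x1) = {avar(x1, tau):.4f}")
```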
The definition of a coherent risk measure is based upon a univariate random variable, and on the idea that the risk is a function only of the returns of the asset (consistent with the assumption that an investor is only interested in the monetary outcome of their position). In some contexts, the monetary outcome does not fully characterize the risk. Examples include: portfolios in two countries having floating exchange rates whose payoffs in the different currencies are not perfectly substitutable (the Siegel paradox, see Black, 1989), the fact that various maturities for interest rate products are not perfect substitutes (failure of the pure expectation hypothesis), and cases in which it is difficult to attribute a monetary equivalent to various dimensions of risk, such as environmental or health risks. The same principle can be used for the ESG-oriented investor, whose risk depends on both the monetary return of a financial position and its sustainability, represented in our analysis by its ESG score. Such an approach allows an investor to deal with trade-offs between different goals that, although they are very different in nature, have to be taken into account in the investment process.
### ESG-coherent risk measures -- axiomatic definition
We introduce bivariate risk measures and apply them to the financial performance and sustainability of a position.12 Consider a convex set \(\mathcal{X}_{2}\) of random vectors \(X=[r\ ESG]^{\prime}\), defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) with values in \(\mathbb{R}^{2}\). We use the short-hand notation \(\mathcal{X}_{2}=\mathcal{L}_{p}(\Omega,\mathcal{F},\mathbb{P};\mathbb{R}^{2})\) for the space of random vectors with two components and finite \(p\)-th moments, which are indistinguishable on sets of \(P\)-measure zero. Here \(p\in[1,\infty]\). An ESG risk measure is then a functional of the form \(\boldsymbol{\rho}(X):\mathcal{X}_{2}\to\mathbb{R}\cup\{+\infty\}\). As in the univariate case, it is possible to axiomatically characterize a set of measures that have desirable properties. These axioms are a bivariate generalization of SUB, PH, TI, and MO:
Footnote 12: As discussed in Section 2, we use periodic returns for the monetary component and ESG scores (which represent the accumulated sustainability of the investment) for the sustainability component. The definition could be extended to alternative specifications that can be represented using two random variables with a joint distribution such that an investor has a preference for both higher \(r\) and higher ESG values (e.g., using the final net worth and the accumulated sustainability multiplied by the initial value of the position).
* (SUB-M) sub-additivity: if \(X_{1},X_{2}\in\mathcal{X}_{2}\), then \(\mathbf{\rho}(X_{1}+X_{2})\leq\mathbf{\rho}(X_{1})+\mathbf{\rho}(X_{2})\);
* (PH-M) positive homogeneity: if \(\beta\in\mathbb{R}_{+}\) and \(X\in\mathcal{X}_{2}\), then \(\mathbf{\rho}(\beta X)=\beta\mathbf{\rho}(X)\);
* (TI-M) translation invariance: if \(y\in\mathbb{R}^{2}\), \(\mathbf{\rho}(X+y)=\mathbf{\rho}(X)+\mathbf{\rho}(y)\);
* (MO-M) monotonicity: if \(X_{1},X_{2}\in\mathcal{X}_{2}\) and \((r_{1}\leq r_{2}\wedge\text{ESG}_{1}\leq\text{ESG}_{2})\) a.s., then \(\mathbf{\rho}(X_{1})\geq\mathbf{\rho}(X_{2})\).
We note the following.
* A multivariate generalization of the first two axioms is straightforward.
* The identification of the value of \(\mathbf{\rho}(y)\), where \(y\) is a constant vector, appearing in axiom TI-M, is discussed in subsection 3.2. It is related to the risk of an _ESG safe asset_ (SA), defined as a position having constant values for both monetary and ESG components.
* In line with the literature, we impose a monotonicity condition for which zeroth-order stochastic dominance implies the a.s. ordering of risks. Alternative approaches may be based on first- and second-order stochastic dominance; we leave such an analysis for future studies and maintain the most general (weakest) condition. We emphasize that the ordering induced by the rule in MO-M is partial.
This set of axioms is consistent with the work of Ruschendorf (2006), Wei and Hu (2014), and Chen and Hu (2020), which define risk for multivariate vectors using scalar-valued functions. A key difference is that we maintain a more general formulation of the translation invariance (TI-M) axiom, as we aim to further parametrize the function for ESG-coherent risk measures.
Some authors have proposed set-valued risk measures (Jouini et al., 2004; Hamel, 2009; Hamel et al., 2013). Such proposals start from the idea that "risk is the amount of cash that needs to be added to a position to make it acceptable" (Artzner et al., 1999). Extending the reasoning to a multivariate setting with multiple assets, it is possible to make a position acceptable by adding any of several "safe" portfolios with deterministic payoff, each characterized by a different combination of assets with deterministic payoff. In such a framework, the risk measure is given by a combination of all safe portfolios that make the position acceptable. In the context of ESG investing, where each asset is evaluated on the basis of two dimensions (returns and sustainability), we can imagine a market with multiple ESG safe assets with different return and ESG. This approach has the advantage of providing a more complete assessment of the risk in relation to its multiple drivers, and it has appealing mathematical properties, but at the cost of greater complexity and the inability to directly rank positions. Our approach of computing a scalar risk from a bivariate random variable can be seen as a special case of the set-valued risk measures, in which we consider only one specific ESG safe asset.
To better characterize the risk for an ESG-oriented investor, we introduce a fifth axiom that specifies how measures behave on constant vectors, thus allowing us to characterize the risk of an ESG safe asset. Consider a constant vector \(y\in\mathbb{R}^{2}\) and a scalar \(\lambda\in[0,1]\); then
(LH-M) lambda homogeneity: if \(y=\begin{bmatrix}r_{f}\\ r_{\text{ESG}}\end{bmatrix}\in\mathbb{R}^{2}\), then \(\mathbf{\rho}(y)=-\big{(}(1-\lambda)r_{f}+\lambda r_{\text{ESG}}\big{)}\).
Axiom LH-M specifies the form of \(\mathbf{\rho}(y)\) used in TI-M and explicitly highlights the trade-off between the monetary and sustainability values using the parameter \(\lambda\), which represents an investor preference for the relative weighting of the monetary and ESG components of risk. This specification ensures that \(\mathbf{\rho}(\mathbf{1})=-1,\quad\forall\lambda\in[0,1]\), where \(\mathbf{1}\) is the vector \([1,1]^{\prime}\). To make the dependency on \(\lambda\) of a risk measure clear, we introduce \(\lambda\) as a subscript: \(\mathbf{\rho}_{\lambda}(X)\), and we finally provide the following definition:
**Definition 1** (ESG-coherent risk measure).: _Consider a probability space \((\Omega,\mathcal{F},\mathbb{P})\), a parameter \(\lambda\in[0,1]\), and \(X=\begin{bmatrix}r\\ ESG\end{bmatrix}\) belonging to a set of bivariate random variables \(\mathcal{X}_{2}\) where \(r\) measures the monetary returns of a position or portfolio and ESG measures its sustainability. We define an ESG-coherent risk measure as any functional \(\mathbf{\rho}_{\lambda}(X):\mathcal{X}_{2}\to\mathbb{R}\cup\{+\infty\}\) that satisfies the five axioms from SUB-M through LH-M._
The class of ESG-coherent risk measures extends coherent measures to the multivariate setting and provides a way to control the trade-off between the two sources of risk. This trade-off depends on the preferences of the individual investor expressed by \(\lambda\). Axioms SUB-M and PH-M guarantee that an ESG-coherent risk measure is convex. This allows an investor to diversify between monetary risk and ESG risk, as highlighted by the following remarks.
**Remark**.: _Given \(X=[r,\ ESG]^{\prime}\in\mathcal{X}_{2}\) and an ESG risk measure \(\mathbf{\rho}_{\lambda}(X)\), for an investor
_with a given \(\lambda\) the pure monetary risk and pure ESG risk are defined by \(\mathbf{\rho}_{\lambda}([r_{X},\ 0]^{\prime})\) and \(\mathbf{\rho}_{\lambda}([0,\ ESG_{X}]^{\prime})\), respectively._
**Remark** (Convexity of an ESG-coherent risk measure).: _If \(\mathbf{\rho}_{\lambda}(X)\) is an ESG-coherent risk measure, from SUB-M and PH-M we observe that_
\[\mathbf{\rho}_{\lambda}(X)\leq\mathbf{\rho}_{\lambda}([r_{X},\ 0]^{\prime})+\mathbf{\rho}_{ \lambda}([0,\ ESG_{X}]^{\prime}). \tag{3}\]
_That is, the risk of a position is always less than or equal to the sum of the pure ESG risk and the pure monetary risk (the investor diversifies between the ESG risk and monetary risk)._
We note that the definition of an ESG-coherent risk measure remains agnostic concerning the measurement of either the financial performance or the ESG score; the former can be measured in terms of the final value, profit and loss, or periodic returns (Artzner et al., 1999), and the ESG component can be computed according to multiple methodologies and aggregated over time following several approaches. The only requirement is that the investor must have a preference for both higher financial gain and higher ESG scores (hence, the monetary part must be expressed in terms such that gains are positive).
In an analogous manner, we can define ESG-coherent reward measures that extend the work of Rachev et al. (2008) (see Appendix A).
#### 3.1.1 Dual representation
It is well known that coherent risk measures have a dual representation: the supremum of a certain expected value over a risk envelope (Ruszczynski and Shapiro, 2006; Rockafellar, 2007; Ang et al., 2018). For ESG-coherent risk measures, the dual representation is provided in Proposition 1.
**Proposition 1**.: _Given \(X=[r,\ ESG]^{\prime}\in\mathcal{X}_{2}\) and an ESG-coherent risk measure \(\mathbf{\rho}_{\lambda}(X)\) that satisfies axioms SUB-M through LH-M, the dual representation of the risk measure is_
\[\mathbf{\rho}_{\lambda}(X)=\sup_{\zeta\in\mathcal{A}_{\mathbf{\rho}_{\lambda}}}\left\{ -\int_{\Omega}\left[\zeta_{1}(\omega)r(\omega)+\zeta_{2}(\omega)\text{ESG}( \omega)\right]P(d\omega)\right\}, \tag{4}\]
_where \(\mathcal{A}_{\mathbf{\rho}_{\lambda}}\) contains non-negative functions \((\zeta_{1},\zeta_{2})\in\mathcal{L}_{q}(\Omega,\mathcal{F},\mathbb{P};\mathbb{R} ^{2})\) whose expected value is \([1-\lambda,\quad\lambda]^{\prime}\). Furthermore, \(\mathcal{A}_{\mathbf{\rho}_{\lambda}}\) is equal to the convex subdifferential of \(\mathbf{\rho}_{\lambda}([0,0]^{\prime})\)._
The proof of Proposition 1 is provided in Appendix B. Using a more compact notation, (4) can be written
\[\mathbf{\rho}_{\lambda}(X)=\sup_{\zeta\in\mathcal{A}_{\mathbf{\rho}_{\lambda}}}- \mathbb{E}[\zeta_{1}r+\zeta_{2}\text{ESG}].\]
Proposition 2 addresses the marginal ESG-coherent risk measure when \(\lambda=1\) or \(\lambda=0\).
**Proposition 2**.: _If \(\mathbf{\rho}_{\lambda}\) is an ESG-coherent risk measure, then_
\[\mathbf{\rho}_{0}([r,\text{ ESG}]^{\prime})=\mathbf{\rho}_{0}([r,\;0]^{\prime}),\]
\[\mathbf{\rho}_{1}([r,\text{ ESG}]^{\prime})=\mathbf{\rho}_{1}([0,\text{ ESG}]^{\prime}).\]
Proof.: For \(\lambda=0\), we know from the dual representation that \(\zeta_{2}=0\) a.s., since it has zero expected value and is non-negative. Hence,
\[\mathbf{\rho}_{0}([r,\text{ ESG}]^{\prime})=\sup_{\zeta\in\mathcal{A}_{0}}-\int_{ \Omega}\zeta_{1}(\omega)r(\omega)P(d\omega)=\mathbf{\rho}_{0}([r,\;0]^{\prime}).\]
Analogously, for \(\lambda=1\) we have
\[\mathbf{\rho}_{1}([r,\text{ ESG}]^{\prime})=\sup_{\zeta\in\mathcal{A}_{1}}-\int_{ \Omega}\zeta_{2}(\omega)\text{ESG}(\omega)P(d\omega)=\mathbf{\rho}_{1}([0,\text{ ESG}]^{\prime}).\]
Proposition 2 states that the risk for an investor with \(\lambda=0\) is not affected by the ESG score of the asset, while the risk for an investor with \(\lambda=1\) is not affected by the monetary returns.
### Hedging risk by investing in ESG safe assets
To provide a more complete interpretation of ESG-coherent risk measures, it is useful to study how ESG-oriented investors can hedge a risky position by investing in an ESG safe asset. In a traditional univariate framework, a safe asset is an asset whose payoff is deterministic;13 its
return is a constant \(r_{f}\). If we define the risk on a random variable that represents the returns, axioms TI and PH imply that the risk of a portfolio composed of the safe asset and a risky position \(r\) is
Footnote 13: The risk measure is defined as the risk measure of the portfolio.
\[\rho((1-w)r+wr_{f})=(1-w)\rho(r)-wr_{f}, \tag{5}\]
where \(w\in[0,1]\) is the weight of the risky position in the portfolio. More formally, we address the problem of an investor who is willing to reduce the risk of a position to an acceptable level \(\kappa\) by creating a portfolio consisting of the risky position, and of the smallest possible amount of the safe asset. The motivation is, for instance, to satisfy requirements imposed by regulators or by the institutional mandate. Formally the problem is
\[w^{*}=\arg\min_{w}(w)\] \[\text{s.t. }\rho((1-w)X+wr_{f})\leq\kappa\] \[0\leq w\leq 1. \tag{6}\]
We assume that \(-r_{f}<\kappa\leq\rho(X)\). Since the risk of the portfolio is an affine function of \(w\), as shown in (5), the solution is:
\[w^{*}=\frac{\rho(X)-\kappa}{\rho(X)+r_{f}}. \tag{7}\]
That is, the risky position can be hedged by constructing a portfolio that contains the safe asset having weight \(w^{*}\).14
Footnote 14: If the risk measure were defined in terms of final net worth rather than returns, to hedge a risky position with risk \(m\), it would be necessary to add a cash position. For a broader discussion of the interpretation of the axioms expressed in terms of returns rather than the final net worth, see Rachev et al. (2011, Chapter 6).
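The hedging rule in (7) is straightforward to implement; the following minimal sketch, with illustrative numbers of our choosing, computes the safe-asset weight.

```python
def hedge_weight(rho_X: float, kappa: float, r_f: float) -> float:
    """Minimal safe-asset weight bringing the portfolio risk down to kappa,
    eq. (7); assumes -r_f < kappa <= rho(X)."""
    if not (-r_f < kappa <= rho_X):
        raise ValueError("require -r_f < kappa <= rho(X)")
    return (rho_X - kappa) / (rho_X + r_f)

# Illustrative numbers (ours): risk 0.12, target level 0.05, r_f = 2%.
print(hedge_weight(rho_X=0.12, kappa=0.05, r_f=0.02))   # (0.07 / 0.14) = 0.5
```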
For ESG-oriented investing, an ESG safe asset is a position having constant values for both monetary and ESG components. We can postulate the existence of several types of such ESG safe assets, distinguished by different combinations of constant ESG and monetary return.
Consider first the case in which only one type of ESG safe asset is available in the market:
\[\text{SA}:=\begin{bmatrix}r_{f}\\ r_{\text{ESG}}\end{bmatrix}.\]
We know that by axiom LH-M, for an ESG-coherent risk measure, the risk of this ESG safe asset is \(\boldsymbol{\rho}_{\lambda}(\text{SA})=-((1-\lambda)r_{f}+\lambda r_{\text{ESG}})\). The problem of hedging the risk of an asset \(X\) is analogous to the univariate case: an investor with a given \(\lambda\) wants to construct a portfolio with ESG-risk smaller than or equal to \(\kappa\) by adding to \(X\) the smallest amount of the ESG safe asset SA (i.e., minimizing its weight in the portfolio):
\[w_{\lambda}^{*}=\arg\min_{w}(w)\] \[\text{s.t. }\boldsymbol{\rho}_{\lambda}((1-w)X+w\text{SA})\leq\kappa\] \[\qquad 0\leq w\leq 1. \tag{8}\]
We assume again that \(-(1-\lambda)r_{f}-\lambda r_{\text{ESG}}<\kappa\leq\boldsymbol{\rho}_{\lambda}(X).\) The solution is
\[w_{\lambda}^{*}=\frac{\boldsymbol{\rho}_{\lambda}(X)-\kappa}{(1-\lambda)r_{f} +\lambda r_{\text{ESG}}+\boldsymbol{\rho}_{\lambda}(X)}. \tag{9}\]
\(w_{\lambda}^{*}\) is unique and its value varies with the investor preference \(\lambda\). In practice, such an ESG safe asset could be achieved by the investor making a guaranteed loan to an institution (either a for-profit company, a government, or a non-profit institution) that has a positive and stable environmental or social impact, which generates an interest \(r_{f}\) for the investor.
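Analogously, (9) can be computed directly. In the sketch below (illustrative values of ours), we hold \(\boldsymbol{\rho}_{\lambda}(X)\) fixed across \(\lambda\) purely to isolate the effect of the preference parameter; in practice the risk of \(X\) itself varies with \(\lambda\).

```python
def esg_hedge_weight(rho_lam_X: float, kappa: float,
                     r_f: float, r_esg: float, lam: float) -> float:
    """Minimal ESG-safe-asset weight attaining risk level kappa, eq. (9)."""
    rho_sa = -((1.0 - lam) * r_f + lam * r_esg)  # risk of the safe asset (LH-M)
    if not (rho_sa < kappa <= rho_lam_X):
        raise ValueError("require rho(SA) < kappa <= rho_lambda(X)")
    return (rho_lam_X - kappa) / ((1.0 - lam) * r_f + lam * r_esg + rho_lam_X)

# Illustrative values (ours). Note: rho_lambda(X) is held fixed across lam here
# only to isolate eq. (9); in practice it also depends on lam.
for lam in (0.0, 0.5, 1.0):
    w = esg_hedge_weight(rho_lam_X=0.12, kappa=0.05, r_f=0.02, r_esg=0.40, lam=lam)
    print(f"lam = {lam:.1f}: w* = {w:.3f}")
```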
To provide further intuition, we discuss three special cases characterized by specific ESG safe assets. With the exception of case 3, we assume that the ESG safe assets have non-negative monetary return and ESG.
1. **The only ESG safe asset is a _pure monetary safe asset_** described by \[\text{SA}_{\text{CASH}}:=\begin{bmatrix}r_{f}\\ 0\end{bmatrix},\] where \(r_{f}\geq 0\). An example of this is a risk-free, zero-coupon bond issued by a governmental institution not associated with a specific ESG profile.15 An ESG investor can hedge the risk of a position \(X\) by constructing the portfolio \((1-w_{\lambda}^{*})X+w_{\lambda}^{*}\text{SA}_{\text{CASH}}\) with
Footnote 15: Rating agencies are starting to compute ESG scores for countries as well, although the criteria are different from those used to calculate companies’ scores. The identification of pure monetary and pure ESG safe assets will be a significant challenge for practitioners and scholars.
\[w_{\lambda}^{*}=\frac{\boldsymbol{\rho}_{\lambda}(X)-\kappa}{(1-\lambda)r_{f} +\boldsymbol{\rho}_{\lambda}(X)}.\]
We note that if \(\lambda=1\) (i.e., an investor cares exclusively about the ESG component) \(\boldsymbol{\rho}_{1}(\text{SA}_{\text{CASH}})=0\).
2. **The only ESG safe asset is a _pure ESG safe asset_** described by \[\text{SA}_{\text{ESG}}:=\begin{bmatrix}0\\ r_{\text{ESG}}\end{bmatrix}.\]
The analysis of this case is symmetrical to that for the pure monetary safe asset.
3. **There exists an ESG safe asset having a monetary return of \(-100\%\).** We consider the special case of an ESG safe asset described by \[\text{SA}_{\text{CHARITY}}:=\begin{bmatrix}-1\\ r_{\text{ESG}}\end{bmatrix}.\] An example of this is a donation to a non-profit organization that has a positive and constant ESG score. Such an asset produces a monetary return of \(-100\%\), and clearly it is not a relevant investment opportunity for a risk-return investor with \(\lambda=0\).16 On the contrary, for an ESG-oriented investor with \(\lambda>0\) it could be rational to invest in such an asset. The measured risk of such an asset is \(\boldsymbol{\rho}_{\lambda}(\text{SA}_{\text{CHARITY}})=(1-\lambda)-\lambda r_{\text{ESG}}\); that is, the risk is negative if \(\lambda>\tilde{\lambda}=1/(1+r_{\text{ESG}})\) (see the numerical check after this list). For a "\(\lambda<\tilde{\lambda}\)" investor, such an asset provides no opportunity to hedge a risky position \(X\); however, for a "\(\lambda>\tilde{\lambda}\)" investor, such an
ESG safe asset provides a meaningful hedging tool through the donation of a wealth fraction:
\[w_{\lambda}^{*}=\frac{\boldsymbol{\rho}_{\lambda}(X)-\kappa}{\lambda-1+\lambda r_ {\text{ESG}}+\boldsymbol{\rho}_{\lambda}(X)}\]
to a project with a positive environmental or social impact.
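A quick numerical check (ours, with an arbitrary \(r_{\text{ESG}}\)) of the threshold \(\tilde{\lambda}\) in case 3:

```python
# Numerical check (ours, arbitrary r_ESG) of the threshold in case 3:
# rho_lambda(SA_CHARITY) = (1 - lam) - lam * r_esg is negative iff
# lam > lam_tilde = 1 / (1 + r_esg).
r_esg = 0.40
lam_tilde = 1.0 / (1.0 + r_esg)              # ~ 0.714
for lam in (0.5, lam_tilde, 0.9):
    print(f"lam = {lam:.3f}: rho(SA_CHARITY) = {(1 - lam) - lam * r_esg:+.4f}")
```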
We can also study the case with multiple ESG safe assets available in the market. If the ESG safe assets \(\text{SA}_{i},\;i=1,\ldots,n\), are available, an investor can hedge a risky position \(X\) by creating a portfolio \(w_{0}X+w_{1}\text{SA}_{1}+w_{2}\text{SA}_{2}+\cdots+w_{n}\text{SA}_{n}\) with \(\sum_{i=0}^{n}w_{i}=1\), so that the risk is equal to \(\kappa\). We can formulate an optimization problem analogous to (8), with the difference that the objective function to maximize is the weight of the risky asset \(w_{0}\) (which is equivalent to minimizing the sum of the portfolio weights invested in ESG safe assets). Formally:
\[\max_{w_{0},w_{1},w_{2},\ldots,w_{n}} (w_{0})\] \[\text{s.t.} \boldsymbol{\rho}_{\lambda}(w_{0}X+w_{1}\text{SA}_{1}+w_{2}\text{SA}_{2}+\cdots+w_{n}\text{SA}_{n})\leq\kappa,\] \[\sum_{i=0}^{n}w_{i}=1,\quad 0\leq w_{i}\leq 1,\quad i=0,\ldots,n. \tag{10}\]
The constraint set defines a convex feasible subset of \(\mathbb{R}^{n+1}\) for the investment allocations \(w_{i}\). The solution is not a diversified portfolio; only one ESG safe asset is selected. This follows from the fact that, due to LH-M and TI-M, the risk of a sum of ESG safe assets is the sum of their risks: \(\boldsymbol{\rho}(\delta\text{SA}_{i}+(1-\delta)\text{SA}_{j})=\delta\boldsymbol{\rho}(\text{SA}_{i})+(1-\delta)\boldsymbol{\rho}(\text{SA}_{j}),\;\delta\in[0,1]\). For any pair of ESG safe assets \(\text{SA}_{i}\), \(\text{SA}_{j}\), the risk of their convex combination is always greater than or equal to \(\min(\boldsymbol{\rho}(\text{SA}_{i}),\boldsymbol{\rho}(\text{SA}_{j}))\). Hence, to minimize the risk it is always convenient to choose the ESG safe asset with the smallest risk.17 Once the ESG safe asset with the lowest risk is identified, the problem is then the same as (8), where only one ESG safe asset was available. Note, however, that the choice of ESG safe asset differs across investors, as the optimization depends on the investor's value of \(\lambda\) (see the sketch after the remark below).
Footnote 17: An exception is when two or more ESG safe assets with exactly the same risk are available. In such a case, they are indistinguishable to the investor in terms of risk, and either can be chosen.
**Remark**.: _In general, the choice of the ESG safe asset is not influenced by the characteristics of the risky position; it is influenced only by the availability and price of ESG safe assets and the \(\lambda\) preference of the investor._
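The selection argument above can be sketched in a few lines; the safe-asset menu below is hypothetical.

```python
import numpy as np

# Sketch (ours): with several ESG safe assets available, only the one with the
# smallest risk for the investor's lambda enters the optimal hedge.
safe_assets = [(0.03, 0.00), (0.00, 0.40), (0.02, 0.25)]  # hypothetical (r_f, r_ESG)
lam = 0.6
risks = [-((1 - lam) * rf + lam * resg) for rf, resg in safe_assets]
best = int(np.argmin(risks))
print(f"selected ESG safe asset {safe_assets[best]} with risk {risks[best]:+.4f}")
```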
### Examples of ESG-coherent risk measures
After discussing ESG-coherent risk measures in general, we present two approaches for extending univariate risk measures to a bivariate setting. In particular, starting from a univariate coherent risk measure \(\rho(r)\), we identify the ESG-coherent risk measures ESG-\(\boldsymbol{\rho}(X)\) and ESG-\(\boldsymbol{\rho}^{l}(X)\). The measures encompass \(\lambda\in[0,1]\) as a parameter; thus, they are families of bivariate risk measures suitable for investors having differing ESG "inclinations". In the following subsection we apply these approaches to the well-known coherent risk measure, average value at risk (AVaR), resulting in ESG-coherent risk measures. In the subsequent subsection we apply these approaches to two non-coherent risk measures, variance and volatility, producing ESG extensions that are not ESG-coherent.
Our first approach to generalizing a univariate risk measure \(\rho(r)\) utilizes a linear combination of \(r\) and ESG. For \(X=\begin{bmatrix}r\\ \text{ESG}\end{bmatrix}\)
\[\text{ESG-}\boldsymbol{\rho}_{\lambda}\left(X\right):=\rho\big{(}(1-\lambda) r+\lambda\text{ESG}\big{)}. \tag{11}\]
**Proposition 3**.: _If \(\rho(\cdot)\) is a (univariate) coherent risk measure, then ESG-\(\boldsymbol{\rho}_{\lambda}(\cdot)\) is an ESG-coherent risk measure._
Proof.: Since the right-hand-side of (11) involves a direct application of \(\rho(\cdot)\), the extended function \(\text{ESG-}\boldsymbol{\rho}_{\lambda}(\cdot)\) inherits the properties SUB-M and PH-M. It is straightforward to show that axiom MO of the univariate function implies MO-M. Consider two vectors \(X_{1}\), \(X_{2}\in\mathcal{X}_{2}\). If \((r_{1}\leq r_{2}\wedge\text{ESG}_{1}\leq\text{ESG}_{2})\) a.s., then from the monotonicity of \(\rho(\cdot)\), we have
\[\text{ESG-}\boldsymbol{\rho}_{\lambda}(X_{1})=\rho((1-\lambda)r_{1}+\lambda \text{ESG}_{1})\geq\rho((1-\lambda)r_{2}+\lambda\text{ESG}_{2})=\text{ESG-} \boldsymbol{\rho}_{\lambda}(X_{2}).\]
Finally, using the translation invariance of \(\rho(\cdot)\), we can prove TI-M and LH-M.
We note the following properties of ESG-\(\boldsymbol{\rho}_{\lambda}(\cdot)\).
* If \(\rho(r)\) is convex, then \(\text{ESG-}\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\) is a convex function of \(\lambda\).
* If \(\rho(r)\) is a coherent risk measure and if the ESG scores are constant, ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\) is an affine transformation of \(\rho(r)\) computed on returns only.
* For \(\lambda=0\) and \(\lambda=1\), ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\) is equal to the univariate \(\rho(r)\) computed on returns alone or \(\rho(\text{ESG})\) computed on ESG scores alone, respectively.
* Suppose that \(r\) and ESG have the same marginal distributions. Then ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\) is at its maximum for perfect positive co-monotonicity between \(r\) and ESG, and it is at its minimum for perfect negative co-monotonicity.
Our second approach utilizes a linear combination of univariate risk measures,
\[\text{ESG-}\boldsymbol{\rho}_{\lambda}^{l}\left(\begin{bmatrix}r\\ \text{ESG}\end{bmatrix}\right):=(1-\lambda)\rho(r)+\lambda\rho(\text{ESG}). \tag{12}\]
It is straightforward to show that ESG-\(\boldsymbol{\rho}_{\lambda}^{l}(\cdot)\) is an ESG-coherent risk measure if \(\rho(\cdot)\) is coherent because axioms SUB-M, PH-M, TI-M, MO-M, and LH-M follow from the respective univariate axioms.
We note the following properties of ESG-\(\boldsymbol{\rho}_{\lambda}^{l}(\cdot)\); a numerical comparison with ESG-\(\boldsymbol{\rho}_{\lambda}(\cdot)\) follows the list.
* The measure ESG-\(\boldsymbol{\rho}_{\lambda}^{l}([r,\text{ESG}]^{\prime})\) is more conservative than ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\) as it is linear in \(\lambda\) and, hence, always greater than or equal to ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\).
* ESG-\(\boldsymbol{\rho}_{\lambda}^{l}([r,\text{ESG}]^{\prime})\) is equivalent to ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\) for the case of perfect co-monotonicity between \(r\) and ESG (i.e. when no diversification between the two is possible). In this sense, we can consider it a worst-case measure of ESG-\(\boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\).
* For the limiting cases \(\lambda=0\) and \(\lambda=1\), we have ESG-\(\boldsymbol{\rho}_{\lambda}^{l}([r,\text{ESG}]^{\prime})=\text{ESG-} \boldsymbol{\rho}_{\lambda}([r,\text{ESG}]^{\prime})\).
* For \(\lambda=0\) or \(\lambda=1\), ESG-\(\boldsymbol{\rho}_{\lambda}^{l}([r,\text{ESG}]^{\prime})\) corresponds to the univariate risk measure \(\rho(X)\) computed on just the monetary part or just the ESG part, respectively.
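The following sketch compares the two extensions numerically, using the empirical AVaR estimator from the earlier sketch as the base coherent measure \(\rho\); the simulated samples are illustrative assumptions.

```python
import numpy as np

def avar(y: np.ndarray, tau: float) -> float:
    """Empirical AVaR: minus the mean of the worst (1 - tau) fraction."""
    k = max(1, int(np.ceil((1.0 - tau) * y.size)))
    return -np.sort(y)[:k].mean()

# Illustrative comparison (ours) of eq. (11) and eq. (12) with AVaR as the
# base coherent measure; the simulated samples are arbitrary assumptions.
rng = np.random.default_rng(2)
r = rng.normal(0.05, 0.20, 100_000)
esg = 0.4 + 0.05 * rng.standard_normal(100_000)
tau = 0.95

for lam in np.linspace(0.0, 1.0, 6):
    m_combined = avar((1 - lam) * r + lam * esg, tau)              # eq. (11)
    m_linear = (1 - lam) * avar(r, tau) + lam * avar(esg, tau)     # eq. (12)
    assert m_linear >= m_combined - 1e-12  # (12) is the more conservative measure
    print(f"lam={lam:.1f}: ESG-rho={m_combined:+.4f}  ESG-rho^l={m_linear:+.4f}")
```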
#### 3.3.1 ESG-AVaR
We demonstrate the two approaches presented in equations (11) and (12) to develop ESG-coherent risk measures based on AVaR. Given a random bivariate vector \(X\in\mathcal{L}_{1}(\Omega,\mathcal{F},P;\mathbb{R}^{2})\), the first measure, given by (11), is18
Footnote 18: See Ogryczak and Ruszczynski (2002); Rockafellar and Uryasev (2002) for the extremal representation.
\[\text{ESG-AVaR}_{\lambda,\tau}\left(\begin{bmatrix}r\\ \text{ESG}\end{bmatrix}\right) :=\text{AVaR}_{\tau}\left((1-\lambda)r+\lambda\text{ESG}\right)\] \[=\inf_{\beta\in\mathbb{R}}\left\{\frac{1}{1-\tau}\mathbb{E}\left[ \left(\beta-\left((1-\lambda)r+\lambda\text{ESG}\right)\right)^{+}\right]- \beta\right\}, \tag{13}\]
where \((a)^{+}\) denotes \(\max(a,0)\). As discussed above, ESG-AVaR\({}_{\lambda,\tau}(\cdot)\) is ESG-coherent. It is similar to the multivariate expected shortfall presented by Ekeland et al. (2012) (although their measure lacks the parametrization using \(\lambda\)). Since ESG-AVaR\({}_{\lambda,\tau}(\cdot)\) is computed on univariate data (i.e., as a linear combination of \(r\) and ESG), numerical applications using ESG-AVaR\({}_{\lambda,\tau}(\cdot)\) do not present any particular challenge; it is possible to fully utilize existing procedures developed for AVaR for risk estimation, portfolio optimization, and risk management. (See, e.g. Shapiro et al., 2021.)
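As a sanity check of the extremal representation in (13), the sketch below (ours) minimizes over \(\beta\) numerically and compares the result with the empirical tail average; the simulated data are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def avar_tail(y: np.ndarray, tau: float) -> float:
    """Empirical AVaR: minus the mean of the worst (1 - tau) fraction."""
    k = max(1, int(np.ceil((1.0 - tau) * y.size)))
    return -np.sort(y)[:k].mean()

# Sketch (ours): evaluate eq. (13) by direct minimization over beta and
# compare with the tail-average estimator; data are arbitrary simulations.
rng = np.random.default_rng(3)
r = rng.normal(0.05, 0.20, 200_000)
esg = 0.4 + 0.05 * rng.standard_normal(200_000)   # hypothetical ESG sample
lam, tau = 0.3, 0.95
y = (1.0 - lam) * r + lam * esg

objective = lambda beta: np.maximum(beta - y, 0.0).mean() / (1.0 - tau) - beta
res = minimize_scalar(objective, bounds=(y.min(), y.max()), method="bounded")
print(f"extremal form of (13): {res.fun:+.5f}")
print(f"tail-average estimate: {avar_tail(y, tau):+.5f}")  # should agree closely
```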
The dual representation of ESG-AVaR\({}_{\lambda,\tau}(\cdot)\) is
\[\text{ESG-AVaR}_{\lambda,\tau}\left(\begin{bmatrix}r\\ \text{ESG}\end{bmatrix}\right)=\sup_{[\zeta_{1},\ \zeta_{2}]^{\prime}\in\mathcal{A}_{\text{ESG-AVaR}_{\lambda,\tau}}}\left(-\mathbb{E}[r\zeta_{1}+\text{ESG}\zeta_{2}]\right), \tag{14}\]
where
\[\mathcal{A}_{\text{ESG-AVaR}_{\lambda,\tau}}=\left\{[\zeta_{1},\ \zeta_{2}]^{\prime}\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2}):\ [\zeta_{1},\ \zeta_{2}]^{\prime}=\xi[1-\lambda,\ \lambda]^{\prime};\ 0\leq\xi\leq\frac{1}{1-\tau}\ \text{a.s.};\ \mathbb{E}[\xi]=1\right\}. \tag{15}\]
The derivation of this dual representation is given in Appendix C.
The second measure, given by (12), is
\[\text{ESG-AVaR}^{l}_{\lambda,\tau}\left(\begin{bmatrix}r\\ \text{ESG}\end{bmatrix}\right):=(1-\lambda)\text{AVaR}_{\tau}(r)+\lambda \text{AVaR}_{\tau}(\text{ESG}). \tag{16}\]
ESG-AVaR\({}^{l}_{\lambda,\tau}(\cdot)\) is also ESG-coherent. It can be viewed as the limit of ESG-AVaR\({}_{\lambda,\tau}(\cdot)\) in the case of comonotonicity; i.e., an asset for which it is not possible to diversify between
the monetary and ESG components as they are comonotone. From an economic perspective, ESG-AVaR\({}^{l}_{\lambda,\tau}(\cdot)\) is significant for investors who consider the worst-case scenario in terms of the dependency structure.
The dual representation of ESG-AVaR\({}^{l}_{\lambda,\tau}(X)\) is
\[\text{ESG-AVaR}^{l}_{\lambda,\tau}\left(\left[\begin{array}{c}r\\ \text{ESG}\end{array}\right]\right)=\sup_{[\zeta_{1},\;\zeta_{2}]^{\prime}\in\mathcal{A}_{\text{ESG-AVaR}^{l}_{\lambda,\tau}}}-\mathbb{E}[r\zeta_{1}+\text{ESG}\zeta_{2}], \tag{17}\]
where
\[\mathcal{A}_{\text{ESG-AVaR}^{l}_{\lambda,\tau}}=\left\{[\zeta_{1},\;\zeta_{2} ]^{\prime}\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2}): \mathbb{E}[\zeta_{1}]=1-\lambda;\mathbb{E}[\zeta_{2}]=\lambda;\zeta_{1},\zeta _{2}\geq 0;\zeta_{1}\leq\frac{1-\lambda}{1-\tau};\zeta_{2}\leq\frac{\lambda}{ 1-\tau}\right\}. \tag{18}\]
The derivation of the dual representation of ESG-AVaR\({}^{l}_{\lambda,\tau}(\cdot)\) is also given in Appendix C.
#### 3.3.2 Non-ESG-coherent measure examples
It is well known that the standard deviation \(\sigma\) (the volatility) and the variance \(\mathbb{V}\) are not coherent risk measures. We consider the application of (11) and (12) to \(\sigma\) and \(\mathbb{V}\) and show that, in all cases, the result is an ESG measure that does not satisfy the ESG-coherency axioms.
Given a vector \(X=[r,\;\text{ESG}]^{\prime}\in\mathcal{X}_{2}=\mathcal{L}_{2}(\Omega,\mathcal{ F},P;\mathbb{R}^{2})\), from (11) the ESG variance and ESG volatility are
\[\text{ESG-}\mathbb{V}_{\lambda}(X) :=\mathbb{V}[(1-\lambda)r+\lambda\text{ESG}], \tag{19}\] \[\text{ESG-}\sigma_{\lambda}(X) :=\sqrt{\text{ESG-}\mathbb{V}_{\lambda}(X)}. \tag{20}\]
ESG-\(\mathbb{V}_{\lambda}(\cdot)\) is not ESG-coherent, as it does not satisfy PH-M, MO-M, SUB-M, and LH-M. ESG-\(\sigma_{\lambda}(\cdot)\) is not ESG-coherent, as it does not satisfy MO-M and LH-M.
Using (12), the corresponding risk measures are
\[\text{ESG-}\mathbb{V}^{l}_{\lambda}(X) :=(1-\lambda)\mathbb{V}[r]+\lambda\mathbb{V}[\text{ESG}], \tag{21}\] \[\text{ESG-}\sigma^{l}_{\lambda}(X) :=\sqrt{\text{ESG-}\mathbb{V}^{l}_{\lambda}(X)}. \tag{22}\]
The former does not satisfy PH-M, MO-M, SUB-M, and LH-M, and the latter does not satisfy MO-M and LH-M.
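The failure of LH-M for the ESG volatility can be seen directly: a constant position has zero standard deviation, whereas LH-M would assign it the risk \(-((1-\lambda)y_{1}+\lambda y_{2})\). A short check (ours):

```python
import numpy as np

# Two-line check (ours): ESG volatility fails LH-M, since a constant position
# has zero standard deviation instead of risk -((1 - lam) * y1 + lam * y2).
lam, y1, y2 = 0.5, 0.02, 0.40
const_position = np.full(10_000, (1 - lam) * y1 + lam * y2)
print("ESG-sigma of the constant position:", const_position.std())   # 0.0
print("LH-M would instead require:", -((1 - lam) * y1 + lam * y2))   # -0.21
```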
A summary of the properties of the examples considered in sections 3.3.1 and 3.3.2 is given in Table 1.

\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline
**Risk measure** & **SUB-M** & **PH-M** & **TI-M** & **MO-M** & **LH-M** \\ \hline ESG-AVaR\({}_{\lambda,\tau}\) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline ESG-AVaR\({}^{l}_{\lambda,\tau}\) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline ESG-\(\mathbb{V}_{\lambda}\) & & & ✓ & & \\ \hline ESG-\(\sigma_{\lambda}\) & ✓ & ✓ & ✓ & & \\ \hline ESG-\(\mathbb{V}^{l}_{\lambda}\) & & & ✓ & & \\ \hline ESG-\(\sigma^{l}_{\lambda}\) & ✓ & ✓ & ✓ & & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the properties satisfied by ESG risk measures.
## 4 ESG reward-risk ratios
It is natural to extend the ESG framework to reward-risk ratios (RRRs), used to measure risk-adjusted performance of an investment. Following Cheridito and Kromer (2013), a reward-risk ratio \(\alpha(r)\) in a univariate setting is
\[\alpha(r):=\frac{\theta(r)^{+}}{\rho(r)^{+}}, \tag{23}\]
where \(\theta(r):\mathcal{X}\rightarrow\mathbb{R}\cup\{\pm\infty\}\) and \(\rho(r):\mathcal{X}\rightarrow\mathbb{R}\cup\{+\infty\}\) are reward and risk measures, respectively. Cheridito and Kromer (2013) identified four conditions desirable for RRRs:
* monotonicity: if \(r_{1},r_{2}\in\mathcal{X}\) and \(r_{1}\leq r_{2}\) a.s., then \(\alpha(r_{1})\leq\alpha(r_{2})\);
* quasi-concavity: if \(r_{1},r_{2}\in\mathcal{X}\) and \(\delta\in[0,1]\), then \(\alpha(\delta r_{1}+(1-\delta)r_{2})\geq\min(\alpha(r_{1}),\alpha(r_{2}))\);
* scale invariance: if \(r\in\mathcal{X}\) and \(\delta>0\) s.t. \(\delta r\in\mathcal{X}\), then \(\alpha(\delta r)=\alpha(r)\);
* distribution-based: \(\alpha(r)\) only depends on the distribution of \(r\) under \(\mathbb{P}\).
Following the approach used for risk and reward measures, we introduce ESG reward-risk ratios (ESG-RRRs). Let \(X=[r,\text{ ESG}]^{\prime}\). We define an ESG-RRR \(\mathbf{\alpha}_{\lambda}:\mathcal{X}_{2}\rightarrow\mathbb{R}\cup\{\pm\infty\}\) by
\[\mathbf{\alpha}_{\lambda}(X):=\frac{\mathbf{\theta}_{\lambda}(X)^{+}}{\mathbf{\rho}_{ \lambda}(X)^{+}}, \tag{24}\]
where \(\mathbf{\theta}_{\lambda}(X)\) and \(\mathbf{\rho}_{\lambda}(X)\) are ESG reward and risk measures as defined in Section 3 and Appendix A.19 The extension of the Cheridito-Kromer conditions for ESG-RRRs are:
Footnote 19: In principle, it is possible for the risk and reward measures to have different values of \(\lambda\). We consider the case of a common value of \(\lambda\) for both the numerator and the denominator for conciseness.
(MO-RM) monotonicity: if \(X_{1},X_{2}\in\mathcal{X}_{2}\) and \((r_{1}\leq r_{2}\wedge\text{ESG}_{1}\leq\text{ESG}_{2})\) a.s., then \(\mathbf{\alpha}_{\lambda}(X_{1})\leq\mathbf{\alpha}_{\lambda}(X_{2})\);
(QC-RM) quasi-concavity: if \(X_{1},X_{2}\in\mathcal{X}_{2}\) and \(\delta\in[0,1]\), then \(\mathbf{\alpha}_{\lambda}(\delta X_{1}+(1-\delta)X_{2})\geq\min(\mathbf{\alpha}_{ \lambda}(X_{1}),\mathbf{\alpha}_{\lambda}(X_{2}))\);
(SI-RM) scale invariance: if \(X\in\mathcal{X}_{2}\) and \(\delta>0\) s.t. \(\delta X\in\mathcal{X}_{2}\), then \(\mathbf{\alpha}_{\lambda}(\delta X)=\mathbf{\alpha}_{\lambda}(X)\);
(DB-RM) distribution-based: \(\mathbf{\alpha}_{\lambda}(X)\) only depends on the distribution of \(X\) under \(\mathbb{P}\).
Verification of conditions QC-RM and SI-RM is done on the domain of \(\mathbf{\alpha}_{\lambda}(X)\). Verification of DB-RM depends on the bivariate distribution of \(X\in\mathcal{X}_{2}\) and not on the univariate distribution of the returns. As in the case of the risk measure axiom MO-M, verification of MO-RM requires the use of partial ordering.
**Proposition 4**.: _The ESG-RRR (24), where \(\mathbf{\theta}_{\lambda}(X)\) is an ESG-coherent reward measure (Appendix A) and \(\mathbf{\rho}_{\lambda}(X)\) is an ESG-coherent risk measure, satisfies conditions MO-RM, QC-RM and SI-RM._
Proof.: MO-RM follows from the monotonicity of \(\mathbf{\rho}_{\lambda}(X)\) and \(\mathbf{\theta}_{\lambda}(X)\). QC-RM follows from the convexity of \(\mathbf{\rho}_{\lambda}(X)\) and the concavity of \(\mathbf{\theta}_{\lambda}(X)\). SI-RM follows from the corresponding properties of \(\mathbf{\rho}_{\lambda}(X)\) and \(\mathbf{\theta}_{\lambda}(X)\).
**Remark**.: _In general, risk and reward measures may not depend on the distribution of returns and ESG under a single probability measure, as in the case of robust reward-risk ratios, which take into account the fact that agents do not know with certainty the distribution of the random variables (Cheridito and Kromer, 2013)._
### Examples of ESG reward-risk ratios
We present six examples of ESG reward-risk ratios derived from RRRs commonly used in the literature. The ratios are obtained by generalizing the univariate reward-risk ratios using
the approach described by (11). Note that, as in the case of risk measures, the definition of a reward-risk ratio can be based on several alternative specifications of the random variable \(X=[r,\text{ ESG}]^{\prime}\). In particular, the ratios can be computed using the rate of returns, the excess returns over a risk-free rate, the final wealth, profit and/or losses, etc. The same logic applies to the ESG component. Here, we only provide hints concerning which approach to use in practice: the choice depends on the specific needs of the practitioner or regulator who uses these measures.
* **ESG Sharpe ratio**. The Sharpe ratio is the ratio between the excess return of an asset and its standard deviation over a period of time. The ESG Sharpe ratio is \[\text{ESG-SR}_{\lambda}(X):=\frac{\text{ESG-}\mu_{\lambda}(X)-\text{ESG-}\mu_{\lambda}(\text{SA})}{\text{ESG-}\sigma_{\lambda}(X)}.\] (25) where \[\text{ESG-}\mu_{\lambda}(X):=(1-\lambda)\mathbb{E}[r]+\lambda\mathbb{E}[\text{ESG}],\] (26) the ESG standard deviation ESG-\(\sigma_{\lambda}(X)\) is given by (20), and \(\text{SA}=[r_{f},\ r_{\text{ESG}}]^{\prime}\) is the vector of return and ESG of an ESG safe asset, so that \(\text{ESG-}\mu_{\lambda}(\text{SA})=(1-\lambda)r_{f}+\lambda r_{\text{ESG}}\). ESG-\(\mu_{\lambda}\) is an ESG-coherent reward measure, while ESG-\(\sigma_{\lambda}\) is not an ESG-coherent risk measure. The ESG Sharpe ratio satisfies QC-RM, SI-RM, and DB-RM, but it does not satisfy condition MO-RM.20 Footnote 20: The SI-RM property applies if \(\text{SA}=[0,\ 0]^{\prime}\) or if \(X\) is intended to represent a vector of excess returns over the risk-free rate.
* **ESG Rachev ratio**. The univariate Rachev ratio is defined as (Biglova et al., 2004) \[\text{RR}_{\beta,\gamma}(r):=\frac{\text{AVaR}_{\beta}(-r)}{\text{AVaR}_{ \gamma}(r)}.\] (27) We generalize to an ESG-RR by replacing AVaR with ESG-AVaR. Let \(X\in\mathcal{X}_{2}\). Then, \[\text{ESG-RR}_{\beta,\gamma,\lambda}(X):=\frac{\text{ESG-AVaR}_{\lambda,\beta }(-X)}{\text{ESG-AVaR}_{\lambda,\gamma}(X)}.\] (28) Note that ESG-RR satisfies MO-RM, SI-RM and DB-RM but fails to satisfy QC-RM since the numerator is not concave.
* **ESG STAR ratio**. When \(\beta=0\), the ESG Rachev ratio becomes an ESG generalization
(ESG-STARR) of the stable tail-adjusted return ratio, \[\text{ESG-STARR}_{\alpha,\lambda}(X):=\frac{\text{ESG-}\mu_{\lambda}(X)}{\text{ESG- AVaR}_{\lambda,\alpha}(X)}.\] (29) As a special case of the ESG Rachev ratio, ESG-STARR satisfies conditions MO-RM, SI-RM and DB-RM; as the numerator is linear, it also satisfies QC-RM.
* **ESG Sortino-Satchell ratio**. The univariate Sortino-Satchell ratio (Sortino and Satchell, 2001) is defined as \(\mathbb{E}[r]^{+}/||r^{-}||_{p}\), where \(||r||_{p}=\left(\int_{-\infty}^{\infty}|x|^{p}f_{r}(x)dx\right)^{1/p}\) and \(r\sim f_{r}(x)\). We extend this measure to the bivariate ESG setting by21 Footnote 21: The definition provided here assumes a target return of 0, similarly to Cheridito and Kromer (2013). \[\text{ESG-SS}_{\lambda}(X)=\frac{\left(\text{ESG-}\mu_{\lambda}(X)\right)^{+} }{||Y^{-}||_{p}},\] (30) where \(Y=(1-\lambda)r+\lambda\text{ESG}\) and \(Y\sim f(y)\). This measure satisfies all four ESG-RRR conditions. The proposed formulation assumes a required rate of return and ESG target of zero. We can introduce a non-zero target by subtracting a bivariate vector (e.g. the return of an ESG safe asset \(\text{SA}=[r_{f},\ r_{\text{ESG}}]^{\prime}\)) from the numerator before applying the positive operator and using \(\tilde{Y}=(1-\lambda)(r-r_{f})+\lambda(\text{ESG}-r_{\text{ESG}})\) in the denominator in place of \(Y\). In such a case, this measure no longer satisfies SI-RM.
* **ESG Omega ratio**. Defining \(Y=(1-\lambda)r+\lambda\text{ESG}\), where \(F(y)\) is the cumulative distribution function of \(Y\), the ESG version of the Omega ratio (Keating and Shadwick, 2002) is \[\text{ESG-}\Omega_{\lambda}(X)=\frac{\int_{\tau}^{\infty}[1-F(y)]\,dy}{\int_{-\infty}^{\tau}F(y)\,dy}.\] (31) Equivalently (see Farinelli and Tibiletti, 2008), \[\text{ESG-}\Omega_{\lambda}(X)=\frac{\mathbb{E}[(Y-\tau)^{+}]}{\mathbb{E}[(Y-\tau)^{-}]}.\] (32) This measure satisfies MO-RM and DB-RM. It does not satisfy QC-RM, and SI-RM is only satisfied if \(\tau=0\).
* **ESG Farinelli-Tibiletti ratio**. This ratio aims to take into account the asymmetry in the return distribution; it generalizes the Omega ratio. By defining \(Y=(1-\lambda)r+\lambda\)ESG, the ESG version of the ratio can be defined as \[\text{ESG-FTR}_{\lambda,m,n,p,q}(X)=\frac{||(Y-m)^{+}||_{p}}{||(n-Y)^{+}||_{q}},\] (33) where \(m,n\in\mathbb{R}\) and \(p,q>0\). This ratio satisfies MO-RM and DB-RM. It also satisfies SI-RM if \(n=m=0\). It does not satisfy QC-RM (see the example in Cheridito and Kromer, 2013, Section 4.3).
Table 2 summarizes the properties of these six ESG-RRRs; a numerical sketch of two of the ratios follows the table.

\begin{table}
\begin{tabular}{l|c|c|c|c} \hline
**Ratio** & **MO-RM** & **QC-RM** & **SI-RM** & **DB-RM** \\ \hline ESG Sharpe & & ✓ & ✓ & ✓ \\ \hline ESG Rachev & ✓ & & ✓ & ✓ \\ \hline ESG STAR & ✓ & ✓ & ✓ & ✓ \\ \hline ESG Sortino–Satchell & ✓ & ✓ & ✓ & ✓ \\ \hline ESG Omega & ✓ & & ✓ & ✓ \\ \hline ESG Farinelli–Tibiletti & ✓ & & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the properties of ESG reward–risk ratios.
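To fix ideas, the sketch below (ours) evaluates the ESG Sharpe ratio (25) and the ESG STAR ratio (29) on simulated data; the distributions, the zero safe-asset values, and all parameters are illustrative assumptions.

```python
import numpy as np

def avar(y: np.ndarray, tau: float) -> float:
    k = max(1, int(np.ceil((1.0 - tau) * y.size)))
    return -np.sort(y)[:k].mean()

# Sketch (ours) of the ESG Sharpe ratio (25) and ESG STAR ratio (29) on
# simulated data; distributions, zero safe-asset values, and parameters
# are illustrative assumptions.
rng = np.random.default_rng(4)
r = rng.normal(0.0004, 0.012, 1_000)     # hypothetical daily returns
esg = np.full(1_000, 0.5 / 252)          # constant daily sustainability flow
lam, tau, r_f, r_esg = 0.3, 0.95, 0.0, 0.0

y = (1 - lam) * r + lam * esg
esg_mu = y.mean()                                    # eq. (26)
sa_mu = (1 - lam) * r_f + lam * r_esg                # ESG-mu of the safe asset
print("ESG Sharpe ratio:", (esg_mu - sa_mu) / y.std())   # eq. (25)
print("ESG STAR ratio  :", esg_mu / avar(y, tau))        # eq. (29)
```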
## 5 Empirical analysis
We present an empirical exercise in which we estimate a set of ESG risk measures and ESG reward-risk ratios. We use such measures to rank equity assets from the Dow-Jones Industrial Average (DJIA) index, assessing the role of \(\lambda\) across the different measures.
First, we describe the dataset. We consider 28 constituents of the Dow-Jones Industrial Average index in the period from January 3, 2006 to October 29, 2021 (almost 16 years). For this period, daily returns and yearly ESG scores are available; to match the frequencies of the two datasets, we assume that the ESG scores are constant within each year. The annual score is normalized between -1 and 1. Hence, for each day, we obtain daily ESG values by dividing the annual value by the number of trading days in a year (252) (this is consistent with assuming a constant sustainability flow for the entire year, as discussed in Section 2). As shown in Figure 1, the series of ESG scores show an increasing trend over time, as well as a reduction in the cross-sectional variability, with more homogeneous scores across companies in recent years. The financial time series show the well-known properties of equity time series, such as limited serial correlations, and volatility clustering for the returns. In the analysis we consider the historical time series and we do not address the modeling of the data. Concerning the relation between the ESG score and the financial performance, the data do not highlight any clear relation. Panels C and D plot the average ESG score across the entire period in relation to the average returns and the return volatility, respectively. Higher ESG scores seem to correlate slightly with lower returns, but the relation does not appear to be strong (correlation: \(-0.16\)). We see a slightly positive correlation between ESG scores and volatility (correlation: \(0.09\)).

Figure 1: Panel A: Evolution of the ESG score of the constituents of the index. The thicker blue line shows the evolution of the average ESG score. Panel B: Evolution of prices for the stocks. The thick blue line is the value of a daily rebalanced equally weighted portfolio of all the stocks (to improve readability, the price has been normalized to 1 at the beginning of the considered time period, and the upper limit of the \(Y\) axis has been set to 20, cutting off some stocks that showed extremely high growth in the observed period). Panel C: Scatterplot of average daily returns and the average ESG score for each asset. Panel D: Scatterplot of the daily standard deviation of returns and the average ESG score for each asset.
Since the characteristics of the time series evolve over time, we focus the analysis on a narrower period of four years: October 30, 2017 to October 29, 2021. Table 3 reports the names of the companies, the properties of the distribution of daily returns, and the statistics of the annual ESG scores for the period from October 30, 2017 to October 29, 2021. The dataset presents a wide cross-sectional variability in terms of the average return and risk, as well as in terms of the average ESG scores, despite the fact that the data showed a certain degree of convergence over the last few years, with almost no negative scores appearing in the sample after 2017. Since the ESG data are annual and are assumed to be constant within each year, they show a very small volatility compared to the returns. This is partially a consequence of the different sampling frequency of the dataset (annual vs. daily), but it also reflects the fact that the changes in the sustainability profile of a company happen at a much slower pace than the shifts seen in the monetary value of a stock. The scales of the average returns and average ESG scores are broadly comparable.
For each of the stocks, we compute a set of ESG risk measures and ESG reward-risk ratios, considering a grid of different values of \(\lambda\). We then use the measures to rank the stocks in the dataset. In particular, we compute ESG-AVaR\({}_{\lambda,\tau}(X)\) (with \(\tau\) equal to \(0.95\) and \(0.99\)), the ESG standard deviation (ESG-\(\sigma_{\lambda}\)), the ESG-mean (ESG-\(\mu_{\lambda}\)), the ESG Rachev ratio (with \(\tau\) equal to \(0.95\) and \(0.99\)), the ESG STAR ratio (with \(\tau\) equal to \(0.95\) and \(0.99\)), and the ESG Sharpe ratio. Figure 2 displays the ordering of the assets: each company is color-coded according to
the industrial sector in which it operates, and the companies in the upper part of the plot are those with the highest value for the indicator. On the horizontal axis, we vary the investor's \(\lambda\). Tables 4 and 5 in Appendix D report the names of the companies in the top and bottom five positions for a selected sample of indicators and values of \(\lambda\).22
Footnote 22: For brevity we report only the measures for the 95th quantile. The complete rankings and the results for the 99th quantile are available upon request.

Figure 2: Visual representation of the ranking of the assets in the DJIA index based on ESG-AVaR\({}_{\lambda,\tau}\) (\(\tau\) equal to 0.95 and 0.99), ESG-\(\mu_{\lambda}\), the ESG Rachev ratio (\(\tau\) equal to 0.95 and 0.99), the standard deviation, ESG-STARR\({}_{\alpha,\lambda}\) (the STAR ratio with p = 2), and the Sharpe ratio. Lambda values are arranged on the X axis from 0 (only the monetary component considered, left) to 1 (only the ESG component considered, right), and companies are color-coded by sector.
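A minimal sketch of the ranking exercise (ours, on simulated placeholder data rather than the actual DJIA dataset) may clarify the computation behind Figure 2:

```python
import numpy as np
import pandas as pd

def avar(y: np.ndarray, tau: float) -> float:
    k = max(1, int(np.ceil((1.0 - tau) * y.size)))
    return -np.sort(y)[:k].mean()

# Sketch (ours) of the ranking exercise behind Figure 2, on simulated
# placeholder data rather than the actual DJIA dataset.
rng = np.random.default_rng(5)
assets = [f"asset_{i}" for i in range(5)]
rets = {a: rng.normal(0.0004 * (i + 1), 0.010 + 0.002 * i, 1_000)
        for i, a in enumerate(assets)}
esg_daily = {a: np.full(1_000, (0.2 + 0.15 * i) / 252)
             for i, a in enumerate(assets)}

ranks = {}
for lam in np.linspace(0.0, 1.0, 5):
    risk = {a: avar((1 - lam) * rets[a] + lam * esg_daily[a], 0.95) for a in assets}
    ranks[round(lam, 2)] = pd.Series(risk).rank()   # rank 1 = lowest ESG-AVaR
print(pd.DataFrame(ranks))
```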
We make the following observations:23
Footnote 23: We emphasize that the range of \(\lambda\) that we considered is quite extreme, and realistically an investor would choose a small value of \(\lambda\) to generate only minor variations compared to traditional market portfolios focused only on monetary considerations.
* ESG-AVaR tends to give similar rankings for similar levels of \(\lambda\) up to 0.5; then, the ranking changes significantly and converges to a significantly different ranking for \(\lambda=1\) compared to \(\lambda=0\). We can explain this behavior by taking into consideration that the ESG scores are characterized by a very small variability over time; hence, they tend to behave mostly as shifts of the distribution of returns, and the variability of the monetary returns dominates the determination of ESG-AVaR (and thus the ranking) even for relatively large values of \(\lambda\). For values of \(\lambda\) close to 1, the ESG component becomes dominant in the determination of the ranking. The results are similar for \(\tau=0.95\) and \(\tau=0.99\); the main difference is that for the highest value of \(\tau\), the monetary component remains dominant for higher values of \(\lambda\).
* The ranking according to the Rachev ratio is also stable for \(\lambda\lessapprox 0.5\). This is related to the fact that this ratio is computed as the ratio of the AVaRs for the top and bottom tails of the distribution.
* The ranking according to the STAR ratio shows significant changes when \(\lambda\) changes. This can be attributed in large part to the fact that the numerator of the ratio (ESG-\(\mu_{\lambda}\)) changes significantly with \(\lambda\).
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Name & GICS Sector Name & Avg. return & St. dev. returns & Avg. ESG & St. dev. ESG \\ \hline
3M Co. & Industrials & \(-0.031\) & \(0.271\) & \(0.775\) & \(0.002\) \\ American Express Co. & Financials & \(0.218\) & \(0.374\) & \(0.502\) & \(0.004\) \\ Amgen Inc. & Health Care & \(0.077\) & \(0.266\) & \(0.466\) & \(0.002\) \\ Apple Inc. & IT & \(0.379\) & \(0.326\) & \(0.386\) & \(0.003\) \\ Boeing Co. & Industrials & \(0.077\) & \(0.512\) & \(0.594\) & \(0.003\) \\ Caterpillar Inc. & Industrials & \(0.153\) & \(0.332\) & \(0.375\) & \(0.003\) \\ Chevron Corp. & Energy & \(0.068\) & \(0.362\) & \(0.668\) & \(0.004\) \\ Cisco Systems Inc. & IT & \(0.164\) & \(0.290\) & \(0.759\) & \(0.001\) \\ Coca-Cola Co. & Consumer Staples & \(0.075\) & \(0.219\) & \(0.430\) & \(0.006\) \\ Goldman Sachs Group Inc. & Financials & \(0.191\) & \(0.336\) & \(0.552\) & \(0.010\) \\ Home Depot Inc. & Consumer Discretionary & \(0.239\) & \(0.278\) & \(0.515\) & \(0.005\) \\ Honeywell International Inc. & Industrials & \(0.150\) & \(0.276\) & \(0.452\) & \(0.004\) \\ IBM Corp. & IT & \(-0.011\) & \(0.284\) & \(0.506\) & \(0.005\) \\ Intel Corp. & IT & \(0.097\) & \(0.378\) & \(0.758\) & \(0.001\) \\ Johnson \& Johnson & Health Care & \(0.058\) & \(0.215\) & \(0.779\) & \(0.002\) \\ JPMorgan Chase \& Co. & Financials & \(0.181\) & \(0.326\) & \(0.615\) & \(0.003\) \\ McDonald’s Corp. & Consumer Discretionary & \(0.130\) & \(0.249\) & \(0.430\) & \(0.002\) \\ Merck \& Co. Inc. & Health Care & \(0.143\) & \(0.234\) & \(0.595\) & \(0.002\) \\ Microsoft Corp. & IT & \(0.388\) & \(0.297\) & \(0.846\) & \(0.001\) \\ Nike Inc. & Consumer Discretionary & \(0.320\) & \(0.305\) & \(0.423\) & \(0.002\) \\ Procter \& Gamble Co. & Consumer Staples & \(0.148\) & \(0.218\) & \(0.377\) & \(0.004\) \\ Salesforce Inc. & IT & \(0.337\) & \(0.360\) & \(0.313\) & \(0.006\) \\ Travelers Companies Inc. & Financials & \(0.095\) & \(0.293\) & \(0.195\) & \(0.014\) \\ UnitedHealth Group Inc. & Health Care & \(0.241\) & \(0.309\) & \(0.509\) & \(0.005\) \\ Verizon Communications Inc. & Communication Services & \(0.039\) & \(0.195\) & \(0.410\) & \(0.006\) \\ WBA Inc. & Consumer Staples & \(-0.021\) & \(0.340\) & \(0.586\) & \(0.011\) \\ Walmart Inc. & Consumer Staples & \(0.158\) & \(0.231\) & \(0.578\) & \(0.005\) \\ Walt Disney Co. & Communication Services & \(0.184\) & \(0.313\) & \(0.451\) & \(0.006\) \\ \hline \end{tabular}
\end{table}
Table 3: Annualized mean and standard deviation of daily returns and ESG scores.
* The ranking according to the standard deviation is almost identical for all values of \(\lambda\), except for values very close to 1. This is because, as stated previously, the variability of ESG scores is minimal, and the ESG standard deviation is almost exclusively driven by monetary returns. Unlike the AVaR, the standard deviation is not influenced by parallel shifts of the distribution; hence, increasing the value of \(\lambda\) does not cause any changes in the ranking.
* The rankings according to ESG-\(\mu_{\lambda}\), the ESG Sharpe ratio, and the ESG Sortino-Satchell ratio change significantly for different values of \(\lambda\). These changes are driven by the significant differences in the rankings of the average returns and average sustainability, which is also suggested by Panel C in Figure 1.
Overall, the very low volatility of the ESG scores in this analysis greatly affects the results, and the ESG score can almost be treated as constant compared to the returns. This could be related to the lower sampling frequency of the ESG data compared to the returns (annual vs daily), but it is likely to be a property of the data due to a greater stability in the level of sustainability of a company, which is influenced by the characteristics of the productive processes and stable firm-level characteristics; in contrast, stock prices have high uncertainty and volatility because they are forward-looking and reflect the market expectations of the future profitability of a company. Further empirical research and an increase in the quality of the ESG data are required for more extensive results, but the framework proposed here is flexible enough to be used to develop empirical analyses both to answer theoretical questions and to implement viable investment strategies suitable for ESG-oriented investors.
## 6 Conclusions
Individuals and institutions are increasingly aware of the non-monetary impact of their investments, and they are willing to structure their portfolios so that not only are the monetary risk and gains optimal but also the environmental, social, and governance implications are taken into account.
This shift in investors' goals challenges the traditional financial literature, requiring the development of new analytical tools to describe the behavior of ESG-oriented investors. Our work contributes to the literature by introducing ESG-coherent risk measures, a framework for the measurement of risk for investors with both monetary and ESG goals that provides an axiomatic definition in which risk is measured as a function of a bivariate random variable.
The investor in our approach is still rational but follows a multi-dimensional evaluation that considers not only the monetary part but also the sustainability.
This framework extends the traditional coherent risk measures approach of Artzner et al. (1999). The measures we provide can be used in several contexts, and due to the introduction of a parameter \(\lambda\), they can be adapted to individuals with different relative preferences for the monetary and ESG components of risk. We also provide the dual representation, present several examples that generalize univariate risk measures, and introduce related ESG reward-risk ratios.
We stress that the goal of the proposed approach is not to integrate ESG scores to improve monetary risk-adjusted performance, but to take into account the ethical preference of an investor for sustainable assets. This challenges the traditional assumptions of monetary profit-maximizing and risk-minimizing agents.
This paper is only an initial step in the development of a new financial theory that will be capable of describing the behavior of ESG-oriented investors. Future work will study optimal asset allocations, utility theory, and asset pricing for ESG-oriented investors.
## Acknowledgments
Project funded under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.3 - Call for tender No. 341 of 15/03/2022 of Italian Ministry of University and Research funded by the European Union - NextGenerationEU. Award Number: PE_0000018, Concession Decree No. 1558 of 11/10/2022 adopted by the Italian Ministry of University and Research, CUP F83C22001720001, GROWING RESILIENT INCLUSIVE AND SUSTAINABLE - GRINS
## Appendix A ESG-coherent reward measures
Similarly to how we defined ESG-coherent risk measures, we define ESG-coherent reward measures. As discussed by Rachev et al. (2008), in the univariate case, a reward measure can be defined as any functional defined on the space of the random variables of interest, provided that it is isotonic with market preferences. Still, it is useful to formalize axiomatically the characteristics of reward measures. We now extend to the multivariate setting the axioms defined by Rachev et al. (2008), and we add the axiom of lambda homogeneity (LH-M+). Considering a probability space \((\Omega,\mathcal{F},\mathbb{P})\) and a set of bivariate random variables \(\mathcal{X}_{2}\), we can define an ESG-adjusted reward measure as a functional \(\boldsymbol{\theta}_{\lambda}(X):\mathcal{X}_{2}\to\mathbb{R}\cup\{\pm\infty\}\), where \(\lambda\in[0,1]\); it is ESG-coherent if the following axioms are satisfied:
* Super-additivity: if \(X_{1},X_{2}\in\mathcal{X}_{2}\), then \(\boldsymbol{\theta}_{\lambda}(X_{1}+X_{2})\geq\boldsymbol{\theta}_{\lambda}(X _{1})+\boldsymbol{\theta}_{\lambda}(X_{2})\);
* Positive Homogeneity: if \(\beta\geq 0\) and \(X\in\mathcal{X}_{2}\), then \(\boldsymbol{\theta}_{\lambda}(\beta X)=\beta\boldsymbol{\theta}_{\lambda}(X)\);
* Translation Invariance: if \(y\in\mathbb{R}^{2}\), \(\boldsymbol{\theta}_{\lambda}(X+y)=\boldsymbol{\theta}_{\lambda}(X)+\boldsymbol{\theta}_{\lambda}(y)\);
* Monotonicity: if \(X_{1},X_{2}\in\mathcal{X}_{2}\) and \((r_{1}\leq r_{2}\wedge\text{ESG}_{1}\leq\text{ESG}_{2})\) a.s., then \(\boldsymbol{\theta}_{\lambda}(X_{1})\leq\boldsymbol{\theta}_{\lambda}(X_{2})\);
* Lambda Homogeneity: if \(y=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\in\mathbb{R}^{2}\), then \(\boldsymbol{\theta}_{\lambda}(y)=(1-\lambda)y_{1}+\lambda y_{2}\).
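As a quick sanity check of the axioms just listed, consider the simplest candidate \(\boldsymbol{\theta}_{\lambda}(X)=(1-\lambda)\mathbb{E}[r]+\lambda\mathbb{E}[\text{ESG}]\), which satisfies all of them (with equality in super-additivity). The sketch below is not part of the paper: a minimal numerical check in Python, with the simulated data and all names purely illustrative.

```python
# Minimal sketch (not from the paper): the expectation-based reward measure
# theta_lam(X) = (1 - lam) * E[r] + lam * E[ESG] satisfies all five axioms.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 0.3, 100_000

def theta(r, esg):
    # Illustrative choice of an ESG-coherent reward measure.
    return (1 - lam) * r.mean() + lam * esg.mean()

r1, esg1 = rng.normal(0.05, 0.2, n), rng.uniform(0.0, 1.0, n)
r2, esg2 = rng.normal(0.02, 0.1, n), rng.uniform(0.0, 1.0, n)

# Super-additivity (holds with equality for this linear choice).
assert theta(r1 + r2, esg1 + esg2) >= theta(r1, esg1) + theta(r2, esg2) - 1e-12
# Positive homogeneity.
assert np.isclose(theta(2.5 * r1, 2.5 * esg1), 2.5 * theta(r1, esg1))
# Monotonicity (componentwise dominating sample).
assert theta(r1, esg1) <= theta(r1 + 0.1, esg1 + 0.1)
# Translation invariance and lambda homogeneity with y = (y1, y2).
y1, y2 = 0.1, 0.4
assert np.isclose(theta(r1 + y1, esg1 + y2),
                  theta(r1, esg1) + (1 - lam) * y1 + lam * y2)
assert np.isclose(theta(np.full(n, y1), np.full(n, y2)), (1 - lam) * y1 + lam * y2)
print("all five axioms verified for the linear reward measure")
```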
## Appendix B Proof of dual representation of ESG-coherent risk measures
For convenience define \(Z=-X\) where \(X\) is the bivariate vector of monetary returns and ESG. Consider the space \(\mathcal{Z}=\mathcal{L}_{p}(\Omega,\mathcal{F},P;\mathbb{R}^{2})\), \(p\in[1,\infty)\). Its dual space \(\mathcal{Z}^{*}\) is isomorphic to \(\mathcal{L}_{q}(\Omega,\mathcal{F},P;\mathbb{R}^{2})\), where \(q\in(1,\infty]\) is such that \(1/p+1/q=1\) with \(q=\infty\) for \(p=1\).
Assume that \(\boldsymbol{\rho}_{\lambda}:\mathcal{Z}\to\mathbb{R}\cup\{\infty\}\) is a proper convex lower-semicontinuous function. Note that \(\boldsymbol{\rho}_{\lambda}\) is convex when SUB-M and PH-M hold.
Consider the bilinear form \(\langle\cdot,\cdot\rangle\) on the product \(\mathcal{Z}\times\mathcal{Z}^{*}\), which is defined as follows for \(Z\in\mathcal{Z}\) and \(\zeta\in\mathcal{Z}^{*}\):
\[\langle\zeta,Z\rangle=\int_{\Omega}\big{(}\zeta_{1}(\omega)Z_{1}(\omega)+\zeta_{2}(\omega)Z_{2}(\omega)\big{)}dP(\omega).\]
This form provides the corresponding continuous linear functionals on \(\mathcal{Z}\) and \(\mathcal{Z}^{*}\) when equipped with appropriate topologies. For each fixed \(\zeta\in\mathcal{Z}^{*}\), the mapping \(Z\mapsto\langle\zeta,Z\rangle\) is a continuous linear functional on \(\mathcal{Z}\), equipped with the norm topology. For \(p\in(1,\infty)\), all continuous linear functionals on \(\mathcal{Z}^{*}\) have this form. For \(p=1\), we equip \(\mathcal{Z}^{*}\) with the weak\({}^{*}\) topology so that the representation by the bilinear form remains valid. The Fenchel conjugate function \(\boldsymbol{\rho}_{\lambda}^{*}:\mathcal{Z}^{*}\to\overline{\mathbb{R}}\) of the risk measure \(\boldsymbol{\rho}_{\lambda}\) is defined as
\[\boldsymbol{\rho}_{\lambda}^{*}(\zeta)=\sup_{Z\in\mathcal{Z}}\big{\{}\langle \zeta,Z\rangle-\boldsymbol{\rho}_{\lambda}(Z)\big{\}},\]
and the conjugate of \(\boldsymbol{\rho}_{\lambda}^{*}\) (the bi-conjugate function) is defined as
\[\boldsymbol{\rho}_{\lambda}^{**}(Z)=\sup_{\zeta\in\mathcal{Z}^{*}}\big{\{} \langle\zeta,Z\rangle-\boldsymbol{\rho}_{\lambda}^{*}(\zeta)\big{\}}.\]
\(\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\) denotes the domain of \(\boldsymbol{\rho}_{\lambda}^{*}.\) The Fenchel-Moreau Theorem, which is valid in paired spaces, implies that \(\boldsymbol{\rho}_{\lambda}^{**}(Z)=\boldsymbol{\rho}_{\lambda}(Z)\) whenever \(\boldsymbol{\rho}_{\lambda}\) is proper, convex, and lower-semicontinuous.
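Since the whole argument rests on this conjugacy, a discrete one-dimensional illustration may help fix ideas. The sketch below is not from the paper: it computes the conjugate and bi-conjugate of \(f(z)=z^{2}\) on a grid (for which \(f^{*}(\zeta)=\zeta^{2}/4\)) and checks \(f^{**}=f\) away from the grid boundary, where the discrete Legendre transform is accurate.

```python
# Discrete 1-D illustration (not from the paper) of the Fenchel-Moreau
# identity f** = f for a proper convex lower-semicontinuous function.
import numpy as np

z = np.linspace(-5.0, 5.0, 2001)      # primal grid
zeta = np.linspace(-4.0, 4.0, 2001)   # dual grid
f = z ** 2

# Conjugate f*(zeta) = sup_z { zeta*z - f(z) }; analytically zeta^2 / 4.
f_star = (zeta[:, None] * z[None, :] - f[None, :]).max(axis=1)
# Bi-conjugate f**(z) = sup_zeta { zeta*z - f*(zeta) }; should recover f.
f_bistar = (zeta[None, :] * z[:, None] - f_star[None, :]).max(axis=1)

inner = np.abs(z) <= 1.9              # avoid boundary artifacts of the grids
print("max |f*  - zeta^2/4|      :", np.abs(f_star - zeta ** 2 / 4).max())
print("max |f** - f| (inner grid):", np.abs(f_bistar - f)[inner].max())
```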
Claim: The monotonicity axiom holds iff for all \(\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\), \(\zeta\geq 0\) a.s.
Assume that the opposite is true. This means that there exists a set \(\Delta\in\mathcal{F}\) with \(P(\Delta)>0\) such that for \(\omega\in\Delta\), we have \(\zeta_{i}(\omega)<0\) for \(i=1\) or \(i=2\). We define \(\bar{Z}_{i}=\mathbb{1}_{\Delta\cap\{\zeta_{i}<0\}}\), where \(\mathbb{1}_{B}\) is the indicator function of the event \(B\). Take any \(Z\) with support in \(\Delta\) such that \(\boldsymbol{\rho}_{\lambda}(Z)\) is finite and define \(Z_{t}:=Z-t\bar{Z}\). Then for \(t\geq 0\), we have that \(Z_{t}\leq Z\) componentwise, and \(\boldsymbol{\rho}_{\lambda}(Z_{t})\leq\boldsymbol{\rho}_{\lambda}(Z)\) by monotonicity. Consequently,
\[\boldsymbol{\rho}_{\lambda}^{*}(\zeta)\geq\sup_{t\in\mathbb{R}_{+}}\big{\{} \langle\zeta,Z_{t}\rangle-\boldsymbol{\rho}_{\lambda}(Z_{t})\big{\}}\geq\sup _{t\in\mathbb{R}_{+}}\big{\{}\langle\zeta,Z\rangle-t\langle\zeta,\bar{Z} \rangle-\boldsymbol{\rho}_{\lambda}(Z)\big{\}}.\]
On the right-hand side we have \(\langle\zeta,\bar{Z}\rangle<0\), since the integrand is negative on \(\Delta\) and zero elsewhere, while the other terms under the supremum are fixed. Hence, the supremum is infinite and \(\zeta\not\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\).
Conversely, suppose that every \(\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\) is nonnegative. Then for every \(\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\) and \(Z\geq Z^{\prime}\) componentwise, we have
\[\langle\zeta,Z^{\prime}\rangle=\int_{\Omega}\zeta_{1}(\omega)Z_{1}^{\prime}( \omega)+\zeta_{2}(\omega)Z_{2}^{\prime}(\omega)dP(\omega)\leq\int_{\Omega} \zeta_{1}(\omega)Z_{1}(\omega)+\zeta_{2}(\omega)Z_{2}(\omega)dP(\omega)=\langle \zeta,Z\rangle.\]
Consequently
\[\boldsymbol{\rho}_{\lambda}(Z)=\sup_{\zeta\in\mathcal{Z}^{*}}\left\{\langle\zeta, Z\rangle-\boldsymbol{\rho}_{\lambda}^{*}(\zeta)\right\}\geq\sup_{\zeta\in\mathcal{Z}^{*}} \left\{\langle\zeta,Z^{\prime}\rangle-\boldsymbol{\rho}_{\lambda}^{*}(\zeta) \right\}=\boldsymbol{\rho}_{\lambda}(Z^{\prime}).\]
Claim: The positive homogeneity axiom holds iff \(\boldsymbol{\rho}_{\lambda}\) is the support function of \(\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\).
Suppose that \(\boldsymbol{\rho}_{\lambda}(tZ)=t\boldsymbol{\rho}_{\lambda}(Z)\) for all \(Z\in\mathcal{Z}\). For all \(t>0\) and for all \(Z\in\mathcal{Z}\)
\[\boldsymbol{\rho}_{\lambda}^{*}(\zeta)=\sup_{Z\in\mathcal{Z}}\left\{\langle \zeta,Z\rangle-\boldsymbol{\rho}_{\lambda}(Z)\right\}\geq\langle\zeta,tZ \rangle-\boldsymbol{\rho}_{\lambda}(tZ)\]
Thus for all \(t>0\)
\[\boldsymbol{\rho}_{\lambda}^{*}(\zeta)=\sup_{Z\in\mathcal{Z}}\left\{\langle \zeta,Z\rangle-\boldsymbol{\rho}_{\lambda}(Z)\right\}\geq\sup_{Z\in\mathcal{Z }}\left\{\langle\zeta,tZ\rangle-t\boldsymbol{\rho}_{\lambda}(Z)\right\}=t \boldsymbol{\rho}_{\lambda}^{*}(\zeta).\]
Hence, if \(\boldsymbol{\rho}_{\lambda}^{*}(\zeta)\) is finite, then \(\boldsymbol{\rho}_{\lambda}^{*}(\zeta)=0\) as claimed. Furthermore,
\[\boldsymbol{\rho}_{\lambda}(0)=\sup_{\zeta\in\mathcal{Z}^{*}}\left\{\langle \zeta,0\rangle-\boldsymbol{\rho}_{\lambda}^{*}(\zeta)\right\}=0.\]
For the converse, if \(\boldsymbol{\rho}_{\lambda}(Z)=\sup_{\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}}\langle\zeta,Z\rangle\), then \(\boldsymbol{\rho}_{\lambda}\) is positively homogeneous as a support function of a convex set. Hence, when the positive homogeneity property holds, the conjugate function is the indicator function (in the sense of convex analysis) of the set \(\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\).
Claim: The translation axiom \(\boldsymbol{\rho}_{\lambda}(Z+a)=\boldsymbol{\rho}_{\lambda}(Z)+\boldsymbol{\rho}_{\lambda}(a)\) holds if and only if \(\int_{\Omega}\zeta(\omega)P(d\omega)=\mu\), where \(\mu\) is a constant vector and \(\boldsymbol{\rho}_{\lambda}(a)=\langle\mu,a\rangle\). We write \(\mu_{\zeta}=\int_{\Omega}\zeta(\omega)P(d\omega)\) and fix an arbitrary constant vector \(a\in\mathbb{R}^{2}\). We infer
\[\boldsymbol{\rho}_{\lambda}^{*}(\zeta)=\sup_{Z\in\mathcal{Z}} \left\{\langle\zeta,Z+a\rangle-\boldsymbol{\rho}_{\lambda}(Z+a)\right\}=\sup_ {Z\in\mathcal{Z}}\left\{\int_{\Omega}\langle Z+a,\zeta(\omega)\rangle P(d \omega)-\boldsymbol{\rho}_{\lambda}(Z)-\boldsymbol{\rho}_{\lambda}(a)\right\}\] \[=\boldsymbol{\rho}_{\lambda}^{*}(\zeta)-\boldsymbol{\rho}_{ \lambda}(a)+\langle a,\mu_{\zeta}\rangle.\]
Hence, \(\boldsymbol{\rho}_{\lambda}(a)=\langle a,\mu_{\zeta}\rangle\) for all \(\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\) and for all constant vectors \(a\). This is possible only if
\[\mu_{\zeta}=\mu\quad\forall\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}.\]
For the converse, consider
\[\boldsymbol{\rho}_{\lambda}(Z+a)=\sup_{\zeta\in\mathcal{Z}^{*}}\left\{ \langle\zeta,Z+a\rangle-\boldsymbol{\rho}_{\lambda}^{*}(\zeta)\right\}=\sup_{ \zeta\in\mathcal{Z}^{*}}\left\{\int_{\Omega}\langle Z,\zeta(\omega)\rangle P(d \omega)-\boldsymbol{\rho}_{\lambda}^{*}(\zeta)+\int_{\Omega}\langle a,\zeta( \omega)\rangle P(d\omega)\right\}\\ =\boldsymbol{\rho}_{\lambda}(Z)+\langle a,\mu\rangle.\]
Claim: If additionally LH-M holds, then \(\mu=\left[(1-\lambda),\lambda\right]^{\prime}\) and, hence, for all \(\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\), we have
\[\int_{\Omega}\big{(}\zeta_{1}(\omega)+\zeta_{2}(\omega)\big{)}P(d\omega)=1.\]
In summary, when axioms SUB-M through LH-M are satisfied, then the dual representation of the risk measure is
\[\boldsymbol{\rho}_{\lambda}(X) =\sup_{\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}}-\int_{ \Omega}\zeta_{1}(\omega)r(\omega)+\zeta_{2}(\omega)\text{ESG}(\omega)P(d\omega) \tag{34}\] \[\boldsymbol{\rho}_{\lambda}(X) =-\inf_{\zeta\in\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}}\int_{ \Omega}\zeta_{1}(\omega)r(\omega)+\zeta_{2}(\omega)\text{ESG}(\omega)P(d\omega)\]
where \(\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\) contains non-negative functions \((\zeta_{1}(\omega),\zeta_{2}(\omega))\) with values in \(\mathbb{R}^{2}\) whose expected value is \(\left[(1-\lambda),\quad\lambda\right]^{\prime}\). Furthermore, \(\mathcal{A}_{\boldsymbol{\rho}_{\lambda}}\) is equal to the convex subdifferential of \(\boldsymbol{\rho}_{\lambda}\) at \([0,0]^{\prime}\). Note that in (34) we adjusted the signs since we express the risk measure in terms of \(X\) and not \(Z=-X\).
## Appendix C Proofs of dual representation for ESG-AVaR\({}_{\lambda,\tau}\) and ESG-AVaR\({}_{\lambda,\tau}^{l}\)
Let \(X=[r,\text{ ESG}]^{\prime}\in\mathcal{X}_{2}=\mathcal{L}_{1}(\Omega,\mathcal{F},P; \mathbb{R}^{2})\) be a bivariate random variable associated with an asset, and for convenience, define \(Z:=[Z_{1},\ Z_{2}]^{\prime}=-X\) (i.e., the corresponding vector with inverted signs).
**Dual representation for ESG-AVaR\({}_{\lambda,\tau}\)**
Here, we prove that ESG-AVaR\({}_{\lambda,\tau}(X)\) has the following dual representation:
\[\text{ESG-AVaR}_{\lambda,\tau}(X)=\sup_{[\zeta_{1}\ \zeta_{2}]^{ \prime}\in\mathcal{A}_{\text{ESG-AVaR}_{\lambda,\tau}}}\mathbb{E}[Z_{1}\zeta _{1}+Z_{2}\zeta_{2}],\quad\text{with}\] \[\mathcal{A}_{\text{ESG-AVaR}_{\lambda,\tau}}=\left\{[\zeta_{1},\ \zeta_{2}]^{\prime}\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2} ):\ [\zeta_{1},\ \zeta_{2}]^{\prime}=\xi[1-\lambda,\ \lambda]^{\prime};\ 0\leq\xi\leq \frac{1}{1-\tau}\ \text{a.s.}\ \mathbb{E}[\xi]=1\right\}.\]
We assume \(\mathcal{X}_{2}=\mathcal{L}_{1}(\Omega,\mathcal{F},P;\mathbb{R}^{2})\), which entails that the paired space is \(\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2})\).
\[\boldsymbol{\rho}(Z) =\min_{\beta\in\mathbb{R}}\Big{\{}\frac{1}{1-\tau}\mathbb{E}\big{[} \big{(}\beta-\big{(}(1-\lambda)r+\lambda\text{ESG}\big{)}\big{)}_{+}\big{]}- \beta\Big{\}},\] \[=\min_{\beta\in\mathbb{R}}\Big{\{}\beta+\frac{1}{1-\tau}\mathbb{E }\big{[}\big{(}(1-\lambda)Z_{1}+\lambda Z_{2}-\beta\big{)}_{+}\big{]}\Big{\}}, \quad\tau\in(0,1).\]
Using the rules of subdifferential calculus and Strassen's theorem (Strassen (1965)), we get
\[\partial\mathbb{E}\big{[}\big{(}(1-\lambda)Z_{1}+\lambda Z_{2}-\beta\big{)}_{+}\big{]}=[1-\lambda,\ \lambda]^{\prime}\xi,\quad\text{where }\xi(\omega)=\begin{cases}1&\text{if }(1-\lambda)Z_{1}+\lambda Z_{2}>\beta\\ 0&\text{if }(1-\lambda)Z_{1}+\lambda Z_{2}<\beta\\ [0,1]&\text{if }(1-\lambda)Z_{1}+\lambda Z_{2}=\beta.\end{cases}\]
Note that \(\xi\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P)\) and \(\xi\geq 0\). We define \(\zeta\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2})\) by setting \(\zeta=[1-\lambda,\ \lambda]^{\prime}\xi\) for any measurable selection \(\xi\in\partial\mathbb{E}\big{[}\big{(}(1-\lambda)Z_{1}+\lambda Z_{2}-\beta\big{)}_{+}\big{]}.\) Let \(\bar{\mathcal{A}}\) be the set containing all such elements \(\zeta\). Evidently, we have \(\boldsymbol{0}\leq\zeta\leq[1-\lambda,\ \lambda]^{\prime}\) a.s. for all such \(\zeta\). Using the subgradient inequality at \(\bar{Z}=(\beta,\beta)\), we obtain for all \(Z\)
\[\mathbb{E}\big{[}\big{(}((1-\lambda)Z_{1}+\lambda Z_{2})-\beta\big{)}_{+} \big{]}\geq\langle\zeta,Z-\beta(1,1)\rangle=\langle\zeta,Z\rangle-\beta \mathbb{E}[\xi].\]
On the other hand, for any \(\zeta=[1-\lambda,\ \lambda]^{\prime}\xi\)
\[\langle\zeta,Z\rangle=\langle[1-\lambda,\ \lambda]^{\prime}\xi,Z- \beta(1,1)\rangle+\beta\langle[1-\lambda,\ \lambda]^{\prime}\xi,(1,1)\rangle\\ =\langle\xi,((1-\lambda)Z_{1}+\lambda Z_{2}-\beta)\rangle+\beta \mathbb{E}[\xi]\leq\mathbb{E}\big{[}\big{(}(1-\lambda)Z_{1}+\lambda Z_{2}- \beta)_{+}\big{]}+\beta\mathbb{E}[\xi].\]
Hence, we can represent
\[\mathbb{E}\big{[}\big{(}(1-\lambda)Z_{1}+\lambda Z_{2}-\beta\big{)}_{+}\big{]}=\max_{\zeta\in\bar{\mathcal{A}}}\Big{(}\langle\zeta,Z\rangle-\beta\mathbb{E}[\xi]\Big{)}.\]
Now, we can express the risk measure as follows:
\[\mathbf{\rho}(Z) =\min_{\beta\in\mathbb{R}}\bigg{\{}\beta+\frac{1}{1-\tau}\max_{ \zeta\in\tilde{\mathcal{A}}}\Big{(}\langle\zeta,Z\rangle-\beta\mathbb{E}[\xi] \Big{)}\bigg{\}}\] \[=\max_{\zeta\in\tilde{\mathcal{A}}}\inf_{\beta\in\mathbb{R}}\bigg{\{} \beta+\frac{1}{1-\tau}\langle\zeta,Z\rangle-\frac{\beta}{1-\tau}\mathbb{E}[ \xi]\bigg{\}}=\max_{\zeta\in\tilde{\mathcal{A}}}\inf_{\beta\in\mathbb{R}} \bigg{\{}\beta(1-\frac{1}{1-\tau}\mathbb{E}[\xi])+\frac{1}{1-\tau}\langle \zeta,Z\rangle\bigg{\}}.\]
The exchange of the "min" and "max" operations is possible, because the function in braces is bilinear in \((\beta,\zeta)\), and the set \(\tilde{\mathcal{A}}\) is compact. The inner minimization with respect to \(\beta\) yields \(-\infty\), unless \(\mathbb{E}[\xi]=1-\tau\). Consequently,
\[\mathbf{\rho}(Z)=\max_{\zeta=\xi[1-\lambda,\ \lambda]^{\prime}\in\bar{\mathcal{A}}\atop\mathbb{E}[\xi]=1-\tau}\frac{1}{1-\tau}\langle Z,\zeta\rangle.\]
Setting \(\zeta^{\prime}=\zeta/(1-\tau)\), we obtain the support set
\[\partial\mathbf{\rho}(0)=\bigg{\{}\zeta\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2}):\zeta=\frac{\xi}{1-\tau}[1-\lambda,\lambda]^{\prime};\xi \geq 0,\ \mathbb{E}[\xi]=1-\tau,\ \|\xi\|_{\infty}\leq 1\bigg{\}}.\]
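To make the primal-dual equivalence concrete, here is a numerical sketch (not from the paper; Python, simulated data) that evaluates ESG-AVaR\({}_{\lambda,\tau}\) on an equally weighted sample both through the minimization formula above and through the greedy solution of the dual problem, which places the maximal density \(\xi=\frac{1}{1-\tau}\) on the largest values of \((1-\lambda)Z_{1}+\lambda Z_{2}\).

```python
# Primal vs dual evaluation of ESG-AVaR_{lam,tau} on n equally weighted
# scenarios (illustrative sketch, not from the paper).
import numpy as np

rng = np.random.default_rng(1)
n, lam, tau = 100_000, 0.4, 0.95
r, esg = rng.normal(0.05, 0.2, n), rng.beta(2.0, 2.0, n)
z1, z2 = -r, -esg                            # Z = -X
s = (1 - lam) * z1 + lam * z2                # scalarized "loss"

# Primal: min_beta { beta + E[(s - beta)_+] / (1 - tau) }.  The objective is
# piecewise linear with kinks at the sample points, so scanning them suffices.
ss = np.sort(s)
tail_sum = np.cumsum(ss[::-1])[::-1]         # sum of ss[j:] for each j
counts = n - np.arange(n)                    # number of terms in each tail
primal = (ss + (tail_sum - ss * counts) / ((1 - tau) * n)).min()

# Dual: max E[xi * s] over 0 <= xi <= 1/(1-tau), E[xi] = 1 (greedy solution:
# full cap on the largest scenarios, leftover mass on the boundary one).
s_desc = np.sort(s)[::-1]
xi = np.zeros(n)
m = int(np.floor((1 - tau) * n))
xi[:m] = 1.0 / (1 - tau)
if m < n:
    xi[m] = n - m / (1 - tau)                # so that mean(xi) = 1 exactly
dual = (xi * s_desc).sum() / n

print(f"primal = {primal:.6f}, dual = {dual:.6f}")  # agree up to rounding
```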
**Dual representation for ESG-AVaR\({}^{l}_{\lambda,\tau}\)**
We now prove that the dual representation of ESG-AVaR\({}^{l}_{\lambda,\tau}(X)\) is defined as follows:
\[\text{ESG-AVaR}^{l}_{\lambda,\tau}(X)=\sup_{[\zeta_{1},\ \zeta_{2}]^{\prime}\in \mathcal{A}_{\text{ESG-AVaR}_{\lambda,\tau}}}\mathbb{E}[Z_{1}\zeta_{1}+Z_{2} \zeta_{2}], \tag{35}\]
with \(Z:=[Z_{1},\ Z_{2}]^{\prime}=-X\) and
\[\mathcal{A}_{\text{ESG-AVaR}^{l}_{\lambda,\tau}}=\left\{[\zeta_{1},\ \zeta_{2}]^{\prime}\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P;\mathbb{R}^{2 }):\mathbb{E}[\zeta_{1}]=1-\lambda;\mathbb{E}[\zeta_{2}]=\lambda;\zeta_{1}, \zeta_{2}\geq 0;\zeta_{1}\leq\frac{1-\lambda}{1-\tau};\zeta_{2}\leq\frac{ \lambda}{1-\tau}\right\}. \tag{36}\]
We use the known representation of Average Value at Risk for scalar random variables (cf. Shapiro et al., 2021, Example 6.19 eq. 6.76). Denote the dual set in that representation by \(\mathcal{A}^{\prime}\), i.e.,
\[\mathcal{A}^{\prime}=\left\{\xi\in\mathcal{L}_{\infty}(\Omega,\mathcal{F},P):\ \mathbb{E}[\xi]=1;\ 0\leq\xi\leq\frac{1}{1-\tau}\ \text{a.s.}\right\},\]
Hence
\[\text{ESG-AVaR}^{l}_{\lambda,\tau}(X) =(1-\lambda)\sup_{\xi\in\mathcal{A}^{\prime}}\mathbb{E}[\xi Z_{1}]+\lambda\sup_{\xi\in\mathcal{A}^{\prime}}\mathbb{E}[\xi Z_{2}]\] \[=\sup_{\xi\in\mathcal{A}^{\prime}}\mathbb{E}[(1-\lambda)\xi Z_{1}]+\sup_{\xi\in\mathcal{A}^{\prime}}\mathbb{E}[\lambda\xi Z_{2}]\] \[=\sup_{[\xi_{1},\xi_{2}]^{\prime}\in\mathcal{A}^{\prime}\times\mathcal{A}^{\prime}}\Big{(}\mathbb{E}[(1-\lambda)\xi_{1}Z_{1}]+\mathbb{E}[\lambda\xi_{2}Z_{2}]\Big{)}. \tag{37}\]
Now, we define the set \(\mathcal{A}_{\text{ESG-AVaR}^{l}_{\lambda,\tau}}=(1-\lambda)\mathcal{A}^{ \prime}\times\lambda\mathcal{A}^{\prime}\) and continue the last chain of equations as follows:
\[\text{ESG-AVaR}^{l}_{\lambda,\tau}(X)=\sup_{[\xi_{1},\xi_{2}]^{\prime}\in\mathcal{A}^{\prime}\times\mathcal{A}^{\prime}}\Big{(}\mathbb{E}[(1-\lambda)\xi_{1}Z_{1}]+\mathbb{E}[\lambda\xi_{2}Z_{2}]\Big{)}=\sup_{\zeta\in\mathcal{A}_{\text{ESG-AVaR}^{l}_{\lambda,\tau}}}\langle\zeta,Z\rangle.\]
This concludes the proof.
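Since every \(\zeta=\xi[1-\lambda,\ \lambda]^{\prime}\) in \(\mathcal{A}_{\text{ESG-AVaR}_{\lambda,\tau}}\) also satisfies the constraints in (36), the first dual set is contained in the second, and hence \(\text{ESG-AVaR}_{\lambda,\tau}(X)\leq\text{ESG-AVaR}^{l}_{\lambda,\tau}(X)\). The sketch below (not from the paper; Python, simulated data) checks this ordering numerically, computing both measures as discrete Average Values at Risk.

```python
# Ordering check (illustrative, not from the paper):
# ESG-AVaR_{lam,tau}(X) <= ESG-AVaR^l_{lam,tau}(X).
import numpy as np

def avar(losses, tau):
    """Discrete AVaR_tau of equally weighted losses (greedy dual weights)."""
    n = losses.size
    s = np.sort(losses)[::-1]
    xi = np.zeros(n)
    m = int(np.floor((1 - tau) * n))
    xi[:m] = 1.0 / (1 - tau)
    if m < n:
        xi[m] = n - m / (1 - tau)
    return (xi * s).sum() / n

rng = np.random.default_rng(2)
n, lam, tau = 100_000, 0.4, 0.9
z1, z2 = -rng.normal(0.05, 0.2, n), -rng.beta(2.0, 2.0, n)    # Z = -X

combined = avar((1 - lam) * z1 + lam * z2, tau)               # ESG-AVaR
decomposed = (1 - lam) * avar(z1, tau) + lam * avar(z2, tau)  # ESG-AVaR^l
print(combined, decomposed)
assert combined <= decomposed + 1e-12    # dual feasible sets are nested
```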
# Spectral gap and embedded trees for the Laplacian of the Erdos-Renyi graph
###### Abstract
For the Erdos-Renyi graph of size \(N\) with mean degree \((1+o(1))\frac{\log N}{t+1}\leq d\leq(1-o(1))\frac{\log N}{t}\) where \(t\in\mathbb{N}^{*}\), with high probability the smallest non zero eigenvalue of the Laplacian is equal to \(2-2\cos(\pi(2t+1)^{-1})+o(1)\). This eigenvalue arises from a small subgraph isomorphic to a line of size \(t\) linked to the giant connected component by only one edge.
## 1 Introduction
### Introduction
The Erdos-Renyi graph is the most natural and simple model for random graphs [1]: from the complete graph with \(N\) vertices, every edge is kept with probability \(p=p_{N}\) independently. The asymptotic behavior has been extensively studied [1, 2] for various regimes of \(p_{N}\) and has shown a large number of phenomena, for example, the appearance of a component of macroscopic size for \(p_{N}>\frac{1}{N}\) or the connectivity of the graph for \(p_{N}>\frac{\log N}{N}\).
Together with the adjacency matrix, the Laplacian is a canonical way to encode the structure of the graph. It appears in physics as the Hamiltonian of a quantum particle living on the graph or, in probability, as the generator associated with the random walk on the graph. Spectral analysis of the Laplacian of the Erdos-Renyi graph has recently enjoyed new results. The bulk of the spectrum has been analyzed in [10] down to polynomial regimes and to \(d\geq\sqrt{\log N}\) in [11, Chapter 2] where local laws down to optimal scale were shown for the Green function. In particular, in the supercritical regime, \(d\geq\log N\) the law of the spectrum in the bulk can be characterized by the free-convolution between a Gaussian law (coming from the diagonal matrix of the degree) and a semi-circle law (coming from the adjacency matrix). The right edge of the spectrum has been studied in [11, Chapter 3] where the author showed that the largest eigenvalues are matched with the largest degrees in the graph (see also
[3, 4]). In a series of papers [1, 2, 1, 3] the authors had previously achieved similar results for the adjacency matrix. For the left edge of the spectrum of the Laplacian in the supercritical regime \(d\geq\log N\), a correspondence can be established between the smallest eigenvalues and small-degree vertices (see [13, Chapter 3]).
In this paper, we study the spectrum of the Laplacian for subcritical regimes \(d\leq\log N\) and compute its smallest non-zero eigenvalue \(\lambda_{2}\), also called the spectral gap. Contrary to the supercritical regime, for \(d\leq\log N\) the graph becomes disconnected and small connected components begin to appear. We recall that the Laplacian is a positive matrix with \(\lambda_{1}=0\) whose multiplicity is equal to the number of connected components. A natural guess would be that \(\lambda_{2}\) is obtained from the Laplacian restricted to the small disconnected clusters. We prove that the smallest eigenvalues are created by _small clusters connected to the giant connected component by only one edge_. Moreover, among all these small clusters, the line of maximal length is the optimal subgraph to minimize the eigenvalue. We then obtain an explicit formula \(\lambda_{2}=2-2\cos\big{(}\pi(2t_{*}+1)^{-1}\big{)}+o(1)\) with \(t_{*}\in\mathbb{N}^{*}\) corresponding to the size of the largest but non-giant disconnected component.
### Model
For \(G=(V,E)\) a graph with \(V\) the set of vertices and \(E\) the set of edges we denote \(L(G)\in\mathbb{R}^{|V|\times|V|}\) the Laplacian, \(A(G)\in\mathbb{R}^{|V|\times|V|}\) the adjacency matrix and \(D(G)\in\mathbb{R}^{|V|\times|V|}\) the diagonal matrix of the degree associated with the graph defined by
\[L(G)=\sum_{e\in E}L(e),\qquad A(G)=\sum_{e\in E}A(e),\qquad L(G)=D(G)-A(G)\]
where for every \(e=(x,y)\in V^{2}\)
\[L(e)\coloneqq L((x,y))=(1_{x}-1_{y})(1_{x}-1_{y})^{*},\quad A(e)\coloneqq A( (x,y))=1_{x}1_{y}^{*}+1_{y}1_{x}^{*}.\]
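As an elementary illustration (not from the paper), the edge-sum definition can be checked directly in a few lines of Python on a toy graph:

```python
# Toy check (not from the paper) of the edge-sum definitions above:
# L(G) = sum_e (1_x - 1_y)(1_x - 1_y)^*  equals  D(G) - A(G).
import numpy as np

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]   # an arbitrary small graph

L = np.zeros((n, n))
A = np.zeros((n, n))
for x, y in edges:
    v = np.zeros(n)
    v[x], v[y] = 1.0, -1.0
    L += np.outer(v, v)            # L(e) = (1_x - 1_y)(1_x - 1_y)^*
    A[x, y] = A[y, x] = 1.0        # A(e) = 1_x 1_y^* + 1_y 1_x^*

D = np.diag(A.sum(axis=1))
assert np.allclose(L, D - A)            # L(G) = D(G) - A(G)
assert np.allclose(L @ np.ones(n), 0)   # constant vectors lie in the kernel
```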
We consider the Erdos-Renyi graph \(\mathbb{G}_{N,d_{N}}\) defined with set of vertices \(V=\{1,\cdots,N\}\) and set of edges \(E\) where each \((x,y)\in V^{2}\), \(x\neq y\) is added to the graph at random, independently and with probability \(p_{N}=\frac{d_{N}}{N}\). We call \(d_{N}\) the mean degree of \(\mathbb{G}_{N,d_{N}}\).
We say that a sequence of events \(\Omega=(\Omega_{N})_{N\in\mathbb{N}}\) on the Erdos-Renyi graph of size \(N\) occurs with high probability if \(\mathbb{P}(\Omega_{N})\to 1\) as \(N\to\infty\).
### Main results
We are interested in the first non-zero eigenvalue of the Laplacian that we denote
\[\lambda_{2}:=\min\bigl{(}\operatorname{Spec}(L(\mathbb{G}_{N,d_{N}}))\setminus \{0\}\bigr{)}.\]
Here is our main result.
**Theorem 1.1**.: _Let \(t_{*}\in\mathbb{N}^{*}\), \(\epsilon>0\) and \(d_{N}\) such that_
\[(1+\epsilon)\frac{\log N}{t_{*}+1}\leq d_{N}\leq(1-\epsilon)\frac{\log N}{t_{*}}. \tag{1}\]
_As \(N\to\infty\) with high probability, we have_
\[\lambda_{2}=2-2\cos\left(\frac{\pi}{2t_{*}+1}\right)+O\left(\frac{1}{\log N} \right). \tag{2}\]
As we will see in the proof, the value of \(\lambda_{2}\) is closely linked with the existence of lines of length \(t_{*}\) that are connected to the rest of the graph by exactly one edge. The picture below summarizes the various regimes covered.
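For intuition only, and not as part of the paper, the theorem can be probed numerically at moderate \(N\); with \(t_{*}=1\) the prediction is \(\lambda_{2}\approx 2-2\cos(\pi/3)=1\), up to the \(O(1/\log N)\) correction, which is still sizable at simulable sizes.

```python
# Illustrative finite-N probe of Theorem 1.1 (not from the paper).
import numpy as np

rng = np.random.default_rng(3)
N, t_star = 2000, 1
d = np.log(N) / (t_star + 0.5)               # middle of the regime (1)

U = rng.random((N, N)) < d / N               # i.i.d. Bernoulli(d/N) entries
A = np.triu(U, 1).astype(float)
A = A + A.T                                  # symmetric adjacency, no loops
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

ev = np.linalg.eigvalsh(L)
lam2 = ev[ev > 1e-8][0]                      # smallest eigenvalue off the kernel
pred = 2 - 2 * np.cos(np.pi / (2 * t_star + 1))
print(f"lambda_2 ~ {lam2:.3f}  vs  predicted {pred:.3f} (up to O(1/log N))")
```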
## 2 Proof of Theorem 1.1
### Notations
In the rest of the paper we simply write \(\mathbb{G}=\mathbb{G}_{N,d_{N}}\) and \(d=d_{N}\). We recall that a multiset is a set where elements can appear multiple times. We use the notation \(\widehat{X}\) for a multiset, \(X\) for the associated set and for \(x\in X\) we write \(\widehat{X}(x)\) for the multiplicity of \(x\). For example with \(\widehat{X}=\{x,x,y\}\) we have \(X=\{x,y\}\), \(\widehat{X}(x)=2\), \(\widehat{X}(y)=1\). We also write \(\hat{X}\subset[[Y]]\) when \(X\subset Y\).
**Definition 2.1**.: For a graph \(G=(V,E)\) and subgraph \(T\subset V\) we define the anchor \(\widehat{\partial T}\) as the multiset
\[\widehat{\partial T}=\{x\in T\colon(x,y)\in E,y\notin T\}\]
It corresponds to the _outgoing degree_ of the vertices of \(T\).
Figure 1: Illustration of the regimes covered by Theorem 1.1. As the density parameter \(d\) decreases, the graph becomes sparser and longer lines appear. Some of those lines are connected by exactly one edge to the main connected component and generate the formula (2) for \(\lambda_{2}\). The regimes corresponding to \(d\geq\log N\) (small degree vertices) are not covered in this paper but a detailed analysis can be found in [11, Chapter 3].
**Definition 2.2**.: For a graph \(G=(V,E)\) and a multiset \(\hat{X}\subset[[V]]\) we define the weighted Laplacian
\[L\big{(}G;\widehat{X}\big{)}=L(G)+\sum_{x\in\widehat{X}}1_{x}1_{x}^{*}=L(G)+ \sum_{x\in X}\hat{X}(x)1_{x}1_{x}^{*}.\]
_Remark 2.3_.: Given \(T\subset V\), the weighted Laplacian corresponds to the usual matrix restriction, \(L(T;\widehat{\partial T})=L(G)|_{T}\).
### Proof of Theorem 1.1
Throughout the paper \(t_{*}\in\mathbb{N}^{*}\), \(\epsilon\in(0,\frac{1}{4})\) and, as \(-\tau\log\tau+\tau\to 0\) for \(\tau\to 0\), we can choose \(\tau\in(0,1)\) small enough such that
\[-\tau\log\tau+\tau\leq\frac{\epsilon}{2}. \tag{3}\]
Define
\[\mathcal{V}=\{x\in[N],D_{x}\leq\tau d\}.\]
We call a line of size \(t\in\mathbb{N}\) the graph
\[\mathbb{L}_{t}=\big{(}\{1,\cdots,t\},E\big{)}\text{ with }E=\{(i,i+1)\colon 1 \leq i<t\}\]
**Proposition 2.4**.: _With high probability_
1. \(\mathbb{G}|_{\mathcal{V}}\) _is a forest._
2. _For any tree_ \(T\subset\mathbb{G}|_{\mathcal{V}}\) _we have_ \(|T|\leq t_{*}\)_._
3. _There exists a tree_ \(T\subset\mathbb{G}|_{\mathcal{V}}\) _that is isomorphic to a line of size_ \(t_{*}\) _and such that it is connected to_ \(\mathcal{V}^{c}\) _by exactly one edge attached at one of its two extremal points._
Denote \(\mathfrak{F}\) the set of trees in the forest \(\mathbb{G}|_{\mathcal{V}}\).
**Proposition 2.5**.: _With high probability_
\[\sum_{\lambda\in Spec(L(\mathbb{G}))}\delta_{\lambda},\qquad\text{and}\qquad \delta_{0}+\sum_{T\in\mathfrak{F}}\sum_{\mu\in Spec(L(T;\widehat{\partial T}) )}\delta_{\mu+\epsilon_{\mu}}\]
_agree on the interval \([0,\frac{1}{2}\tau d]\), with \(\epsilon_{\mu}=O\big{(}(1+\mu)d^{-1}\big{)}\) for all \(\mu\)._
**Proposition 2.6**.: _For any tree \(T\) of size \(|T|\leq t_{*}\), \(\widehat{X}\subset[[T]]\) and \(\mu\in Spec(L(T;\widehat{X}))\) with \(\mu>0\) we have_
\[\mu\geq 2-2\cos\left(\frac{\pi}{2t_{*}+1}\right). \tag{4}\]
_Moreover equality holds if and only if \(T\) is a line of size \(t_{*}\), \(\widehat{X}\) is one of its extremal points, and \(\mu\) is its smallest eigenvalue._
We prove Proposition 2.4 in Section 3, Proposition 2.5 in Section 4 and Proposition 2.6 in Section 5. We then deduce Theorem 1.1 from these Propositions.
Proof of Theorem 1.1.: Because of Proposition 2.5 it is enough to study the spectrum of \(\operatorname{Spec}(L(T;\widehat{\partial T}))\) with \(T\in\mathfrak{F}\) and \(\widehat{\partial T}\) the associated anchor. Using item 2 of Proposition 2.4 and Equation (4) of Proposition 2.6 we obtain that with high probability
\[\lambda_{2}\geq 2-2\cos\left(\frac{\pi}{2t_{*}+1}\right)+O\big{(}d^{-1}\big{)}.\]
On the other hand using item 3 of Proposition 2.4 and the equality case of Proposition 2.6 we also have that with high probability
\[\lambda_{2}\leq 2-2\cos\left(\frac{\pi}{2t_{*}+1}\right)+O\big{(}d^{-1}\big{)}.\]
## 3 Probability estimate on the graph, proof of Proposition 2.4
This section is about properties of the random graph that occur with high probability and that we use in the following sections. Let us begin this section by setting a few notations. We equip the graph \(\mathbb{G}\) with its natural graph distance \(\operatorname{dist}_{\mathbb{G}}\). For \(x\in[N]\) and \(i\in\mathbb{N}\), we define the sphere of radius \(i\) around \(x\) as
\[S_{i}(x):=\{y\in[N]:\,\operatorname{dist}_{\mathbb{G}}(x,y)=i\},\]
as well as the balls of radius \(i\) around \(x\), \(B_{i}(x):=\bigcup_{j\leq i}S_{j}(x)\).
**Lemma 3.1**.: _With high probability, there exists a unique giant connected component \(\mathbb{G}_{cc}\) with size \(N(1-o(1))\). Moreover \(\mathbb{G}_{cc}^{c}\subset\mathcal{V}\)._
Proof.: The existence of the giant connected component is a very classical result, see for example [1]. Moreover, by [1, Theorem 6.10], all the small connected components are of size \(O(1)\), so every vertex of degree at least \(\tau d\) belongs to \(\mathbb{G}_{cc}\).
We now state a few results from [1].
**Proposition 3.2**.: _For some \(\mathcal{C}>0\) and \(r<\frac{\varepsilon\log N}{4(1+t_{*})^{2}\log d}\) with high probability, we have_
1. \(\forall x\in[N],D_{x}\leq\mathcal{C}d\)_,_
2. \(\|A(\mathbb{G})-\mathbb{E}(A(\mathbb{G}))\|\leq\mathcal{C}\sqrt{d}\)
3. \(|\mathcal{V}|\leq N^{1-\frac{1}{1+t_{*}}}\)_,_
4. _For all_ \(x\in\mathcal{V}\)_,_ \(\mathbb{G}|_{B_{r}(x)}\) _is a tree,_
5. _For all_ \(x\in\mathcal{V}\)_,_ \(|B_{r}(x)\cap\mathcal{V}|\leq t_{*}\)_._
Proof of Proposition 3.2.: Item 1 is stated in [1, Lemma 3.3] and Item 2 is stated in [1, Corollary 2.3]. We now prove Item 3. We can apply [1, Lemma D.2] with \(n=N-1\), \(\mu=d(1-N^{-1})\) and \(a=\tau-1+O(N^{-1})\); we find
\[\mathbb{P}(x\in\mathcal{V})\leq e^{-d(\tau\log\tau-\tau+1)}(1+O(N^{-1}))\leq N ^{-\frac{1+\epsilon}{1+t_{*}}(1-\frac{\epsilon}{2})}(1+O(N^{-1}))=O(N^{-\nu}), \tag{5}\]
where we chose \(\tau\) as in (3) and \(\frac{(1+\epsilon)(1-\epsilon/2)}{1+t_{*}}\geq\nu\geq\frac{1+\epsilon/3}{1+t_{*}}\), for \(\epsilon\leq 1/3\). We find that for a \(C\geq 0\) large enough
\[\mathbb{E}|\mathcal{V}|=N\mathbb{P}(x\in\mathcal{V})\leq CN^{1-\nu}\]
so that by Markov's inequality \(\mathbb{P}(|\mathcal{V}|\geq N^{1-\frac{1}{1+t_{*}}})\leq CN^{\frac{1}{1+t_{*}}-\nu}=O\big{(}N^{-\frac{\epsilon}{3(1+t_{*})}}\big{)}\).
To prove Item 4 we use [1, Lemma 5.5] with \(k=1\) and have that for any \(x\in[N]\),
\[\mathbb{P}(x\in\mathcal{V},\,B_{r}(x)\text{ contains a cycle })\leq 2r^{2}(Cd)^{2r+1}N^{-1-\frac{1}{t_{*}+1}}=o\big{(}N^{-1}\big{)},\]
where we use that \((2r+1)\log d+2\log r\leq 4r\log d\leq\frac{4\epsilon\log N}{(1+t_{*})^{2}} \leq\frac{4}{5(t_{*}+1)}\log N\). Using a union bound, we conclude that
\[\mathbb{P}(\exists x\in[N],\,x\in\mathcal{V},\,B_{r}(x)\text{ contains a cycle })=o(1).\]
To prove Item 5 we explore the neighbourhood of \(x\), adding the sets \(S_{i}(x)\) for \(i\leq r\) one after the other. We denote the set of edges
\[E_{i+1}=E(\mathbb{G}(B_{i+1}))\setminus E(\mathbb{G}(B_{i}))\cup E(\mathbb{G }(S_{i}))\]
and the generated sigma algebra \(\mathcal{F}_{i}=\sigma(E_{1},\cdots,E_{i})\). This corresponds to \(\mathbb{G}(B_{i})\) without the set of edges \(\{(x,y)\in E:x,y\in S_{i}\}\). Remark that conditionally on \(\mathcal{F}_{i}\), \(E_{i+1}\) is given by a set of \(|S_{i}|\times(N-|B_{i}|)\) independent random Bernoulli variables of parameter \(\frac{d}{N}\).
For \(y\in S_{i}\), we define the random variables \(\tilde{D}_{y}:=|S_{1}(y)\cap S_{i+1}|\) which form a family of independent random variables with \(\tilde{D}_{y}\leq D_{y}-1\). We introduce \(V_{i}=|S_{i}\cap\mathcal{V}|\) and, using (5), we find that, for any \(m_{1},\ldots,m_{r}\in\mathbb{N}\),
\[1_{|S_{i}|\leq(\mathcal{C}d)^{i}}\mathbb{P}(V_{i} \geq m_{i}|\mathcal{F}_{i})\leq 1_{|S_{i}|\leq(\mathcal{C}d)^{i}} \sum_{\{y_{1},\cdots,y_{m_{i}}\}\subset S_{i}}\mathbb{P}\Big{(}\cap_{j\leq m_{ i}}\{\tilde{D}_{y_{j}}\leq\tau d-1\}|\mathcal{F}_{i}\Big{)}\] \[\leq(\mathcal{C}d)^{im_{i}}N^{-m_{i}\nu}.\]
We denote \(\mathcal{D}(m)=\{(m_{i})\in\mathbb{N}^{r}:\sum_{i=1}^{r}m_{i}=m\}\) and we have
\[1_{\forall y\in B_{r}(x),D_{y}\leq Cd}\mathbb{P}(|B_{r}(x)\cap \mathcal{V}|\geq m|\mathcal{F}_{r}) \leq 1_{\forall y\in B_{r}(x),D_{y}\leq Cd}\sum_{\mathcal{D}(m)} \mathbb{P}\Big{(}\cap_{i=1}^{r}\{V_{i}\geq m_{i}\}|\mathcal{F}_{r}\Big{)}\] \[\leq\sum_{\mathcal{D}(m)}\prod_{i=1}^{r}1_{|S_{i}|\leq(Cd)^{i}} \mathbb{P}(V_{i}\geq m_{i}|\mathcal{F}_{i})\] \[\leq r^{m}(\mathcal{C}d)^{rm}N^{-m\nu}\]
Therefore for \(m=t_{*}+1\) we have \(N^{-m\nu}\leq N^{-(1+\frac{\epsilon}{3})}\) and then
\[\mathbb{P}\big{(}\{\forall y\in[N],\,D_{y}\leq\mathcal{C}d\}\cap\{|B_{r}(x)\cap\mathcal{V}|\geq t_{*}+1\}\big{)}\leq r^{t_{*}+1}(Cd)^{r(t_{*}+1)}N^{-(1+\frac{\epsilon}{3})}=o(N^{-1})\]
where we use that \((1+t_{*})r\log d<\frac{\epsilon\log N}{4(1+t_{*})}\). Using a union bound, we conclude that
\[\mathbb{P}(\exists x\in[N],\ |B_{r}(x)\cap\mathcal{V}|\geq t_{*}+1)=o(1).\]
With Proposition 3.2 we obtain the first two items of Proposition 2.4. We now prove the last item.
Proof of Proposition 2.4, item 3.: We denote \(\mathcal{N}\) the number of trees \(T\subset\mathbb{G}|_{\mathcal{V}}\) that are isomorphic to a line of size \(t_{*}\) and connected to \(\mathcal{V}^{c}\) by exactly one edge attached at one of their extremal points. We use a second-moment method and will prove that
\[\mathbb{E}(\mathcal{N}) =Nd^{t_{*}}e^{-t_{*}d}\left(1+O\left(\frac{t_{*}^{2}d}{N}\right)\right)\] \[\mathbb{E}(\mathcal{N}^{2})-\mathbb{E}(\mathcal{N})^{2} =O\left(\mathbb{E}(\mathcal{N})+1+\frac{t_{*}^{2}d\mathbb{E}( \mathcal{N})^{2}}{N}\right) \tag{6}\]
Assuming Equation (6) we have \(\mathbb{E}(\mathcal{N})\geq N^{1-(1-\epsilon)+o(1)}\to\infty\), where we use (1), and then by Chebyshev's inequality
\[\mathbb{P}(\mathcal{N}=0)\leq\mathbb{P}\Big{(}\mathcal{N}\leq\frac{\mathbb{E} (\mathcal{N})}{2}\Big{)}\leq\frac{4\mathbb{E}(\mathcal{N}-\mathbb{E}\mathcal{ N})^{2}}{\mathbb{E}(\mathcal{N})^{2}}=O\Big{(}\frac{1}{\mathbb{E}(\mathcal{N})} \Big{)}=o(1).\]
We now prove Equation (6). For \(Y=\{y_{1},\cdots,y_{t_{*}},x\}\subset[N]\), let \(\mathcal{L}_{Y}\) be the event that \(\mathbb{G}|_{\{y_{1},\cdots,y_{t_{*}}\}}\) is a line and that \((y_{t_{*}},x)\) is the only edge between \(\{y_{1},\cdots,y_{t_{*}}\}\) and \(\{y_{1},\cdots,y_{t_{*}}\}^{c}\). Then
\[\mathbb{P}(\mathcal{L}_{Y})=\frac{d^{t_{*}}}{N^{t_{*}}}\Big{(}1-\frac{d}{N} \Big{)}^{t_{*}(N-t_{*})+1}=\frac{d^{t_{*}}}{N^{t_{*}}}e^{-dt^{*}}\Big{(}1+O \Big{(}\frac{t_{*}^{2}d}{N}\Big{)}\Big{)}\]
Then
\[\mathbb{E}(\mathcal{N})=\sum_{Y}\mathbb{P}(\mathcal{L}_{Y})=Nd^{t_{*}}e^{-dt_{*}}\Big{(}1+O\Big{(}\frac{t_{*}^{2}d}{N}\Big{)}\Big{)}.\]
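Before turning to the second moment, note that in the special case \(t_{*}=1\), where \(\mathcal{N}\) is simply the number of degree-one vertices, this first-moment formula can be checked by a short simulation (illustrative sketch, not from the paper):

```python
# Illustrative Monte Carlo check (not from the paper) of the first moment
# E[N] = N d^{t*} e^{-t* d} in the case t* = 1, where a line of size 1
# attached by a single edge is exactly a vertex of degree 1.
import numpy as np

rng = np.random.default_rng(4)
N, trials = 3000, 20
d = np.log(N) / 1.5                       # regime (1) with t* = 1

counts = []
for _ in range(trials):
    U = rng.random((N, N)) < d / N
    A = np.triu(U, 1)
    A = A | A.T
    counts.append(int((A.sum(axis=1) == 1).sum()))

print("empirical mean:", np.mean(counts), "  formula N*d*exp(-d):", N * d * np.exp(-d))
```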
We also have
\[\mathbb{E}(\mathcal{N}^{2})-\mathbb{E}(\mathcal{N})^{2}=\sum_{Y_{1},Y_{2}}\mathbb{ P}(\mathcal{L}_{Y_{1}}\cap\mathcal{L}_{Y_{2}})-\mathbb{P}(\mathcal{L}_{Y_{1}}) \mathbb{P}(\mathcal{L}_{Y_{2}})\]
If \(Y_{1}\) and \(Y_{2}\) are disjoint, the probability that there are no edges between \(Y_{1}\) and \(Y_{2}\) should be counted only once and we have
\[\mathbb{P}(\mathcal{L}_{Y_{1}}\cap\mathcal{L}_{Y_{2}})=\mathbb{P}(\mathcal{L}_ {Y_{1}})\mathbb{P}(\mathcal{L}_{Y_{2}})\Big{(}1-\frac{d}{N}\Big{)}^{-t_{*}^{2}}\]
Therefore
\[\sum_{Y_{1},Y_{2},\text{ disjoint}}\mathbb{P}(\mathcal{L}_{Y_{1}}\cap\mathcal{L}_{Y_{2}})-\mathbb{P}(\mathcal{L}_{Y_{1}})\mathbb{P}(\mathcal{L}_{Y_{2}})=O\Big{(}\frac{dt_{*}^{2}}{N}\mathbb{E}(\mathcal{N})^{2}\Big{)}\]
If \(Y_{1}\cap Y_{2}\neq\emptyset\) and \(Y_{1}\neq Y_{2}\), then \(Y_{1}\cup Y_{2}\) forms a connected component in \(\mathcal{V}\) and then
\[\sum_{Y_{1}\neq Y_{2},Y_{1}\cap Y_{2}\neq\emptyset}\mathbb{P}( \mathcal{L}_{Y_{1}}\cap\mathcal{L}_{Y_{2}}) \leq t_{*}^{2}\mathbb{E}\big{(}\big{|}\{U\text{ connected component in }\mathcal{V},\,|U|\geq t_{*}+1\}\big{|}\big{)}\] \[=o(1),\]
as in the proof of Item 5 of Proposition 3.2. Finally for the case \(Y_{1}\cap Y_{2}\neq\emptyset\) and \(Y_{1}=Y_{2}\) we just have
\[\sum_{Y_{1},Y_{2},Y_{1}=Y_{2}}\mathbb{P}(\mathcal{L}_{Y_{1}}\cap\mathcal{L}_{Y _{2}})=\mathbb{E}(\mathcal{N}).\]
and Equation (6) follows from the three above estimates.
## 4 Rigidity results, proof of Proposition 2.5
We analyze the spectrum of the graph locally around the vertices in \(\mathcal{V}.\) We will work on small clusters of such vertices. Let us start by setting a few notations. For \(M\coloneqq(M_{xy})_{x,y\in[n]}\in\mathbb{R}^{n\times n}\) and \(Y\subseteq[n]\) we denote by \(M|_{Y}=(M_{xy})_{x,y\in Y}\) the restriction of \(M\) to \(Y.\) We write \(\lambda_{1}(M)\leq\lambda_{2}(M)\leq\ldots\leq\lambda_{n}(M)\) the increasing ordering of the eigenvalues of \(M.\) We use the standard \(\ell^{2}\) norm, \(\|\cdot\|\coloneqq\|\cdot\|_{2}.\)
**Definition 4.1**.: We denote \(\mathfrak{U}\) the connected components of \(\cup_{x\in\mathcal{V}}B_{2}(x).\)
We define some good properties of the graph.
**Definition 4.2**.: Let \(\Xi_{1}\) a probability set such that
1. For \(x\in[N],\)\(D_{x}\leq\mathcal{C}d.\)
2. For all \(U\in\mathfrak{U},\)\(|\mathcal{V}\cap U|\leq t_{*}.\)
3. The graph restricted to \(\cup_{x\in\mathcal{V}}B_{3}(x)\) is a forest.
Let \(\Xi_{2}\) be an event such that
1. For \(x\in[N]\), \(D_{x}\leq\mathcal{C}d\).
2. \(\|A(\mathbb{G})-\mathbb{E}(A(\mathbb{G}))\|\leq\mathcal{C}\sqrt{d}\)
3. \(|\mathcal{V}|\leq N^{1-\frac{1}{1+t_{*}}}\),
These properties occur with high probability and we will assume that they are satisfied throughout this section.
**Proposition 4.3**.: \(\mathbb{P}(\Xi_{1}\cap\Xi_{2})\geq 1-o(1)\).
Proof of Proposition 4.3.: If \(U\) is a cluster with \(t_{*}+1\) vertices in \(\mathcal{V}\), then there is a chain \(y_{1},\cdots,y_{t_{*}+1}\in\mathcal{V}\) such that for any \(1<i\leq t_{*}+1\), there exists \(j<i\) with \(\operatorname{dist}_{\mathbb{G}}(y_{i},y_{j})\leq 5\). Therefore \(y_{j}\in B_{5t_{*}}(y_{1})\) for all \(j\leq t_{*}+1\) and we conclude that the probability is small by Proposition 3.2, item 5. The other items in \(\Xi_{1}\) and \(\Xi_{2}\) also follow from Proposition 3.2.
**Proposition 4.4**.: _On the event \(\Xi_{1}\), for any \(U\in\mathfrak{U}\) and \(F=\mathcal{V}\cap U\),_
\[\sum_{\lambda\in Spec(L(\mathbb{G})|_{U})}\delta_{\lambda},\qquad\text{and} \qquad\sum_{\mu\in Spec(L(F;\widehat{\partial F}))}\delta_{\mu+\epsilon_{\mu}}\]
_agree on the interval \([0,\frac{\tau}{2}d]\), with \(\epsilon_{\mu}=O\big{(}(\mu+1)d^{-1}\big{)}\) for all \(\mu\)._
Proof of Proposition 4.4.: Let us consider a connected component \(U\in\mathfrak{U}\) and the restriction of the Laplacian to this connected component \(H:=L(\mathbb{G})|_{U}.\) Let \(A:=L(\mathbb{G})|_{F}\), \(C:=L(\mathbb{G})|_{U\setminus F}\) and \(B\in\{0,-1\}^{|F|\times|U\setminus F|}\) such that
\[H=\begin{pmatrix}A&B^{*}\\ B&C\end{pmatrix}.\]
Then recalling Definition 2.2, we see that \(A=L(F,\widehat{\partial F})\). Note that \(A\) may itself be a block matrix if \(F\) is made up of disjoint trees. Moreover the matrix \(C\) is given by the sum of a diagonal matrix with entries in the interval \([\tau d,+\infty)\) and minus the adjacency matrix of a forest with maximal degree \(\mathcal{C}d\), by item 3 of \(\Xi_{1}\). By perturbation theory and Lemma 7.1 we know that
\[\operatorname{spec}C\subseteq\Big{[}\tau d-2\sqrt{\mathcal{C}d},\infty\Big{)} \subseteq\big{[}3\tau d/4,\infty\big{)} \tag{7}\]
for \(d\) large enough. In addition, since \(B\) is given as the projection of the adjacency matrix of a collection of disjoint stars (rooted at vertices of \(\partial F\)), we know that \(\|B\|\leq\sqrt{\tau d}\).
We define
\[H(s)=\begin{pmatrix}A&sB^{*}\\ sB&C\end{pmatrix}\qquad\text{for }s\in[0,1].\]
Here \(H(1)=H\) and \(H(0)\) is block diagonal with the matrices \(A\) and \(C\), therefore \(\operatorname{spec}(H(0))\cap[0,\frac{3}{4}\tau d)=\operatorname{spec}(A)\cap[0,\frac{3}{4}\tau d)\). We write \(\lambda_{1}(s)\leq\lambda_{2}(s)\leq\cdots\) for the eigenvalues of \(H(s)\). By perturbation theory, these are Lipschitz functions. More precisely, suppose \(\lambda_{i}(s)\in\operatorname{spec}(H(s))\cap[0,\frac{\tau}{2}d]\), write \(u(s)\in\mathbb{R}^{|U|}\) for the corresponding normalized eigenvector and denote by \(u_{1}(s):=u(s)|_{F}\) and \(u_{2}(s):=u(s)|_{U\setminus F}\). We have
\[\frac{d}{ds}\lambda_{i}(s)=\Big{\langle}u(s),\left[\frac{d}{ds}H(s)\right]u(s) \Big{\rangle}=2\big{\langle}u_{1}(s),B^{*}u_{2}(s)\big{\rangle} \tag{8}\]
where we use that \(2\big{\langle}\frac{d}{ds}u(s),H(s)u(s)\big{\rangle}=\lambda(s)\frac{d}{ds}\|u (s)\|^{2}=0\). In the case of a degenerate eigenvalue, \(u(s)\) is not well defined. However one can still choose a normalized vector in the eigenspace such that the above equation is true (following the classical proof that the eigenvalues are Lipschitz using min-max principle).
We will prove that
\[\Big{|}\frac{d}{ds}\lambda_{i}(s)\Big{|}\leq\frac{8t_{*}}{\tau d}(\lambda_{i} (s)+4),\qquad s\in[0,1], \tag{9}\]
from which we will conclude that
\[|\lambda-\mu|=|\lambda_{i}(0)-\lambda_{i}(1)|=O\big{(}(\lambda_{i}(0)+1)d^{-1} \big{)}.\]
Recall here that \(\lambda_{i}(0)\in\operatorname{spec}(A)=\operatorname{spec}(L(F;\widehat{ \partial F}))\) and \(\lambda_{i}(1)\in\operatorname{spec}(L(\mathbb{G})|_{U})\).
Let us fix \(s\in[0,1]\) and write \(\lambda=\lambda(s)\) and \(u_{i}=u_{i}(s)\) in the following computations. The eigenvalue-eigenvector equation can be written in blocks as
\[\begin{pmatrix}A-\lambda&sB^{*}\\ sB&C-\lambda\end{pmatrix}\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}=0.\]
Using (7), \(\|B\|\leq\sqrt{\tau d}\) and standard perturbation theory, we see that
\[\min_{\nu\in\operatorname{spec}(C)}|\lambda-\nu|\geq\frac{\tau d}{4}.\]
The matrix \(C-\lambda\) is thus invertible with \(\|(C-\lambda)^{-1}\|\leq\frac{4}{\tau d}\). After a couple of substitutions, the eigenvalue-eigenvector equation becomes
\[\begin{cases}u_{2}=-s(C-\lambda)^{-1}Bu_{1}&,\\ (A-s^{2}B^{*}(C-\lambda)^{-1}B)u_{1}=\lambda u_{1}&.\end{cases} \tag{10}\]
Replacing \(A\) by the deformed Laplacian matrix and applying \(u_{1}^{*}\) to the second line yields
\[\lambda\|u_{1}\|^{2}=\langle u_{1},L(F)u_{1}\rangle+\sum_{x\in\partial F} \widehat{\partial F}(x)u_{1}(x)^{2}-s^{2}\big{\langle}u_{1},(B^{*}(C-\lambda) ^{-1}B)u_{1}\big{\rangle}.\]
and then
\[\sum_{x\in\partial F}\widehat{\partial F}(x)u_{1}(x)^{2}\leq\lambda\|u_{1}\|^{2}+ s^{2}\big{|}\big{\langle}u_{1},(B^{*}(C-\lambda)^{-1}B)u_{1}\big{\rangle}\big{|} \leq(\lambda+4)\|u_{1}\|^{2} \tag{11}\]
where we use the fact that \(L(F)\) is positive and that \(\|B^{*}(C-\lambda)^{-1}B\|\leq\|B\|^{2}\|(C-\lambda)^{-1}\|\leq 4\). Finally, applying \(\langle u_{1}^{*},B^{*}\cdot\rangle\) to the first line of (10), we get
\[\big{|}\big{\langle}u_{1},B^{*}u_{2}\big{\rangle}\big{|} =\big{|}\big{\langle}u_{1},B^{*}(C-\lambda)^{-1}Bu_{1}\big{\rangle} \big{|}\] \[=\Big{|}\sum_{x,y\in\partial F}u_{1}(x)u_{1}(y)\big{\langle}B1_{y },(C-\lambda)^{-1}B1_{x}\big{\rangle}\Big{|}\] \[\leq\sum_{x,y\in\partial F}|u_{1}(x)||u_{1}(y)|\frac{4\sqrt{ \widehat{\partial F}(x)\widehat{\partial F}(y)}}{\tau d}\] \[=\frac{4}{\tau d}\left(\sum_{x\in\partial F}|u_{1}(x)|\sqrt{ \widehat{\partial F}(x)}\right)^{2}\] \[\leq\frac{4t_{*}}{\tau d}(\lambda+4)\|u_{1}\|^{2}.\]
by Cauchy-Schwarz and (11). This proves (9) and concludes the proof.
**Proposition 4.5**.: _On the event \(\Xi_{1}\), for any \(U\in\mathfrak{U}\), \(\lambda\in[0,\frac{\tau}{2}d]\cap\text{Spec}(L(\mathbb{G}|_{U}))\) and \(\mathbf{u}\) the associated eigenvector we have_
\[\|\mathbf{u}|_{\partial U}\|=O\big{(}d^{-1}\big{)}\]
Proof of Proposition 4.5.: We use the same notation as for the proof of Proposition 4.4. Setting \(s=1\) in (10), we have
\[u_{2}=-(C-\lambda)^{-1}Bu_{1}.\]
Let us write \(C=D-A(\mathbb{G}|_{U\setminus F})\) with \(D:=D(\mathbb{G})|_{U\setminus F}\). The second resolvent identity reads
\[(C-\lambda)^{-1}=(D-\lambda)^{-1}+(C-\lambda)^{-1}A(\mathbb{G}|_{U\setminus F })(D-\lambda)^{-1},\]
and because \(\text{supp}((D-\lambda)^{-1}Bu_{1})\subset\cup_{x\in F}S_{1}(x)\), we have
\[\text{supp}\big{(}(D-\lambda)^{-1}Bu_{1}\big{)}\cap\partial U=\emptyset.\]
We thus find that
\[u_{2}|_{\partial U}=-\left((C-\lambda)^{-1}A(\mathbb{G}|_{U\setminus F})(D- \lambda)^{-1}Bu_{1}\right)\big{|}_{\partial U}\]
The graph \(\mathbb{G}|_{U\setminus F}\) is a tree so by Lemma 7.1, we have \(\|A(\mathbb{G}|_{U\setminus F})\|\leq 2\sqrt{\mathcal{C}d}\). We can use the same bounds for \(\|B\|\), \(\|(D-\lambda)^{-1}\|\) and \(\|(C-\lambda)^{-1}\|\) as in the proof of Proposition 4.4. Putting everything together, we find
\[\|\mathbf{u}|_{\partial U}\|=\|u_{2}|_{\partial U}\|\leq\frac{16\sqrt{\mathcal{C}} }{\tau^{3/2}d}\|u_{1}\|.\]
**Proposition 4.6**.: _On the event \(\Xi_{2},\) the vector \(\mathbf{q}:=\frac{1}{\sqrt{|Y|}}1_{Y},\)where \(Y:=[N]\setminus\bigcup_{x\in\mathcal{V}}B_{3}(x)\) satisfies for any \(c<\frac{1}{1+t_{*}}\)_
\[\|\mathbf{e}-\mathbf{q}\|^{2}=O(N^{-c}),\qquad\langle\mathbf{q},L(\mathbb{G}) \mathbf{q}\rangle=O(N^{-c}).\]
Proof of Proposition 4.6.: On the event \(\Xi_{2},\) we have
\[|Y^{c}|\leq\Big{|}\bigcup_{x\in\mathcal{V}}B_{3}(x)\Big{|}\leq(\mathcal{C}d)^ {3}|\mathcal{V}|\leq\mathcal{C}^{5}d^{4}N^{1-\frac{1}{1+t_{*}}}=O(N^{1-c}),\]
for \(c<\frac{1}{1+t_{*}}.\) In particular, \(|Y|=(1-o(1))N.\) It is a straightforward computation to see that on \(\Xi_{2}\)
\[\langle\mathbf{q},L\mathbf{q}\rangle=\frac{1}{|Y|}\sum_{y\in\bigcup_{x\in \mathcal{V}}S_{4}(x)}1\leq\frac{\mathcal{C}d|Y^{c}|}{|Y|}=O(N^{-c}),\]
and
\[\mathbf{e}-\mathbf{q}=\frac{1}{\sqrt{N}}\sum_{y\notin Y}1_{y}+\sum_{z\in Y} \Bigl{(}\frac{1}{\sqrt{N}}-\frac{1}{\sqrt{|Y|}}\Bigr{)}1_{z},\]
and thus
\[\|\mathbf{e}-\mathbf{q}\|^{2}=\frac{|Y^{c}|}{N}+\frac{\bigl{(}\sqrt{|Y|}- \sqrt{N}\bigr{)}^{2}}{N}=O(N^{-c}).\]
**Proposition 4.7**.: _On the event \(\Xi_{2}\), \(\operatorname{spec}(L(\mathbb{G})|_{\mathcal{V}^{c}})\subset\{\mu\}\cup[\tau d/2,+\infty)\) for some \(\mu=O(N^{-c})\), \(0<c<\frac{1}{1+t_{*}}\)._
Proof of Proposition 4.7.: We have
\[L(\mathbb{G})=D(\mathbb{G})-(A(\mathbb{G})-\mathbb{E}(A(\mathbb{G}))-\mathbb{ E}(A(\mathbb{G})).\]
By definition of \(D\) and \(\mathcal{V}\) we have \(\lambda_{1}(D(\mathbb{G})|_{\mathcal{V}^{c}})\geq\tau d\) and with Definition 4.2 we have that \(\|A(\mathbb{G})-\mathbb{E}(A(\mathbb{G}))\|=O(\sqrt{d})\). Therefore,
\[\lambda_{1}\bigl{(}(D(\mathbb{G})-(A(\mathbb{G})-\mathbb{E}(A(\mathbb{G})))|_ {\mathcal{V}^{c}}\bigr{)}\geq\tau d-O(\sqrt{d}).\]
Because \(\mathbb{E}(A(\mathbb{G}))|_{\mathcal{V}^{c}}=\frac{d}{N}1_{\mathcal{V}^{c}}1_ {\mathcal{V}^{c}}^{*}\) is a rank one matrix, by interlacing \(\lambda_{2}(L(\mathbb{G})|_{\mathcal{V}^{c}})\geq\tau d-O(\sqrt{d})\). Moreover using Proposition 4.6 and Lemma 7.2, we have \(\lambda_{1}(L(\mathbb{G})|_{\mathcal{V}^{c}})=O(N^{-c})\) for any constant \(c>0\) small enough.
We now have all the ingredients to prove Proposition 2.5.
Proof of Proposition 2.5.: We work on \(\Xi_{1}\cap\Xi_{2}\), which is a high probability event by Proposition 4.3.
We denote \(\mathcal{U}=\cup_{x\in\mathcal{V}}B_{2}(x)=\bigcup_{U\in\mathfrak{U}}U\) and recall the definition of \(\mathfrak{U}\) and \(\mathbf{q}\) from Definition 4.1 and Proposition 4.6 respectively. We define three vector spaces \(E_{1}=\operatorname{Span}(\boldsymbol{q})\), \(E_{2}=\operatorname{Span}(\{\boldsymbol{1}_{y},y\in\mathcal{U}\})\) and \(E_{3}=(E_{1}+E_{2})^{\perp}\). Let \((v_{i},\lambda_{i})_{i\leq|\mathcal{U}|}\) be the eigenvector-eigenvalue pairs of \(L(\mathbb{G})|_{\mathcal{U}}\); the \(v_{i}\) form an orthonormal basis of \(E_{2}\), which decomposes into \(E_{2}=E_{2}^{\leq}+E_{2}^{>}\) where \(E_{2}^{\leq}=\operatorname{Span}(\{v_{i}\colon\lambda_{i}\leq\frac{1}{2}\tau d\})\) and \(E_{2}^{>}=\operatorname{Span}(\{v_{i}\colon\lambda_{i}>\frac{1}{2}\tau d\})\). We complete \(\{\boldsymbol{q},v_{1},\cdots,v_{|\mathcal{U}|}\}\) into an orthogonal basis for \(\mathbb{R}^{[N]}=E_{1}+E_{2}^{\leq}+E_{2}^{>}+E_{3}\) to obtain an orthogonal matrix \(V\). The Laplacian of the graph in this basis is in the form of a block matrix
\[V^{*}L(\mathbb{G})V=\begin{pmatrix}\nu&0&0&X_{\nu}^{*}\\ 0&D^{\leq}&0&(X^{\leq})^{*}\\ 0&0&D^{>}&(X^{>})^{*}\\ X_{\nu}&X^{\leq}&X^{>}&Y\end{pmatrix}\]
We explain every element of this matrix:
* \(\nu=\langle\mathbf{q},L(\mathbb{G})\mathbf{q}\rangle=O(N^{-c})\) by Proposition 4.6, for \(c<\frac{1}{1+t_{*}}\).
* Because \(\operatorname{supp}(\boldsymbol{q})\subseteq(\bigcup_{x\in\mathcal{V}}B_{3}(x))^{c}\) we have \(\operatorname{supp}(L(\mathbb{G})\mathbf{q})\subseteq(\bigcup_{x\in\mathcal{V}}B_{2}(x))^{c}\) and then \(L(\mathbb{G})\mathbf{q}\in E_{2}^{\perp}\), so we have the two zeros on the first line and column.
* \(X_{\nu}\) is a \(1\)-column matrix that can be identified as a vector such that \(L(\mathbb{G})\mathbf{q}=\nu\mathbf{q}+X_{\nu}\). We have \[\|X_{\nu}\|\leq\|L(\mathbb{G})\mathbf{q}\|=\|L(\mathbb{G})(\mathbf{q}- \boldsymbol{e})\|\leq\|L(\mathbb{G})\|\|(\mathbf{q}-\boldsymbol{e})\|=O(N^{-c +o(1)})\] where we use that \(L(\mathbb{G})\boldsymbol{e}=0\) and Proposition 4.6.
* \(D^{\leq}\) and \(D^{>}\) are diagonal matrices with entries \(\{\lambda_{i}\colon\lambda_{i}\leq\frac{1}{2}\tau d\}\) and \(\{\lambda_{i}\colon\lambda_{i}>\frac{1}{2}\tau d\}\). This directly follows from the choice of \(v_{i}\) as the eigenvectors \(L(\mathbb{G})|_{\mathcal{U}}\).
* \(X^{\leq}\) and \(X^{>}\) describe the blocks of the adjacency matrix \(-A(\mathbb{G})\) between the set \(\mathcal{U}\) and \(\mathcal{U}^{c}\), which form a forest by definition of \(\Xi_{1}\), and we have \(\max\{\|X^{\leq}\|,\|X^{>}\|\}\leq 2\sqrt{\mathcal{C}d}\) by Lemma 7.1.
* \(Y=L(\mathbb{G})|_{E_{3}}\). We have \(E_{1}+E_{3}=E_{2}^{\perp}=\operatorname{Span}(1_{x},x\in\mathcal{U}^{c})\). Then if we remove the block matrices associated to the space \(E_{2}\) we obtain \[V^{*}L(\mathbb{G})|_{\mathcal{U}^{c}}V=\begin{pmatrix}\nu&X_{\nu}^{*}\\ X_{\nu}&Y\end{pmatrix}=\begin{pmatrix}\nu&0\\ 0&Y\end{pmatrix}+\begin{pmatrix}0&X_{\nu}^{*}\\ X_{\nu}&0\end{pmatrix},\] where we write \(V\) instead of \(V|_{\mathcal{U}^{c}}\) and simplify the notation by omitting the zero blocks. We immediately conclude that \[\lambda_{2}(L(\mathbb{G})|_{\mathcal{U}^{c}})\geq\max\{\nu,\lambda_{1}(Y)\}-\|X_{\nu}\|.\]
By Proposition 4.7, and using that \(\mathrm{Span}(1_{x},x\in\mathcal{U}^{c})\subset\mathrm{Span}(1_{x},x\in\mathcal{V}^{c})\), interlacing gives \[\frac{1}{2}\tau d\leq\lambda_{2}(L(\mathbb{G})|_{\mathcal{V}^{c}})\leq\lambda_{2}(L(\mathbb{G})|_{\mathcal{U}^{c}}).\] Because of the previous estimates on \(\nu\) and \(\|X_{\nu}\|\) we finally conclude that \[\lambda_{1}(Y)\geq\frac{1}{2}\tau d-o(1).\]
* Finally we improve the bound for \(\|X^{\leq}\|\). Let \(u_{2}\in E_{2}^{\leq},u_{3}\in E_{3}\), we have \[|\langle L(\mathbb{G})u_{3},u_{2}\rangle|=|\langle A(\mathbb{G})u_{3},u_{2}|_{\partial\mathcal{U}}\rangle|\leq 2\sqrt{\mathcal{C}d}\|u_{3}\|\|u_{2}|_{\partial\mathcal{U}}\|\] and write \(u_{2}=\sum_{i}\alpha_{i}v_{i}\) for some \(\alpha_{i}\in\mathbb{R}\). We have \[\|u_{2}|_{\partial\mathcal{U}}\|^{2} =\sum_{U\in\mathfrak{U}}\|u_{2}|_{\partial U}\|^{2}\] \[=\sum_{U\in\mathfrak{U}}\|\sum_{\mathrm{Supp}(v_{i})\subset U}\alpha_{i}v_{i}|_{\partial U}\|^{2}\] \[\leq\sum_{U\in\mathfrak{U}}t_{*}\sum_{\mathrm{Supp}(v_{i})\subset U}\alpha_{i}^{2}\|v_{i}|_{\partial U}\|^{2}\] \[\leq Ct_{*}\sum_{U\in\mathfrak{U}}\sum_{\mathrm{Supp}(v_{i})\subset U}\alpha_{i}^{2}\,\frac{\|v_{i}\|^{2}}{d^{2}}\] \[=Ct_{*}\frac{\|u_{2}\|^{2}}{d^{2}}\] for some fixed \(C>0\) by Proposition 4.5 and therefore \[\|X^{\leq}\|=\sup_{u_{2}\in E_{2}^{\leq},u_{3}\in E_{3}}\frac{\langle u_{3},L(\mathbb{G})u_{2}\rangle}{\|u_{3}\|\|u_{2}\|}=O\Big{(}\frac{1}{\sqrt{d}}\Big{)}.\]
We now finish the proof of Proposition 2.5: we have
\[V^{*}L(\mathbb{G})V=\begin{pmatrix}\nu&0&0&0\\ 0&D^{\leq}&0&0\\ 0&0&D^{>}&X^{>}\\ 0&0&X^{>}&Y\end{pmatrix}+\begin{pmatrix}0&0&0&X_{\nu}^{*}\\ 0&0&0&X^{\leq}\\ 0&0&0&0\\ X_{\nu}&X^{\leq}&0&0\end{pmatrix}\]
therefore
\[\sum_{\lambda\in\mathrm{Spec}(L(\mathbb{G}))}\delta_{\lambda}\quad\text{and}\quad\delta_{\nu+\delta\nu}+\sum_{\mu\in\mathrm{Spec}(D^{\leq})}\delta_{\mu+\epsilon_{\mu}} \tag{12}\]
agree on \([0,\tau d/2-o(1)]\), with \(\delta\nu\leq\|X_{\nu}\|=O(N^{-c})\) and
\[\epsilon_{\mu}\leq\min\{\|X^{\leq}\|,\frac{5\|X^{\leq}\|^{2}}{|\frac{1}{2} \tau d-\mu|}\}\leq\begin{cases}\frac{1}{\sqrt{d}}&if\;\mu\geq\tau d/4,\\ \frac{20}{\tau d^{2}}&if\;\mu\leq\tau d/4\end{cases}=O((1+\mu)d^{-1}),\]
where we used Lemma 7.2. We now use Proposition 4.4
\[\sum_{\mu\in\operatorname{Spec}(D^{\leq})}\delta_{\mu+\epsilon_{\mu}}=\sum_{U\in\mathfrak{U}}\sum_{\mu\in\operatorname{Spec}(L(\mathbb{G})|_{U})}\delta_{\mu+\epsilon_{\mu}}=\sum_{T\in\mathfrak{F}}\sum_{\mu\in\operatorname{Spec}(L(T;\widehat{\partial T}))}\delta_{\mu+\epsilon_{\mu}+\epsilon_{\mu}^{\prime}} \tag{13}\]
with \(\epsilon_{\mu}^{\prime}=O\big{(}(\mu+1)d^{-1}\big{)}\). Combining (12) and (13) gives the right-hand side term of Proposition 2.5. Finally, because the multiplicity of the eigenvalue \(\{0\}\) of \(L(\mathbb{G})\) is equal to the number of connected components in the graph, we actually have that \(\delta_{\nu+\delta\nu}=\delta_{0}\) (which can be seen as the eigenvalue associated with the giant connected component).
## 5 Spectrum of weighted Laplacian on trees, Proof of Proposition 2.6.
In this section, we prove that the line is the optimal graph to minimize the smallest non-zero eigenvalue and then compute explicitly its spectrum.
**Proposition 5.1**.: _For any tree \(T\) of size \(|T|\leq t\) and \(\widehat{X}\subset[[T]]\) we have_
\[\lambda_{1}(L(\mathbb{L}_{t},\{1\}))\leq\min(\text{Spec}(L(T,\widehat{X})) \setminus\{0\}). \tag{14}\]
Note that since the number of trees of size \(t\) is finite and since increasing the size of a multiset is a positive rank-one perturbation of the corresponding deformed Laplacian matrix, Proposition 5.1 tells that for any \(t\in\mathbb{N}\) there exists a universal constant \(c>0\) such that \(\lambda_{1}(L(\mathbb{L}_{t},\{1\}))\leq\min(\operatorname{Spec}(L(T,\widehat{X}))\setminus\{0\})-c\), for every tree \(T\) of size \(t\) and any multiset on \([[T]]\).
**Lemma 5.2**.: _The eigenvalues \(L(\mathbb{L}_{t},\{1\})\) are \(\lambda_{k}=2-2\cos\left(\frac{\pi}{2t+1}+\frac{2k\pi}{2t+1}\right)\) for \(0\leq k<t\)._
Proof of Proposition 2.6.: This directly follows from Proposition 5.1 and Lemma 5.2 with \(t=t_{*}\).
Proof of Proposition 5.1.: We are interested in the minimiser of \(\min(\operatorname{Spec}(L(T,\widehat{X}))\setminus\{0\})\) among all trees \(T\) of size \(|T|\leq t\). For any \(i\in T\), \(1_{i}1_{i}^{*}\) is a positive operator, therefore we have that for all multisets \(\widehat{X}\subset\widehat{Y}\subset[[T]]\), \(L(T,\widehat{X})\leq L(T,\widehat{Y})\) (as operators) and then
\[\lambda_{1}\big{(}L\big{(}T,\widehat{X}\big{)}\big{)}\leq\lambda_{1}\big{(}L \big{(}T,\widehat{Y}\big{)}\big{)}\]
In the case \(\widehat{X}=\emptyset\) we have \(\lambda_{1}(L(T,\emptyset))=\lambda_{1}(L(T))=0\) and by interlacing we have
\[0=\lambda_{1}(L(T,\emptyset))<\lambda_{1}\big{(}L\big{(}T,\{i\}\big{)}\big{)}< \lambda_{2}\big{(}L\big{(}T,\emptyset\big{)}\big{)}.\]
If \(|T|<t\) there exists a larger tree \(T^{\prime}\) with \(|T^{\prime}|=t\) such that \(T\subset T^{\prime}\) and again by interlacing we have
\[0<\lambda_{1}\big{(}L\big{(}T^{\prime},\{i\}\big{)}\big{)}<\lambda_{1}\big{(} L\big{(}T,\{i\}\big{)}\big{)}\]
Therefore the minimiser is of the form \(\lambda_{1}(L(T,\{i\}))\) with \(|T|=t\).
We think of \(i\) as the root of \(T\) and for every edge \(e=(x,y)\) of \(T\) we define the weight
\[q(e)=u(x)-u(y)\]
and denote \(\boldsymbol{q}=(q_{e})_{e\in E}\in\mathbb{R}^{t-1}.\) We have
\[\lambda_{1}(L(T,\{i\}))=\inf_{u\in\mathbb{R}^{t}\setminus\{0\}}\frac{\langle u,L(T,\{i\})u\rangle}{\|u\|^{2}}=\inf_{u\in\mathbb{R}^{t}\setminus\{0\}}\frac{ \sum_{e\in E}q(e)^{2}+|u(i)|^{2}}{\|u\|^{2}}. \tag{15}\]
Let us now change to a dual approach. We have a bijection between \(\{u(i),\boldsymbol{q}\}\) and \((u(x))_{x\in T}\) given by the equation
\[u(x)=u(i)+\sum_{e\in P_{i\to x}(T)}q(e) \tag{16}\]
where we denoted by \(P_{i\to x}(T)\) the unique path from \(i\) to \(x\) in \(T.\) We denote \(u=\phi(T,i,u(i),\boldsymbol{q})\) the bijection of Equation (16). Then we have
\[\min_{T,i}\lambda_{1}(L(T,\{i\}))=\inf_{u(i),\boldsymbol{q}}\min_{T,i}\frac{ \sum_{e}|q(e)|^{2}+|u(i)|^{2}}{\|\phi(T,i,u(i),\boldsymbol{q})\|^{2}}=\inf_{u (i),\boldsymbol{q}}\frac{\sum_{e}|q(e)|^{2}+|u(i)|^{2}}{\max_{T,i}\|\phi(T,i,u (i),\boldsymbol{q})\|^{2}}.\]
We now show that for fixed \(u(i),\boldsymbol{q}\) the norm \(\|\phi(T,i,u(i),\boldsymbol{q})\|^{2}\) is maximized when \(T\) is the line and \(i\) an extremal point.
We sort the edges decreasingly according to their weight
\[|q(e_{1})|\geq|q(e_{2})|\geq\ldots\geq|q(e_{t-1})|\geq 0\]
and the vertices increasingly according to their graph distance to the root \(i=x_{0}\)
\[0\leq\text{dist}(i,x_{1})\leq\text{dist}(i,x_{2})\leq\cdots\leq\text{dist}(i,x_ {t-1}).\]
We have
\[\bigg{|}\sum_{e\in P_{i\to x_{k}}}q(e)\bigg{|}\leq\sum_{e\in P_{i\to x_{k}}}|q(e)|\leq\sum_{l=1}^{|P(i,x_{k})|}|q(e_{l})|\leq\sum_{l=1}^{k}|q(e_{l})|,\]
where we use that the length of the path \(P(i,x_{k})\) is at most \(k\). Therefore
\[\|\phi(T,i,u(i),\boldsymbol{q})\|^{2}=\sum_{k=0}^{t-1}|u(i)+\sum_{e\in P_{i\to x_{k}}}q(e)|^{2}\leq\sum_{k=0}^{t-1}\left|\left|u(i)\right|+\sum_{l=1}^{k}|q(e_{l})|\right|^{2} \tag{17}\]
Observe that the right-hand side can be reached if \(T\) is a line and \(i\) is an extremal point. Moreover this solution is unique if \(|q(e_{i})|>0\) for all \(i\leq t-1\). Then we have
\[\max_{T,i}\|\phi(T,i,u(i),\boldsymbol{q})\|^{2}=\|\phi(\mathbb{L}_{t},1,u(i),\boldsymbol{q})\|^{2}\]
and finally conclude that \(\min_{T,i}\lambda_{1}(L(T,\{i\}))=\lambda_{1}(L(\mathbb{L}_{t},\{1\}))\).
Finally, we now prove that we can assume \(|q(e_{t-1})|>0\). If \(q(e_{t-1})=0\) then
\[\frac{d}{dq(e_{t-1})}\left(\sum_{e}|q(e)|^{2}+|u(i)|^{2}\right)=0,\quad\frac{d}{dq(e_{t-1})}\sum_{k=0}^{t-1}\left|\left|u(i)\right|+\sum_{l=1}^{k}|q(e_{l})|\right|^{2}\neq 0,\]
therefore \(\boldsymbol{q}\) is not the minimiser of
\[\inf_{u(i),\boldsymbol{q}}\frac{\sum_{e}|q(e)|^{2}+|u(i)|^{2}}{\|\phi(\mathbb{ L}_{t},1,u(i),\boldsymbol{q})\|^{2}}.\]
Proof of Lemma 5.2.: We write the matrix
\[L(\mathbb{L}_{t},\{1\})=\begin{pmatrix}2&-1&0&\cdots&0&0\\ -1&2&-1&0&&0\\ 0&-1&\ddots&\ddots&&\vdots\\ \vdots&0&\ddots&\ddots&\ddots&0\\ 0&&&\ddots&2&-1\\ 0&0&\cdots&0&-1&1\end{pmatrix}\in\mathbb{R}^{t\times t}.\]
Let \(\lambda\) be an eigenvalue and \(u\in\mathbb{R}^{t+1}\) the associated eigenvector, with the convention \(u_{0}=0\). We have
\[Lu=\lambda u\Leftrightarrow\begin{cases}u_{k+1}+u_{k-1}=(2-\lambda)u_{k}\quad \text{for $1\leq k<t$}\\ u_{t-1}=(1-\lambda)u_{t}\end{cases}\]
Therefore for any \(1\leq k<t\)
\[\begin{pmatrix}u_{k+1}\\ u_{k}\end{pmatrix}=P\begin{pmatrix}u_{k}\\ u_{k-1}\end{pmatrix}=P^{k}\begin{pmatrix}u_{1}\\ 0\end{pmatrix}\quad\text{with}\quad P=\begin{pmatrix}(2-\lambda)&-1\\ 1&0\end{pmatrix}.\]
Let \(\alpha\in[0,\frac{\pi}{2}]\) be such that \(2-\lambda=2\cos\alpha\); then the eigenvalues of \(P\) are the solutions of \(\gamma^{2}-2\cos(\alpha)\gamma+1=0\), which are
\[\gamma_{\pm}=\frac{2\cos(\alpha)\pm i\sqrt{4-4\cos(\alpha)^{2}}}{2}=\cos \alpha\pm i\sin\alpha=e^{\pm i\alpha}.\]
Then \(u_{k}=ae^{ik\alpha}+be^{-ik\alpha}\) for some \(a,b\in\mathbb{C}\) and with \(u_{0}=0\) we conclude that \(u_{k}=\sin(k\alpha)\) up to a multiplicative factor. Finally, the last equation becomes
\[(2\cos\alpha-1)\sin(t\alpha)=\sin((t-1)\alpha)\]
and using the recursive algorithm for finding the \(n\)th multiple angle, we see that
\[\sin((t+1)\alpha)=2\cos(\alpha)\sin(t\alpha)-\sin((t-1)\alpha)=\sin(t\alpha).\]
We conclude that \(\sin((t+1)\alpha)=\sin(t\alpha)\) and thus
\[\alpha=\frac{\pi}{2t+1}\mod\Big{(}\frac{2\pi}{2t+1}\Big{)}.\]
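Accordingly, the eigenvalues of \(L(\mathbb{L}_{t},\{1\})\) are \(2-2\cos\big{(}(2k-1)\pi/(2t+1)\big{)}\) for \(k=1,\ldots,t\); in particular the smallest one is \(2-2\cos\big{(}\pi/(2t+1)\big{)}\), as claimed in Lemma 5.2. A quick numerical check (a sketch using numpy):

```python
import numpy as np

t = 10
L = 2*np.eye(t) - np.eye(t, k=1) - np.eye(t, k=-1)
L[-1, -1] = 1.0    # free end of the line, as in the matrix above
predicted = 2 - 2*np.cos((2*np.arange(1, t + 1) - 1)*np.pi/(2*t + 1))
print(np.allclose(np.linalg.eigvalsh(L), predicted))   # True
```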
## Outlook
### Fluctuations and eigenvector localization
With Theorem 1.1 we understand the distribution of the smallest eigenvalues of \(L\) but at a deterministic level. A natural development would be to improve such a prediction to obtain the fluctuation scale and its asymptotic law. Also, Theorem 1.1 does not give any information about the corresponding eigenvectors, but the proof hints at the fact that there exists a narrow relation between the smallest eigenvalues of \(L\) and some specific geometric objects that can be found in \(G\), namely trees of a particular shape. We believe that this relation can be made into a rigorous mathematical result by showing that the eigenvectors corresponding to the smallest eigenvalues are localized on the lines of maximal size. The work to get there is non-trivial, but a very similar result was obtained in previous work ([1, Theorem 1.7]) as well as in [11]. Inspired by these works we expect the following to hold:
**Conjecture 6.1**.: _There is an interval \([a,b]\), with \(a,b\) close to \(2-2\cos\left(\frac{\pi}{2t_{*}+1}\right)\) such that_
1. \(\lambda_{2}\in[a,b]\) _with high probability._
2. _(Fluctuations)_ \(\sum_{\lambda\in Spec(L(\mathbb{G}))}\delta_{\lambda}\) _converges to a Poisson Point Process on_ \([a,b]\)_._
3. _(Localization) For any_ \(\lambda\in[a,b]\) _and_ \(v\) _the corresponding eigenvector, there exists a tree_ \(T\subset\mathbb{G}\) _that is isomorphic to a line of size_ \(t_{*}\) _and such that_ \(\|v|_{T}\|=1-o(1)\)_._
Following the strategy in [1] one has to improve the bijection of Proposition 2.5 such that the error terms \(\epsilon_{\mu}\) are much smaller. We believe that this could be done by analyzing the Laplacian on a neighborhood \(U\) of \(F\) (with \(F\subset U\)) instead of \(L(F;\widehat{\partial F})\). For the first orders one should replace the computation of the spectrum of \(L(\mathbb{L}_{t},\{1\})\) by that of the following tridiagonal matrix
\[M:=M(t_{*},D_{z},|S_{2}(z)|)=\begin{pmatrix}1&-1&0&\cdots&0&0\\ -1&2&-1&0&&0\\ 0&-1&\ddots&\ddots&&\vdots\\ \vdots&0&\ddots&\ddots&&\\ &&&&2&-1&0\\ 0&&&-1&D_{z}&\sqrt{D_{z}}\\ 0&0&\cdots&&0&\sqrt{D_{z}}&\frac{|S_{2}(z)|}{D_{z}}\end{pmatrix}\in\mathbb{R} ^{(t_{*}+2)\times(t_{*}+2)}. \tag{18}\]
We claim that there exists an eigenvalue of \(L(\mathbb{G})\) such that \(\lambda=\lambda_{1}(M)+O(d^{-2})\) and that we can obtain
\[\lambda_{1}(M)=2-2\cos\Bigl{(}\frac{\pi}{2t_{*}+1}\Bigr{)}+\delta\bigl{(}D_{z },|S_{2}(z)|\bigr{)}+O\bigl{(}d^{-2}\bigr{)}, \tag{19}\]
for some explicit function \(\delta:\mathbb{R}^{2}\to\mathbb{R}\).
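As an illustration, \(M\) is easy to diagonalize numerically; the sketch below (with illustrative parameter values, not those of the paper) builds \(M\) exactly as in (18) and compares \(\lambda_{1}(M)\) with \(2-2\cos\big{(}\pi/(2t_{*}+1)\big{)}\), the difference being the correction \(\delta\big{(}D_{z},|S_{2}(z)|\big{)}\).

```python
import numpy as np

def build_M(t_star, Dz, S2):
    n = t_star + 2
    M = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M[0, 0] = 1.0                        # first diagonal entry of Eq. (18)
    M[-2, -2], M[-1, -1] = Dz, S2/Dz     # anchor block
    M[-2, -1] = M[-1, -2] = np.sqrt(Dz)  # sign as written in Eq. (18)
    return M

t_star, d = 3, 50.0
lam1 = np.linalg.eigvalsh(build_M(t_star, Dz=0.5*d, S2=0.5*d**2))[0]
print(lam1, 2 - 2*np.cos(np.pi/(2*t_star + 1)))   # difference ~ delta
```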
We should also improve Proposition 2.4 having not only the existence of the line but controlling the degree of the anchor as well: \(\{T\sim\mathbb{L}_{t},z\text{ anchor with }D_{z}=\alpha d\}\) for some \(\tau\leq\alpha<1\). Because of (19), the smallest eigenvalue would be obtained with the smallest possible degree of the anchor. We expect it to be close to \(cd\) where
\[N^{1-t_{*}d+o(1)}\times\mathbb{P}(D_{z}=cd)\asymp 1. \tag{20}\]
We then would choose the interval \([a,b]\) as a small neighborhood of \(2-2\cos(\frac{\pi}{2t_{*}+1})+\delta(cd,cd^{2})\). Finally, the main ingredient for the localization is an estimate \(|\lambda_{i}-\lambda_{j}|\gg\max\{\epsilon_{i},\epsilon_{j}\}\) combined with Lemma 7.2. An upper bound on \(\max\{\epsilon_{i},\epsilon_{j}\}\) should come from the improved rigidity result above, while a lower bound on \(|\lambda_{i}-\lambda_{j}|\) should be a consequence of the Poisson point process statistics and its limiting law.
### Intermediate regime
We now add a few remarks about the intermediate regime
\[(1-\epsilon)\frac{\log N}{t_{*}}\leq d\leq(1+\epsilon)\frac{\log N}{t_{*}}.\]
In our paper the condition (1) is needed for two properties: the existence of a line of size \(t_{*}\) and to make sure that all the vertices in its neighborhood have large degree (for a spectral gap and for perturbation theory). In the intermediate regime, the existence of a component of size \(t_{*}\) becomes random but we still have with high probability a line of size \(t_{*}-1\) and no connected component of size \(t_{*}+1\). Therefore one could expect that \(\lambda_{2}\) is random, but because of Propositions 2.4, 2.5, 2.6 we can still state the following.
**Proposition 6.2**.: _In the intermediate regime with high probability_
\[2-2\cos\left(\frac{\pi}{2t_{*}+1}\right)+O\big{(}d^{-1}\big{)}\leq\lambda_{2} \leq 2-2\cos\left(\frac{\pi}{2t_{*}-1}\right)+O\big{(}d^{-1}\big{)}\]
Figure 2: Illustration of the intermediate regime. We expect lines of size \(t_{*}+1\) to appear as a result of the _erosion_ of the anchor of lines of size \(t_{*}\).
To refine the analysis, we expect the degree of the anchor \(D_{z}\) to play a major role in the transition from the \(t_{*}\) line regime to the \(t_{*}+1\) line regime, going from \(D_{z}\geq\tau d\) for \(d\approx(1+\epsilon)\frac{\log N}{t_{*}+1}\) and decreasing to \(D_{z}\sim 1\) for \(d\approx\frac{\log N}{t_{*}+1}\) therefore creating a \(t_{*}+1\) line. One could then try to use (19) to compute \(\lambda_{2}\) in the intermediate regime. However, because the predictions obtained from the spectrum of finite trees only give discrete values, there should exist regimes of \(d\) for which \(\lambda_{2}\) is random with fluctuations \(\asymp 1\). An illustration of our heuristics is given in Figure 2.
### Numerical simulations
In this section, we compare our estimates from Theorem 1.1 with numerical simulations presented in Figures 3 and 4. Using Theorem 1.1, we obtain a first-order approximation (dark red): note that this estimate only depends on \(\lfloor t\rfloor\) (see (2)) and is thus a piece-wise constant function. The error between the dark red line and the blue dots remains of the order \(O\big{(}d^{-1}\big{)}\), in agreement with (2) (indeed for \(N=10^{4}\), \(1/d\sim 0.1\)). We expect the fit to become much better
Figure 3: We plotted the spectral gap of \(L\) for various regimes of \(d\) below the criticality threshold. Simulations were obtained with \(N=10^{4}\), \(d=\frac{1}{t}\log N\) for values of \(1\leq t\leq 3.2\).
as \(d\to+\infty\), but unfortunately, since \(d\asymp\log N\), we would need to simulate exponentially large graphs to reduce the error, which is infeasible in practice. Second, we can use the discussion leading to (18) to approximate \(\lambda_{2}(L)\) by \(\lambda_{1}(M)\) for \(M\) defined in (18). The matrix \(M\) has three parameters, namely \(t_{*}\) which corresponds to \(\lfloor t\rfloor\), \(D_{z}\) and \(|S_{2}(z)|.\) We set \(D_{z}=cd\), \(c>0\), to be the (expected) smallest degree of any anchor of a line of size \(t_{*}\) and \(|S_{2}(z)|=dD_{z}\) (see the discussion leading to (20)). The second-order approximation seems to fit the numerical results much more closely than the first-order approximation. The simulations are displayed in Figure 3.
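A minimal version of such a simulation can be set up as follows (a sketch, not the code used for the figures; it assumes \(L\) is the combinatorial Laplacian of the giant component of an Erdős–Rényi graph \(G(N,d/N)\), and uses a much smaller \(N\) than in the figures for speed).

```python
import numpy as np
import networkx as nx

N = 2000
for t in (1.5, 2.0, 2.5, 3.0):
    d = np.log(N)/t
    G = nx.fast_gnp_random_graph(N, d/N, seed=0)
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    lam = np.linalg.eigvalsh(
        nx.laplacian_matrix(giant).toarray().astype(float))
    t_star = int(np.floor(t))       # t_* taken as floor(t), as in Eq. (2)
    print(t, lam[1], 2 - 2*np.cos(np.pi/(2*t_star + 1)))
```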
To give a more complete view of the behavior of the spectral gap, we also provide simulations for \(d\geq\log N\) in Figure 4. It was shown in [14, Chapter 3] that the spectral gap of \(L\) in these regimes was given to the first order by \(\Delta+\frac{d}{\Delta-d}\), \(\Delta:=\min_{x\in[N]}D_{x}.\) Therefore, whenever \(d\geq\log N\), we use \(\Delta\) as a first-order approximation for \(\lambda_{2}(L)\) (dark green line) and \(\Delta+\frac{d}{\Delta-d}\) as a second-order approximation (light green line). In the subcritical regime, we can approximate \(\lambda_{2}(L)\) in two different ways.
Figure 4: We plotted the spectral gap of \(L\) for \(d\geq\frac{1}{2}\log N\). Simulations were obtained with \(N=10^{4}\), \(d=\frac{1}{t}\log N\) for values of \(0.6\leq t\leq 1\).
## Appendix
We recall [1, Lemmas E.1 and E.5].
**Lemma 7.1**.: _For \(T\) a tree with degrees bounded by \(M>0\) and \(A\) its adjacency matrix we have \(\|A\|\leq\sqrt{2M}\)._
**Lemma 7.2**.: _Let \(M\) be a self-adjoint matrix. Let \(\epsilon,\Delta>0\) satisfy \(5\epsilon\leq\Delta\). Let \(\lambda\in\mathbb{R}\) and suppose that \(M\) has a unique eigenvalue \(\mu\) in \([\lambda-\Delta,\lambda+\Delta]\), with corresponding normalized eigenvector \(w\). If there exists a normalized vector \(v\) such that \(\|(M-\lambda)v\|\leq\epsilon\) then_
\[\mu-\lambda=\langle v,(M-\lambda)v\rangle+O\left(\frac{\epsilon^{2}}{\Delta} \right),\qquad\|w-v\|=O\left(\frac{\epsilon^{2}}{\Delta}\right).\]
|
2309.05696 | What are the parities of photon-ring images near a black hole? | Light that grazes a black-hole event horizon can loop around one or more
times before escaping again, resulting for distant observers in an infinite
sequence of ever fainter and more delayed images near the black hole shadow. In
the case of the M87 and Sgr A$^*$ black holes, the first of these so-called
photon-ring images has now been observed. A question then arises: are such
images minima, maxima, or saddle-points in the sense of Fermat's principle in
gravitational lensing? Or, more briefly, the title question above. In the theory
of lensing by weak gravitational fields, image parities are readily found by
considering the time-delay surface (also called the Fermat potential or the
arrival-time surface). In this work, we extend the notion of the time delay
surface to strong gravitational fields and compute the surface for a
Schwarzschild black hole. The time-delay surface is the difference of two
wavefronts, one travelling forward from the source and one travelling backwards
from the observer. Image parities are read off from the topography of the
surface, exactly as in the weak-field regime, but the surface itself is more
complicated. The images furthest from the black hole, similar to the
weak-field limit, are a minimum and a saddle point. The strong field repeats
the pattern, corresponding to light taking one or more loops around the black
hole. In between, there are steeply-rising walls in the time-delay surface,
which can be interpreted as maxima and saddle points that are infinitely
delayed and not observable -- these correspond to light rays taking a U-turn
around the black hole. | Ashish Kumar Meena, Prasenjit Saha | 2023-09-11T18:00:00Z | http://arxiv.org/abs/2309.05696v2 | # What are the parities of photon-ring images near a black hole?
###### Abstract
Light that grazes a black-hole event horizon can loop around one or more times before escaping again, resulting for distant observers in an infinite sequence of ever fainter and more delayed images near the black hole shadow. In the case of the M87 and Sgr A\({}^{*}\) black holes, the first of these so-called photon-ring images has now been observed. A question then arises: are such images minima, maxima, or saddle-points in the sense of Fermat's principle in gravitational lensing? Or, more briefly, the title question above. In the theory of lensing by weak gravitational fields, image parities are readily found by considering the time-delay surface (also called the Fermat potential or the arrival-time surface). In this work, we extend the notion of the time delay surface to strong gravitational fields, and compute the surface for a Schwarzschild black hole. The time-delay surface is the difference of two wavefronts, one travelling forward from the source, and one travelling backward from the observer. Image parities are read off from the topography of the surface, exactly as in the weak-field regime, but the surface itself is more complicated. The images furthest from the black hole, similar to the weak-field limit, are a minimum and a saddle point. The strong field repeats the pattern, corresponding to light taking one or more loops around the black hole. In between, there are walls in the time-delay surface, which can be interpreted as maxima and saddle points that are infinitely delayed and not observable -- these correspond to light rays taking a U-turn around the black hole.
## I Introduction
One of the tests of Einstein's theory of gravity is the deflection of light rays (also known as gravitational lensing) by matter distributions [1; 2; 3]. It was first recognised in the observation of stars behind the Sun during the famous 1919 eclipse [4] and, later, in the radio observation of a distant quasar lying behind a foreground galaxy and forming multiple images [5]. Since then, light deflection has been observed from individual stars in our Galaxy to distant galaxy clusters [6; 7] and has become an integral part of the study of the Universe [8; 9].
In general, the weak-field approximation (i.e., the spacetime can be decomposed into a background and a small perturbation on this background created by the lens [10]) is sufficient to study conventional gravitational lensing by stars, galaxies, or galaxy clusters and explains all observed properties of the lensed images [11]. However, the weak-field approximation breaks down very close to a neutron star or a black hole, where light rays experience very strong gravitational fields. The existence of such objects (neutron stars and black holes) has been confirmed by different methods (for example, using x-ray binaries [12]; gravitational wave observations [13; 14]; astrometric microlensing [15]), so much so that we have even imaged the supermassive black holes at the centre of the nearby galaxy M87 [16; 17; 18; 19] and our own Galaxy [20; 21; 22].
In the strong gravitational field near a black hole, instead of using the conventional lens equation derived using the weak-field approximation, one needs to solve the geodesic equation to determine the path of light rays. The simplest case to study light propagation in a strong gravitational field is lensing by a Schwarzschild black hole [23] (also see [24]), a classic topic in the literature. An analytical solution for the deflection angle near a Schwarzschild black hole can be derived in terms of elliptic integrals [25; 26; 27]. [28] and [29] obtained an (approximate) gravitational lens equation applicable in the strong gravitational field near the Schwarzschild black hole and discussed the presence of the infinite sequence of (increasingly de-magnified) lensed images of a background source (also known as _relativistic images_). Using a different formalism, [30] derived an exact lens equation for the Schwarzschild black hole. Going beyond lensing by a Schwarzschild black hole, similar analyses have also been performed for Kerr(-Newman) and more exotic black holes [e.g., 31; 32; 33; 34; 35].
A very instructive way to describe gravitational lensing is thinking in terms of _wavefronts_ emitted from the source instead of individual light rays. The wavefront method was first used in gravitational lensing to estimate the time delay between multiple images for point mass and axially symmetric lenses to determine the Hubble constant [36; 37; 38]. For a given lens system (made of source, lens, and observer), a wavefront emitted from the source gets deformed and develops crossings as it crosses the lens and moves forward [e.g., 39]. A pedagogical introduction to wavefronts in gravitational lensing is presented in [40; 41]. The use of the wavefront method in strong gravitational fields was first discussed in [42] to construct the caustic structure near a Kerr black hole. Later, the wavefront method was used in [43; 44; 45] to further understand the light propagation near Kerr black holes and in other (more exotic) spacetimes [e.g., 46].
In the contemporary literature on lensing in the weak-field limit, the _time delay surface_ is a fundamental quantity which can be used to describe various properties of the lensed images, such as position, magnification, and parity [47]. For strong fields there is a general formulation of Fermat's principle [48], but the weak-field time-delay surface has not been generalized. There is, however, an elegant construction using wavefronts [40; 41], which can be adapted. In our current work, we use wavefronts to compute time delay surfaces near a Schwarzschild black hole for axially and non-axially symmetric cases. Since the images are essentially extremal points of the time delay surface, we can infer that even in a strong gravitational field the lensed images should be minima, maxima, or saddle points. However, the exact order in which these different types of images appear is not obvious. In addition, a Schwarzschild black hole is a singular lens. This can lead to tears in the time delay surface, similar to the point mass lens in the weak-field limit, and make the overall geometry of the time delay surface very complex near the black hole. Hence, an explicit construction of the time delay surface near a black hole is necessary to address the above issues.
The current work is organised as follows. In Section II, we briefly discuss the light propagation and wavefronts near a Schwarzschild black hole. In Section III, we construct the time delay surface for the axially symmetric case (i.e., when the source lies on the optical axis, a line joining the observer and lens) and discuss the parity of the lensed images near the Schwarzschild black hole. In Section IV, similarly, we construct the time delay surface and determine the parities for an off-axis source. Section V discusses the formation of infinitely delayed images in between different pairs of observed images. We conclude our work in Section VI. Throughout this work, we use the natural unit system (\(c=1\), \(G=1\)), unless mentioned otherwise.
## II Light paths past a Schwarzschild lens
The trajectories of photons passing near the Schwarzschild black hole are determined by the following equations,
\[\begin{split}\frac{dr}{dt}&=\left(1-\frac{2M}{r} \right)\sqrt{1-\frac{b^{3}}{b-2M}\frac{r-2M}{r^{3}}},\\ \frac{d\phi}{dt}&=\left(1-\frac{2M}{r}\right)\frac{1 }{r^{2}}\sqrt{\frac{b^{3}}{b-2M}},\end{split} \tag{1}\]
where \(r=b\) is the distance of closest approach and \((t,r,\phi)\) are the spacetime coordinates in the plane where the light travels (i.e., \(\theta=\pi/2\)). We refer readers to Appendix A describing a method to derive the above equations and Appendix B for various limiting cases of the above equations.
In our current work, to determine the path of light rays in the strong field and trace wavefronts originated from the source at \((t,r,\phi)=(0,R,\pi)\), we solve Equation (1) numerically while choosing a range of \(b\) values. To numerically solve the coupled differential equations (1), we use odeint from SciPy [49]. An example of light rays and wavefront propagation near the Schwarzschild black hole is shown in Figure 1. In each panel, the black dot represents the black hole and the dashed circle around it marks the photon sphere (\(r=3M\)). The source is represented by the green dot and light rays emitted from
Figure 1: Wavefront propagation near a Schwarzschild black hole lens. The black dot in each panel represents the black hole position. The black dashed circle around the black hole marks the photon sphere (\(r=3M\)). The green dot represents the source position and gray curves represent the light rays emitted from the source. The equal time surfaces (or wavefronts) for these rays are shown by the green curves. The left, middle, and right panels represent the wavefront for three different time values (in increasing order), respectively. In each panel, the gap in the middle of wavefront is corresponding to rays which fall inside the black hole.
the source are represented by the gray curves. The corresponding equal time surfaces (or wavefronts) for three different time values (in increasing order) are shown by green curves in the three panels form left to right. The break in the middle of the wavefront is corresponding to the light rays which fall inside the black hole (\(b<3M\)). From Figure 1, we can see that the rays passing closer to the black hole are deflected more strongly compared to far away rays. Due to the large deflection close to the black hole, we observe part of the wavefront going around the black hole corresponding to the rays which loop around the black hole.
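For concreteness, a minimal sketch of this integration (not the authors' exact code) is given below: it integrates the outgoing branch of Equation (1) with odeint for a family of \(b\) values, starting each ray just outside its distance of closest approach; sampling all rays at a common time gives one wavefront, as in Figure 1.

```python
import numpy as np
from scipy.integrate import odeint

M = 1.0  # natural units, G = c = 1

def rhs(state, t, b):
    r, phi = state
    f = 1.0 - 2.0*M/r
    B2 = b**3/(b - 2.0*M)                  # squared impact parameter
    arg = max(1.0 - B2*(r - 2.0*M)/r**3, 0.0)
    return [f*np.sqrt(arg), f*np.sqrt(B2)/r**2]   # Eq. (1)

ts = np.linspace(0.0, 40.0, 2001)
bs = np.linspace(3.1, 8.0, 6)              # rays grazing the photon sphere
rays = {b: odeint(rhs, [1.001*b, 0.0], ts, args=(b,)) for b in bs}

k = 1000                                   # snapshot index: time ts[k]
front = np.array([[rays[b][k, 0]*np.cos(rays[b][k, 1]),
                   rays[b][k, 0]*np.sin(rays[b][k, 1])] for b in bs])
```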
## III Axially Symmetric Case
In this section, we consider wavefronts for an axially symmetric configuration, i.e., the source, lens, and observer lie in a straight line (the optical axis).
To construct the time delay surface in our current work, we use the forward and backward propagating wavefronts emitted by the source and observer, respectively, as described in [40; 41]. Figure 2 depicts the basic idea of using the wavefronts to locate the lensed image positions and construct the time delay surface. The green, black, and red dots mark the positions of the source, black hole, and observer, respectively. We start by marking a forward propagating wavefront at a certain time, as shown by the green curve. After that, we track a backward propagating wavefront emitted from the observer (shown by the red curves at different times) and determine the time when it crosses the forward propagating wavefront. When the red and green wavefronts touch each other such that their normal vectors agree, the touching points correspond to the paths of light rays emitted from the source and observed by the observer. This is further highlighted by the gray curves in Figure 2.
To identify the crossing points of the forward and backward propagating wavefronts, we trace the individual rays corresponding to the backward wavefront, as shown by the red and gray curves in the left panel of Figure 3. Gray (red) curves mark the rays which do (do not) cross the forward propagating wavefront. Whether a ray crosses the forward propagating wavefront depends on the time at which we mark the forward propagating wavefront. In addition, the exact number of times a ray crosses the forward wavefront also depends on the temporal position of the forward wavefront. If the forward wavefront is yet to cross the black hole, we can get at most two crossings for a given ray. However, once it crosses the black hole, we can (in principle) have many crossings, since a ray can loop around the black hole many times. We stop the forward wavefront before it crosses the black hole and only need the time corresponding to the first crossing for each ray.
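A simple way to implement this crossing search is a segment-by-segment intersection test between each backward ray and the sampled forward wavefront. The sketch below is illustrative, with hypothetical array names: `ray` and `t_ray` hold a sampled backward ray and its times, `front` the sampled forward wavefront; it returns the interpolated time of the first crossing.

```python
import numpy as np

def seg_cross(p, q, a, b):
    # parameter s along segment pq where it meets segment ab, else None
    d1, d2, w = q - p, b - a, a - p
    den = d1[0]*d2[1] - d1[1]*d2[0]
    if abs(den) < 1e-14:
        return None                       # parallel segments
    s = (w[0]*d2[1] - w[1]*d2[0])/den
    u = (w[0]*d1[1] - w[1]*d1[0])/den
    return s if (0.0 <= s <= 1.0 and 0.0 <= u <= 1.0) else None

def first_crossing_time(ray, t_ray, front):
    for i in range(len(ray) - 1):
        for j in range(len(front) - 1):
            s = seg_cross(ray[i], ray[i+1], front[j], front[j+1])
            if s is not None:
                return t_ray[i] + s*(t_ray[i+1] - t_ray[i])
    return None                           # the ray misses the wavefront
```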
For the axially symmetric case, the time delay as a function of the emission direction \(\phi\) for a given ray is shown in the right panel of Figure 3 (assuming \(M=1\)). The green vertical lines mark the \(\phi\)-values corresponding to the photon sphere (\(r=3M\)). Any photon emitted at an angle smaller than this will fall inside the black hole. The black U-shaped curves show the slices (at \(y=0\)) of a 3D time-delay surface near the primary, secondary, and tertiary images as we go from small to large time delay values. Since the secondary/tertiary images are formed when the light rays do one/two loops around the black hole, they form very close to the photon sphere, as can be seen from the left panel as well. Due to the axial symmetry, in this case, we will observe (Einstein) ring formation for the primary/secondary/tertiary image and all of these images are minima.
This ring formation and the parity of images can be seen clearly in the 3D plot of the time delay surface, as shown in Figure 4. The \(x\) and \(y\) axes represent the spatial axes and the \(z\) axis shows the time delay values. The spatial axes are re-scaled such that \(\left(x,y\right)=\left(b-3\right)\left(x,y\right)\) to omit the \(r<3M\) region. The left, middle, and right panels show the time delay surface near the primary, secondary, and tertiary images, respectively. In each panel, the ring formation is obvious, as well as the corresponding type of images (a global minimum for the primary ring and local minima for the secondary and tertiary rings).
Figure 2: Wavefront propagation near a Schwarzschild black hole in the axially symmetric case. The green, black and red dots represent the position of source, lens, and observer, respectively. The green curves represent the wavefront emitted by the source. The red curves represent a wavefront emitted from the observer at different time instances. The lensed images correspond to the points on the red and green wavefronts where their normal vectors agree with each other. These points are essentially the points in the figure where the red and green wavefronts touch each other. The gray curves represent the corresponding rays emitted from the source that reach the observer.
From the right panel of Figure 3, we can see that the time delay surface near the primary, secondary, and tertiary rings is not joined together and there are gaps between them. This gap corresponds to rays that go behind the observer. A few such rays (in red) can be seen in the left panel of Figure 3. Since the deflection will be continuous as we move closer to the black hole, there will always be a part of the forward wavefront between different orders of images that will never reach the observer. We discuss this further in Section V.
## IV Off-axis case
The spherically symmetric nature of the Schwarzschild metric leads to the formation of rings when the source lies
Figure 4: Time delay surface near the primary/secondary/tertiary ring in the left/middle/right panel for the axially symmetric case. The x- and y-axis represent the spatial axes of the lens with respect to the observer (also equivalent to the observer sky) and the z-axis represents the time delay values (in units of \(GM/c^{3}\)). In each panel, the x- and y-axis are rescaled to remove the region inside the photon sphere.
Figure 3: Construction of the time delay surface near a Schwarzschild black hole for the axially symmetric case. _Left panel_: The green, black, and red dots at (-15, 0), (0,0), and (0,15) represent the position of source, Schwarzschild black hole, and observer, respectively. The black dashed line represents the photon sphere (\(r=3\)) around the black hole. The green curve represents the wavefront emitted by the source. The gray (red) curves mark the rays emitted from the observer which do (not) cross the wavefront. _Right panel_: Time delay (\(t_{d}\); in units of \(GM/c^{3}\)) as a function of angle of closest approach (\(\phi\); in degrees) with respect to the observer. The green vertical lines mark the angle corresponding to the photon sphere. The black curves represent the slices of the time delay surface near the primary, secondary, and tertiary images as we go from small to large time delays. For the time delay surface near the secondary and tertiary images, we show zoom-in plots since the images form very close to the photon sphere.
on the optical axis. However, once we move the source away from the optical axis, light rays emitted from the source and travelling on one side of the black hole will reach the observer earlier than those on the other side, and the ring formation will break into two separate images. An example of this is shown in Figure 5. Here, we move the source (shown by the green dot) to a negative \(y\) value. The forward (backward) propagating wavefront is shown in green (red). We plot multiple temporal positions of the backward wavefront. The blue wavefront also shows the backward propagating wavefront, at a larger time value than the red colored wavefronts. Since we moved the source to the negative \(y\) axis, the negative-\(y\) part of the forward wavefront (lower half) will reach the observer first, implying that the image at negative \(\phi\) values will be observed first by the observer. This can also be seen from the fact that the lower half of the green wavefront touches the last red wavefront whereas the upper half of the green wavefront touches the blue wavefront (which is drawn for a larger time). The light ray paths corresponding to the primary lensed images are shown by the gray curves. The breaking of the ring into two different images can be seen more clearly in the right panel of Figure 5, where we again plot the time delay (\(t_{d}\)) as a function of the angle of closest approach (\(\phi\)) for different rays: images formed at \(\phi<0\) have smaller time delay values compared to their counterparts at \(\phi>0\). Another obvious yet important observation is the fact that in each (primary/secondary/tertiary) pair of lensed images, the image arriving later forms closer to the black hole.
Here we can again ask for the parity of each of these images, but the \(\phi-t_{d}\) plot shown in the right panel is not sufficient to determine the parity of these images, since it only shows a 1D slice along the \(y\)-axis of the full 3D time delay surface. Hence, we construct 2D as well as 3D time delay surface plots near the primary and secondary images, as shown in Figure 6. The 2D contour plots for the primary and secondary images are shown in the bottom and top panels of the left column, respectively. The corresponding 3D plots are shown in the right column. From the left and right columns, we can unambiguously observe that the primary as well as the secondary pair of images consist of one minimum and one saddle point. Due to the spherical symmetry of the lens, we expect to see the same image types even for higher-order images. To plot the time delay surface, we again use the same scaling of axes, \(\left(x,y\right)=\left(b-3\right)\left(x,y\right)\), and omit the \(r<3M\) region. Similar to the axially symmetric case, we again observe a gap in the time delay surface between the primary and secondary images.
Figure 5: Wavefront propagation near a Schwarzschild black hole in the off-axis case. _Left panel:_ The green, black and red dots represent the position of source, lens, and observer, respectively. The green curves represent the wavefront emitted by the source. The red and blue curves represent a wavefront emitted from the observer at different time instances. The lensed images are corresponding to the points on red/blue and green wavefronts where their normal vector agree with each other. These points are essentially the points in the figure where red/blue and green wavefronts touch each other. The gray curves represent the corresponding rays emitted from the source and reach to the observer. _Right panel:_ Time delay (\(t_{d}\); in units of \(GM/c^{3}\)) as a function of angle of closest approach (\(\phi\); in degrees) with respect to the observer for off-axis case. The green vertical lines mark the angle corresponding to the photon sphere. The black curves represent the slices of time delay surface near the primary, secondary, and tertiary images as we go from small to large time delays. For time delay surface near secondary and tertiary images, we show zoom-in plots since the images form very close to the photon sphere.
## V The "Home" and "away" images
In both of the above cases (axially symmetric and off-axis), we observed gaps in the time delay surface between each order of lensed images, as seen from the left panels in Figures 3 and 5. As mentioned briefly in Section III, these gaps correspond to the part of the backward propagating wavefront that goes behind the observer and never crosses the forward wavefront. Or, from the forward wavefront perspective, the part of the wavefront that loops around the black hole and goes behind the source itself. Since the deflection angle is continuously increasing as we move closer to the black hole, there will always be a part of the forward (backward) propagating wavefront that will go behind the source (observer). We remark that in standard lensing theory, singular lenses such as a point mass, which do not explicitly invoke black holes, also have similar gaps in the time delay surface, which can be considered as infinitely time delayed maxima forming at the position of the point lens [e.g., 50], assuming that the time delay surface is continuous.
Near a Schwarzschild black hole, we can again ask the
Figure 6: Time delay surface near primary and secondary image pairs for the off-axis case. The left column represents the time delay contours in 2D whereas the right column represents the same plot in 3D. The bottom and top rows are corresponding to the primary and secondary image pairs, respectively. In each panel, the x- and y-axis are rescaled to remove the region inside photon sphere.
question: assuming that the time delay surface is continuous, do we expect additional images to form in these gaps between observed images? To answer this question, we need to determine the time delay surface topography near the black hole. A schematic surface is shown in Figure 7 for the off-axis case. The position of the black hole is shown by the gray shaded region inside the dashed circle. Here the types of the images are marked by "L", "S", and "H" for minima (or low), saddle points, and maxima (or high), respectively. The black markers show what we may call "home" image positions, i.e., images observed by the observer. On the left side of the black hole, we see the formation of two such minima, whereas on the right side we see the formation of two such saddle points. These correspond to the first two orders of images (primary and secondary) shown in Figure 6. In between these home minima (saddle points), we show the formation of an infinitely delayed saddle point (maximum), shown in red, which we call "away" images since they never reach the observer. Although here we only show the topography near the primary and secondary home images, we expect the same for higher-order images due to the spherical symmetry of the lens. The proposed images in the gaps are not observed, but the above topology makes the time delay surface continuous. In addition, in the above topology one (global) maximum will also form at the position of the black hole (\(r<3M\)). Doing so, in addition to making the time delay surface continuous, also satisfies the odd image theorem [51]. Furthermore, if we interchange the positions of the observer and the source, then the earlier home images become away images and the earlier away images become home images.
## VI Conclusions
The formation of multiple images through gravitational lensing is now commonplace in astronomy and has a well-developed theoretical formalism to interpret the observables. This formalism assumes weak gravitational fields, and consequently small deflections, which do not apply to the multiple images formed near a black hole, such as those observed near the M87 black hole. Can the existing formalism be generalized to these strong-field applications?
In this work we show that the key element of the weak-field formalism does generalize rather simply. This element is the abstract construction known variously as the time-delay surface, the arrival-time surface, or the Fermat potential. Lensed images form at the zero-gradient locations of the surface (maxima, minima, and saddle-points), and higher derivatives give various properties of images, such as the apparent handedness or parity. In the weak-field formalism, the time delay surface is conventionally given by the sum of two contributions, one geometrical and one gravitational, to the light travel time. In strong fields, it is not clear how to identify two such separate contributions. However, an alternative definition of the time-delay surface, as the difference between a forward and a backward wavefront, can be applied to any static spacetime. We compute the time-delay surface near a Schwarzschild black hole and study its properties.
Concretely, we use crossings of forward and backward propagating wavefronts from source and observer, respectively, to construct the time delay surface and determine the image types. In the axially symmetric case (having source, lens center, and observer on the same line), we observe ring formation corresponding to a minimal valley on the time delay surface for the primary (1st order), secondary (2nd order), and tertiary (3rd order) images. Moving the source away from the optical axis causes the ring to break into two separate images, one minimum and one saddle point. We again show this by constructing the time delay surface near the primary as well as secondary images. The pattern will continue, since near a Schwarzschild black hole there is an infinite sequence of images with continuously decreasing magnification factors (see Appendix C) as we move towards higher order in the sequence.
In between each ring (or pair of lensed images), we find sloping but infinitely high walls in the time delay surface. These walls are a result of the fact that near the black hole light rays can loop around the black hole and between each order of images there will be a certain fraction of rays emitted from the source that will never reach the observer and go behind the source itself. We can think
Figure 7: A possible time delay surface topology near the Schwarzschild black hole for the off-axis case. The position of the black hole is shown by the gray shaded region encircled by the dashed curve at the centre of the plot. “L”, “S”, and “H” denote the position of minima, saddle points, and maxima, respectively. The image positions in black mark the positions of observed images (or “home” images), while the image positions in red mark the infinitely delayed images (or “away” images) which we expect to form in gaps between the home images. Here we only show the first two orders of home images and one set of away images, but the same topology is expected even for the higher-order images.
of each of these walls as harboring one saddle and one maximum, both infinitely delayed and therefore not visible. We name these _away_ images, as distinct from the observable _home_ images. A final image, infinitely demagnified and infinitely delayed, will form within the photon sphere (\(r<3M\)). The odd-image theorem remains valid. To an observer on the other side of the black hole, _home_ and _away_ images get swapped, as do minima and maxima.
This work has been limited to a Schwarzschild black hole, for which we have mainly offered heuristic arguments from examining the numerical results on the time-delay surface. Formulating the image properties more precisely in terms of the surface is desirable, but it is not obvious how to proceed. The time-delay surface for a Kerr black hole, which would be more representative of the observations, would be interesting to compute, though significantly more complicated than the Schwarzschild case.
## Acknowledgements
The authors thank Jasjeet Singh Bagla, Rajaram Nityananda, Dominique Sluse, and Liliya Williams for useful comments. A.K.M. acknowledges support by grant 2020750 from the United States-Israel Binational Science Foundation (BSF) and grant 2109066 from the United States National Science Foundation (NSF), and by the Ministry of Science & Technology, Israel. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
This work utilises the following software packages: Python ([https://www.python.org/](https://www.python.org/)), NumPy [52], Matplotlib [53], and SciPy [49].
|
2309.14943 | Experimental Study of the Nematic Transition in Granular Spherocylinder
Packings under Tapping | Using x-ray tomography, we experimentally investigate the nematic transition
in granular spherocylinder packings induced by tapping. Upon the validation of
the Edwards ensemble framework in spherocylinders, we introduce an empirical
free energy that accounts for the influence of gravity and the mechanical
stability requirements specific to granular systems. This free energy can
predict not only the correct phase transition behavior of the system from a
disordered state to a nematic phase, but also a phase coexistence range and
nucleation energy barriers that agree with experimental observations. | Haitao Yu, Zhikun Zeng, Ye Yuan, Shuyang Zhang, Chengjie Xia, Yujie Wang | 2023-09-26T13:59:19Z | http://arxiv.org/abs/2309.14943v1 | # Experimental Study of the Nematic Transition in Granular Spherocylinder Packings under Tapping
###### Abstract
Using x-ray tomography, we experimentally investigate the nematic transition in granular spherocylinder packings induced by tapping. Upon the validation of the Edwards ensemble framework in spherocylinders, we introduce an empirical free energy that accounts for the influence of gravity and the mechanical stability requirements specific to granular systems. This free energy can predict not only the correct phase transition behavior of the system from a disordered state to a nematic phase, but also a phase coexistence range and nucleation energy barriers that agree with experimental observations.
The packing of hard particles is ubiquitous in nature and industrial processes [1,2], which can be traced back to Kepler's research on ball packing in 1611 [3]. Since then, the structure and phase behavior of sphere packings have been extensively studied. In recent decades, growing studies on packings of non-spherical particles have revealed appealing richness in shape-dependent packing properties and phase behaviors [4, 5, 6, 7, 8, 9, 10]. A classic example is the
isotropic-nematic transition observed in hard-rod systems owing to the competition between orientational and translational entropy, as explained by Onsager's excluded volume theory [11]. Subsequent numerical and theoretical studies have extensively investigated the equilibrium phase boundaries for hard-rod systems [12-15]. It is worth noting that most of these works have primarily focused on thermal systems, such as colloidal rods [16-19]. As for out-of-equilibrium granular systems, behaviors similar to those of their thermal counterparts have been observed. For example, long granular rods, under external agitation, tend to align and lead to an abrupt densification characterized by nematic-like and smectic-like orderings [20-29]. However, due to the intrinsic athermal nature of granular materials, it remains unclear whether their phase behaviors and associated mechanisms can be directly mapped onto thermal systems [30]. Previously, Galanis et al. [31] utilized a generalized thermal-equilibrium free-energy minimization approach to elucidate the phase separation transition in a highly-agitated two-dimensional mixture of granular rods and spheres. However, there is no straightforward application of the thermal-equilibrium statistical mechanics in the case of static granular packings. Instead, an analogous statistical mechanical framework has been proposed by Edwards and coworkers [32], whose validity has been successfully tested in spherical granular packings recently [33,34]. Using this framework, Ding et al. [35] explained a novel cubatic structural transformation in packings of granular cylinders with an aspect ratio close to one. Nevertheless, more research is needed to investigate the potential extension of this framework to other shapes and explore fundamental differences from thermal systems.
In this Letter, we employ x-ray tomography to investigate the compaction process of granular spherocylinder packings under tapping. Our results reveal a clear first-order nematic
phase transition in the system. To provide an understanding of this observed transition, we first verify the applicability of the Edwards ensemble to spherocylinders. Subsequently, we quantify the transition by monitoring particle-scale structural transformation during the transition, which allows us to obtain the excluded volume and orientational entropy associated with each spherocylinder. Based on the Edwards framework, we construct a phenomenological mean-field free energy incorporating new terms that account for the influence of gravity and the mechanical stability requirements specific to granular systems, which are absent in thermal systems. Based on this modified free energy, we can accurately predict a first-order phase transition with a coexistence range that aligns with experimental observations. This free energy also predicts nucleation energy barriers consistent with those obtained from the distribution of the nucleating clusters based on the classical nucleation theory.
The samples used in this study are 3D-printed (ProJet MJP 2500 Plus) plastic spherocylinders, which consist of a cylindrical body with hemispherical caps at both ends. The diameter of the spherocylinders is \(D\!=\!4\) mm, the length of the cylindrical part is \(L_{cylinder}=24\) mm, and thus the aspect ratio \(\alpha\!=\!\left(L_{cylinder}+D\right)\!/D\!=\!7\). The packings are prepared in a cylindrical plastic container with a diameter of 180 mm and a height of about 100 mm. To minimize boundary effects, we glue segments of spherocylinders with random orientations and lengths ranging from \(1D\) to \(6D\) on the inner surface of the container. Each packing consists of about 5300 spherocylinders. When preparing the packing, we insert a thinner cylindrical tube into the container and gently pour spherocylinders into the tube. We then slowly withdraw the tube vertically, allowing the spherocylinders to fill and settle in the container gently. Following this procedure, we can obtain a reproducible loose packing. Other different
packing densities can be realized by tapping the system with a mechanical shaker at tap intensities \(\Gamma=2g\sim 16g\), where \(g\) is the gravitational acceleration constant. Each tapping cycle consists of a 300 ms pulse followed by a 1.5 s interval. The system is tapped 1000\(\sim\)100,000 times, depending on \(\Gamma\). The evolution of the packing structures under tapping is obtained by a medical CT scanner (UEG Medical Group Ltd., 0.2 mm spatial resolution). Following similar image processing procedures as previous studies [36, 37], the centroid and orientation of each spherocylinder can be determined with uncertainties less than \(0.01D\) and \(0.1\) degrees, respectively. In the subsequent analysis, we only include particles located at least \(7D\) away from the container boundary.
To understand the structural changes and the associated phase transition process of the spherocylinder packing during tapping, we first examine the evolution of volume fraction \(\phi=\left<v_{p}\right>\left/\left<v\right>\right.\) under different \(\Gamma\) as a function of tapping number \(t\), where \(v_{p}\) and \(v\) are the respective volumes for the particles and their associated Voronoi cells [see Fig. 1(b)]. To characterize the orientational order of the packing, we employ the nematic order parameter \(s=\left<3\cos^{2}\theta-1\right>/2\) from liquid crystal theory, where \(\theta\) is the included angle between the particle orientation and the direction of gravity. Figures 1(b) and (c) show that all packings are initially in disordered states with \(\phi=47.6\%\pm 0.4\%\), defined as \(s<-0.2\), in which most of the spherocylinders are lying horizontally in random orientations [upper of Fig. 1(a)]. As tapping is applied at different intensities, the system evolves differently. For \(\Gamma\leq 6g\), the system experiences gradual compaction during tapping but remains in the disordered state throughout the experiment duration. For \(\Gamma\geq 7g\), the system can reach a new stable state where \(s\approx 0.5\), indicating the emergence of nematic ordering in the system [lower of Fig. 1(b)]. The associated
volume fraction \(\phi\) at the stable states decreases with the increase of tapping intensity. It is noteworthy that there is a sudden increase in both \(\phi\) and \(s\) during the nematic phase transition, a characteristic feature of a typical first-order phase transition. For \(\Gamma\approx 8g\), both the volume fraction and the order parameter abruptly increase with the further increase in tapping number, which demonstrates the appearance of smectic ordering in the system.
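For reference, both observables are straightforward to evaluate from the reconstructed particle data; the following sketch (with illustrative array names, not from the paper's analysis code) computes \(\phi\) and \(s\) given the particle volumes, Voronoi cell volumes, and unit orientation vectors.

```python
import numpy as np

def packing_fraction(v_particle, v_cell):
    # phi = <v_p> / <v>, with v the Voronoi cell volumes
    return np.mean(v_particle)/np.mean(v_cell)

def nematic_order(u, axis=(0.0, 0.0, 1.0)):
    # s = <(3 cos^2 theta - 1)/2>, theta measured from the gravity direction
    cos_theta = np.asarray(u) @ np.asarray(axis)
    return np.mean(3.0*cos_theta**2 - 1.0)/2.0
```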
In order to gain a quantitative understanding of the observed phenomena, we treat the tapped granular spherocylinder systems using the Edwards ensemble framework, similar to the case of granular sphere packings [33]. Specifically, we calculate the Voronoi cell volume variance var(\(v\)) at different volume fractions, as shown in Fig. 2(a), where \(v_{p}\) is set as unity for simplicity. We then fit this data using a quadratic polynomial [solid curve in Fig. 2(a)] to obtain an analytical expression, used to calculate the Edwards compactivity, which acts as an effective temperature for granular packings. Note that we only use var(\(v\)) of the disordered branch [solid symbols in Fig. 2(a)], which exhibits a smooth continuation of the system's low \(\phi\) behavior, since no clear one-to-one relationship between var(\(v\)) and \(\phi\) can be identified when \(\phi\) is about 0.52-0.6, where the coexistence of distinct phases occurs. The compactivity of any packing at a specific \(\phi\) is determined by the fluctuation method [33]:
\[\frac{1}{\chi(\phi)}=\int_{\phi_{\mathrm{RLP}}}^{\phi}\frac{d\psi}{\psi^{2}\,\mathrm{var} (v)}\,. \tag{1}\]
where \(\phi_{\mathrm{RLP}}\approx 0.46\) is the packing fraction of the random loose packing (RLP) state with an infinite \(\chi\). Similar to spherical particles, we also adopt an alternative histogram overlapping method to calculate the compactivity \(\chi\) [33], which can also verify the equal-probability assumption of the Edwards ensemble. According to this method, if the density of states sampled at different compactivity is identical for the same granular packing system, the logarithm of the
ratio between the volume distribution \(P(v)\) for packings at different \(\chi\) should linearly depend on \(v\). This phenomenon is indeed observed in our experiment, as shown in the inset of Fig. 2(b). Notably, the values of \(\chi\) calculated using the histogram overlapping method agree well with those obtained using the fluctuation method, as shown in Fig. 2(b), providing further support to the validity of the Edwards ensemble in non-spherical particle systems.
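In practice, the fluctuation method amounts to a one-dimensional quadrature; a minimal sketch (assuming arrays `phi_data` and `var_v_data` of measured volume fractions and Voronoi volume variances on the disordered branch):

```python
import numpy as np
from scipy.integrate import quad

coeffs = np.polyfit(phi_data, var_v_data, deg=2)   # quadratic fit of var(v)

def compactivity(phi, phi_rlp=0.46):
    # Eq. (1): 1/chi = int_{phi_RLP}^{phi} dpsi / (psi^2 var(psi))
    inv_chi, _ = quad(lambda psi: 1.0/(psi**2*np.polyval(coeffs, psi)),
                      phi_rlp, phi)
    return np.inf if inv_chi == 0.0 else 1.0/inv_chi
```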
Once the validity of the Edwards ensemble framework is verified, we can proceed to obtain the free energy of the system to gain deeper insights into the nematic phase transition. In our spherocylindrical system, the single-particle free energy \(f\) can be tentatively expressed to first order as [38]:
\[f(\chi)=W-\chi S_{\theta}\,, \tag{2}\]
where \(W\) is the packing volume, analogous to energy in a conventional thermal system, and \(S_{\theta}\) is the orientational entropy. Here we assume that the nematic phase transition is driven primarily by the variation in orientational entropy, given that the change in translational entropies across the transition is usually small and can be neglected [35].
To develop a model encompassing both the packing volume and orientational entropy, we need to first obtain the orientational distribution of all spherocylinders in the system. For simplicity, we employ a mean-field approximation assuming that each spherocylinder independently satisfies the same orientational distribution function (ODF) as follows: \(P(\theta)=k\exp(\sum\limits_{i=2}^{n}\lambda_{i}P_{i}\left(\cos\theta\right))\), where \(\lambda_{i}\) is the Lagrangian multiplier for the \(i\)th-order Legendre polynomial \(P_{i}\), and \(i\) takes even values from 2 to \(n\). We note that \(\left\langle P_{2}\right\rangle=\left\langle\frac{1}{2}\left(3\cos^{2}\theta- 1\right)\right\rangle\) is nothing but the order parameter \(s\) along the gravity direction. Through the utilization of the maximum-entropy method, we can approximate the full ODF by employing a limited set of
parameters [39]. Specifically, when we consider solely the first two terms, the ODF takes the following form of \(P(\theta)=k(\lambda_{2},\ \lambda_{4})\exp(\lambda_{2}P_{2}(\cos\theta)+\lambda_{4}P_{4}( \cos\theta))\), in which each pair \(\left(\lambda_{2},\ \lambda_{4}\right)\) defines a complete set of \(\left\langle P_{i}\right\rangle\) moments and thus a given shape of the ODF. Furthermore, an empirical formula \(\lambda_{4}=f(\lambda_{2})\) between \(\lambda_{2}\) and \(\lambda_{4}\) can be established by fitting the experimental data, allowing us to write the ODF as:
\[P(\theta)=k(\lambda_{2})\exp(\lambda_{2}P_{2}(\cos\theta)+f(\lambda_{2})P_{4}( \cos\theta)). \tag{3}\]
The calculated ODF reproduces the experimental orientational distribution rather satisfactorily, as shown in Fig. 3(a).
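A short sketch of Eq. (3) follows: normalize the ODF over the sphere and evaluate the order parameter \(s=\langle P_{2}\rangle\). The function `f_fit` stands in for the empirical fit \(\lambda_{4}=f(\lambda_{2})\), which is determined from the data; here it is a placeholder.

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.legendre import Legendre

P2, P4 = Legendre([0, 0, 1]), Legendre([0, 0, 0, 0, 1])
f_fit = lambda lam2: 0.0        # placeholder for the empirical lambda_4 fit

def odf(lam2):
    lam4 = f_fit(lam2)
    raw = lambda th: np.exp(lam2*P2(np.cos(th)) + lam4*P4(np.cos(th)))
    Z, _ = quad(lambda th: raw(th)*np.sin(th)*2*np.pi, 0.0, np.pi)
    return lambda th: raw(th)/Z  # normalized so that int P dOmega = 1

def order_parameter(lam2):
    p = odf(lam2)
    s, _ = quad(lambda th: P2(np.cos(th))*p(th)*np.sin(th)*2*np.pi,
                0.0, np.pi)
    return s
```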
Once the ODF is determined, we can calculate the excluded volume between two spherocylinders when they are in contact at an angle \(\gamma\), using Onsager's excluded volume theory:
\[V_{ext}(\gamma)=\left(4\pi/3\right)D^{3}+2\pi D^{2}L+2DL^{2}\sin\gamma. \tag{4}\]
Integration over the ODF yields the average excluded volume of the system, as shown in Fig. 3(b):
\[\left\langle V_{ext}\right\rangle=\int P(\theta_{1})P(\theta_{2})V_{ext}( \gamma_{12})\mathrm{d}\Omega_{1}\left(\theta_{1},\varphi_{1}\right)\mathrm{d} \Omega_{2}\left(\theta_{2},\varphi_{2}\right), \tag{5}\]
where \(\gamma_{12}\) is the contacting angle between two particles, and \(\mathrm{d}\Omega\) denotes the integration over the full solid angle. In this calculation, we assume that the orientations of the spherocylinders are uniformly distributed in the other spherical coordinate \(\varphi\). By adopting a simple random contact model [40], \(W\left(\lambda_{2}\right)\) can be obtained from the excluded volume as \(W=\dfrac{\left\langle V_{ext}\right\rangle}{2z}\), where \(z\) is the average contact number (see Supplemental Materials [41] for more details). Furthermore, the orientational entropy can be calculated as [upper red curve in Fig. 3(b)]:
\[S_{\theta}=-\int P(\theta)\log\left[4\pi P(\theta)\right]\sin\theta\, \mathrm{d}\theta\,\mathrm{d}\varphi\,, \tag{6}\]
where we have subtracted the entropy of the system when particles are uniformly distributed in orientation.
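Both Eq. (5) and Eq. (6) reduce to averages over orientations drawn from the ODF, so a Monte Carlo estimate is convenient; the sketch below (using the `odf` function from the previous sketch; `D` and `Lc` are the particle diameter and cylinder length) is one possible implementation.

```python
import numpy as np

def sample_thetas(p, n):
    # rejection sampling of theta with weight p(theta) sin(theta)
    grid = np.linspace(1e-3, np.pi - 1e-3, 500)
    pmax = np.max(p(grid)*np.sin(grid))
    out = np.empty(0)
    while out.size < n:
        th = np.random.uniform(0.0, np.pi, 4*n)
        keep = np.random.uniform(0.0, pmax, 4*n) < p(th)*np.sin(th)
        out = np.concatenate([out, th[keep]])
    return out[:n]

def mean_excluded_volume(p, D, Lc, n=100000):
    th1, th2 = sample_thetas(p, n), sample_thetas(p, n)
    dphi = np.random.uniform(0.0, 2*np.pi, n)  # phi uniform, as in Eq. (5)
    cg = np.cos(th1)*np.cos(th2) + np.sin(th1)*np.sin(th2)*np.cos(dphi)
    sing = np.sqrt(np.clip(1.0 - cg**2, 0.0, 1.0))   # sin(gamma_12)
    return np.mean(4*np.pi/3*D**3 + 2*np.pi*D**2*Lc + 2*D*Lc**2*sing)

def orientational_entropy(p, n=100000):
    th = sample_thetas(p, n)
    return -np.mean(np.log(4*np.pi*p(th)))   # Monte Carlo form of Eq. (6)
```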
It is important to note that for granular systems, the free energy \(f\) needs modification compared to its simplified thermal form of Eq. (2) due to the influence of gravity and mechanical stability constraints. Unlike thermal systems, granular systems with finite friction tend to have the spherocylinders lying horizontally over each other, as contacts oriented along other directions are less likely to maintain mechanical stability under gravity. This highlights a fundamental difference between the Edwards ensemble and the thermal ensemble, as the requirement of mechanical stability in the Edwards ensemble introduces varying statistical weights for different contact configurations. To account for the preference for contacts to be oriented along the gravity direction, we modify the free energy \(f\) by introducing a new term [lower red curve in Fig. 3(b)]:
\[S_{z}=S_{z0}\left\langle\left(\frac{\mathbf{u}_{i}\times\mathbf{u}_{j}}{\left| \mathbf{u}_{i}\times\mathbf{u}_{j}\right|}\right)_{z}^{2}\right\rangle, \tag{7}\]
where the contact direction is calculated by the normalized cross product of the orientations of the two contacting particles, \(\mathbf{u}_{i}\times\mathbf{u}_{j}\), \(S_{z0}=2\) is an empirical fitting parameter and the subscript \(z\) denotes the component along the vertical direction. Additionally, it is observed that when nematic order emerges in the system, the order tends to align with the gravity direction, indicating a coupling of the order parameter with the gravity field \(hs\), where \(h=0.27\) is another fitting parameter characterizing the coupling strength. The modified free energy now becomes:
\[f(\chi,s)=W(s)-\chi\left(S_{\theta}(s)+S_{z}(s)\right)+hs\,. \tag{8}\]
It turns out that the phase transition behavior of this empirical free energy \(f\), as the system undergoes the transition from the disordered phase to the nematic phase, is in quantitative agreement with
the experimental results [Fig. 3(d)]. Specifically, as shown in Fig. 3(c), when \(\phi\) is small, there exists a single minimum at \(s\approx-0.5\) in the free energy, corresponding to the disordered phase where most particles lie close to the horizontal plane. Around \(\phi\approx 0.52\), two minima appear in the free energy, corresponding to the emergence of the ordered nematic phase. The coexistence of these two local minima persists until \(\phi\approx 0.6\), where the disordered phase becomes unstable, nicely matching the experimentally observed coexistence range.
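Putting the pieces together, the double-well structure can be detected by scanning \(f(\chi,s)\) on a grid of \(s\); in the sketch below, `W_of_s`, `S_theta_of_s` and `S_z_of_s` are assumed to be tabulated beforehand (e.g., from the Monte Carlo sketches above, after mapping \(s\) back to \(\lambda_{2}\)), and \(h=0.27\).

```python
import numpy as np

s_grid = np.linspace(-0.5, 1.0, 301)

def local_minima(chi, h=0.27):
    # Eq. (8) evaluated on the grid of order-parameter values
    f = W_of_s(s_grid) - chi*(S_theta_of_s(s_grid) + S_z_of_s(s_grid)) \
        + h*s_grid
    is_min = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])
    return s_grid[1:-1][is_min]  # one minimum: pure phase; two: coexistence
```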
To investigate the nucleation process associated with the first-order phase transition, we examine the evolution of ordered nuclei or clusters in the system with different tapping intensities. The criteria for two particles to belong to the same cluster involve evaluating their distance and orientation. Specifically, for two spherocylinders \(i\) and \(j\), it is required that their surface distance \(d_{ij}<0.5D\), and their orientation vectors satisfy \(\left|\mathbf{u}_{i}\cdot\mathbf{u}_{j}\right|>0.995\)[43]. In the inset of Fig. 4(a), we show the size distribution of clusters \(P(n)\) at different \(\chi\). If the formation of the nuclei is thermally driven based on the Edwards statistics, \(P(n)\) at different \(\chi\) should be determined by \(P(n)=P_{0}\exp\left(-\Delta G(n)/\chi\right)\), with a normalization factor \(P_{0}\), from which we can infer the free energy \(\Delta G(n)\) associated with a cluster of size \(n\) [symbols in Fig. 4(a)]. According to the classical nucleation theory (CNT) [44], this free energy \(\Delta G(n)\) should consist of at least a bulk and a surface tension term:
\[\Delta G=An(\chi-\chi_{0})+Bn^{2/3}\,. \tag{9}\]
By employing a single set of fitting parameters \(A\) and \(B\), \(\Delta G\) of systems with different \(\chi\) can be reasonably fitted [curves in Fig. 4(a)]. Furthermore, the bulk term is given by \(An(\chi-\chi_{0})=n\Delta G_{b}\), where \(\Delta G_{b}\) represents the change in free energy per particle at a supercooling of \(\left(\chi-\chi_{0}\right)\). As shown in Fig. 4(c), \(\Delta G_{b}\) decreases as compactivity \(\chi\) decreases and
becomes zero around \(\phi\approx 0.55\). Alternatively, \(\Delta G_{b}\) can also be determined from the free energy \(f\), calculated as the difference between the two minimum values corresponding to the disordered state and the ordered nematic phase. It turns out that \(\Delta G_{b}\) obtained from the two approaches are in nice agreement, indicating the validity of CNT even in the athermal granular systems.
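The inference of \(\Delta G(n)\) from the measured cluster statistics can be sketched as follows: invert \(P(n)=P_{0}\exp\left(-\Delta G(n)/\chi\right)\) to get \(\Delta G(n)=-\chi\log\left[P(n)/P_{0}\right]\), then fit the CNT form of Eq. (9) with a single shared pair \((A,B)\) across compactivities. The data, \(\chi\) values, and \(\chi_{0}\) below are synthetic placeholders standing in for the measurements of Fig. 4.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit the CNT free energy of Eq. (9), dG = A n (chi - chi0) + B n^(2/3),
# to dG(n) values inferred from cluster-size distributions at several chi.
# In practice dG(n) = -chi * log(P(n)/P0); here synthetic dG data stand in.

chi0 = 0.05  # placeholder reference compactivity

def cnt(X, A, B):
    n, chi = X
    return A * n * (chi - chi0) + B * n ** (2.0 / 3.0)

rng = np.random.default_rng(1)
n_obs = np.tile(np.arange(1.0, 40.0), 2)        # cluster sizes
chi_obs = np.repeat([0.030, 0.040], 39)          # two tapping intensities
dG_obs = cnt((n_obs, chi_obs), 1.0, 0.8) + 0.05 * rng.normal(size=n_obs.size)

(A_fit, B_fit), _ = curve_fit(cnt, (n_obs, chi_obs), dG_obs, p0=(0.5, 0.5))
print(f"A = {A_fit:.3f}, B = {B_fit:.3f}")
# Per-particle bulk term: dG_b = A (chi - chi0); the barrier top
# (critical nucleus size) follows from d(dG)/dn = 0.
```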
In summary, using x-ray tomography, we carry out an experimental study of the nematic phase transition in tapped spherocylinder systems. We first verify the applicability of the Edwards ensemble framework to our spherocylindrical system. Subsequently, within this framework, we characterize the first-order nematic phase transition and the associated nucleation process in the system. The experimental results are consistent with the prediction of an empirical free energy that we have introduced. This empirical free energy incorporates terms that account for gravity coupling and the mechanical stability requirements, which are unique to granular materials. Our findings suggest that the phase transition concepts developed in thermal systems can also be generalized to granular materials after appropriate modifications, despite their athermal nature. These results significantly contribute to our understanding of the fundamental principles governing phase transitions in granular systems.
The work is supported by the National Natural Science Foundation of China (No. 12274292 and 11974240), and the Science and Technology Innovation Foundation of Shanghai Jiao Tong University (No. 21X010200829).
|
2309.04588 | Distributed Optimization via Gradient Descent with Event-Triggered
Zooming over Quantized Communication | In this paper, we study unconstrained distributed optimization problems with
strongly convex cost functions, in which the exchange of information in the network is
captured by a directed graph topology over digital channels that have limited
capacity (and hence information should be quantized). Distributed methods in
which nodes use quantized communication yield a solution at the proximity of
the optimal solution, hence reaching an error floor that depends on the
quantization level used; the finer the quantization the lower the error floor.
However, it is not possible to determine in advance the optimal quantization
level that ensures specific performance guarantees (such as achieving an error
floor below a predefined threshold). Choosing a very small quantization level
that would guarantee the desired performance requires information packets of
very large size, which is not desirable (could increase the probability of
packet losses, increase delays, etc) and often not feasible due to the limited
capacity of the channels available. In order to obtain a
communication-efficient distributed solution and a sufficiently close proximity
to the optimal solution, we propose a quantized distributed optimization
algorithm that converges in a finite number of steps and is able to adjust the
quantization level accordingly. The proposed solution uses a finite-time
distributed optimization protocol to find a solution to the problem for a given
quantization level in a finite number of steps and keeps refining the
quantization level until the difference in the solution between two successive
solutions with different quantization levels is below a certain pre-specified
threshold. | Apostolos I. Rikos, Wei Jiang, Themistoklis Charalambous, Karl H. Johansson | 2023-09-08T20:33:24Z | http://arxiv.org/abs/2309.04588v1 | # Distributed Optimization via Gradient Descent with
###### Abstract
In this paper, we study unconstrained distributed optimization strongly convex problems, in which the exchange of information in the network is captured by a directed graph topology over digital channels that have limited capacity (and hence information should be quantized). Distributed methods in which nodes use quantized communication yield a solution at the proximity of the optimal solution, hence reaching an error floor that depends on the quantization level used; the finer the quantization the lower the error floor. However, it is not possible to determine in advance the optimal quantization level that ensures specific performance guarantees (such as achieving an error floor below a predefined threshold). Choosing a very small quantization level that would guarantee the desired performance, requires information packets of very large size, which is not desirable (could increase the probability of packet losses, increase delays, etc) and often not feasible due to the limited capacity of the channels available. In order to obtain a communication-efficient distributed solution and a sufficiently close proximity to the optimal solution, we propose a quantized distributed optimization algorithm that converges in a finite number of steps and is able to adjust the quantization level accordingly. The proposed solution uses a finite-time distributed optimization protocol to find a solution to the problem for a given quantization level in a finite number of steps and keeps refining the quantization level until the difference in the solution between two successive solutions with different quantization levels is below a certain pre-specified threshold. Therefore, the proposed algorithm progressively refines the quantization level, thus eventually achieving low error floor with a reduced communication burden. The performance gains of the proposed algorithm are demonstrated via illustrative examples.
## I Introduction
The problem of distributed optimization has become increasingly important in recent years due to the rise of large-scale machine learning [1], control [2], and other data-driven applications [3] that involve massive amounts of data.
Most distributed optimization algorithms in the current literature assume that nodes exchange real-valued messages of infinite precision [4, 5, 6, 7, 8]. In distributed computing settings, nodes typically communicate with each other over a network that has limited communication bandwidth and latency. This means that exchanging messages with infinite precision can be impractical or even impossible. More specifically, the assumption of infinite-capacity communication channels is unrealistic because it requires the ability to transmit an infinite number of bits per second. Additionally, most distributed algorithms assume the transmission of rational numbers, which, however, is only possible over infinite-capacity communication channels.
In order to alleviate the aforementioned limiting assumption, researchers have focused on the scenario where nodes exchange quantized1 messages [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. This may lead to a solution in the proximity of the optimal solution, at a distance that depends on the utilized quantization level. However, most of the proposed works mainly quantize the values of an asymptotic coordination algorithm. As a consequence, they are only able to exhibit asymptotic convergence to a solution in the proximity of the optimal solution.
Footnote 1: Quantization is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set. In quantization, nodes compress (i.e., quantize) their value (of their state or any other stored information), so that they can represent it with a few bits and then transmit it through the channel.
A recent work [22] proposed a finite-time communication-efficient algorithm for distributed optimization. However, it is not obvious how coarse/fine the quantization should be. If it is too coarse, the optimization may reach an error floor that is considerably large and hence unacceptable (for the considered application). If it is too fine, then larger packets are needed for communication (which means that the overall system may experience delays, more packet losses, etc). Since the exact solution is not known _a priori_ though, it is not possible to know whether the quantization level chosen is sufficient.
**Main Contributions.** In this paper, we present a novel distributed optimization algorithm aimed at addressing the challenge of quantization level tuning. Our proposed algorithm extends the quantized distributed optimization method in [22] (which converges to an approximate solution within a finite number of iterations). Our key contribution is a strategy that dynamically adjusts the quantization level based on the comparison of error floors resulting from different quantization levels. The proposed strategy allows us to assess the satisfaction of the obtained solution, even in the absence of knowledge about the optimal solution. Our key contributions are the following.
**A.** We present a distributed optimization algorithm that leverages gradient descent and fosters efficient communication among nodes through the use of quantized messages; see Algorithm 1. Our algorithm operates by comparing solutions obtained with different quantization levels. If the difference between these solutions exceeds a predefined threshold, we continue to refine the quantization level; otherwise, we terminate the operation; see for example Fig. 1. While we cannot directly enforce the exact desired accuracy, our algorithm can attain a desired level of accuracy through the selection of an appropriate threshold. For example, by setting the threshold in the order of \(10^{-7}\), we can guarantee an error floor as low as \(10^{-6}\). Remarkably, with each iteration of the optimization process, the quantization granularity becomes finer, and the initial conditions approach the vicinity of the optimal state. This behavior resembles a distributed zooming process over the optimization region.
**B.** We validate the performance of our proposed algorithm through illustrative examples, demonstrating its effectiveness in terms of communication efficiency and the computation of optimal solutions; see Section V. The achieved improvement in communication efficiency is substantial and holds practical significance; see Remark 3.
## II Notation and Preliminaries
**Notation.** The sets of real, rational, integer and natural numbers are denoted by \(\mathds{R},\mathds{Q},\mathds{Z}\) and \(\mathds{N}\), respectively. The symbol \(\mathds{Z}_{\geq 0}\) denotes the set of nonnegative integer numbers. The symbol \(\mathds{R}_{\geq 0}\) denotes the set of nonnegative real numbers. The symbol \(\mathds{R}_{\geq 0}^{n}\) denotes the nonnegative orthant of the \(n\)-dimensional real space \(\mathds{R}^{n}\). Matrices are denoted with capital letters (e.g., \(A\)), and vectors with small letters (e.g., \(x\)). The transpose of matrix \(A\) and vector \(x\) are denoted as \(A^{\top}\), \(x^{\top}\), respectively. For any real number \(a\in\mathds{R}\), the floor \(\lfloor a\rfloor\) denotes the greatest integer less than or equal to \(a\) while the ceiling \(\lceil a\rceil\) denotes the least integer greater than or equal to \(a\). For any matrix \(A\in\mathds{R}^{n\times n}\), the \(a_{ij}\) denotes the entry in row \(i\) and column \(j\). By \(\mathds{1}\), we denote the all-ones vector and by \(\mathds{I}\) the identity matrix of appropriate dimensions. By \(\|\cdot\|\), we denote the Euclidean norm of a vector.
**Graph Theory.** The communication network is captured by a directed graph (digraph) defined as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). This digraph consists of \(n\) (\(n\geq 2\)) nodes communicating only with their immediate neighbors, and is static (i.e., it does not change over time). In \(\mathcal{G}\), the set of nodes is denoted as \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\), and the set of edges as \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\setminus\{(v_{i},v_{i})\ |\ v_{i}\in\mathcal{V}\}\) (note that self-edges are excluded). The cardinalities of the sets of nodes and edges are denoted as \(|\mathcal{V}|=n\), \(|\mathcal{E}|=m\), respectively. A directed edge from node \(v_{i}\) to node \(v_{l}\) is denoted by \((v_{l},v_{i})\in\mathcal{E}\), and captures the fact that node \(v_{l}\) can receive information from node \(v_{i}\) (but not the other way around). The subset of nodes that can directly transmit information to node \(v_{i}\) is called the set of in-neighbors of \(v_{i}\) and is represented by \(\mathcal{N}_{i}^{-}=\{v_{j}\in\mathcal{V}\ |\ (v_{i},v_{j})\in\mathcal{E}\}\). The subset of nodes that can directly receive information from node \(v_{i}\) is called the set of out-neighbors of \(v_{i}\) and is represented by \(\mathcal{N}_{i}^{+}=\{v_{l}\in\mathcal{V}\ |\ (v_{l},v_{i})\in\mathcal{E}\}\). The _in-degree_ and _out-degree_ of \(v_{i}\) are denoted by \(\mathcal{D}_{i}^{-}=|\mathcal{N}_{i}^{-}|\), \(\mathcal{D}_{i}^{+}=|\mathcal{N}_{i}^{+}|\), respectively. The diameter \(D\) of a digraph is the longest shortest path between any two nodes \(v_{l},v_{i}\in\mathcal{V}\). A directed _path_ from \(v_{i}\) to \(v_{l}\) of length \(t\) exists if we can find a sequence of nodes \(i\equiv l_{0},l_{1},\ldots,l_{t}\equiv l\) such that \((l_{\tau+1},l_{\tau})\in\mathcal{E}\) for \(\tau=0,1,\ldots,t-1\). A digraph is _strongly connected_ if there exists a directed path from every node \(v_{i}\) to every node \(v_{l}\), for every \(v_{i},v_{l}\in\mathcal{V}\).
**Node Operation.** Each node \(v_{i}\in\mathcal{V}\) executes a distributed optimization algorithm and a distributed coordination algorithm. For the optimization algorithm (see Algorithm 1 (GraDeZoQuC) below) at each time step \(k\), each node \(v_{i}\) maintains
* its local estimate variable \(x_{i}^{[k]}\in\mathds{Q}\) (used to calculate the optimal solution),
* \(\gamma_{\beta}\) which is the time step during which nodes have converged to a neighborhood of the optimal solution,
* the set \(S_{i}\) which is used to store the \(\gamma_{\beta}\),
* the variable \(\text{ind}_{i}\) (used as an indicator of the length of the set \(S_{i}\)),
* the variable \(\text{flag}_{i}\) (used to decide whether to terminate the optimization algorithm operation).
For the coordination algorithm (Algorithm 2 (FiTQuAC) below) at each time step \(k\), each node \(v_{i}\) maintains
* the stopping variables \(M_{i}\), \(m_{i}\in\mathds{N}\) (used to determine whether convergence has been achieved), and
* the variables \(y_{i}\in\mathds{Q}\), \(c_{i}^{y},c_{i}^{z}\in\mathds{Z}\), and \(z_{i}\in\mathds{Q}\), (used to communicate with other nodes by either transmitting or receiving messages).
**Asymmetric Quantizers.** Quantization is a strategy that lessens the number of bits needed to represent information. It is used to compress data before transmission, thus reducing the amount of bandwidth required to transmit messages, and increasing power and computation efficiency. Quantization is mainly used to describe communication constraints and imperfect information exchanges between nodes such as in wireless communication systems, distributed control systems, and sensor networks. The three main types of quantizers are (i) asymmetric, (ii) uniform, and (iii) logarithmic [23]. In this paper we rely on asymmetric quantizers in order to reduce the required communication bandwidth (but our results can also be extended to logarithmic and uniform quantizers). Asymmetric quantizers are defined as
\[q_{\Delta}^{a}(\xi)=\Big{\lfloor}\frac{\xi}{\Delta}\Big{\rfloor}, \tag{1}\]
where \(\Delta\in\mathds{Q}\) is the quantization level, \(\xi\in\mathds{R}\) is the value to be quantized, and \(q_{\Delta}^{a}(\xi)\in\mathds{Q}\) is the quantized version of \(\xi\) with quantization level \(\Delta\) (note that the superscript "\(a\)" indicates that the quantizer is asymmetric).
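For illustration, here is a minimal sketch of the quantizer in Eq. (1) together with the bit-count rule used later in Section V (\(\lceil\log_{2}(a)\rceil\) bits to transmit a positive integer \(a\)); the values of \(\xi\) and \(\Delta\) are arbitrary examples, and the final \(\Delta\)-scaling mirrors the scaling step of Algorithm 2 described below.

```python
import math

# Minimal sketch of the asymmetric quantizer of Eq. (1): q(xi) = floor(xi/Delta).
# Nodes exchange the resulting integer and scale by Delta when needed, so a
# transmitted value a costs ceil(log2(a)) bits (as counted in Section V).

def quantize_asym(xi: float, Delta: float) -> int:
    return math.floor(xi / Delta)

def bits_for(value: int) -> int:
    return math.ceil(math.log2(value))

xi = 3.14159
for Delta in (1e-3, 1e-4, 1e-5):             # successive refinements Delta/c_r
    q = quantize_asym(xi, Delta)
    print(Delta, q, Delta * q, bits_for(q))  # finer Delta -> more bits
```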
The \(\max\)-consensus algorithm converges to the maximum value among all nodes in a finite number of steps \(s_{m}\leq D\), where \(D\) is the network diameter (see, [24, Theorem 5.4]). Similar results hold for the \(\min\)-consensus algorithm.
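Since the max-consensus primitive is used below (Iteration step 3d of Algorithm 1), here is a minimal sketch of it; the 4-node directed ring and the node values are arbitrary examples.

```python
# Max-consensus sketch: each node repeatedly replaces its value by the
# maximum over itself and its in-neighbors; over a strongly connected
# digraph every node holds the global maximum after at most D steps.

in_neighbors = {0: [3], 1: [0], 2: [1], 3: [2]}   # directed ring, D = 3
x = {0: 5.0, 1: 2.0, 2: 9.0, 3: 7.0}

for _ in range(3):  # D steps
    x = {i: max([x[i]] + [x[j] for j in in_neighbors[i]]) for i in x}
print(x)  # every node now holds 9.0
```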
## III Problem Formulation
Let us consider a distributed network modeled as a digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(n=|\mathcal{V}|\) nodes. We assume that each node \(v_{i}\) is endowed with a local cost function \(f_{i}(x):\mathds{R}^{p}\mapsto\mathds{R}\) only known to itself, and communication channels among nodes have limited capacity and as a result the exact states cannot be communicated if they are irrational. In other words, only quantized values can be transmitted/communicated and thus \(x\) can take values that can be expressed as rational numbers.
In this paper we aim to develop a distributed algorithm which allows nodes, despite the communication limitations, to cooperatively solve approximately the following optimization problem, herein called **P1**:
\[\min_{x\in\mathcal{X}}\ F(x_{1},x_{2},...,x_{n})\equiv\sum_{i=1}^{n}f_{i}(x_{i}), \tag{2a}\]

s.t.

\[x_{i}=x_{j},\ \forall v_{i},v_{j}\in\mathcal{V}, \tag{2b}\]

\[x_{i}^{[0]}\in\mathcal{X}\subset\mathds{Q}_{\geq 0},\ \forall v_{i}\in\mathcal{V}, \tag{2c}\]

nodes communicate with quantized values, \((2d)\)

\[\text{if }\|f_{i}(x_{i}^{[\gamma_{\beta}-1]})-f_{i}(x_{i}^{[\gamma_{\beta}]})\|\leq\varepsilon_{s},\ \forall v_{i}\in\mathcal{V},\text{ for a given }\varepsilon_{s}>0,\text{ then terminate operation}, \tag{2e}\]
where \(\beta\in\mathds{N}\), \(\gamma_{\beta}\) is the optimization convergence point for which we have \(f_{i}(x_{i}^{[1+\gamma_{\beta}]})=f_{i}(x_{i}^{[\gamma_{\beta}]})\ \forall v_{i}\in \mathcal{V}\), \(\mathcal{X}\) is the set of feasible values of parameter \(x\), and \(x^{*}\) is the optimal solution of the optimization problem. Eq. (2a) means that we aim to minimize the global cost function which is defined as the sum of the local cost functions in the network. Eq. (2b) means that nodes need to calculate equal optimal solutions. Eq. (2c) means that the initial estimations of nodes belong in a common set. Note that it is not necessary for the initial values of nodes to be rational numbers, i.e., \(x_{i}^{[0]}\in\mathcal{X}\subset\mathds{Q}_{\geq 0}\). However, nodes can generate a quantized version of their initial states by utilizing the Asymmetric Quantizer presented in Section II. Eq. (2d) means that nodes are transmitting and receiving quantized values with their neighbors since communication channels among nodes have limited bandwidth. Eq. (2e) means that nodes are tracking the improvement of their local cost function between two consecutive convergence points \(\gamma_{\beta+1}\) and \(\gamma_{\beta}\). If the improvement of the local cost function of every node is less than a predefined threshold \(\varepsilon_{s}\), then they decide to stop their operation in a distributed way.
**Remark 1**.: _It will be shown later that our algorithm converges to a neighborhood of the optimal solution due to the quantized communication between nodes (see (2d)). Therefore, with \(\gamma_{\beta}\) we denote the time step for which all nodes have converged to this neighborhood (i.e., it is the optimization convergence point), and for this reason \(f_{i}(x_{i}^{[t+\gamma_{\beta}]})=f_{i}(x_{i}^{[\gamma_{\beta}]}),\forall v_ {i}\in\mathcal{V}\)._
## IV Distributed Optimization with Zooming over Quantized Communication
In this section, we present a distributed algorithm which solves problem **P1** described in Section III. Before presenting the operation of our proposed algorithm, we make the following assumptions which are necessary for the development of our results.
**Assumption 1**.: _The communication network (described as a digraph) \(\mathcal{G}\) is strongly connected._
**Assumption 2**.: _For every node \(v_{i}\), the local cost function \(f_{i}(x)\) is smooth and strongly convex. This means that for every node \(v_{i}\), for every \(x_{1},x_{2}\in\mathcal{X}\),_
* _there exists positive constant_ \(L_{i}\) _such that_ \[\|\nabla f_{i}(x_{1})-\nabla f_{i}(x_{2})\|_{2}\leq L_{i}\|x_{1}-x_{2}\|_{2},\] (3)
* _there exists positive constant_ \(\mu_{i}\) _such that_ \[f_{i}(x_{2})\geq f_{i}(x_{1})+\nabla f_{i}(x_{1})^{\top}(x_{2}-x_{1})+\frac{ \mu_{i}}{2}\|x_{2}-x_{1}\|_{2}^{2}.\] (4)
_This means that the Lipschitz-continuity and strong-convexity constants of the global cost function \(F\) (see (2a)) are \(L\) and \(\mu\), defined as \(L=\max\{L_{i}\}\) and \(\mu=\min\{\mu_{i}\}\)._
**Assumption 3**.: _The diameter \(D\) (or an upper bound) is known to every node \(v_{i}\) in the network._
Assumption 1 is a necessary condition so that information from each node can reach every other node in the network, thus all nodes to be able to calculate the optimal solution \(x^{*}\) of \(P1\). Assumption 2 is the Lipschitz-continuity condition in (3), and strong-convexity condition in (4). Lipschitz-continuity is a standard assumption in distributed first-order optimization problems (see [25, 26]) and guarantees (i) the existence of the solution \(x^{*}\), and (ii) that nodes are able to calculate the global optimal minimizer \(x^{*}\) for (2a). Strong-convexity is useful for guaranteeing (i) linear convergence rate, and (ii) that the global function \(F\) has no more than one global minimum. Assumption 3 allows each node \(v_{i}\in\mathcal{V}\) to determine whether calculation of a solution \(x_{i}\) that fulfills (2b) has been achieved in a distributed manner.
The intuition of Algorithm 1 (GraDeZoQuC) is the following.
_Initialization._ Each node \(v_{i}\) maintains an estimate of the optimal solution \(x_{i}^{[0]}\), the desired quantization level \(\Delta\), and the refinement constant \(c_{r}\) which is used to refine the quantization level. Quantization level (i) is the same for every node, (ii) allows quantized communication between nodes, and (iii) determines the desired level of precision of the solution. Additionally, each node initializes a set \(S_{i}\). This set serves as a repository for storing the time steps during which nodes have collectively calculated the neighborhood of the optimal solution according to the utilized quantization level \(\Delta\). More specifically, Algorithm 1 converges to a neighborhood of the optimal solution due to the quantized communication between nodes. Each node \(v_{i}\) stores in \(S_{i}\) the optimization time step during which this neighborhood has been reached.
_Iteration._ At each time step \(k\), each node \(v_{i}\):
* Updates the estimate of the optimal solution \(x_{i}^{[k+\frac{1}{2}]}\) by performing a gradient descent step towards the negative direction of the node's gradient; see Iteration step \(1\).
* Utilizes Algorithm 2 (FiTQuAC); see Iteration step \(2\). Algorithm 2 (details of its operation are presented below) allows each node to fulfill (2d), and to calculate in finite time an estimate of the optimal solution \(x_{i}^{[k+1]}\) that fulfills (2b).
* Checks if the calculated estimate of the optimal solution \(x_{i}^{[k+1]}\) is the same as the previous optimization step \(x_{i}^{[k]}\); see Iteration step \(3\).
* If the above condition holds, then nodes have reached a neighborhood of the optimal solution which depends on the utilized quantization level (i.e., they reached the optimization convergence point for the current quantization level). In this case, node \(v_{i}\) stores the corresponding time step \(\gamma_{\beta}=k\) at the set \(S_{i}\); see Iteration steps \(3a\), \(3b\).
* Checks if the difference between the value of its local function at the current optimization convergence point \(f_{i}(x_{i}^{[\gamma_{\beta}]})\) and the value of its local function at the previous optimization convergence point \(f_{i}(x_{i}^{[\gamma_{\beta}-1]})\) is less than a given threshold \(\varepsilon_{s}\); see Iteration step \(3c\).
* If the above condition holds, it sets its voting variable equal to \(0\) (otherwise it sets it to \(1\)). Then nodes are performing a max-Consensus protocol to decide whether they will continue the operation of Algorithm 1; see Iteration step \(3d\). The main idea for executing max-Consensus is that if every node finds that the difference between \(f_{i}(x_{i}^{[\gamma_{\beta}]})\) and \(f_{i}(x_{i}^{[\gamma_{\beta-1}]})\) is less than \(\varepsilon_{s}\) (signaling convergence) then nodes opt to halt their operation.
* After executing max-Consensus, if at least one node detects that the difference exceeds \(\varepsilon_{s}\) (indicating a lack of convergence) then nodes utilize the refinement constant \(c_{r}\) to adjust the quantization level and repeat the algorithm's operation accordingly, otherwise the operation is terminated; see Iteration step \(3e\).
Algorithm 2 (FiTQuAC) allows each node to calculate the quantized average of all nodes' estimates in finite time by processing and transmitting quantized messages, with precision determined by the quantization level. The FiTQuAC algorithm utilizes (i) asymmetric quantization, (ii) quantized averaging, and (iii) a stopping strategy. The intuition of Algorithm 2 (FiTQuAC) is the following. Initially, each node \(v_{i}\) uses an asymmetric quantizer to quantize its state; see Initialization-step \(2\). Then, at each time step \(\lambda\) each node \(v_{i}\):
* Splits the \(y_{i}\) into \(z_{i}\) equal pieces (the value of some pieces might be greater than others by one); see Iteration-steps \(4.1\), \(4.2\).
* Transmits each piece to a randomly selected out-neighbor or to itself; see Iteration-step \(4.3\).
* Receives the pieces transmitted from its in-neighbors, sums them with \(y_{i}\) and \(z_{i}\), and repeats the operation; see Iteration-step \(4.4\).
Finally, every \(D\) time steps, each node \(v_{i}\) performs in parallel a max-consensus and a min-consensus operation; see Iteration-steps \(1\), \(2\), \(5\). If the results of the max-consensus and min-consensus have a difference less or equal to one, each node \(v_{i}\) (i) scales the solution according to the quantization level, (ii) stops the operation of Algorithm 2, (iii) uses the value \(x_{i}^{[k+1]}\) to continue the operation of Algorithm 1. Algorithm 2 converges in finite time according to [27, Theorem 1]. It is important to note here that Algorithm 2 (FiTQuAC) runs between every two consecutive optimization steps \(k\) and \(k+1\) of Algorithm 1 (GraDeZoQuC) (for this reason it uses a different time index \(\lambda\) and not \(k\) as Algorithm 1).
Our proposed algorithm is detailed below as Algorithm 1.
**Input:** A strongly connected directed graph \(\mathcal{G}\) with \(n=|\mathcal{V}|\) nodes and \(m=|\mathcal{E}|\) edges. Static step-size \(\alpha\in\mathds{R}\), digraph diameter \(D\), initial value \(x_{i}^{[0]}\), local cost function \(f_{i}\), error bound \(\varepsilon_{s}\), quantization level \(\Delta\in\mathds{Q}\), refinement constant \(c_{r}\in\mathds{N}\), for every node \(v_{j}\in\mathcal{V}\). Assumptions 1, 2, 3 hold.
**Initialization:** Each node \(v_{i}\in\mathcal{V}\) sets \(\text{ind}_{i}=0\), \(\beta=\text{ind}_{i}\), \(S_{i}=\{0\}\).
**Iteration:** For \(k=0,1,2,\ldots\), each node \(v_{i}\in\mathcal{V}\) does the following:
1. \(x_{i}^{[k+\frac{1}{2}]}=x_{i}^{[k]}-\alpha\nabla f_{i}(x_{i}^{[k]})\);
2. \(x_{i}^{[k+1]}=\text{Algorithm }2(x_{i}^{[k+\frac{1}{2}]},D,\Delta)\);
3. **if**\(x_{i}^{[k+1]}=x_{i}^{[k]}\)**, then**
   * (3a) set \(\text{ind}_{i}=\text{ind}_{i}+1\), \(\beta=\text{ind}_{i}\), \(\gamma_{\beta}=k\);
   * (3b) set \(S_{i}=S_{i}\cup\{\gamma_{\beta}\}\);
   * (3c) **if**\(\|f_{i}(x_{i}^{[\gamma_{\beta-1}]})-f_{i}(x_{i}^{[\gamma_{\beta}]})\|\leq\varepsilon_{s}\)**, then** set \(\text{vot}_{i}=0\); **else** set \(\text{vot}_{i}=1\);
   * (3d) \(\text{flag}_{i}=\max\text{-Consensus}(\text{vot}_{i})\);
   * (3e) **if**\(\text{flag}_{i}=0\)**, then** terminate operation; **else** set \(\Delta=\Delta/c_{r}\) and go to Step \(1\).

**Output:** Each node \(v_{i}\in\mathcal{V}\) calculates \(x_{i}^{*}\) which solves problem **P1** in Section III.
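To make the overall flow concrete, the following is a compact sketch of the zooming loop; the finite-time quantized averaging of Algorithm 2 is replaced here by a centralized stand-in (average the local gradient-descent states, then floor-quantize with level \(\Delta\)), so only the event-triggered refinement logic is illustrated. The quadratic cost functions follow Section V, and all numeric values are illustrative choices rather than values fixed by the paper.

```python
import numpy as np

# Sketch of the event-triggered zooming loop of Algorithm 1.
# Step 2 below is a centralized stand-in for Algorithm 2 (FiTQuAC):
# it averages the local states and floor-quantizes with level Delta.

rng = np.random.default_rng(0)
n = 20
beta = rng.integers(1, 6, size=n).astype(float)   # cost sensitivities
x0 = rng.integers(1, 6, size=n).astype(float)     # demands
x = rng.uniform(1, 5, size=n)                     # initial estimates x_i^[0]

alpha, Delta, c_r, eps_s = 0.12, 1e-3, 10, 0.003
f = lambda v: 0.5 * beta * (v - x0) ** 2
grad = lambda v: beta * (v - x0)

f_prev = f(x)                 # f_i at the previous convergence point
x_old = x.copy()
for k in range(500):
    y = x - alpha * grad(x)                                   # step 1
    x_new = np.full(n, Delta * np.floor(y.mean() / Delta))    # step 2 (stand-in)
    if np.array_equal(x_new, x_old):                          # step 3
        if np.all(np.abs(f_prev - f(x_new)) <= eps_s):        # steps 3c-3d
            break                                             # all votes are 0
        f_prev = f(x_new)
        Delta /= c_r                                          # step 3e: zoom in
    x_old, x = x_new, x_new

print(f"stopped at k={k}, x*~{x[0]:.5f}, exact={np.sum(beta*x0)/np.sum(beta):.5f}")
```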
### _Convergence of Algorithm 1_
We now analyze the convergence time of Algorithm 1 via the following theorem.
**Theorem 1**.: _Under Assumptions 1-3, when the step-size \(\alpha\) satisfies \(\alpha\in(\frac{n(\mu+L)}{4\mu L},\frac{2n}{\mu+L})\) and \(\delta\in(0,\frac{n[4\alpha\mu L-n(\mu+L)]}{2\alpha[n(\mu+L)-2\alpha\mu L]})\) where \(L=\max\{L_{i}\},\mu=\min\{\mu_{i}\}\), Algorithm 1 generates a sequence of points \(\{x^{[k]}\}\) (i.e., the variable \(x_{i}^{[k]}\) of each node \(v_{i}\in\mathcal{V}\)) which satisfies_
\[\|\hat{x}^{[k+1]}-x^{*}\|^{2}<\vartheta\|\hat{x}^{[k]}-x^{*}\|^{2}+\mathcal{O} (\Delta^{2}), \tag{7}\]
_where \(\Delta\) is the quantization level and_
\[\vartheta:= 2(1+\frac{\alpha\delta}{n})(1-\frac{2\alpha\mu L}{n(\mu+L)})\in(0,1), \tag{8a}\] \[\mathcal{O}(\Delta^{2})= (8+32n^{2}\hat{\alpha}^{2}L^{2}+\frac{32n^{2}\hat{\alpha}L^{2}}{ \delta})\Delta^{2}. \tag{8b}\]
Proof.: The proof follows directly from the proof of [22, Theorem 1], with the difference that the process is restarted under some condition (eq. (2c)). The details are omitted due to space limitations.
**Remark 2** (Convergence Precision).: _The focus of our convergence analysis in Theorem 1 is on the optimization steps performed during the operation of Algorithm 1. As stated, an additional term \(\mathcal{O}(\Delta^{2})\) appears in (7). This term affects the precision of the calculated optimal solution. While some distributed quantized algorithms in the literature exhibit exact convergence to the optimal solution (e.g., see [9, 16]), our Algorithm 1 adopts an adaptive quantization level to balance communication efficiency and convergence precision. However, by setting \(\varepsilon_{s}=0\) during Initialization, Algorithm 1 can be adjusted to converge to the exact optimal solution \(x^{*}\) (by refining the quantization level infinitely often). This characteristic is highly important in scenarios where higher precision is crucial. Specifically, Algorithm 1 is able to adjust to specific application requirements by performing a trade-off between communication efficiency and convergence precision. Furthermore, it is worth noting that Algorithm 1 offers distinct advantages particularly in scenarios where communication efficiency is a priority while maintaining satisfactory convergence precision in various applications. As will be shown in Section \(V\), the operational advantages of Algorithm 1 are evident, making it a valuable tool in distributed optimization tasks._
## V Simulation Results
In this section, we present simulation results in order to demonstrate the operation of Algorithm 1 and its potential advantages. More specifically:
**A.** We focus on a random digraph of \(20\) nodes and show how the nodes' states converge to the optimal solution (see Fig. 1). Furthermore, we analyze how the event-triggered zooming (i) leads to a more precise calculation of the optimal solution, and (ii) allows nodes to terminate their operation.
**B.** We compare the operation of Algorithm 1 against existing algorithms in the literature, and we emphasize the introduced improvements (see Fig. 2).
For both cases **A.** and **B.** each node \(v_{i}\) is endowed with a local cost function \(f_{i}(x)=\frac{1}{2}\beta_{i}(x-x_{0})^{2}\). This cost function is smooth and strongly convex. Furthermore, for \(f_{i}(x)\) we have that (i) \(\beta_{i}\) is initialized as a random integer between \(1\) and \(5\) for each node in the network (and characterizes the cost sensitivity of node \(v_{i}\)), and (ii) \(x_{0}\) is initialized as a random integer between \(1\) and \(5\) (and represents the demand of node \(v_{i}\)).
**A. Operation over a random digraph of \(20\) nodes.** In Fig. 1, we demonstrate our algorithm over a randomly generated digraph consisting of \(20\) nodes. For each node \(v_{i}\) we have \(\alpha=0.12\), \(x_{i}^{[0]}\in[1,5]\), \(\varepsilon_{s}=0.003\), \(\Delta=0.001\), \(c_{r}=10\). In Fig. 1, we plot the error \(e^{[k]}\) in a logarithmic scale against the number of iterations. The error \(e^{[k]}\) is defined as
\[e^{[k]}=\sqrt{\sum_{j=1}^{n}\frac{(x_{j}^{[k]}-x^{*})^{2}}{(x_{j}^{[0]}-x^{*}) ^{2}}}, \tag{9}\]
where \(x^{*}\) is the optimal solution of the problem **P1**.
Fig. 1: Execution of Algorithm 1 over a random digraph of \(20\) nodes.

In Fig. 1 we can see that our algorithm is able to converge to the optimal solution. Furthermore, let us focus on time steps \(k=13,14\), and \(k=21,22\). At time steps \(k=13,14\) we have that the condition in Iteration Step \(3\) holds (i.e., \(x_{i}^{[13]}=x_{i}^{[14]}\) for every \(v_{i}\in\mathcal{V}\)), and \(e^{[13]}=e^{[14]}\). Therefore, during time step \(14\), nodes are checking the overall improvement of their local cost functions (i.e., Iteration Step \(3c\)). Since this condition does not hold for at least one node, they decide to refine the quantization level (i.e., set \(\Delta=\Delta/10=0.0001\)), and continue executing Algorithm 1. At time steps \(k=14,...,21\), nodes are able to approximate the optimal solution with more precision than before, since the precision depends on the quantization level (as we showed in Theorem 1). At time steps \(k=21,22\) we have that the condition in Iteration Step \(3\) holds again. However, during time step \(22\) the overall improvement of every node's local cost function is less than the given threshold \(\varepsilon_{s}\), i.e., \(\|f_{i}(x_{i}^{[14]})-f_{i}(x_{i}^{[22]})\|\leq\varepsilon_{s}\), for every \(v_{i}\in\mathcal{V}\) (see Iteration Step \(3c\)). As a result, nodes decide to terminate the operation at time step \(k=22\) (see Iteration Step \(3e\)). Note here that a choice of a smaller \(\varepsilon_{s}\) may lead nodes to refine the quantization level again. This refinement (i.e., \(\Delta\leq 0.00001\)) would allow them to approximate the optimal solution with even higher precision.
**B. Comparison with current literature.** In Fig. 2, we compare the operation of Algorithm 1 against [8, 22]. We plot the error \(e^{[k]}\) defined in (9). For the operation of the three algorithms, for each node \(v_{i}\) we have \(\alpha=0.12\), \(x_{i}^{[0]}\in[1,5]\), \(\varepsilon_{s}=27\cdot 10^{-7}\), \(\Delta=0.001\), \(c_{r}=10\) (note that [8] is not utilizing \(\varepsilon_{s}\), \(\Delta\), \(c_{r}\), and [22] is not utilizing \(\varepsilon_{s}\), \(c_{r}\)). Our comparisons focus on:
**B-A.** The convergence of Algorithm 1 compared to [8, 22].
**B-B.** The required communication for convergence (in terms of bits per optimization step) of Algorithm 1 compared to [8, 22].
**B-A** (Convergence). In Fig. 2 we can see that Algorithm 1 converges identically to [22] for optimization steps \(k=0,...,12\). However, at time step \(12\), each node refines the quantization level (because the condition at Iteration Step \(3c\) of Algorithm 1 does not hold for at least one node). In this case, for time steps \(k>12\) we can see that Algorithm 1 approximates the optimal solution with higher precision than [22]. This is mainly because [22] utilizes a static quantization level, which is not refined during the operation of the algorithm. Then, at time step \(k=21\), Algorithm 1 refines the quantization level again, obtaining an even more precise estimation of the optimal solution. However, at time step \(k=27\), we have that the condition at Iteration Step \(3c\) holds for every node and Algorithm 1 terminates its operation. Finally, in Fig. 2 we can see that [8] exhibits a linear convergence rate and is the fastest among the three algorithms. However, during its operation, each node needs to form the Hankel matrix and perform additional computations when the matrix loses rank. This requires the exact values from each node. It means that nodes need to exchange messages of infinite precision, which is practically infeasible and imposes excessive communication requirements over the network. Therefore, the main advantage of Algorithm 1 compared to [8] is that nodes exchange quantized values, guaranteeing efficient communication.
**B-B** (Communication). In Fig. 2, let us focus on comparing Algorithm 1 with [22] for \(\Delta=0.00001\) (see the green-circles line in Fig. 2). Specifically, we will focus on the communication requirements (in terms of total number of bits and bits per optimization time step) for achieving the error \(e^{[27]}\) for Algorithm 1 (which is the same as the error \(e^{[21]}\) for the algorithm in [22]). The communication bits are calculated as the ceiling of the base-\(2\) logarithm of the transmitted values. For example, if node \(v_{i}\) transmits the quantized value \(a\), then the number of bits it transmits is equal to \(\lceil\log_{2}(a)\rceil\). Note that comparing Algorithm 1 with [22] for \(\Delta=0.001\) and \(\Delta=0.0001\) proceeds identically. In Fig. 2, we have that during the operation of [22] for \(\Delta=0.00001\), nodes are utilizing _in total_ \(800754\) bits for communicating with their neighbors. This means that the average communication requirement for each node is \(\frac{800754}{20(21)}=1906.55\) bits per optimization time step (since the network consists of \(20\) nodes which need \(21\) iterations to converge). During the operation of Algorithm 1, nodes are utilizing \(\Delta=0.001\) for steps \(k=0,...,12\), \(\Delta=0.0001\) for steps \(k=13,...,21\), and \(\Delta=0.00001\) for steps \(k=22,...,27\). For steps \(k=0,...,12\), nodes are utilizing _in total_ \(195607\) bits for communicating with their neighbors. For steps \(k=13,...,21\), nodes are utilizing _in total_ \(215635\) bits for communicating with their neighbors. For steps \(k=22,...,27\), nodes are utilizing _in total_ \(201044\) bits for communicating with their neighbors. The total requirement of bits is \(612286\) for \(k=0,...,27\). This means that the average communication requirement for each node is \(\frac{612286}{(20)(27)}=1133.86\) bits per optimization time step. As a result, Algorithm 1 is able to approximate the optimal solution with precision similar to [22] (for \(\Delta=0.00001\)), but its communication requirements are significantly lower in terms of total number of bits and bits per optimization time step.
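As a quick sanity check of this accounting, the following snippet recomputes the per-node averages from the totals quoted above (all numbers are taken directly from the text):

```python
import math

# Quick check of the communication accounting in B-B; totals are the
# values quoted in the text.
print(800754 / (20 * 21))            # ~1906.55 bits/node/step for [22]
print(612286 / (20 * 27))            # ~1133.86 bits/node/step for Algorithm 1
print(1 - 612286 / 800754)           # ~0.235: roughly a quarter fewer bits
print(math.ceil(math.log2(1000)))    # bits for one transmitted value a
```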
Fig. 2: Comparison of Algorithm 1 against [8, 22] over a random digraph of \(20\) nodes.

**Remark 3**.: _During the analysis in **B-B**, we have that Algorithm 1 requires fewer bits for communication compared to [22] (for \(\Delta=0.00001\)) because nodes are utilizing a coarser quantization level than [22] for optimization steps \(k=1,...,21\). This means that nodes are utilizing fewer bits to quantize and transmit their states towards their neighboring nodes. However, note here that during the operation of Algorithm 1 we can further improve communication efficiency by shifting the quantization basis after we refine the quantization step. Shifting the quantization basis means changing the location of the quantization levels relative to the states of the nodes. This can be done by adding/subtracting a constant value to the states of the nodes before quantization. The constant value that we can subtract is equal to the optimal solution to which the states of the nodes have converged before refining the quantization level. For example, in Fig. 2, during optimization step \(k=15\), node \(v_{i}\) will quantize the state \(x_{i}^{[15]}-x_{i}^{[\gamma_{1}]}\) (where \(x_{i}^{[\gamma_{1}]}\) is equal to \(x_{i}^{[12]}\)). This strategy further increases communication efficiency since the states of the nodes can be represented using fewer bits without sacrificing the accuracy of the calculated optimal solutions during the optimization operation. It will be further analyzed in an extended version of our paper._
## VI Conclusions
In this paper, we considered an unconstrained distributed strongly convex optimization problem, in which the exchange of information is done over digital channels that have limited capacity (and hence information should be quantized). We proposed a distributed algorithm that solves the problem with a solution in close proximity to the optimal one, by progressively refining the quantization level of the nodes, thus guaranteeing a certain error floor and more efficient communication (smaller packets/reduced number of bits). A simple numerical example shows the performance of our proposed algorithm and highlights the benefits in terms of communication efficiency. More specifically, in this example it was shown that the number of bits needed is \(\sim 25\%\) less when the quantization level is refined.
|
2309.13792 | On two approaches to studying the aftershocks of a strong earthquake | The paper is devoted to comparing two approaches to the study of aftershocks.
The methodological foundations of the traditional approach were laid many years
ago. A new approach has emerged relatively recently. The two approaches differ
from each other in the object, purpose and method of research. The differences
are as follows. With the new approach, attention is focused not on aftershocks,
but on the source of the earthquake. The evolution of the source is studied
experimentally, and not the degradation of the frequency of aftershocks.
Instead of a speculative selection of empirical formulas, the source
deactivation coefficient is measured, variations in the coefficient are
observed, and only on the basis of measurements and observations are
conclusions drawn about the dynamics of the source. Thus, the divergence
between the two approaches is doctrinal. The new approach turned out to be
effective. Through targeted analysis of aftershock data, the Omori epoch and
the phenomenon of bifurcation of the earthquake source were discovered. The
purpose of further research is indicated. Keywords: earthquake source, Omori
law, Hirano-Utsu law, logistic law, deactivation coefficient, bifurcation,
master equation, methodology. | A. V. Guglielmi | 2023-09-25T01:00:54Z | http://arxiv.org/abs/2309.13792v1 | # On two approaches to studying the aftershocks of a strong earthquake
###### Abstract
The paper is devoted to comparing two approaches to the study of aftershocks. The methodological foundations of the traditional approach were laid many years ago. A new approach has emerged relatively recently. The two approaches differ from each other in the object, purpose and method of research. The differences are as follows. With the new approach, attention is focused not on aftershocks, but on the source of the earthquake. The evolution of the source is studied experimentally, and not the degradation of the frequency of aftershocks. Instead of a speculative selection of empirical formulas, the source deactivation coefficient is measured, variations in the coefficient are observed, and only on the basis of measurements and observations are conclusions drawn about the dynamics of the source. Thus, the divergence between the two approaches is doctrinal. The new approach turned out to be effective. Through targeted analysis of aftershock data, the Omori epoch and the phenomenon of bifurcation of the earthquake source were discovered. The purpose of further research is indicated.
**On two approaches to studying the aftershocks of a strong earthquake**
_Schmidt Institute of Physics of the Earth, Russian Academy of Sciences, 10-1_
_Bolshaya Gruzinskaya street,123242 Moscow, Russian Federation_
_E-mail: [email protected]_
_Keywords:_ earthquake source, Omori law, Hirano-Utsu law, logistic law, deactivation coefficient, bifurcation, master equation, methodology.
## 1 Introduction
130 years ago, Omori found that the frequency of aftershocks decreases hyperbolically with time [1]. Omori's law has the following form
\[n\left(t\right)=\frac{k}{c+t}\,. \tag{1}\]
Here \(n\) is the frequency of aftershocks, \(t\geq 0\), \(k>0\), \(c>0\). The parameter \(k\) characterizes the earthquake source as a physical system. Omori's law is one-parameter, since the value of \(c\) is determined by the choice of the time reference point.
100 years ago, Hirano [2] proposed replacing the Omori law with a power law of the form
\[n\bigl{(}t\bigr{)}=\frac{k}{\bigl{(}c+t\bigr{)}^{p}}\,. \tag{2}\]
Here \(p>0\) is an additional fitting parameter. In other words, the power law is two-parameter. When \(p=1\), formula (2) coincides with Omori formula (1).
In the second half of the last century, Utsu [3, 4, 5] showed the effectiveness of formula (2) in the analytical description of aftershocks. Since then, formula (2) has been widely used in seismology for processing and analyzing aftershocks (see, for example, [6, 7, 8, 9, 10, 11]). Sometimes formula (2) is called the Utsu law [11], although it would be more correct to call it the Hirano-Utsu law. In the course of research, it was found that the exponent is on average \(p=1.1\), but varies from case to case within a wide range (approximately from 0.7 to 1.5).
The described approach to the problem is based on a certain methodological setting. Research methods generally change over time, and usually for the better. In recent years, a new approach to the problem has begun to emerge, epistemologically different from that of Omori, Hirano and Utsu. The difference concerns the object, purpose and method of research [12, 13, 14, 15, 16]. This paper is devoted to the description and comparison of two approaches to the study of aftershocks.
## 2 Selection of empirical formulas
The search for an empirical formula approximating observational data is widely used in the practice of studying natural phenomena. With a good choice, the formula may turn out to be fundamental. We do not know for what reasons Omori chose the hyperbolic function to formulate the law of aftershock evolution. (To get ahead of ourselves, let's say that the choice turned out to be relatively successful.) It is quite possible that considerations of simplicity played no small role for him. At the same time, as Dirac noted, the beauty of the mathematical formulation of the theory may be even more important than compliance with experimental data. And the hyperbola, being one of the conic sections, is extremely attractive in its own way.
It is more difficult to understand why Hirano and Utsu preferred the power law. Of course, the two-parameter formula (2) better approximates the experimental points than the one-parameter formula (1). But this improvement comes at a high price. Indeed, if in the Omori law the parameter
\(k\) is dimensionless, then in the Hirano-Utsu law the parameter \(k\) has no specific dimension at all. This deprives formula (2) of physical meaning. The dimension of \(k\) depends on the value of \(p\). For example, when \(p=1.1\) we have \(\left[k\right]=\mathrm{s}^{1/10}\). This is unacceptable from the point of view of physical-mathematical notation. It is impossible to rationally explain the century-old misconception on this score. This is an amazing thing. Even the famous mathematician and geophysicist Harold Jeffreys used (2) to approximate the flow of aftershocks after the 1927 earthquake in Tango (Japan) [17]. Meanwhile, formula (2) does not have the status of a law, since in physics the phenomenological parameters have a fixed dimension.
There is another, less significant drawback of the Hirano-Utsu law. From (2) it follows that \(\lim n\left(t\right)=0\) at \(t\to\infty\), while observational experience tells us that the specified limit is non-zero. In asymptotics, the source goes into the background seismicity regime. The frequency of tremors \(n_{\infty}\), generally speaking, is different from zero. In the next section of the paper we will present a two-parameter law of evolution, devoid of the disadvantages of the Hirano-Utsu law mentioned here.
## 3 Logistic law of aftershock evolution
If, in the mathematical formulation of the law, we use two phenomenological parameters, then a vast scope for choice opens up. In order not to wander in the dark, we need to be guided by some rational considerations. We will look for a curve for the decline in the frequency of
aftershocks among the curves that have long been well known in science. Let us also impose the conditions of simplicity and beauty in the Dirac sense. This particular choice is not completely determined, but logistic curves immediately come to mind. They have long been successfully used in biology, chemistry, demography and other sciences. Let us show that one of the two classes of logistic curves is perfectly suitable for solving our problem [18].
Logistic curves are solutions to the logistic equation
\[\frac{dn}{dt}=n\left(\gamma-\sigma n\right). \tag{3}\]
Here \(\gamma\) and \(\sigma\) are two parameters characterizing the earthquake source. The problem of dimensionality of parameters does not arise for us, and we immediately solve the problem of the asymptotic behavior of aftershocks: \(n\to n_{\infty}=\gamma\,/\,\sigma\) at \(t\to\infty\).
There are two classes of solutions to the logistic equation - rising and falling solutions (Figure 1). The choice between classes occurs when setting up the Cauchy problem for equation (3). Let us explain this procedure.
We will set the initial condition \(n\big{(}0\big{)}=n_{0}\) at \(t=0\), and we will look for a solution at \(t>0\). Increasing logistic curves arise when \(n_{0}<n_{\infty}\). They have long been widely used in biology, chemistry and sociology, as mentioned above. Decreasing logistic curves arise at \(n_{0}>n_{\infty}\). They are the ones that are of interest to us.
The curve in the right panel of Figure 1 is remarkably similar to an Omori hyperbola. Moreover, when \(n_{0}\gg n_{\infty}\), the upper segment of the logistic curve is practically indistinguishable from a hyperbola. Let's move on to investigating this issue.
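The comparison just described is easy to reproduce numerically; below is a minimal sketch that integrates the logistic equation (3) from initial conditions below and above \(n_{\infty}\), giving the two classes of curves in Figure 1. The values of \(\gamma\), \(\sigma\), and \(n_{0}\) are arbitrary illustrations.

```python
import numpy as np

# Integrate dn/dt = n(gamma - sigma*n) (Eq. 3) by forward Euler from two
# initial conditions: n0 < n_inf gives the rising curve, n0 > n_inf the
# falling one; both tend to n_inf = gamma/sigma.

gamma, sigma = 1.0, 0.1
n_inf = gamma / sigma                  # asymptotic aftershock frequency
dt, T = 1e-3, 15.0
t = np.arange(0.0, T, dt)

def integrate(n0):
    n = np.empty_like(t)
    n[0] = n0
    for i in range(1, t.size):
        n[i] = n[i - 1] + dt * n[i - 1] * (gamma - sigma * n[i - 1])
    return n

rising, falling = integrate(0.1 * n_inf), integrate(20.0 * n_inf)
print(rising[-1], falling[-1])         # both ~ n_inf = 10
```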
When \(n_{0}>n\big{(}t\big{)}\gg n_{\infty}\) we can neglect the first term on the right side of equation (3):
\[\frac{dn}{dt}+\sigma n^{2}=0 \tag{4}\]
Let us show that the shortened differential evolution equation (4) is equivalent to the hyperbolic Omori law [18].
Figure 1: Two classes of solutions to the logistic equation of evolution. The dimensionless quantity \(T=\gamma t\) (\(X=n\,/\,n_{\infty}\)) is plotted along the horizontal (vertical) axis.

The solution to the Cauchy problem for equation (4) has the form
\[n\left(\tau\right)=\frac{n_{0}}{1+n_{0}\tau}\,, \tag{5}\]
where
\[\tau=\int_{0}^{t}\sigma\left(t^{\prime}\right)dt^{\prime}\,. \tag{6}\]
The value \(\sigma\) will be called the source deactivation coefficient. This coefficient indicates the rate at which the source loses its ability to generate aftershocks. We see that for \(\sigma=\mbox{const}\) formula (5) coincides with Omori formula (1) up to notation.
Thus, we came to the conclusion that at the beginning of evolution, when \(n\left(t\right)\gg n_{\infty}\), the logistic law (3) is practically indistinguishable from the Omori law (1) provided that \(\sigma=\mbox{const}\). Our conclusion can be verified experimentally (see below).
## 4 New approach to the problem
"Measure what is measurable" / Galileo's advice
We begin to present a new approach to the study of aftershocks [12, 13, 14, 15, 16]. One of the Cartesian principles on which scientific thinking is based states that a problem must be clearly divided into simple and clearly visible parts. In the problem of aftershocks, we highlight the object, purpose and method of research. The new approach is radically different from that of Omori, Hirano and Utsu in all three parts.
We choose the earthquake source as the object of study, and not the aftershocks themselves. The goal is not a speculative selection of an empirical formula to describe the degradation of the frequency of aftershocks, but a search for the laws of evolution of the source after the formation of a main rupture in the continuity of rocks in it. The difference between the methods is the most dramatic. We took advantage of Galileo's powerful idea: observe, measure, and only on the basis of measurements draw theoretical conclusions instead of looking for speculative fitting formulas. Thus, the divergence between the two approaches is doctrinal.
Let us assume that from observations we know the dependence \(n\big{(}t\big{)}\). First we will calculate the auxiliary function
\[g\big{(}t\big{)}=\frac{1}{n\big{(}t\big{)}}-\frac{1}{n_{0}}\,. \tag{7}\]
Now we will introduce a measure \(\sigma\big{(}t\big{)}\), characterizing the current state of the source, as follows.
Let's smooth the function \(g\big{(}t\big{)}\), that is, average it over fast oscillations, and then differentiate it with respect to time:
\[\sigma\big{(}t\big{)}=\frac{d}{dt}\big{\langle}g\big{(}t\big{)}\big{\rangle}\,. \tag{8}\]
Here the angle brackets denote the smoothing procedure of the auxiliary function.
It was not by chance that we chose the symbol \(\sigma\) to denote the measure of the state of the source. Taking into account (4), (7), (8), we can verify that Omori's law (1) is equivalent to the constancy of the source deactivation coefficient:
\[\sigma=\text{const}\,. \tag{9}\]
We can calculate \(\sigma\big{(}t\big{)}\) based on measurements and check whether law (9) is satisfied.
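The measurement procedure of Eqs. (7) and (8) can be sketched in a few lines; the synthetic \(n(t)\) below follows the hyperbolic solution (5) with a constant deactivation coefficient, so the recovered \(\sigma(t)\) should be flat at the chosen value (here \(0.02\); all numbers are illustrative).

```python
import numpy as np

# Measurement sketch of Eqs. (7)-(8): from an observed aftershock-frequency
# series n(t), form g(t) = 1/n - 1/n0, smooth it with a moving average,
# and differentiate to obtain the deactivation coefficient sigma(t).
# A flat sigma(t) over an interval is the Omori epoch.

t = np.arange(1.0, 200.0, 1.0)               # days after the main shock
sigma_true, n0 = 0.02, 50.0
n = n0 / (1.0 + n0 * sigma_true * t)         # Eq. (5) with sigma = const

g = 1.0 / n - 1.0 / n0                        # Eq. (7)
w = 7                                         # smoothing window (days)
g_smooth = np.convolve(g, np.ones(w) / w, mode="valid")
sigma_est = np.gradient(g_smooth, t[: g_smooth.size])   # Eq. (8)
print(sigma_est[:5])                          # ~0.02 throughout
```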
We have carried out critical testing of Omori's law within the framework of the new approach many times [12, 13, 14, 15, 20, 21, 22, 23, 24]. One of the results is shown in Figure 2. The conclusion is that Omori's law is fulfilled, but only at the first stage of the evolution of the source. The period of time in which \(\sigma=\text{const}\) was called the Omori epoch. The duration of the Omori epoch varies from case to case from several days to several months. There is a tendency for the duration to increase with increasing magnitude of the main shock. The source deactivation coefficient in the Omori epoch is lower, the higher the magnitude of the main shock of the earthquake.

Figure 2: Aftershock frequency (left) and source deactivation coefficient (right) after the strong earthquake in Southern California on January 17, 1994 (\(M=6.7\)).
At the end of the Omori epoch, the state of the source changes. Let's introduce the parameter \(\theta=d\sigma/dt\). In the Omori epoch, \(\theta=0\). The transition to a new state is indicated by a sharp jump in the parameter \(\theta\). This suggests that the end of the Omori epoch is accompanied by bifurcation of the earthquake source [25]. Figure 3 illustrates this [16].
We have not observed a single event in which Omori's hyperbolic law is satisfied throughout the entire evolution of aftershocks. Apparently, a hundred years ago, it was precisely this circumstance that served as a reason for Hirano to replace the Omori hyperbola with a power function (2).
So, Omori's law manifests itself only at the first stage of evolution. After that, bifurcation occurs. The transition from one source deactivation regime to a qualitatively different regime is abrupt in the sense that its duration is much less than the duration of the Omori epoch.
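The end of the Omori epoch can be located by the piecewise-linear fitting illustrated in Figure 3; a minimal two-segment least-squares sketch follows, with a synthetic \(\sigma(t)\) (flat, then rising) standing in for a measured series. The breakpoint at 60 days and the noise level are arbitrary choices.

```python
import numpy as np

# Locate the end of the Omori epoch as a changepoint: fit sigma(t) with
# two linear segments and pick the breakpoint minimizing the total
# squared residual; theta = d(sigma)/dt jumps there.

t = np.arange(0.0, 120.0, 1.0)
sigma = np.where(t < 60, 0.02, 0.02 + 0.001 * (t - 60))
sigma = sigma + 0.0005 * np.random.default_rng(2).normal(size=t.size)

def sse(x, y):
    p = np.polyfit(x, y, 1)
    r = y - np.polyval(p, x)
    return float(r @ r)

costs = [sse(t[:k], sigma[:k]) + sse(t[k:], sigma[k:]) for k in range(5, t.size - 5)]
k_star = 5 + int(np.argmin(costs))
print("estimated bifurcation at t ~", t[k_star], "days")   # ~60
```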
## 5 Discussion
We now understand the mistake of Hirano [2], who rejected the Omori law (1) and replaced it with the power law (2). Omori law in the form \(\sigma=\mathrm{const}\) manifests itself, but only for a limited period of time in the initial stage of the evolution of the source. At the end of this period, which we call the Omori epoch, the deactivation coefficient begins to change in a complex way over time. The methodological error made by Hirano 100 years ago [2] and later supported by Utsu [3, 4, 5] consists of a purely speculative selection of a fitting formula that best approximates the **entire** process of aftershock evolution. We discovered that evolution does not occur uniformly over time. Only at the first stage can the frequency of aftershocks be described by a simple formula, and this is a formula for a logistic curve, and not a formula for a hyperbola.

Figure 3: Piecewise linear approximation of the deactivation coefficient (left) and its time derivative (right). The event occurred on November 23, 1984 in Northern California (\(M=6\)).
A methodological approach to the study of a natural phenomenon must be judged by its effectiveness. The new approach outlined above has proven to be very effective. We see that from the first steps of its development the method provided non-trivial information about the dynamics of the earthquake source. But the goal has not yet been achieved. We were unable to find the equation for the evolution of the source. One of the search directions is associated with the selection of the control parameter \(\lambda\) and the order parameter \(\varphi\) such that [16]
\[\frac{d\varphi}{dt}=\Gamma\varphi\,, \tag{10}\]
with
\[\Gamma\left(\lambda,\varphi\right)=\Gamma_{1}\left(\lambda\right)+\Gamma_{2} \left(\varphi\right)\,, \tag{11}\]
where
\[\Gamma_{1}=a\left(\lambda-\lambda_{c}\right),\ \Gamma_{2}=b\varphi-c\varphi^{2}\,. \tag{12}\]
The values of \(\lambda\), \(\lambda_{c}\), \(a\), \(b\), \(c\) are greater than zero. Let's set \(\varphi=\theta\) and consider the case \(\lambda<\lambda_{c}\). The equilibrium point \(\theta=0\) is stable and corresponds to the Omori epoch (\(\sigma=\text{const}\)). If the control parameter \(\lambda\) increases smoothly and reaches the threshold \(\lambda_{c}\), then the order parameter increases abruptly from zero to the value \(\theta=b/c\). Qualitatively, this resembles the phenomenon of source bifurcation.
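For illustration, the qualitative behaviour just described can be reproduced by integrating Eqs. (10)-(12) directly. The sketch below shows \(\theta\) pinned near zero while \(\lambda<\lambda_{c}\) and growing rapidly towards a value of order \(b/c\) once the threshold is crossed; the slow ramp, the noise floor, the explicit Euler scheme, and all parameter values are our own illustrative choices.

```python
import numpy as np

def integrate_order_parameter(a=1.0, b=1.0, c=1.0, lam_c=1.0,
                              ramp=0.02, t_max=150.0, dt=0.01, theta0=1e-6):
    """Euler integration of d(theta)/dt = Gamma(lambda, theta) * theta,
    Eqs. (10)-(12), with the control parameter lambda ramped slowly."""
    steps = int(t_max / dt)
    theta = theta0
    out = np.empty((steps, 3))                  # columns: t, lambda, theta
    for k in range(steps):
        t = k * dt
        lam = min(ramp * t, lam_c + 0.2)        # ramp lambda slightly past lam_c
        gamma = a * (lam - lam_c) + b * theta - c * theta**2
        theta = max(theta + dt * gamma * theta, theta0)  # noise floor keeps theta seeded
        out[k] = (t, lam, theta)
    return out
```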
## 6 Conclusion
We performed a comparative analysis of two approaches to studying aftershocks. One of them was formed many years ago. Its essence comes down to the selection of empirical formulas that best approximate observational data. The new approach has begun to develop relatively recently. It differs from the traditional approach in the object, purpose and method of research. Briefly, the differences are as follows:
1. The focus is not on the aftershocks, but on the source of the earthquake.
2. The evolution of the source, rather than the degradation of the frequency of aftershocks, is what is studied experimentally.
3. Instead of a speculative selection of empirical formulas, the deactivation coefficient is measured, its variations are observed, and conclusions are drawn only on the basis of measurements and observations.
The new approach proved to be effective. It was possible to discover the existence of the Omori epoch and the phenomenon of bifurcation of the source. The problem of searching for a master equation that describes the evolution of an earthquake source remains unresolved.
_Acknowledgments_. I express my deep gratitude to B.I. Klain, A.D. Zavyalov and O.D. Zotov for many years of cooperation. Together with them, the foundations were laid for a new approach to the study of aftershocks.
|
2309.03581 | Interactive Hyperparameter Optimization in Multi-Objective Problems via
Preference Learning | Hyperparameter optimization (HPO) is important to leverage the full potential
of machine learning (ML). In practice, users are often interested in
multi-objective (MO) problems, i.e., optimizing potentially conflicting
objectives, like accuracy and energy consumption. To tackle this, the vast
majority of MO-ML algorithms return a Pareto front of non-dominated machine
learning models to the user. Optimizing the hyperparameters of such algorithms
is non-trivial as evaluating a hyperparameter configuration entails evaluating
the quality of the resulting Pareto front. In literature, there are known
indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by
quantifying different properties (e.g., volume, proximity to a reference
point). However, choosing the indicator that leads to the desired Pareto front
might be a hard task for a user. In this paper, we propose a human-centered
interactive HPO approach tailored towards multi-objective ML leveraging
preference learning to extract desiderata from users that guide the
optimization. Instead of relying on the user guessing the most suitable
indicator for their needs, our approach automatically learns an appropriate
indicator. Concretely, we leverage pairwise comparisons of distinct Pareto
fronts to learn such an appropriate quality indicator. Then, we optimize the
hyperparameters of the underlying MO-ML algorithm towards this learned
indicator using a state-of-the-art HPO approach. In an experimental study
targeting the environmental impact of ML, we demonstrate that our approach
leads to substantially better Pareto fronts compared to optimizing based on a
wrong indicator pre-selected by the user, and performs comparable in the case
of an advanced user knowing which indicator to pick. | Joseph Giovanelli, Alexander Tornede, Tanja Tornede, Marius Lindauer | 2023-09-07T09:22:05Z | http://arxiv.org/abs/2309.03581v3 | # Interactive Hyperparameter Optimization in Multi-Objective Problems
###### Abstract
Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives, like accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front might be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored towards multi-objective ML leveraging preference learning to extract desiderata from users that guide the optimization. Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such an appropriate quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparable in the case of an advanced user knowing which indicator to pick.
## 1 Introduction
Practical applications of machine learning often call for the optimization of more than one loss or objective function, called multi-objective machine learning (MO-ML) [13], borrowing from the notion of multi-objective optimization (MO) [4, 15]. For example, instead of only focusing on the performance of a model, its energy consumption is becoming more and more important in various domains such as edge computing, but also in general, sparked by efforts in the area of green artificial intelligence (Green AI) [16, 17]. Comparing two ML models with respect to several objectives is non-trivial, and most approaches solve this problem by returning the set of solutions that cannot be improved further in any objective without worsening at least one other objective, called the Pareto front.
Naturally, as standard ML algorithms, MO-ML algorithms expose hyperparameters controlling their learning behavior and thus, also the resulting Pareto front. To unleash the full potential of the MO-ML algorithms, these hyperparameters should be optimized with adequate methods. Unfortunately, optimizing the hyperparameters of such MO-ML algorithms is challenging for a user with standard hyperparameter optimization (HPO) [18, 19] approaches, such as SMAC [12, 10], Optuna [1], Hyperopt [15] or SyneTune [16], which iteratively evaluate configurations. This is the case, as evaluating a hyperparameter configuration of an MO-ML algorithm involves evaluating the quality of the Pareto front of models returned by the corresponding algorithm. Several so-called (quality) indicators, such as hypervolume [12], are used to assess different properties of the shape of the Pareto front and in principle can also be used for the purpose of rating a configuration in the context of HPO. Nevertheless, although a user might have a clear idea about what kind of Pareto front shape they would like to choose their final model from, choosing the quality indicator leading to this Pareto front shape is a challenging task in practice. At the same time, however, correctly configuring the HPO tool with the right loss function, i.e. quality indicator in our case, is crucial to achieve the desired result.
With this paper, we propose an interactive human-centered HPO [17, 18, 19, 20, 21, 22, 23] approach for MO-ML algorithms that frees users from choosing a predefined quality indicator suitable for their needs by learning one tailored towards them based on feedback. To achieve this, it first learns the desired Pareto front shape from the user in a short interactive session and then starts a corresponding HPO process optimizing towards the previously learned Pareto front shapes. Instead of requiring the user to present us with a concrete Pareto front they would favor, we interactively and iteratively present them a few
pairs of Pareto fronts asking them for their preferences. Based on these pairwise comparisons, we leverage methods from the field of preference learning (PL) [10] to learn a latent utility function serving as a Pareto front quality indicator customized to the user. In the subsequent stage, we run a state-of-the-art HPO tool instantiated with the learned Pareto front quality indicator to evaluate configurations and the corresponding Pareto fronts.
In summary, we make the following contributions:
1. We propose an interactive approach to learn a Pareto front quality indicator from pairwise comparisons based on a latent utility function with methods from the field of preference learning. This quality indicator is customized to the user and can be learned from a small number of pairwise comparisons.
2. We combine the aforementioned learned quality indicator with an HPO tool to provide a full-fledged HPO approach for multi-objective MO-ML algorithms, which frees the user to choose an appropriate Pareto front quality indicator offering less opportunity for a mistake and thus, bad optimization results.
3. In an experimental case study, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator preselected by the user, and performs comparable in case of an advanced user knowing which indicator to pick. Thus, our approach makes HPO for MO-ML algorithms substantially more easily and robustly applicable in practice.
## 2 Background
Since our work is spanned along the dimensions of hyperparameter optimization (HPO), multi-objective optimization (MO), and preference learning (PL), we introduce in the following all of the aforementioned concepts.
### Hyperparameter Optimization
HPO formalizes the task of finding a hyperparameter configuration for a machine learning algorithm leading to a well-performing model on a given dataset
\[\mathcal{D}=\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{N}\in\mathbb{D}\subset\mathcal{X} \times\mathcal{Y} \tag{1}\]
with an instance space \(\mathcal{X}\) and a target space \(\mathcal{Y}\). In addition to the dataset, we are provided with a hyperparameter configuration space \(\mathbf{\Lambda}=\Lambda_{1}\times\cdots\times\Lambda_{K}\) with \(K\) hyperparameters, where \(\Lambda_{k}\) is the domain of the \(k^{th}\) hyperparameter, and an algorithm \(A:\mathbb{D}\times\mathbf{\Lambda}\rightarrow\mathcal{H}\) which trains a model from the model space \(\mathcal{H}\) given a dataset and a hyperparameter configuration. Furthermore, we are provided with a loss function \(\mathcal{L}:\mathcal{H}\times\mathbb{D}\rightarrow\mathbb{R}\) quantifying how well a given model performs on a given dataset. The loss function can be used to assess the quality of a hyperparameter configuration by splitting the original dataset \(\mathcal{D}\) into two disjoint datasets \(\mathcal{D}_{train}\) and \(\mathcal{D}_{test}\), where the model is trained only based on \(\mathcal{D}_{train}\) but evaluated with \(\mathcal{L}\) on \(\mathcal{D}_{test}\). Overall, we seek to find the optimal hyperparameter configuration \(\mathbf{\lambda}^{*}\in\mathbf{\Lambda}\) defined as
\[\mathbf{\lambda}^{*}\in\arg\min_{\mathbf{\lambda}\in\mathbf{\Lambda}}\mathcal{L}\left(A \left(\mathcal{D}_{train},\mathbf{\lambda}\right),\mathcal{D}_{test}\right)\,. \tag{2}\]
There exist several approaches to automatically solving the optimization problem defined in (2), many of which are powered by Bayesian optimization (BO) [1] or evolutionary approaches [1]. Most of these techniques internally and iteratively evaluate a large set of configurations based on their estimated loss. To this end, they split a so-called validation dataset \(\mathcal{D}_{val}\) from the training dataset \(\mathcal{D}_{train}\) to estimate the loss \(\mathcal{L}(A(\mathcal{D}_{train},\mathbf{\lambda}),\mathcal{D}_{test})\) of a configuration \(\mathbf{\lambda}\) by \(\mathcal{L}(A(\mathcal{D}_{train},\mathbf{\lambda}),\mathcal{D}_{val})\) and avoid a biased overfit to \(\mathcal{D}_{test}\).
### Multi-Objective Machine Learning
When it comes to learning a machine learning model with more than one (possibly conflicting) objective or loss function \(\mathcal{L}_{1},\ldots,\mathcal{L}_{M}\) in mind, comparing the quality of models becomes difficult. For instance, in the context of Green AI, searching for a model with higher performance usually leads to one with higher power consumption, and vice versa. Thus, in multi-objective ML, learning algorithms leverage the concept of dominance formalizing that one model \(h_{1}\in\mathcal{H}\) is better than another \(h_{2}\in\mathcal{H}\), if \(h_{1}\) performs better in at least one of the objectives while not performing worse than \(h_{2}\) in the remaining ones. Yet, this leaves us with a set of non-dominated models, which are indistinguishable with respect to this dominance idea. These models form a so-called Pareto front \(P_{\mathcal{D}_{val}}(H)\) evaluated on dataset \(\mathcal{D}_{val}\) for a given set of models \(H\subset\mathcal{H}\). Formally, a Pareto front is defined as
\[P_{\mathcal{D}_{val}}(H)=\left\{\begin{array}{l}h\in H,\nexists h^{\prime}\in H:\\ \forall m\in\{1,\ldots,M\}:\\ \mathcal{L}_{m}(h^{\prime},\mathcal{D}_{val})\leqslant\mathcal{L}_{m}(h,\mathcal{D}_{val}),\\ \exists j\in\{1,\ldots,M\}:\\ \mathcal{L}_{j}(h^{\prime},\mathcal{D}_{val})<\mathcal{L}_{j}(h,\mathcal{D}_{val})\end{array}\right\}\,. \tag{3}\]
Due to the problem of further distinguishing non-dominated models among each other, MO-ML algorithms usually resolve to returning the complete Pareto front of models instead of a single model. Thus, formally, the signature of an algorithm changes to \(A:\mathbb{D}\times\mathbf{\Lambda}\to 2^{\mathcal{H}}\). As a consequence, the evaluation of each hyperparameter configuration of such an MO-ML algorithm also involves quantifying the quality of a Pareto front of models instead of a single model returned by the algorithm, yielding a much more difficult problem. As such, the signature of our HPO loss function used in Eq. 2 also needs to change to \(\mathcal{L}:2^{\mathcal{H}}\times\mathbb{D}\rightarrow\mathbb{R}\) and thus can no longer be instantiated with simple loss functions such as accuracy.
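To make the dominance relation of Eq. 3 concrete, the following sketch extracts the non-dominated subset from a matrix of per-model loss values. The quadratic scan and the function name are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def pareto_front(losses):
    """Indices of the non-dominated rows of `losses` (cf. Eq. 3).

    losses : (n_models, M) array where every objective is a loss
             (lower is better).
    """
    n = losses.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # h_j dominates h_i: no worse everywhere, strictly better somewhere
            if i != j and np.all(losses[j] <= losses[i]) \
                      and np.any(losses[j] < losses[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)
```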
One way of assessing the quality of a Pareto front, and thus of instantiating the HPO loss function \(\mathcal{L}\) in this case, is via so-called Pareto front quality indicators [1]. These can be categorized into external and internal indicators, where external indicators measure the convergence of a Pareto front to the optimal one, i.e., the one that the user prefers. Yet, in real-world problems, computing the entire Pareto front space is infeasible and there is no way to describe the desiderata beforehand. In contrast, internal indicators assess the Pareto front quality by measuring specific characteristics. A very common indicator is the hypervolume [11], quantifying
the volume occupied by the Pareto front w.r.t. a reference point. Other indicators evaluate factors such as uniformity of solution distribution (e.g., spacing indicator [12]), range of values covered by the Pareto front (e.g., maximum spread indicator [13]), or proximity to specific threshold points (e.g., R2 indicator [14]). Even though internal indicators can be effectively used as a loss function in HPO of MO-ML algorithms to quantify the quality of Pareto front and thus of a configuration, choosing the measure leading to a Pareto front which has a desired shape requires deep expert knowledge and thus, is a hard task for the user. In particular, a user has to map their implicit desiderata for the Pareto front shape to properties of the quality indicators without a concrete way of tailoring the HPO approach to these desiderata.
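As a concrete illustration of the most common indicator, the hypervolume of a two-objective minimization front can be computed with a simple sweep. The sketch below assumes the input points are already mutually non-dominated and that the reference point is component-wise worse than every model; both assumptions, like the function itself, are ours.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front of losses w.r.t. reference point
    `ref` (the worst admissible model); lower losses give a larger value."""
    pts = np.asarray(front, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]          # sort by first loss, ascending
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:                          # y decreases along a clean front
        hv += (ref[0] - x) * (prev_y - y)     # stack disjoint rectangles
        prev_y = y
    return hv
```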
### Preference Learning
Preference learning (PL) [17] is a subfield of machine learning dealing with the problem of learning from different forms of preferences. Although, PL in general comprises a large set of learning problems, we focus on the object ranking problem [16] and in particular of learning to rank objects based on a given set of pairwise preferences.
In the object ranking problem, we consider a space of objects \(\mathcal{O}\), where each object \(o\in\mathcal{O}\) is represented as a feature vector \(\boldsymbol{f}_{o}\). Such an object can be an item, a document, or (in our case) a Pareto front. Preferences among two objects \(o_{i},o_{j}\in\mathcal{O}\) are denoted as \(o_{i}>o_{j}\) indicating that object \(o_{i}\) is preferred over \(o_{j}\). The corresponding space of pairwise rankings over \(\mathcal{O}\) is denoted as \(\mathcal{R}(\mathcal{O})\). Then, given a dataset of the form \(\mathcal{U}=\{o_{i,1}>o_{i,2}\}_{i=1}^{U}\), the goal is to learn a function \(f:\mathcal{O}\times\mathcal{O}\rightarrow\mathcal{R}(\mathcal{O})\) that, given two objects, returns the correct pairwise ranking of these two objects.
Most approaches to object ranking work by learning a utility function \(u:\mathcal{O}\rightarrow\mathbb{R}\) returning a utility score for an object that can be used to create a ranking of two objects by sorting them according to their utility score.
## 3 Related Work
In the following, we give a short overview of related work in the areas of human-centered HPO/AutoML and the usage of preference learning in AutoML-related fields.
The field of human-centered HPO/AutoML has gained increasing traction in the last years with approaches targeted at explaining the hyperparameter optimization process to increase the trust in automated tools [15, 16, 17, 18]. This also includes concrete tools developed to help a user interpret the results such as XAutoML [17] or DeepCave [20].
Motivated from a similar stance that the user should be put back into the loop of the AutoML process to a certain extent, Francia, Giovanelli, and Pisano [16] propose to leverage structured argumentation to semi-automatically constrain search spaces based on user input. Going further in this direction, and most similar to our approach, is the work by Kulbach, Philipp, and Thoma [17], building upon the observation that users are often unable to concretely configure a loss function in an AutoML tool fitting for their problem at hand. They suggest learning a loss function customized to the user as a scalarization over several frequently used standard loss functions via PL methods. Our work differs from theirs in three main aspects: (1) First, we consider tuning the hyperparameters of MO-ML algorithms, while they try to find complete AutoML pipelines with standard classification/regression algorithms. Hence, there is not only a difference in the considered meta-problem (AutoML vs. HPO), but also in the types of machine learning algorithms to be composed/tuned (pipelines of standard ML algorithms vs. MO-ML algorithms). (2) Second, we show pairwise comparisons of Pareto fronts to the user, compared to the pairwise comparisons of feature vectors, targets and predictions presented by Kulbach, Philipp, and Thoma [17]. We believe that our comparisons are much easier for a user to make. (3) Moreover, the Pareto front quality indicator learned by us is not constrained to be a scalarization of existing loss functions or quality indicators. Instead, we can, in principle, leverage any object ranking approach that allows for the extraction of a utility function.
Lastly, our approach is not to be confused with multi-objective HPO [16, 17], where HPO problem itself features multiple loss functions to be optimized, but the ML algorithm itself only returns a single solution.
## 4 Interactive Hyperparameter Optimization in Multi-Objective Problems
Our approach tackles the HPO problem (cf. Eq. 2) for MO-ML algorithms, i.e., learning algorithms that return a Pareto front of models instead of a single model as a result of the learning process. It works in three phases, as depicted in Fig. 1:
1. **Preliminary Sampling**: In the preliminary sampling phase we sample a fixed but small amount of hyperparameter configurations, evaluate these configurations and store the corresponding Pareto fronts of models returned by the MO-ML algorithm.
2. **Interactive Preference Learning**: In the interactive preference learning phase, we construct pairs of Pareto fronts from the ones obtained in the preliminary sampling phase and show these pairs to the user, who rates which of two shown Pareto fronts they prefer. Based on the pairwise preferences obtained from the user and feature representations of the Pareto fronts underlying these pairwise preferences, we learn a latent utility function, which, given a Pareto front, outputs a utility score.
3. **Utility-driven HPO**: In this HPO phase, we instantiate an HPO tool (e.g. SMAC [15, 16] as in our experiments) with the previously learned utility function as a loss function and perform standard HPO.
### Preliminary Sampling
The underlying goal of the preliminary sampling phase is to obtain a set of Pareto fronts which we can use to construct pairs of Pareto fronts, which the user can rate in the
subsequent stage. Keeping in mind that we want to learn a utility function from the pairwise comparisons provided by the user, our set of Pareto fronts should ideally cover the space of possible Pareto fronts reasonably well. This avoids potential generalization problems of the learned utility function if it is provided with a Pareto front from a part of the Pareto front space that is very far from any training data.
Recall that we perform HPO and our MO-ML algorithm \(A:\mathbb{D}\times\mathbf{\Lambda}\to 2^{\mathcal{H}}\) returns different Pareto fronts of models for a given dataset \(\mathcal{D}_{train}\in\mathbb{D}\) based on different hyperparameter configurations \(\boldsymbol{\lambda}\in\boldsymbol{\Lambda}\). As such, we can obtain a set of Pareto fronts of models by sampling a fixed number of hyperparameter configurations \(\boldsymbol{\Lambda}_{random}\subset\boldsymbol{\Lambda}\) at random, training the algorithm instantiated with the corresponding hyperparameter configuration on the training data \(\mathcal{D}_{train}\) and evaluating it according to our loss functions \(\mathcal{L}_{m}\) on \(\mathcal{D}_{val}\). This leads to a set of Pareto fronts defined as
\[\mathcal{P}=\left\{P_{\mathcal{D}_{val}}(A(\mathcal{D}_{train},\boldsymbol{ \lambda}))|\boldsymbol{\lambda}\in\boldsymbol{\Lambda}_{random}\right\}. \tag{4}\]
As the rich literature on AutoML and HPO shows, random search leads to a good coverage of the hyperparameter configuration space (Bischl et al., 2023). However, this does not automatically entail a reasonable coverage of the corresponding Pareto front space.
### Interactive Preference Learning
The goal underlying this phase of our approach is to construct pairs of Pareto fronts to show to the user and to learn a utility function of the form \(u:\mathcal{P}\rightarrow\mathbb{R}\), which, given a Pareto front, returns a utility score for it based on the preferences obtained from the user.
**Acquiring User Preferences.** More formally, we start by constructing a set of pairs of Pareto fronts as
\[\Omega=\left\{(P_{1},P_{2})|P_{1},P_{2}\in\mathcal{P},P_{1}\neq P_{2}\right\}, \tag{5}\]
which leads to a number of pairs quadratic in the number of sampled Pareto fronts. Since we do not want to overwhelm the user by showing them too many such pairs, we can fall back to subsampling \(\Omega\) to decrease the number of pairs shown to the user. However, as we show in the experimental evaluation later, we can also simply have a rather short preliminary sampling phase, leading to a small number of Pareto fronts and thus also to a reasonably sized set \(\Omega\) of pairs of Pareto fronts, without compromising too much of the performance of our approach. Once the user has rated, for each such pair \((P_{1},P_{2})\in\Omega\), whether they prefer the Pareto front \(P_{1}\) over \(P_{2}\), we have an object ranking dataset of the form
\[\mathcal{U}=\{P_{i,1}>P_{i,2}\}_{i=1}^{U}\,, \tag{6}\]
where, without loss of generality, we assume that the user prefers Pareto front \(P_{i,1}\) over Pareto front \(P_{i,2}\).
**Feature Representation of Pareto Fronts.** To learn a utility function based on the object ranking dataset defined in Eq. 6, we additionally require a feature representation of a Pareto front such that the object ranking algorithm we employ can generalize over unseen Pareto fronts. More precisely, we aim to encode the Pareto front \(P_{\mathcal{D}_{val}}(H)\) returned by \(A\) in a \(d\)-dimensional feature representation \(\boldsymbol{f}_{P_{\mathcal{D}_{val}}(H)}\in\mathbb{R}^{d}\), as depicted in Fig. 2. To this end, we assume to have access to all models returned by \(A\), i.e., \(H\) including all dominated models internally learned by \(A\), and that there is some order imposed on the model space \(\mathcal{H}\) and hence on \(H\). We evaluate all of these models \(h_{b}\in H\) with each loss function \(\mathcal{L}_{m}\) on \(\mathcal{D}_{val}\), resulting in the following matrix
\[\boldsymbol{L}=\left(\begin{array}{cccc}\mathcal{L}_{1}(h_{1},\mathcal{D}_ {val})&\ldots&\mathcal{L}_{M}(h_{1},\mathcal{D}_{val})\\ \vdots&\ddots&\vdots\\ \mathcal{L}_{1}(h_{B},\mathcal{D}_{val})&\ldots&\mathcal{L}_{M}(h_{B}, \mathcal{D}_{val})\end{array}\right), \tag{7}\]
where \(B\geqslant|H|\) is the maximal number of models returned by \(A\). If \(A\) returns only \(B^{\prime}<B\) models, we forward impute the missing rows for \(B^{\prime}+1\leqslant b\leqslant B\) as
\[\boldsymbol{L}_{m,b}\leftarrow\mathcal{L}_{m}(h_{b-1},\mathcal{D}_{val})\ \forall 1 \leqslant m\leqslant M\,. \tag{8}\]
For each model \(h_{b}\) we check if it is dominated by the previous model \(h_{b-1}\) as defined in Eq. 3. If it is dominated, its loss values in the matrix are replaced by the ones from the previous non-dominated model following Eq. 8. At the end of this process, this matrix only contains loss values of models contained in the Pareto front. Last, the matrix is flattened and standardized (\(\mathcal{N}(\cdot)\)) across \(\Omega\):
\[\boldsymbol{f}_{P_{\mathcal{D}_{val}}(H)}=\left[\mathcal{N}(\boldsymbol{L}_{1,1}),\ldots,\mathcal{N}(\boldsymbol{L}_{B,M})\right]\,. \tag{9}\]
Figure 1: Overview of the three phases of our approach: **Preliminary Sampling** provides the user with different Pareto fronts, **Interactive Preference Learning** allows the user to express their preferences, finally, **Utility-driven HPO** guides the optimization to the user desiderata.
Note that we assume the order on our models to, on the one hand, ensure that the replacement and imputation strategy works as described above, and, on the other hand, to act as a positional encoding which, assuming a suitable order relation is defined, in itself contains valuable domain knowledge. For our experimental evaluation, we define an ordering based on the loss values of one of the considered loss functions.
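A minimal sketch of this featurization, under the assumptions just stated (rows pre-ordered by the agreed model order, forward imputation per Eq. 8, dominated rows overwritten, and standardization across \(\Omega\) applied separately afterwards); the function name is our own.

```python
import numpy as np

def pareto_features(losses, B):
    """Fixed-length feature vector of one Pareto front (cf. Eqs. 7-9).

    losses : (B', M) loss matrix of the models returned by A,
             rows ordered by the agreed model order.
    B      : maximal number of models; shorter runs are forward-imputed.
    """
    L = np.asarray(losses, dtype=float)
    if L.shape[0] < B:                               # forward imputation, Eq. (8)
        pad = np.repeat(L[-1:], B - L.shape[0], axis=0)
        L = np.vstack([L, pad])
    for b in range(1, B):                            # overwrite dominated rows
        if np.all(L[b - 1] <= L[b]) and np.any(L[b - 1] < L[b]):
            L[b] = L[b - 1]
    return L.ravel()   # standardize each coordinate across all fronts afterwards
```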
**Learning a Utility Function for Pareto Fronts.** With the object ranking dataset as defined in Eq. 6 and the feature representation previously described in Eq. 9, we can now learn the utility function. To this end, we leverage the RankSVM approach by Joachims (2002), which generalizes a standard SVM to the case of object ranking. This approach is very appealing for our use case as it allows us to easily extract the latent utility function learned as part of the object ranker, which we want to use in the subsequent stage.
The idea underlying the (linear) RankSVM is that for every pairwise comparison \(P_{1}\succ P_{2}\) in our object ranking dataset \(\mathcal{U}\) we want that, without loss of generality, the hyperplane defined by the learned weight vector \(\mathbf{w}\in\mathbb{R}^{d}\) separates \(P_{1}\) and \(P_{2}\). Formally, it should hold for every pair:
\[P_{1}\succ P_{2} \Leftrightarrow\mathbf{w}^{\intercal}\mathbf{f}_{P_{1}}>\mathbf{w}^{ \intercal}\mathbf{f}_{P_{2}} \tag{10}\] \[\Leftrightarrow\mathbf{w}^{\intercal}\left(\mathbf{f}_{P_{1}}-\mathbf{f}_{P_{ 2}}\right)>0\] (11) \[\Leftrightarrow\mathbf{w}^{\intercal}\left(\mathbf{f}_{P_{2}}-\mathbf{f}_{P_{ 1}}\right)<0\,. \tag{12}\]
As in the standard case of SVMs this problem is NP-hard as noted by Joachims (2002), which can be solved by the standard problem relaxation leveraging slack variables as for normal SVMs. We refer to Joachims (2002) for details.
With the observations formalized in Eq. 10-12 in mind, one can implement a RankSVM with a standard classification SVM trained with a dataset of the form
\[\begin{split}\mathcal{D}_{\mathit{SVM}}&=\big{\{} \big{(}\big{(}\mathbf{f}_{P_{1}}-\mathbf{f}_{P_{2}}\big{)}\,,1\big{)}\big{|}P_{1} \succ P_{2}\in\mathcal{U}\big{\}}\\ &\quad\cup\big{\{}\big{(}\big{(}\mathbf{f}_{P_{2}}-\mathbf{f}_{P_{1}} \big{)}\,,0\big{)}\big{|}P_{1}\succ P_{2}\in\mathcal{U}\big{\}}\,.\end{split} \tag{13}\]
We add a positive example to encourage Eq. 11 and at the same time a negative example to encourage Eq. 12, which both enforce a balanced dataset.
After training a standard classification SVM on \(\mathcal{D}_{\mathit{SVM}}\), we define the utility function \(u:\mathcal{P}\rightarrow\mathbb{R}\) via the SVM weight vector \(\mathbf{w}\) leveraging the feature representation of a Pareto front \(P\in\mathcal{P}\) defined in Eq. 9 as \(u(P)=\mathbf{w}^{T}\mathbf{f}_{P}\).
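Operationally, the construction in Eq. 13 can be realised with any off-the-shelf linear SVM. The sketch below uses scikit-learn's LinearSVC as one possible choice (not necessarily the implementation used in the paper) and disables the intercept so that the separating hyperplane passes through the origin, as Eqs. 10-12 require.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_utility(pairs):
    """Fit a linear RankSVM from pairwise preferences (cf. Eq. 13).

    pairs : iterable of (f_win, f_lose) feature vectors, where the user
            preferred the Pareto front with features f_win.
    Returns the utility u(P) = w^T f_P as a callable.
    """
    X, y = [], []
    for f_win, f_lose in pairs:
        X.append(f_win - f_lose); y.append(1)    # encourages Eq. (11)
        X.append(f_lose - f_win); y.append(0)    # encourages Eq. (12)
    svm = LinearSVC(fit_intercept=False).fit(np.array(X), np.array(y))
    w = svm.coef_.ravel()
    return lambda f: float(w @ f)
```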
Although we only explained the linear RankSVM, the kernel trick Scholkopf and Smola (2002) can be applied as usual in the case of SVMs leading to non-linear versions.
All in all, preference learning and in particular object ranking offers a way to quantify the quality of a Pareto front in terms of a single scalar value w.r.t. the user desiredata without asking the user for this concrete scalar value. The RankSVM idea leverages the robustness and effectiveness of support vector machines to operationalize this idea while offering a clear and interpretable ranking mechanism.
### Utility-driven HPO
As we learned the utility function \(u:\mathcal{P}\rightarrow\mathbb{R}\) from the user preferences, we want to leverage it as a Pareto front quality indicator to optimize with a standard HPO tool. To this end, we leverage the well-known HPO tool SMAC Hutter et al. (2011); Lindauer et al. (2022) instantiated with the learned utility function \(u\) as a cost function.
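Since the exact SMAC invocation depends on the tool version, the sketch below shows this phase with a generic search loop as a stand-in; all callables (`sample_config`, `train_and_front`, `features`, `utility`) are hypothetical placeholders. Any HPO tool accepting a scalar cost function is plugged in the same way, with cost \(=-u(P)\).

```python
import numpy as np

def utility_driven_hpo(sample_config, train_and_front, features, utility,
                       budget=30, rng=None):
    """Phase 3 as a generic loop: maximize the learned utility u(P)."""
    rng = rng or np.random.default_rng()
    best_cfg, best_u = None, -np.inf
    for _ in range(budget):
        cfg = sample_config(rng)             # draw a hyperparameter configuration
        front = train_and_front(cfg)         # run the MO-ML algorithm A
        u = utility(features(front))         # learned Pareto front quality indicator
        if u > best_u:
            best_cfg, best_u = cfg, u
    return best_cfg, best_u
```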
## 5 Evaluation
We evaluate our approach in a case-study related to decreasing the energy consumption of ML models and in particular consider an MO-ML algorithm optimizing for both accuracy and energy consumption showing how our approach can be used in the context of Green AutoML Tornede et al. (2021). In the following, we first detail the general experimental setup and then present two experiments performed.
### General Experimental Setup
The MO-ML algorithm, whose hyperparameters we want to tune, is a wrapper for a funnel-shaped deep neural network (DNN) learning algorithm that exposes the same hyperparameters as the underlying DNN learning algorithm, but performs a grid-search over the number of epochs to train the DNN in order to return a Pareto front of models (cf. Appendix D.1).

These models are all identical, but trained for different numbers of epochs, and can thus be seen as snapshots of the DNN learning curve Mohr and van Rijn (2022) at different epochs. A lower number of epochs usually leads to a lower energy consumption, but also to a lower accuracy, indicating that the two objectives are potentially conflicting. The concrete hyperparameters our DNN algorithm exposes are defined by LCBench Zimmer et al. (2021), a well-known multi-fidelity deep learning benchmark, and given in Table 2 in the appendix. LCBench comprises evaluations of over 2000 funnel-shaped MLP neural networks with varying hyperparameters on 35 datasets of the OpenML CC-18 suite Bischl et al. (2019). In the spirit of Green AutoML Tornede et al. (2021), we leverage the benchmark surrogate for LCBench provided by YAHPO-GYM Pfisterer et al. (2021).1 We measure the accuracy of a model as the validation accuracy on \(33\%\) of the corresponding OpenML CC-18 dataset. We estimate the power consumption in watts (\(W\)) for training a model by assuming the maximum consumption for the whole provided training time. The evaluations present in LCBench were performed on an Intel Xeon Gold 6242 with a maximum consumption of \(150\,W\).

Figure 2: Visualization of the feature representation of a Pareto front based on two loss functions.
As noted earlier, many users struggle to choose a Pareto front quality indicator as a loss function for an HPO tool that yields Pareto fronts with their desired shape. As such, our evaluation focuses on users that are likely to make a wrong decision w.r.t. choosing the correct quality indicator, but can very well label pairwise comparisons of Pareto fronts according to an indicator without knowing the indicator itself. Hence, we simulate users by labeling pairwise comparisons according to hypervolume (HV), spacing (SP), maximum spread (MS) and R2 as Pareto front quality indicators. HV quantifies the volume of the front by merging the hypercubes determined by each of its models \(h\in H\) and a reference point \(r\) (i.e., the worst possible model). HV values range from \(0\) to \(1\), where \(1\) is the optimal value. SP is one of the most popular uniformity indicators and gauges the variation of the distance between models in the Pareto front. MS is a widely used spread indicator, which measures the range of a Pareto front by considering the maximum extent on each objective. R2 measures the proximity to a specific reference point \(r\) via the Chebyshev norm. A formal definition of these indicators can be found in Appendix B.
The order over the models returned by \(A\), which we assume for the feature representation defined in Sec. 4.2, is given by the energy consumption loss function, which means that models are ordered based on their energy consumption.
All code and documentation required to reproduce any of the results can be found on GitHub2.
Footnote 2: [https://github.com/automl/interactive-mo-ml](https://github.com/automl/interactive-mo-ml)
### Experiment: Object Ranking Performance
In the following, we evaluate our object ranker (cf. Sec. 4.2).
**Additional Experimental Setup.** We tune the hyperparameters of our preference learning models, one for each quality indicator, on top of the first 3 datasets of LCBench: KDDCup09_appetency, covertype, Amazon_employee_access. Each configuration has been evaluated by averaging the performance achieved in those datasets over \(3\) different seeds. As to the evaluation within each dataset, we performed a cross-validation: we split the sampled Pareto fronts \(\mathcal{P}\) into \(5\) folds and compute all possible pairwise comparisons within each fold. Specifically, we set the number of sampled Pareto fronts \(|\mathcal{P}|\) at \(40\), so that we can create \(5\) folds of \(8\) elements, which translates into \(\binom{8}{2}=28\) pairwise comparisons. Concretely, at each cross-validation evaluation, we use the pairwise comparisons of the \(4\) training folds to predict the global ranking of the \(8\) Pareto fronts in the test fold, and compare it with the ranking of the Pareto fronts given by the quality indicator at hand. These rankings are compared in terms of a ranking loss function called Kendall's Tau correlation [12]. Roughly speaking, the Kendall's Tau correlation measures how correlated two rankings are. A correlation score of \(-1\) can be interpreted as an inverse correlation, a score of \(0\) as no correlation, and a score of \(1\) as perfect correlation. The corresponding Kendall's Tau scores are averaged over the folds. Note that we assume two Pareto fronts to be incomparable and thus "tied" in their ranking if their indicator values are very close. In particular, we employ the Fisher-Jenks algorithm [10, 11] to operationalize this idea, which we detail in Appendix C.
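Computationally, the reported score reduces to a single SciPy call; the thin wrapper and its argument names are illustrative.

```python
from scipy.stats import kendalltau

def ranking_agreement(utility_scores, indicator_scores):
    """Kendall's Tau between the ranking induced by the learned utility and
    the ranking induced by the ground-truth quality indicator."""
    tau, _p_value = kendalltau(utility_scores, indicator_scores)
    return tau
```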
**Result Discussion.** In Fig. 3, we plot the performance of our models in terms of the Kendall's Tau averaged across the folds, with the error bars defined by the corresponding standard deviation. In particular, we show how this performance varies with the number of available pairwise comparisons, i.e., the size of the object ranking training data, highlighting that, depending on the quality indicator, we obtain a reliable ranking model for Pareto fronts with rather little training data.
Generally, as is to be expected, the correlation scores increase for each utility function as the number of comparisons increases, independent of the quality indicator based on which they are learned. However, both the score at the lowest number of comparisons and the degree to which it increases with a growing number of examples vary substantially between the different utility functions. This indicates that it is easier for our object ranker to learn a behavior similar to HV or R2 than to SP or MS, which also coincides with the fact that HV and R2 are external quality indicators whereas SP and MS are internal indicators (cf. Sec. 2.2). A potential reason for these differences in the modeling performance of our object ranker might be grounded in the feature representation of Pareto fronts we chose and in the linearity of the RankSVM used as an object ranker. In particular, both SP and MS require computations over pairs of models, such that a quadratic kernel or feature transformation might be better suited for these indicators. Nevertheless, as we see in the next experiment, the ranking performance seems to be sufficient to guide the HPO process.
### Experiment: HPO Approach Performance
In the following, we leverage the remaining 32 datasets of LCBench to evaluate our complete approach from phases 1 to 3. We will demonstrate that our HPO approach performs much better than SMAC assuming a user that chooses the wrong indicator, and that it performs comparably assuming an advanced user knowing which indicator to pick.

Figure 3: Kendall's Tau of the preference learning models.
**Additional Experimental Setup.** In order to quantify how well our approach works, we compare SMAC instantiated with each of the Pareto front indicators above as a loss function (IB) against our approach with the user simulated as mentioned above (PB). In particular, for each Pareto front indicator, we run our approach with a simulated user that behaves according to that indicator and compare against the HPO tool instantiated with each of the indicators as a loss function. That way, we can not only quantify how much better our approach works w.r.t. each of the quality indicators under the assumption that a user chooses a wrong quality indicator, but also verify that our approach does not perform substantially worse in cases where the user picks the correct indicator. We run both IB and PB for a budget of \(30\) evaluations on each of the datasets for \(3\) seeds and report the mean and standard deviation over the seeds and datasets.
**Result Discussion.** Table 1 visualizes the comparison of SMAC instantiated with different Pareto front quality indicators as a loss function (IB) and our approach based on the learned utility function as a Pareto front quality indicator (PB). The indicators in the rows represent the ones used for labeling the user preferences, and hence for training the preference learning models. The indicators in the columns represent the quality indicators chosen for optimization in SMAC. As a consequence, in each cell, we find the performance (averaged over seeds and datasets) and the respective standard deviation of the preference-based SMAC (PB) and the indicator-based SMAC (IB). This is expressed in terms of the quality indicator leveraged to rank the Pareto fronts (i.e., the one given in the row), hence providing us with an estimation of how compliant the final Pareto front is with the user preferences. The cells on the diagonal correspond to situations where our approach is compared to IB instantiated with the "correct" quality indicator, whereas the off-diagonal cells correspond to scenarios where a user chooses the "wrong" quality indicator for IB. For better visual interpretability, we color cells with a blue tone depending on how much better our approach (PB) is relative to IB given in the respective column, and red in cases where we are worse. Moreover, we highlight the better performance value for each cell in bold. If our learned utility function perfectly resembled the ground-truth quality indicator, we would expect the two values in each diagonal cell to be identical and the coloring to be white as a consequence. Moreover, the better our approach is in one of the settings, the darker the blue of the corresponding cell. At the same time, the worse our approach is in one of the settings, the darker the red in the corresponding cell.
As the table shows, our approach (PB) behaves comparably to IB in cases where a user picks the correct Pareto front quality indicator (diagonal) and almost always better in cases where a user picks the wrong Pareto front quality indicator. In particular, we perform better or equal in \(11/16\) cases, whereas IB performs slightly better only in \(5/16\) cases. In cases where our approach performs better, the improvements are often substantial, whereas in cases of degradation our approach is often only slightly worse.
Overall, our experimental evaluation demonstrates that our approach successfully frees users from selecting the correct Pareto front quality indicator aligned with their desiderata at the slight cost of visually comparing a low number of Pareto fronts upfront.
## 6 Conclusion
In this paper, we propose a human-centered interactive HPO approach tailored towards MO-ML, leveraging preference learning to extract desiderata from users that guide the optimization. In particular, we learn a utility function for Pareto fronts based on pairwise preferences given by the user, which we use as a loss function in a subsequent standard HPO process. In an experimental study, we demonstrate that our end-to-end approach performs much better than off-the-shelf HPO with a user that chooses an indicator not aligned with their desiderata, and that it performs comparably to off-the-shelf HPO operated by an advanced user knowing which indicator to pick. As such, our approach successfully frees users from selecting the correct Pareto front quality indicator aligned with their desiderata at the slight cost of visually comparing a low number of Pareto fronts upfront.
As described in more detail in the limitations section in Appendix A, our approach naturally also offers room for future work. For example, we deem it interesting to design other feature representations with fewer assumptions and to generalize our approach to a larger number of loss functions, as it is currently practically limited to two, since users might have a hard time rating higher-dimensional Pareto fronts in the interactive part.
(Table 1 body not recoverable from the source; only the caption survives extraction.)

Table 1: Comparison between indicator-based HPO (i.e., IB, columns) and preference-based HPO (i.e., PB, rows). The preference learning model is trained using 28 pairwise comparisons.
## Acknowledgements
Alexander Tornede and Marius Lindauer acknowledge funding by the European Union (ERC, "ixAutoML", grant no.101041029). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. Tanja Tornede was supported by the German Federal Ministry of the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (GreenAutoML4FAS project no. 67KI32007A).
|
2309.11185 | Harer-Zagier type recursion formula for the elliptic GinOE | We consider the real eigenvalues of the elliptic Ginibre matrix indexed by
the non-Hermiticity parameter $\tau \in [0,1]$, and present a Harer-Zagier type
recursion formula for the even moments in the form of an $11$-term recurrence
relation. For the symmetric GOE case ($\tau=1$), it reduces to a known 5-term
recurrence relation. On the other hand, for the asymmetric cases when $\tau <
1$, the recursion formula is new, even in the special case of the well-studied
Ginibre ensemble ($\tau=0$), where it reduces to a 3-term recurrence. For the
proof, we derive a seventh-order linear differential equation for the moment
generating function. | Sung-Soo Byun | 2023-09-20T10:17:26Z | http://arxiv.org/abs/2309.11185v2 | # Harer-Zagier type recursion formula for the elliptic GinoE
###### Abstract.
We consider real eigenvalues of the elliptic Ginibre matrix indexed by the non-Hermiticity parameter \(\tau\in[0,1]\), and present a Harer-Zagier type recursion formula for the even moments in the form of an \(11\)-term recurrence relation. For the Ginibre case when \(\tau=0\), this formula simplifies to a \(3\)-term recurrence relation. On the other hand, for the GOE case when \(\tau=1\), it reduces to a \(5\)-term recurrence relation, recovering the result established by Ledoux. For the proof, we employ the skew-orthogonal polynomial formalism and the generalised Christoffel-Darboux formula. Together with Gaussian integration by parts, these enable us to derive a seventh-order linear differential equation for the moment generating function.
Sung-Soo Byun was partially supported by the POSCO TJ Park Foundation (POSCO Science Fellowship) and by the New Faculty Startup Fund from Seoul National University.
## 1. Introduction and main results
Random Matrix Theory (RMT) enjoys an intimate connection with various branches of mathematics and physics [2]. One prominent illustration of this relationship is the Harer-Zagier formula [55], which stands as a well-known example demonstrating the combinatorial and topological significance inherent in RMT statistics. While the Harer-Zagier formula originates in the study of the moduli space of curves, it also gives rise to a fundamental formula in the study of spectral moments of classical random matrix ensembles (cf. [22]).
To be more concrete, let us consider the Gaussian Unitary Ensemble (GUE) \(X^{\rm GUE}\) picked randomly with respect to the measure proportional to \(e^{-\frac{1}{2}\operatorname{Tr}X^{2}}\,dX\) on the space \(\mathcal{H}_{N}\) of \(N\times N\) Hermitian matrices. Here, \(dX\) is the Lebesgue measure on \(\mathcal{H}_{N}\cong\mathbb{R}^{N^{2}}\). One of the most basic observables of the GUE is its \(p\)-th spectral moment
\[M_{p}^{\rm GUE}:=\mathbb{E}\Big{[}\operatorname{Tr}(X^{\rm GUE})^{p}\Big{]}, \qquad p\in\mathbb{N}. \tag{1.1}\]
Note here that, due to symmetry, all odd moments vanish. On the other hand, it possesses non-trivial even moments. For instance, the first few even moments are given by
\[M_{0}^{\rm GUE}=N,\qquad M_{2}^{\rm GUE}=N^{2},\qquad M_{4}^{\rm GUE}=2N^{3}+N,\qquad M_{6}^{\rm GUE}=5N^{4}+10N^{2}.\]
In [55], it was demonstrated using the Wick calculus that the spectral moments satisfy the \(3\)-term recursion formula
\[(p+1)M_{2p}^{\rm GUE}=(4p-2)N\,M_{2p-2}^{\rm GUE}+(p-1)(2p-1)(2p-3)M_{2p-4}^{ \rm GUE}. \tag{1.2}\]
This recursion formula yields that for any non-negative integer \(p\), the spectral moment \(M_{2p}^{\rm GUE}\) has the expansion of the form
\[M_{2p}^{\rm GUE}=\sum_{g=0}^{\lfloor p/2\rfloor}c(g;p)\,N^{p+1-2g}. \tag{1.3}\]
Here, the coefficients \(c(g;p)\) can be identified with the number of pairings of the edges of a \(2p\)-gon, dual to a map on a compact Riemann surface of genus \(g\). For this reason, a formula of the type (1.3)
is commonly referred to as the genus expansion in RMT. Indeed, this type of formula provides one of the earliest examples of a topological expansion in the theory of matrix integrals [12]. In particular, the leading coefficient \(c(0;p)\) is given by the \(p\)-th Catalan number
\[c(0;p)=\frac{1}{p+1}\binom{2p}{p}. \tag{1.4}\]
Since the Catalan number corresponds to the even moments of the Wigner's semicircle law
\[d\mu_{\rm sc}(x):=\frac{\sqrt{4-x^{2}}}{2\pi}\,\mathbb{1}_{(-2,2)}(x)\,dx, \tag{1.5}\]
this leading coefficient (1.4) gives rise to the convergence of the GUE density towards (1.5), see e.g. [36, Exercise 1.6]. Furthermore, when combined with the loop equation formalism, the expansion (1.3) can be utilized to derive the large-\(N\) expansion of the densities, cf. [77].
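As a quick sanity check, the recursion (1.2) can be iterated symbolically; the short sympy sketch below (the loop range is an arbitrary choice) reproduces the even moments listed above.

```python
import sympy as sp

N = sp.Symbol("N")
M = {0: N}                         # M_0^GUE = N; all odd moments vanish
for p in range(1, 5):
    prev = M.get(p - 2, 0)         # M_{2p-4}^GUE, absent (zero) when p = 1
    M[p] = sp.expand(((4*p - 2) * N * M[p - 1]
                      + (p - 1) * (2*p - 1) * (2*p - 3) * prev) / (p + 1))
# M[1] = N**2, M[2] = 2*N**3 + N, M[3] = 5*N**4 + 10*N**2
```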
The Harer-Zagier formula (1.2) was revisited by Haagerup and Thorbjornsen in [56]. They re-derived this recursion formula using a closed-form expression for the moment-generating function, which is given in terms of a confluent hypergeometric function, cf. (3.24). Furthermore, they obtained a similar formula for the Laguerre Unitary Ensemble (LUE). Additional discussions on the LUE spectral moments can be found in [25, 29, 27], and similar work on the Jacobi Unitary Ensemble (JUE) is detailed in [51]. Notably, for all these classical GUE, LUE, and JUE models, the spectral moments can be evaluated in closed form using hypergeometric polynomials [26]. We also refer to [23, 44] for recent work on discrete extensions of the above-mentioned results and to [38] for an extension of (1.2) to the \(d\)-dimensional Fermi gas in an harmonic trap. In the context of RMT, the Harer-Zagier type formulas can be employed to derive (small) deviation inequalities for extreme eigenvalues [56, 58, 59], which play an important role in the study of the associated universality problems, see e.g. [32, 33]. Furthermore, as previously mentioned, it can be used to study the finite-size corrections of the densities or the counting statistics [11, 28, 45, 69, 78]. For further mathematical implementation beyond RMT, we refer the reader to a recent work [50], which delves into expressing the Harer-Zagier formula within the context of noncommutative geometry. Furthermore, for applications of these formulas in the context of the time-delay matrix of quantum dots and \(\tau\)-function theory, see [62, 63, 64, 24, 65] and references therein.
Beyond the unitary invariant ensembles, it is natural to investigate the Harer-Zagier type formula for random matrices with orthogonal symmetry. The single-most fundamental model in this class is perhaps the Gaussian Orthogonal Ensemble (GOE) \(X^{\rm GOE}\) that follows the probability distribution proportional to \(e^{-\frac{1}{4}\operatorname{Tr}X^{2}}\,dX\), where in this case \(dX\) is the Lebesgue measure on the space of symmetric matrices \(\mathcal{S}_{N}\cong\mathbb{R}^{N(N+1)/2}\). Similar to the above, we write
\[M_{p}^{\rm GOE}:=\mathbb{E}\Big{[}\operatorname{Tr}(X^{\rm GOE})^{p}\Big{]}, \qquad p\in\mathbb{N} \tag{1.6}\]
for the \(p\)-th spectral moment of the GOE. For instance, we have the following explicit evaluations
\[M_{0}^{\rm GOE} =N,\qquad M_{2}^{\rm GOE}=N^{2}+N,\] \[M_{4}^{\rm GOE} =2N^{3}+5N^{2}+5N,\qquad M_{6}^{\rm GOE}=5N^{4}+22N^{3}+52N^{2}+41N.\]
It is trivial but noteworthy for the latter discussion that \(M_{0}^{\rm GUE}=M_{0}^{\rm GOE}=N\) since they coincide with the number of eigenvalues.
As is widely recognized, the integrable structure of orthogonal ensembles is considerably more intricate when compared to their unitary counterparts (cf. [1]). This complexity leads to a delay in the investigation of the Harer-Zagier type formula for the GOE. Notably, Ledoux demonstrated
in [59, Theorem 2] that the GOE spectral moments satisfy the following \(5\)-term recurrence relation
\[\begin{split}(p+1)M_{2p}^{\text{GOE}}&=(4p-1)(2N-1)M_ {2p-2}^{\text{GOE}}\\ &\quad+(2p-3)(10p^{2}-9p-8N^{2}+8N)M_{2p-4}^{\text{GOE}}\\ &\quad-5(2p-3)(2p-4)(2p-5)(2N-1)M_{2p-6}^{\text{GOE}}\\ &\quad-2(2p-3)(2p-4)(2p-5)(2p-6)(2p-7)M_{2p-8}^{\text{GOE}}.\end{split} \tag{1.7}\]
We also refer to [66] for a combinatorial aspect of this formula. In [59], the formula (1.7) follows from a linear differential equation for the moment generating function (MGF), see also [70] and references therein for more recent work. The method of Ledoux relies on elementary yet nontrivial analysis employing Gaussian integration by parts and certain properties of the Hermite polynomials. This method can also be applied to re-derive the recursion (1.2) for the GUE. From the viewpoint of the integrable structure, the additional technical difficulty for the GOE, compared to the GUE, is that the \(1\)-point function (3.13) of the GOE consists of two parts. One of these parts is indeed the Christoffel-Darboux kernel of the Hermite polynomials, which coincides with the GUE density. For the GUE density part, one can make use of the classical Christoffel-Darboux formula. Let us also mention that the structure (3.13) illustrating the relation between the GOE and GUE densities gives rise to a recurrence relation of the mixed moments
\[\begin{split} M_{2p}^{\text{GOE}}&=(4N-2)M_{2p-2}^ {\text{GOE}}+4(2p-2)(2p-3)M_{2p-4}^{\text{GOE}}\\ &\quad+M_{2p}^{\text{GUE}}-(4N-3)M_{2p-2}^{\text{GUE}}-(2p-2)(2p- 3)M_{2p-4}^{\text{GUE}},\end{split} \tag{1.8}\]
see [59, Theorem 3].
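Since every quantity in (1.7) is explicit, the recurrence lends itself to a direct symbolic check for small \(N\). The following minimal sketch, added here for illustration and assuming SymPy, computes the GOE moments exactly for \(N=2\) by integrating \(\operatorname{Tr}X^{2p}\) monomial by monomial against the Gaussian entries, and then tests (1.7) at the first admissible index \(p=4\).

```python
# Exact symbolic check of the 5-term recursion (1.7) for N = 2 (a sketch).
# GOE convention: density ~ exp(-Tr X^2/4), i.e. independent entries with
# diagonal ~ N(0,2) and off-diagonal ~ N(0,1).
import sympy as sp

a, b, c = sp.symbols('a b c')
X = sp.Matrix([[a, b], [b, c]])
var = {a: 2, b: 1, c: 2}  # entry variances

def gauss_moment(sym, k):
    # E[g^k] for g ~ N(0, var[sym]); odd moments vanish
    return 0 if k % 2 else var[sym] ** (k // 2) * sp.factorial2(k - 1)

def goe_moment(p):
    # M_p = E[Tr X^p], evaluated term by term (the entries are independent)
    poly = sp.Poly(sp.expand((X ** p).trace()), a, b, c)
    return sum(coef * sp.prod(gauss_moment(s, e) for s, e in zip((a, b, c), mono))
               for mono, coef in zip(poly.monoms(), poly.coeffs()))

M = {2 * p: goe_moment(2 * p) for p in range(5)}
N, p = 2, 4
lhs = (p + 1) * M[2 * p]
rhs = ((4 * p - 1) * (2 * N - 1) * M[2 * p - 2]
       + (2 * p - 3) * (10 * p ** 2 - 9 * p - 8 * N ** 2 + 8 * N) * M[2 * p - 4]
       - 5 * (2 * p - 3) * (2 * p - 4) * (2 * p - 5) * (2 * N - 1) * M[2 * p - 6]
       - 2 * (2 * p - 3) * (2 * p - 4) * (2 * p - 5) * (2 * p - 6)
           * (2 * p - 7) * M[2 * p - 8])
print(lhs == rhs)  # True
```

The same routine reproduces the moments listed above, e.g. \(M_{6}^{\rm GOE}=546\) for \(N=2\).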
While there has been extensive study on the spectral moments of Hermitian random matrix ensembles, their non-Hermitian counterparts remain relatively unexplored, and we aim to make contributions in this direction. In non-Hermitian random matrix theory, the basic model typically considered is the Ginibre matrix, see [16, 17] for recent reviews. In particular, what distinguishes the real Ginibre matrix, referred to as GinOE, from its complex or quaternionic counterparts, GinUE and GinSE respectively, is the presence of purely real eigenvalues, see Figure 1. We shall focus on the statistics of real eigenvalues of real random matrices. Compared to Hermitian random matrices, the study of real eigenvalues of asymmetric random matrices has additional conceptual and technical challenges:
* the number of real eigenvalues is random;
* the classical Christoffel-Darboux formula does not apply.
We refer to [3, 18, 19, 35, 37, 40, 48, 57, 72, 71] and references therein for recent work on real eigenvalues of various asymmetric random matrices.
In this work, we consider the elliptic GinOE, the real random matrices \(X\equiv X_{\tau}\) that are distributed according to the probability measure proportional to
\[\exp\Big{(}-\frac{1}{2(1-\tau^{2})}\operatorname{Tr}(X^{\dagger}X-\tau X^{2})\Big{)}\,dX,\qquad\tau\in[0,1], \tag{1.9}\]
where \(dX\) is the Lebesgue measure on the space \(\mathbb{R}^{N^{2}}\) of \(N\times N\) matrices with real elements. Here, the parameter \(\tau\) is known as the non-Hermiticity parameter. Alternatively, the elliptic GinOE can be defined as
\[X_{\tau}:=\sqrt{\frac{1+\tau}{2}}\,S_{+}+\sqrt{\frac{1-\tau}{2}}\,S_{-},\qquad S_{\pm}:=\frac{1}{\sqrt{2}}(G\pm G^{T}). \tag{1.10}\]
Here \(G\) is an \(N\times N\) GinOE matrix, the random matrix with all independent Gaussian entries \(\operatorname{N}[0,1]\), which is consistent with (1.9) at \(\tau=0\). Notice that the symmetrisation \(S_{+}\) of the GinOE coincides with the GOE. The elliptic random matrix
model stands out as a well-known model, seamlessly bridging fundamental concepts in non-Hermitian and Hermitian random matrix theories. Namely, for \(\tau=0\), it recovers the GinOE, whereas in the limit \(\tau\uparrow 1\), it recovers the GOE. We refer the reader to [16, Section 2.3] and [17, Sections 2.8 and 5.5] for reviews as well as comprehensive references on the elliptic Ginibre ensembles. See also [39, 49, 54, 68] for some very recent works on the elliptic Ginibre matrices.
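As a quick numerical illustration of the construction (1.10): from (1.9) one reads off the entry covariances \(\operatorname{Var}X_{ii}=1+\tau\), \(\operatorname{Var}X_{ij}=1\) and \(\operatorname{Cov}(X_{ij},X_{ji})=\tau\) for \(i\neq j\), and the following sketch, added for illustration and assuming NumPy, confirms them empirically.

```python
# Sampling the elliptic GinOE via (1.10); a sketch, with G having i.i.d.
# standard Gaussian entries.
import numpy as np

def elliptic_ginoe(N, tau, rng):
    G = rng.standard_normal((N, N))
    S_plus = (G + G.T) / np.sqrt(2)    # symmetric part: a GOE matrix
    S_minus = (G - G.T) / np.sqrt(2)   # antisymmetric part
    return np.sqrt((1 + tau) / 2) * S_plus + np.sqrt((1 - tau) / 2) * S_minus

rng, tau = np.random.default_rng(0), 1 / 3
X = np.array([elliptic_ginoe(2, tau, rng) for _ in range(200_000)])
print(X[:, 0, 0].var(),                  # ~ 1 + tau
      X[:, 0, 1].var(),                  # ~ 1
      (X[:, 0, 1] * X[:, 1, 0]).mean())  # ~ tau
```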
From now on, we shall focus on the case that the matrix dimension \(N\) is even. The odd \(N\) case can also be analysed but requires further non-trivial computations [41, 73]. Let us denote by \(\{\lambda_{j}\}_{j=1}^{\mathcal{N}_{\tau}}\) the real eigenvalues of the elliptic GinOE, where \(\mathcal{N}_{\tau}\) is the number of real eigenvalues. In line with the discussion above, we write
\[M_{p}\equiv M_{p,\tau}:=\mathbb{E}\Big{[}\sum_{j=1}^{\mathcal{N}_{\tau}} \lambda_{j}^{p}\Big{]} \tag{1.11}\]
for the \(p\)-th spectral moment of real eigenvalues of the elliptic GinOE. In particular, the zero-th moment
\[M_{0,\tau}=\mathbb{E}\mathcal{N}_{\tau} \tag{1.12}\]
corresponds to the expected number of real eigenvalues. In contrast to the GUE or GOE cases, the evaluation of the spectral moments of the elliptic GinOE does not permit simple formulas and requires highly non-trivial analysis. For instance, it was shown in [43] that \(M_{0,\tau}\) can be evaluated as
\[M_{0,\tau}=\Big{(}\frac{2}{\pi}\frac{1+\tau}{1-\tau}\Big{)}^{\frac{1}{2}}\sum_ {k=0}^{N/2-1}\frac{\Gamma(2k+\frac{1}{2})}{(2k)!}{}_{2}F_{1}\Big{(}\frac{1}{2},\frac{1}{2}\,;\,-2k+\frac{1}{2}\,;\,-\frac{\tau}{1-\tau}\Big{)}, \tag{1.13}\]
where \({}_{2}F_{1}\) is the hypergeometric function defined by
\[{}_{2}F_{1}(a,b;c;z):=\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\sum_{s=0}^{\infty} \frac{\Gamma(a+s)\Gamma(b+s)}{\Gamma(c+s)s!}z^{s},\quad(|z|<1) \tag{1.14}\]
and by the analytic continuation otherwise. The expression (1.13) indeed results from intricate computations as well as extensive simplifications, utilizing the skew-orthogonal polynomial formalism developed by Forrester and Nagao in [42, 43]. For the reader's convenience, we provide a brief exposition of this formalism and the derivation of (1.13) in Appendix A.
Figure 1. Eigenvalues of the elliptic GinOE, where \(\tau=1/3\) and \(N=200\).
For the extremal cases \(\tau=0,1\), the formula (1.13) reduces to
\[M_{0,\tau}=\begin{cases}\sqrt{2}\sum_{k=0}^{N/2-1}\frac{(4k-1)!!}{(4k)!!}& \text{if $\tau=0$},\\ N&\text{if $\tau=1$}.\end{cases} \tag{1.15}\]
This formula for the GinOE case \(\tau=0\) was first obtained in the work [30] of Edelman, Kostlan and Shub. On the other hand, for the GOE case \(\tau=1\), this is trivial because GOE matrices are inherently symmetric. This can also be interpreted as the random variable \(\mathcal{N}_{\tau}\) becoming deterministic in the Hermitian limit \(\tau\uparrow 1\). We mention that the large-\(N\) asymptotic behaviours of \(M_{0,\tau}\) have been extensively studied in recent works [18, 19].
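The closed form (1.13) can also be compared against a Monte Carlo estimate of \(\mathbb{E}\mathcal{N}_{\tau}\). A rough sketch, assuming mpmath and NumPy (sample sizes and tolerances are arbitrary choices):

```python
# Comparing (1.13) with an empirical count of real eigenvalues (a sketch).
import numpy as np
from mpmath import mp, hyp2f1, gamma, sqrt, pi, factorial

def expected_real_eigs(N, tau):
    # Closed form (1.13) for E[N_tau]; N even, 0 <= tau < 1
    pref = sqrt(2 / pi * (1 + tau) / (1 - tau))
    return pref * sum(gamma(2 * k + mp.mpf(1) / 2) / factorial(2 * k)
                      * hyp2f1(mp.mpf(1) / 2, mp.mpf(1) / 2,
                               -2 * k + mp.mpf(1) / 2, -tau / (1 - tau))
                      for k in range(N // 2))

def real_count(N, tau, rng):
    # One draw of the elliptic GinOE via (1.10), G with N(0,1) entries
    G = rng.standard_normal((N, N))
    Xt = (np.sqrt((1 + tau) / 2) * (G + G.T)
          + np.sqrt((1 - tau) / 2) * (G - G.T)) / np.sqrt(2)
    return np.sum(np.abs(np.linalg.eigvals(Xt).imag) < 1e-10)

N, tau = 8, 1 / 3
rng = np.random.default_rng(0)
mc = np.mean([real_count(N, tau, rng) for _ in range(20_000)])
print(float(expected_real_eigs(N, tau)), mc)  # the two values should agree
```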
We now present our main result. For this purpose, for \(l\in\{1,\ldots,10\}\), let
\[\begin{split}\mathfrak{A}_{l}\equiv\mathfrak{A}_{l}(p;\tau)& :=\frac{1}{(1+6\tau+2N(1-\tau^{2}))(4+17\tau+6\tau^{2}+2N(1-\tau^{2 }))^{2}}\\ &\times\frac{1}{256(p-4)(p-3)(p-2)}\sum_{k=0}^{\min\{10-l,7\}}(2p -k-2l+1)_{k+2l-3}\,\mathfrak{a}_{k,k+2l-3},\end{split} \tag{1.16}\]
where \(\mathfrak{a}_{l,k}\equiv\mathfrak{a}_{l,k}(\tau)\) is the coefficient of the \(t^{k}\)-term in the polynomial \(A_{l}(t)\), cf. (2.24). Here, \(A_{l}(t)\)'s are odd or even polynomials of degree \(17-l\) that are given in (2.15).
**Theorem 1.1** (**Recursion formula of the elliptic GinOE**).: _Let \(\tau\in[0,1].\) Then for any integer \(p\geq 10\) and even integer \(N\geq 2\), we have the 11-term recurrence relation_
\[2(2p+1)(1-\tau)^{6}\,M_{2p,\tau}=\sum_{l=1}^{10}\mathfrak{A}_{l}(p;\tau)\,M_{ 2p-2l,\tau}, \tag{1.17}\]
_where \(\mathfrak{A}_{l}\)'s are given by (1.16)._
We note that while it is possible to write the coefficients \(\mathfrak{A}_{l}\) explicitly, the resulting expressions are excessively long and complicated, see Section 2. Nevertheless, when considering the extremal cases \(\tau=0,1\), we encounter dramatic simplifications:
\[\mathfrak{A}_{l}=0,\qquad\begin{cases}\text{if $\tau=0$ and $l\geq 3$},\\ \text{if $\tau=1$ and $l\leq 5$}.\end{cases} \tag{1.18}\]
This leads to the following corollary.
**Corollary 1.2** (**Recursion formula of the GinOE and GOE**).: _For any even integer \(N\geq 2\), we have the following._
* **The GinOE case (\(\tau=0\))**_. For any integer_ \(p\geq 2\)_, we have the 3-term recurrence relation_ (1.19) \[\begin{split} 2(2p+1)M_{2p,0}&=(2p-1)(6p+4N-5)M_{2p-2,0}\\ &\quad-(2p-3)(2p+N-4)(2p+2N-3)M_{2p-4,0}.\end{split}\]
* **The GOE case (\(\tau=1\))**_. For any integer_ \(p\geq 4\)_, the recurrence relation (_1.7_) holds._
We mention that Corollary 1.2 (ii) reproduces the result [59, Theorem 2] of Ledoux. On the other hand, the recurrence relation (1.19) for real eigenvalues of the GinOE matrix has not been reported in the literature to the best of our knowledge. As previously mentioned, the recursion formulas can be used to study the counting statistics of various random matrix ensembles, cf. Remarks 1.6 and 1.7. In
particular, the counting statistics of non-Hermitian random matrix ensembles have been the subject of recent active research, see e.g. [4, 5, 9, 10, 13, 20, 21, 34, 52, 74, 75] and references therein.
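In the same spirit, the new recursion (1.19) can be verified numerically by computing the moments \(M_{2p,0}\) via quadrature of the \(\tau=0\) density (3.9)-(3.10) below. A minimal sketch, assuming SciPy:

```python
# Quadrature check of the 3-term recursion (1.19) at tau = 0 (a sketch).
import numpy as np
from scipy.special import gammaincc, gamma
from scipy.integrate import quad

N = 4  # any even N >= 2

def R(x):
    # 1-point function of the real GinOE eigenvalues, cf. (3.9)-(3.10)
    r1 = gammaincc(N - 1, x ** 2) / np.sqrt(2 * np.pi)
    r2 = (np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi) * x ** (N - 1) / gamma(N - 1)
          * quad(lambda u: np.exp(-u ** 2 / 2) * u ** (N - 2), 0, x)[0])
    return r1 + r2

def M(p):
    return quad(lambda x: x ** p * R(x), -np.inf, np.inf, limit=200)[0]

for p in range(2, 5):
    lhs = 2 * (2 * p + 1) * M(2 * p)
    rhs = ((2 * p - 1) * (6 * p + 4 * N - 5) * M(2 * p - 2)
           - (2 * p - 3) * (2 * p + N - 4) * (2 * p + 2 * N - 3) * M(2 * p - 4))
    print(p, lhs, rhs)  # each pair should agree up to quadrature error
```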
**Example 1.3**.: Let us consider the simplest case that \(N=2\) and \(\tau=1/2\). Then by direct computations using (3.2) and (3.4), we have
\[M_{0}=\sqrt{3},\qquad M_{2}=\frac{9\sqrt{3}}{4},\qquad M_{4}= \frac{207\sqrt{3}}{16},\qquad M_{6}=\frac{7671\sqrt{3}}{64},\] \[M_{8}=\frac{352593\sqrt{3}}{256},\qquad M_{10}=\frac{21130065 \sqrt{3}}{1024},\qquad M_{12}=\frac{1520675775\sqrt{3}}{4096},\] \[M_{14}=\frac{127714031235\sqrt{3}}{16384},\qquad M_{16}=\frac{12 259660377825\sqrt{3}}{65536},\] \[M_{18}=\frac{1324003422872025\sqrt{3}}{262144},\qquad M_{20}= \frac{158878375950056175\sqrt{3}}{1048576}.\]
On the other hand, for \(p=10\), it follows from (1.16) and (2.15) that
\[\mathfrak{A}_{1}=\frac{13509}{448},\qquad\mathfrak{A}_{2}=\frac{2 4597189}{14336},\qquad\mathfrak{A}_{3}=-\frac{21397118355}{487424},\qquad \mathfrak{A}_{4}=-\frac{3461603089815}{3899392},\] \[\mathfrak{A}_{5}=\frac{5108563250505}{458752},\qquad\mathfrak{A}_ {6}=\frac{12260909264175}{139264},\qquad\mathfrak{A}_{7}=-\frac{43231183728232 5}{1114112},\] \[\mathfrak{A}_{8}=-\frac{219353136341625}{278528},\qquad\mathfrak{ A}_{9}=\frac{421920642268125}{557056},\qquad\mathfrak{A}_{10}=\frac{192365034468 75}{278528}.\]
Using these explicit evaluations, one can observe that the recursion formula (1.17) holds. We also mention that, in general, it is possible to check (1.17) numerically, as in the sketch below.
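For instance, one may evaluate the moments by quadrature of the \(1\)-point function from Lemma 3.1 and compare with the exact values above. A sketch assuming SciPy (the nested quadrature limits how high a moment can be checked reliably):

```python
# Quadrature check of the moments in Example 1.3 (N = 2, tau = 1/2); a sketch
# assembling the density from (3.4)-(3.6).
import numpy as np
from scipy.special import eval_hermite, factorial
from scipy.integrate import quad

N, tau = 2, 0.5

def R1(x):
    # elliptic GinUE part (3.5)
    s = sum((tau / 2) ** k / factorial(k)
            * eval_hermite(k, x / np.sqrt(2 * tau)) ** 2 for k in range(N - 1))
    return np.exp(-x ** 2 / (1 + tau)) / np.sqrt(2 * np.pi) * s

def R2(x):
    # correction part (3.6), with a nested quadrature for the inner integral
    inner = quad(lambda u: np.exp(-u ** 2 / (2 * (1 + tau)))
                 * eval_hermite(N - 2, u / np.sqrt(2 * tau)), 0, x)[0]
    pref = (tau / 2) ** (N - 1.5) / ((1 + tau) * factorial(N - 2))
    return (pref * np.exp(-x ** 2 / (2 * (1 + tau))) / np.sqrt(2 * np.pi)
            * eval_hermite(N - 1, x / np.sqrt(2 * tau)) * inner)

def moment(p):
    return quad(lambda x: x ** p * (R1(x) + R2(x)), -np.inf, np.inf, limit=200)[0]

exact = {0: np.sqrt(3), 2: 9 * np.sqrt(3) / 4, 4: 207 * np.sqrt(3) / 16}
for p, ref in exact.items():
    print(p, moment(p), ref)  # agreement up to quadrature error
```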
The key ingredient for the proof of Theorem 1.1 is the moment generating function (the Laplace transform) of real eigenvalues
\[M(t)\equiv M_{\tau}(t):=\mathbb{E}\Big{[}\sum_{j=1}^{\mathcal{N}_{\tau}}e^{t \,\lambda_{j}}\Big{]}. \tag{1.20}\]
As previously remarked, the recursion formula for the spectral moments can be derived from a linear differential equation for \(M_{\tau}\), cf. (3.34). The differential equation for \(M_{\tau}\) is formulated in the following theorem, which is of independent interest.
**Theorem 1.4** (**Differential equation for the MGF of the elliptic GinOE**).: _Let \(\tau\in[0,1]\). Then for any even integer \(N\geq 2\), the moment generating function \(M_{\tau}(t)\) satisfies the seventh-order differential equation_
\[\sum_{k=0}^{7}A_{k}(t)M_{\tau}^{(k)}(t)=0, \tag{1.21}\]
_where the polynomials \(A_{k}\)'s are given in (2.15)._
We emphasize that the existence of such a linear differential equation for the moment generating function is already far from obvious. Furthermore, it is crucial to highlight that the coefficients in (1.21) are polynomials. This, in turn, results in a finite-term recurrence relation for the spectral moments.
For the extremal cases, explicit evaluations of the polynomials \(A_{k}\) are provided in Example 2.3, where we once again observe significant simplifications. As a consequence, we have the following immediate corollary.
**Corollary 1.5** (**Differential equation for the MGF of the GinOE and GOE**).: _For any even integer \(N\geq 2\), we have the following._
* **The GinOE case (\(\tau=0\))**_. For_ \(\tau=0\)_, the moment generating function_ \(M_{\tau}(t)\) _satisfies the seventh-order differential equation_ (1.22) \[\sum_{k=1}^{7}C_{k}(t)M_{0}^{(k)}(t)=0,\] _where_ (1.23) \[\begin{split} C_{7}(t)&=2\,t^{4},\qquad C_{6}(t) =-3\,t^{5}+2\,t^{3},\\ C_{5}(t)&=t^{6}-(4N+13)\,t^{4}-42\,t^{2},\\ C_{4}(t)&=(3N+5)\,t^{5}+4(N+10)\,t^{3}+120\,t,\\ C_{3}(t)&=(N-1)(2N+3)\,t^{4}+36N\,t^{2}-120,\\ C_{2}(t)&=-6(N+1)^{2}\,t^{3}-120(N+1)\,t,\\ C_{1}(t)&=6(N+1)^{2}\,t^{2}+120(N+1).\end{split}\]
* **The GOE case (\(\tau=1\))**_. For_ \(\tau=1\)_, the moment generating function_ \(M_{\tau}(t)\) _satisfies the fourth-order differential equation_ (1.24) \[\sum_{k=0}^{4}C_{k}(t)M_{1}^{(k)}(t)=0,\] _where_ (1.25) \[\begin{split} C_{4}(t)&=t,\qquad C_{3}(t)=5,\\ C_{2}(t)&=-5t^{3}-(8N-4)t,\qquad C_{1}(t)=-36\,t^{2 }-20N+10,\\ C_{0}(t)&=4\,t^{5}+(20N-10)\,t^{3}+(16N^{2}-16N-44) t.\end{split}\]
We stress that Corollary 1.5 (ii) for the GOE was previously derived and used in [59, Section 3]. On the other hand, the differential equation (1.22) for the GinOE is, to the best of our knowledge, new.
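As with (1.19), the differential equation (1.22) can be tested numerically: writing \(M_{0}^{(k)}(t)=\int_{\mathbb{R}}x^{k}e^{tx}R_{N}(x)\,dx\) with the \(\tau=0\) density (3.9)-(3.10) below, the left-hand side of (1.22) vanishes up to quadrature error. A sketch, assuming SciPy:

```python
# Numerical sanity check of the ODE (1.22) at a few points t (a sketch).
import numpy as np
from scipy.special import gammaincc, gamma
from scipy.integrate import quad

N = 4

def R(x):  # tau = 0 density, cf. (3.9)-(3.10)
    r1 = gammaincc(N - 1, x ** 2) / np.sqrt(2 * np.pi)
    r2 = (np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi) * x ** (N - 1) / gamma(N - 1)
          * quad(lambda u: np.exp(-u ** 2 / 2) * u ** (N - 2), 0, x)[0])
    return r1 + r2

def M_deriv(k, t):  # k-th derivative of the MGF: \int x^k e^{tx} R(x) dx
    return quad(lambda x: x ** k * np.exp(t * x) * R(x),
                -np.inf, np.inf, limit=200)[0]

def C(k, t):  # the polynomials (1.23)
    return [None,
            6 * (N + 1) ** 2 * t ** 2 + 120 * (N + 1),
            -6 * (N + 1) ** 2 * t ** 3 - 120 * (N + 1) * t,
            (N - 1) * (2 * N + 3) * t ** 4 + 36 * N * t ** 2 - 120,
            (3 * N + 5) * t ** 5 + 4 * (N + 10) * t ** 3 + 120 * t,
            t ** 6 - (4 * N + 13) * t ** 4 - 42 * t ** 2,
            -3 * t ** 5 + 2 * t ** 3,
            2 * t ** 4][k]

for t in (0.3, 0.7):
    print(t, sum(C(k, t) * M_deriv(k, t) for k in range(1, 8)))  # ~ 0
```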
Let us discuss Corollary 1.5 (i) from the viewpoint of the large-\(N\) limit of the GinOE.
**Remark 1.6** (**Large-\(N\) expansion of the GinOE**).: For the GinOE case \(\tau=0\), we define the rescaled moment generating function
\[\widetilde{M}(t):=\frac{1}{\sqrt{N}}M\Big{(}\frac{t}{\sqrt{N}}\Big{)}=\int_{ \mathbb{R}}e^{tx}\rho_{N}(x)\,dx, \tag{1.26}\]
where \(\rho_{N}(x):=R_{N}(\sqrt{N}x)\) is the rescaled density, see (3.11) for its explicit formula. By using (1.22) and the change of variables, one can observe that \(\widetilde{M}\) satisfies the differential equation
\[\sum_{k=1}^{7}\widetilde{C}_{k}(t)\,\widetilde{M}^{(k)}(t)=0,\qquad\widetilde {C}_{k}(t):=N^{k/2}\,C_{k}\Big{(}\frac{t}{\sqrt{N}}\Big{)}. \tag{1.27}\]
Expanding \(\widetilde{C}_{k}\)'s for large-\(N\), this equation can be rewritten as
\[\mathfrak{D}(t)\Big{[}\widetilde{M}(t)\Big{]}=0,\qquad\mathfrak{D}(t):=\mathfrak{D}_{0}(t)+\frac{\mathfrak{D}_{1}(t)}{N}+\frac{\mathfrak{D}_{2}(t)}{N^{2}}, \tag{1.28}\]
where the linear differential operators \(\mathfrak{D}_{k}\)'s are given by
\[\begin{split}\mathfrak{D}_{0}(t)&:=2\,t^{4}\,\partial_{t}^{7}+2\,t^{3}\,\partial_{t}^{6}-(4t^{4}+42t^{2})\,\partial_{t}^{5}+(4t^{3}+120\,t)\,\partial_{t}^{4}\\ &\qquad+(2t^{4}+36t^{2}-120)\,\partial_{t}^{3}-(6t^{3}+120t)\,\partial_{t}^{2}+(6t^{2}+120)\,\partial_{t},\end{split} \tag{1.29}\]
\[\begin{split}\mathfrak{D}_{1}(t)&:=-3\,t^{5}\,\partial_{t}^{6}-13\,t^{4}\,\partial_{t}^{5}+(3\,t^{5}+40\,t^{3})\,\partial_{t}^{4}\\ &\qquad+t^{4}\,\partial_{t}^{3}-(12t^{3}+120t)\,\partial_{t}^{2}+(12t^{2}+120)\,\partial_{t},\end{split} \tag{1.30}\]
\[\mathfrak{D}_{2}(t):=t^{6}\,\partial_{t}^{5}+5t^{5}\,\partial_{t}^{4}-3\,t^{4}\,\partial_{t}^{3}-6t^{3}\,\partial_{t}^{2}+6t^{2}\,\partial_{t}. \tag{1.31}\]
On the one hand, it is well known that the rescaled density \(\rho_{N}(x)\) satisfies the expansion
\[\rho_{N}(x)=\rho_{(0)}(x)+\rho_{(1/2)}(x)\frac{1}{\sqrt{N}}+O\Big{(}\frac{1}{ N}\Big{)} \tag{1.32}\]
in the sense of distribution, where
\[\rho_{(0)}(x):=\frac{1}{\sqrt{2\pi}}\,\mathbb{1}_{(-1,1)}(x),\qquad\rho_{(1/2 )}(x):=\frac{1}{4}\Big{(}\delta(x-1)+\delta(x+1)\Big{)}, \tag{1.33}\]
see e.g. [17, pp. 33-34]. Then it follows that
\[\widetilde{M}(t)=\widetilde{M}_{(0)}(t)+\widetilde{M}_{(1/2)}(t)\frac{1}{ \sqrt{N}}+O\Big{(}\frac{1}{N}\Big{)}, \tag{1.34}\]
where
\[\widetilde{M}_{(0)}(t):=\int_{\mathbb{R}}e^{tx}\rho_{(0)}(x)\,dx=\sqrt{\frac{ 2}{\pi}}\frac{\sinh(t)}{t},\qquad\widetilde{M}_{(1/2)}(t):=\int_{\mathbb{R}} e^{tx}\rho_{(1/2)}(x)\,dx=\frac{\cosh(t)}{2}. \tag{1.35}\]
Furthermore, by (1.28), we have
\[\mathfrak{D}(t)\Big{[}\widetilde{M}(t)\Big{]}=\Big{[}\mathfrak{D}_{0}(t)+\frac{\mathfrak{D}_{1}(t)}{N}+\frac{\mathfrak{D}_{2}(t)}{N^{2}}\Big{]}\Big{[}\widetilde{M}_{(0)}(t)+\widetilde{M}_{(1/2)}(t)\frac{1}{\sqrt{N}}+O\Big{(}\frac{1}{N}\Big{)}\Big{]},\]
which in turn implies
\[\mathfrak{D}_{0}(t)\Big{[}\widetilde{M}_{(0)}(t)\Big{]}=\mathfrak{D}_{0}(t) \Big{[}\widetilde{M}_{(1/2)}(t)\Big{]}=0. \tag{1.36}\]
Since we have the explicit formulas (1.35), one can directly check that these identities hold.
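For instance, the following SymPy sketch applies \(\mathfrak{D}_{0}\) from (1.29) to the explicit formulas (1.35) and confirms (1.36):

```python
# Symbolic verification of (1.36); a sketch assuming SymPy.
import sympy as sp

t = sp.symbols('t')

def D0(f):  # the operator D_0 from (1.29)
    d = lambda k: sp.diff(f, t, k)
    return (2 * t ** 4 * d(7) + 2 * t ** 3 * d(6)
            - (4 * t ** 4 + 42 * t ** 2) * d(5) + (4 * t ** 3 + 120 * t) * d(4)
            + (2 * t ** 4 + 36 * t ** 2 - 120) * d(3)
            - (6 * t ** 3 + 120 * t) * d(2) + (6 * t ** 2 + 120) * d(1))

M0 = sp.sqrt(2 / sp.pi) * sp.sinh(t) / t   # \widetilde{M}_{(0)} in (1.35)
Mhalf = sp.cosh(t) / 2                     # \widetilde{M}_{(1/2)} in (1.35)
print(sp.simplify(D0(M0)), sp.simplify(D0(Mhalf)))  # both 0
```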
In the opposite direction, one can make use of (1.35) to find the factorisations
\[\mathfrak{D}_{0}(t) =2(t^{3}\,\partial_{t}^{3}-6t^{2}\,\partial_{t}^{2}+15t\, \partial_{t}-15)\circ(t\,\partial_{t}^{2}+4\,\partial_{t}-t)\circ(\partial_{t }^{2}-1)\] \[=2(t^{3}\,\partial_{t}^{3}-6t^{2}\,\partial_{t}^{2}+15t\, \partial_{t}-15)\circ(\partial_{t}^{2}-1)\circ(t\,\partial_{t}^{2}+2\partial_ {t}-t).\]
We mention that \(\widetilde{M}_{(1/2)}(t)\) is annihilated by the rightmost differential operator in the first line, while \(\widetilde{M}_{(0)}(t)\) is annihilated by the rightmost differential operator in the second line. Furthermore, since the general solution of the differential equation
\[(t^{3}\,\partial_{t}^{3}-6t^{2}\,\partial_{t}^{2}+15t\,\partial_{t}-15)f(t)=0\]
is of the form \(f(t)=c_{1}t+c_{2}t^{3}+c_{3}t^{5}\), one can further factorise and obtain
\[\mathfrak{D}_{0}(t) =2(t\,\partial_{t}-a)\circ(t\,\partial_{t}-b)\circ(t\,\partial_{t }-c)\circ(t\,\partial_{t}^{2}+4\,\partial_{t}-t)\circ(\partial_{t}^{2}-1)\] \[=2(t\,\partial_{t}-a)\circ(t\,\partial_{t}-b)\circ(t\,\partial_{t }-c)\circ(\partial_{t}^{2}-1)\circ(t\,\partial_{t}^{2}+2\partial_{t}-t), \tag{1.37}\]
where \((a,b,c)\) is a permutation of \((1,3,5)\).
**Remark 1.7** (**Large-\(N\) expansion of the elliptic GinOE**).: Continuing the discussions from the previous remark, it is natural to consider the large-\(N\) expansion of the elliptic GinOE for a general value of \(\tau\in[0,1]\). Once again, we introduce the rescaled density \(\rho_{N}(x):=R_{N}(\sqrt{N}x)\). In the study of the large-\(N\) asymptotics of the elliptic Ginibre matrices, one needs to distinguish the following two different regimes.
* **Strong non-Hermiticity**. This is the case where \(\tau\) is fixed in the interval \([0,1)\). In this regime, it was shown in [43, Section 6.2] that (1.38) \[\rho_{N}(x)=\frac{1}{\sqrt{2\pi(1-\tau^{2})}}\,\mathbb{1}_{(-1-\tau,1+\tau)}( x)+o(1).\] Therefore, one can observe that (1.39) \[\int_{\mathbb{R}}x^{2p}\,\rho_{N}(x)\,dx\sim M^{\rm s}_{2p},\qquad M^{\rm s}_ {2p}:=\frac{1}{\sqrt{2\pi(1-\tau^{2})}}\frac{2}{2p+1}(1+\tau)^{2p+1}.\]
* **Weak non-Hermiticity**. This is the case where \(\tau=1-\frac{\alpha^{2}}{N}\) for a fixed \(\alpha\in(0,\infty)\). This regime was introduced by Fyodorov, Khoruzhenko and Sommers [46, 47] in the study of the critical transition of the elliptic GinUE statistics, see [6, 8] for further references and physical applications. In this regime, it was shown in [18, Theorem 2.3] that (1.40) \[\rho_{N}(x)=\frac{\sqrt{N}}{2\alpha\sqrt{\pi}}\,{\rm erf}\left(\frac{\alpha}{2}\sqrt{4-x^{2}}\right)\mathbb{1}_{(-2,2)}(x)+o(\sqrt{N}).\] We mention that this formula was previously proposed by Efetov in [31]. Note that the leading order density in (1.40) interpolates between the uniform density in (1.38) and Wigner's semicircle law (1.5). Using (1.40), one can show that (1.41) \[\int_{\mathbb{R}}x^{2p}\,\rho_{N}(x)\,dx\sim\sqrt{N}\,M^{\rm w}_{2p},\qquad M^{\rm w}_{2p}:=\frac{1}{\pi}\,\sum_{n=0}^{\infty}\frac{2^{2p+1}}{2n+1}\frac{\Gamma(n+\frac{3}{2})\Gamma(p+\frac{1}{2})}{n!\,(n+p+1)!}\,(-\alpha^{2})^{n}.\] We stress that in the Hermitian limit \(\alpha\to 0\), the moment \(M^{\rm w}_{2p}\) recovers the Catalan number in (1.4), i.e. (1.42) \[\lim_{\alpha\to 0}M^{\rm w}_{2p}=c(0;p).\] In the opposite limit \(\alpha\to\infty\), one can also match \(M^{\rm w}_{2p}\) with \(M^{\rm s}_{2p}\), see [18] for a similar discussion. Let us also mention that \(M^{\rm w}_{2p}\) can be written in terms of the modified Bessel function of the first kind (1.43) \[I_{\nu}(z):=\sum_{k=0}^{\infty}\frac{(z/2)^{2k+\nu}}{k!\,\Gamma(\nu+k+1)},\] see e.g. [67, Chapter 10]. To be more precise, it is of the form (1.44) \[M^{\rm w}_{2p}=\Big{[}r_{0,2p}\,I_{0}\Big{(}\frac{\alpha^{2}}{2}\Big{)}+r_{1,2p}\,I_{1}\Big{(}\frac{\alpha^{2}}{2}\Big{)}\Big{]}e^{-\frac{\alpha^{2}}{2}},\] where \(r_{0,2p}\) and \(r_{1,2p}\) are some rational functions of \(\alpha\). For instance, the first few values of \(r_{0,2p}\) and \(r_{1,2p}\) are given by (1.45) \[r_{0,0}=1,\qquad r_{0,2}=\frac{8(2\alpha^{4}-\alpha^{2})}{5\alpha^{4}},\qquad r_{0,4}=\frac{64(4\alpha^{8}-6\alpha^{6}+15\alpha^{4}-24\alpha^{2})}{9\alpha^{8}},\] (1.46) \[r_{1,0}=1,\qquad r_{1,2}=\frac{8(2\alpha^{4}-3\alpha^{2}+4)}{5\alpha^{4}},\qquad r_{1,4}=\frac{64(4\alpha^{8}-10\alpha^{6}+27\alpha^{4}-60\alpha^{2}+96)}{9\alpha^{8}}.\] In Appendix A, we provide the derivation of (1.41) and (1.44).
We have discussed the leading-order asymptotics of the spectral moments for both the strong and weak non-Hermiticity regimes. The more precise asymptotic expansions require extending the main findings in [18, 43], which exceeds the scope of this paper. We hope to come back to this topic in future work and find further applications of Theorems 1.1 and 1.4.
### Plan of the paper
The rest of this paper is organised as follows. In Section 2, we present explicit formulas for the polynomials \(A_{k}\) used in Theorems 1.1 and 1.4. Section 3 outlines the overall strategy and completes the proofs of the main theorems. Section 4 is devoted to the remaining proofs, especially the derivation of several differential equations used to prove Theorem 1.4. Appendix A summarises the integrable structure of the elliptic GinOE due to Forrester and Nagao [43] and provides derivations of the formulas in Remark 1.7.
### Acknowledgements
The author gratefully thanks Peter J. Forrester for several helpful comments and discussions, notably for directing the author's attention to the discussion in Remark 1.6. Thanks are extended to Nick Simm for his interest and valuable suggestions on Remark 1.7. The author also thanks Jaeseong Oh for stimulating conversations.
## 2. Polynomial coefficients
In this section, we introduce the polynomials \(A_{k}\) used in Theorems 1.1 and 1.4. These polynomials are constructed from the following basic polynomials:
\[\begin{split}a(t)&:=-2\tau^{2}(1+\tau)^{3}(1+2\tau)\,t^{6}+4\tau(1+\tau)^{2}\Big{(}1+8\tau-5\tau(1-\tau^{2})N\Big{)}\,t^{4}\\ &\quad+8(1-\tau^{2})\Big{(}1-3\tau-30\tau^{2}+(1-\tau^{2})(3+4\tau+4\tau^{2})N+2(1-\tau^{2})^{2}N^{2}\Big{)}\,t^{2}\\ &\quad+32(1-\tau)^{2}\Big{(}1+6\tau+2(1-\tau^{2})N\Big{)};\end{split} \tag{2.1}\]
\[\begin{split}b(t)&:=-2\tau(1+\tau)^{2}(1-\tau-6\tau^{2})\,t^{5}+4(1-\tau^{2})\Big{(}(1-2\tau)(1+7\tau)+2(1-\tau)^{3}(1+\tau)N\Big{)}\,t^{3}\\ &\quad-32(1-\tau)^{2}\Big{(}1+6\tau+2(1-\tau^{2})N\Big{)}\,t;\end{split} \tag{2.2}\]
\[c(t):=4\tau(1-\tau^{2})(1+2\tau)\,t^{4}-8(1-\tau)^{2}\Big{(}1+6\tau+2N(1-\tau^{2})\Big{)}\,t^{2}; \tag{2.3}\]
In Lemma 3.9, these polynomials appear as the linear coefficients when expressing the elliptic GinUE part \(u(t)\) of the moment generating function in terms of the function \(V(t)\), cf. (3.14) and (3.27).
### The polynomials \(B_{k}\)
First, we introduce the polynomials \(B_{k}\) (\(k=0,\dots,4\)), which serve as the linear coefficients in the differential equation for the function \(V(t)\), see (3.31). Before providing their definitions, let us mention some of their basic properties:
* In general, the polynomial \(B_{k}\) has degree \(14-k\);
* The polynomial \(B_{k}\) satisfies \(B_{k}(t)=O(t^{k})\) as \(t\to 0\);
* The polynomial \(B_{k}\) is even if \(k\) is even and odd if \(k\) is odd.
To be explicit, the polynomials \(B_{k}\) are defined as
\[B_{k}(t)=\frac{1}{t^{4}}\Big{(}\beta_{k,0}(t)d(t)^{2}+\beta_{k,1}(t)d(t)d^{ \prime}(t)+\beta_{k,2}(t)d(t)d^{\prime\prime}(t)+\beta_{k,3}(t)d^{\prime}(t)^ {2}\Big{)}, \tag{2.5}\]
where \(d(t)\) is given in (2.4). Here, the polynomials \(\beta_{k,j}\)'s are given in terms of \(a(t),b(t)\) and \(c(t)\) in (2.1), (2.2) and (2.3) as follows.
* \(k=4\): (2.6) \[\beta_{4,0}(t)=4(1-\tau)\,c(t),\qquad\beta_{4,1}(t)=\beta_{4,2}(t)=\beta_{4,3}(t)=0.\]
* \(k=3\): (2.7) \[\beta_{3,0}(t) =4(1-\tau)\,\Big{(}b(t)+2c^{\prime}(t)\Big{)}+\Big{(}2\tau(1+\tau) (4-\tau)\,t^{2}-4(1-\tau)\Big{)}\frac{c(t)}{t},\] \[\beta_{3,1}(t) =-8(1-\tau)\,c(t),\qquad\beta_{3,2}(t)=\beta_{3,3}(t)=0.\]
* \(k=2\): (2.8) \[\beta_{2,0}(t) =4(1-\tau)\,\Big{(}a(t)+2b^{\prime}(t)+c^{\prime\prime}(t)\Big{)}\] \[\quad+\Big{(}2\tau(1+\tau)(4-\tau)\,t^{2}-4(1-\tau)\Big{)}\frac{b( t)+c^{\prime}(t)}{t}+3\tau^{2}(1+\tau)^{2}\,t^{2}\,c(t),\] \[\beta_{2,1}(t) =-8(1-\tau)\,\Big{(}b(t)+c^{\prime}(t)\Big{)}-\Big{(}2\tau(1+ \tau)(4-\tau)\,t^{2}-4(1-\tau)\Big{)}\frac{c(t)}{t},\] \[\beta_{2,2}(t) =-4(1-\tau)c(t),\qquad\beta_{2,3}(t)=8(1-\tau)c(t).\]
* \(k=1\): (2.9) \[\beta_{1,0}(t) =4(1-\tau)\Big{(}2a^{\prime}(t)+b^{\prime\prime}(t)\Big{)}\] \[\quad+\Big{(}2\tau(1+\tau)(4-\tau)\,t^{2}-4(1-\tau)\Big{)}\frac{a( t)+b^{\prime}(t)}{t}+3\tau^{2}(1+\tau)^{2}\,t^{2}b(t),\] \[\beta_{1,1}(t) =-8(1-\tau)\Big{(}a(t)+b^{\prime}(t)\Big{)}-\Big{(}2\tau(1+\tau) (4-\tau)\,t^{2}-4(1-\tau)\Big{)}\frac{b(t)}{t},\] \[\beta_{1,2}(t) =-4(1-\tau)\,b(t),\qquad\beta_{1,3}(t)=8(1-\tau)\,b(t).\]
* \(k=0\): (2.10) \[\beta_{0,0}(t) =4(1-\tau)\,a^{\prime\prime}(t)+\Big{(}2\tau(1+\tau)(4-\tau)\,t^{ 2}-4(1-\tau)\Big{)}\frac{a^{\prime}(t)}{t}\] \[\quad+3\tau^{2}(1+\tau)^{2}\,t^{2}\,a(t)-4N\tau^{2}(1+\tau)^{5}\, t\,d(t),\] \[\beta_{0,1}(t) =-8(1-\tau)a^{\prime}(t)-\Big{(}2\tau(1+\tau)(4-\tau)\,t^{2}-4(1- \tau)\Big{)}\frac{a(t)}{t},\] \[\beta_{0,2}(t) =-4(1-\tau)\,a(t),\qquad\beta_{0,3}(t)=8(1-\tau)a(t).\]
Before explaining the use of the polynomials \(B_{k}\), let us consider the extremal cases \(\tau=0,1\), where they simplify significantly.
**Example 2.1**.: For the extremal cases \(\tau=0,1\), we have the following.
* The GinOE case (\(\tau=0\)). In this case, it follows from (2.11) \[a(t) =8(2N+1)(N+1)\,t^{2}+32(2N+1),\] \[b(t) =4(2N+1)\,t^{3}-32(2N+1)\,t,\] \[c(t) =-8(2N+1)\,t^{2},\qquad d(t)=-4(N+2)\,t^{3}\]
that (2.12) \[B_{4}(t) =256(2N+1)(N+2)^{2}\times\Big{(}-2t^{4}\Big{)},\] \[B_{3}(t) =256(2N+1)(N+2)^{2}\times\Big{(}t^{5}-2t^{3}\Big{)},\] \[B_{2}(t) =256(2N+1)(N+2)^{2}\times\Big{(}(2N+1)t^{4}+42t^{2}\Big{)},\] \[B_{1}(t) =256(2N+1)(N+2)^{2}\times\Big{(}-6(N+1)t^{3}-120t\Big{)},\] \[B_{0}(t) =256(2N+1)(N+2)^{2}\times\Big{(}6(N+1)t^{2}+120\Big{)}.\]
* The GOE case (\(\tau=1\)). In this case, it follows from (2.13) \[a(t)=-48\,t^{6}+144\,t^{4},\qquad b(t)=48\,t^{5},\qquad c(t)=0,\qquad d(t)=18t ^{5}\] that (2.14) \[B_{4}(t) =B_{3}(t)=0,\] \[B_{2}(t) =186624\,t^{10}\times t^{2},\qquad B_{1}(t)=186624\,t^{10}\times 3t,\] \[B_{0}(t) =186624\,t^{10}\times(-t^{4}-2(2N-1)t^{2}-3)\]
### The polynomials \(A_{k}\)
We are now ready to introduce the polynomials \(A_{k}\) (\(k=0,\dots,7\)) used in Theorems 1.1 and 1.4. The polynomials \(A_{k}\)'s are defined in terms of \(B_{k}\)'s in the previous subsection as
\[A_{k}(t)=\sum_{j=0}^{4}\alpha_{k,j}(t)\,B_{j}(t), \tag{2.15}\]
where \(\alpha_{k,j}\)'s are given as follows.
* \(k=7\): (2.16) \[\alpha_{7,4}(t)=\frac{1-\tau}{2},\qquad\alpha_{7,3}(t)=\alpha_{7,2}(t)=\alpha _{7,1}(t)=\alpha_{7,0}(t)=0.\]
* \(k=6\): (2.17) \[\alpha_{6,4}(t) =-\frac{(1+\tau)(\tau^{2}-3\tau+1)}{2}\,t,\] \[\alpha_{6,3}(t) =\frac{1-\tau}{2},\qquad\alpha_{6,2}(t)=\alpha_{6,1}(t)=\alpha_{ 6,0}(t)=0.\]
* \(k=5\): (2.18) \[\alpha_{5,4}(t) =-\frac{1+\tau}{2}\Big{(}2\tau(1-\tau^{2})\,t^{2}+(1-\tau^{2})N+ 5-15\tau+6\tau^{2}\Big{)},\] \[\alpha_{5,3}(t) =-\frac{(1+\tau)(\tau^{2}-3\tau+1)}{2}\,t,\qquad\alpha_{5,2}=\frac {1-\tau}{2},\qquad\alpha_{5,1}(t)=\alpha_{5,0}(t)=0.\]
* \(k=4\): (2.19) \[\alpha_{4,3}(t) =-\frac{1+\tau}{2}\Big{(}2\tau(1-\tau^{2})\,t^{2}+(1-\tau^{2})N+ 4-12\tau+5\tau^{2}\Big{)},\] \[\alpha_{4,2}(t) =-\frac{(1+\tau)(\tau^{2}-3\tau+1)}{2}\,t,\qquad\alpha_{4,1}(t)= \frac{1-\tau}{2},\qquad\alpha_{4,0}(t)=0.\]
* \(k=3\): (2.20) \[\begin{split}&\alpha_{3,4}(t)=-2\tau(1+\tau)^{2}\Big{(}3\tau(1+\tau)\,t^{2}+(1+\tau)N+8-9\tau\Big{)},\\ &\alpha_{3,3}(t)=-\frac{\tau(1+\tau)^{2}}{2}\Big{(}\tau(1+\tau)\,t^{2}+(1+\tau)N+14-15\tau\Big{)}\,t,\\ &\alpha_{3,2}(t)=-\frac{1+\tau}{2}\Big{(}2\tau(1-\tau^{2})\,t^{2}+(1-\tau^{2})N+3-9\tau+4\tau^{2}\Big{)},\\ &\alpha_{3,1}(t)=-\frac{(1+\tau)(\tau^{2}-3\tau+1)}{2}\,t,\qquad\alpha_{3,0}(t)=\frac{1-\tau}{2}.\end{split}\]
* \(k=2\): \[\begin{split}&\alpha_{2,4}(t)=-18\tau^{2}(1+\tau)^{3}\,t,\\ &\alpha_{2,3}(t)=-\frac{3\tau(1+\tau)^{2}}{2}\Big{(}3\tau(1+\tau)\,t^{2}+(1+\tau)N+6-7\tau\Big{)},\\ &\alpha_{2,2}(t)=-\frac{\tau(1+\tau)^{2}}{2}\Big{(}\tau(1+\tau)\,t^{2}+(1+\tau)N+10-11\tau\Big{)}\,t,\\ &\alpha_{2,1}(t)=-\frac{1+\tau}{2}\Big{(}2\tau(1-\tau^{2})\,t^{2}+(1-\tau^{2})N+2-6\tau+3\tau^{2}\Big{)},\\ &\alpha_{2,0}(t)=-\frac{(1+\tau)(\tau^{2}-3\tau+1)}{2}\,t.\end{split}\] (2.21)
* \(k=1\): \[\begin{split}&\alpha_{1,4}(t)=-12\tau^{2}(1+\tau)^{3},\qquad \alpha_{1,3}(t)=-9\tau^{2}(1+\tau)^{3}\,t,\\ &\alpha_{1,2}(t)=-\tau(1+\tau)^{2}\Big{(}3\tau(1+\tau)\,t^{2}+(1+ \tau)N+4-5\tau\Big{)},\\ &\alpha_{1,1}(t)=-\frac{\tau(1+\tau)^{2}}{2}\Big{(}\tau(1+\tau )\,t^{2}+(1+\tau)N+6-7\tau\Big{)}\,t,\\ &\alpha_{1,0}(t)=-\frac{1-\tau^{2}}{2}\Big{(}2\tau(1+\tau)\,t^{2 }+(1+\tau)N+1-2\tau\Big{)}.\end{split}\] (2.22)
* \(k=0\): (2.23) \[\begin{split}&\alpha_{0,4}(t)=0,\qquad\alpha_{0,3}(t)=-3\tau^{2}(1+ \tau)^{3},\qquad\alpha_{0,2}(t)=-3\tau^{2}(1+\tau)^{3}\,t,\\ &\alpha_{0,1}(t)=-\frac{\tau(1+\tau)^{2}}{2}\Big{(}3\tau(1+\tau )\,t^{2}+(1+\tau)N+2-3\tau\Big{)},\\ &\alpha_{0,0}(t)=-\frac{\tau(1+\tau)^{2}}{2}\Big{(}\tau(1+\tau )\,t^{2}+(1+\tau)N+2-3\tau\Big{)}\,t.\end{split}\]
**Remark 2.2**.: It is straightforward to check that the polynomials \(A_{k}\)'s are of the form
\[\begin{split}& A_{7}(t)=\mathfrak{a}_{7,10}\,t^{10}+\cdots+ \mathfrak{a}_{7,4}\,t^{4},\qquad A_{6}(t)=\mathfrak{a}_{6,11}\,t^{11}+\cdots+ \mathfrak{a}_{6,3}\,t^{3},\\ & A_{5}(t)=\mathfrak{a}_{5,12}\,t^{12}+\cdots+\mathfrak{a}_{5,2}\,t ^{2},\qquad A_{4}(t)=\mathfrak{a}_{4,13}\,t^{13}+\cdots+\mathfrak{a}_{4,1}\,t,\\ & A_{3}(t)=\mathfrak{a}_{3,14}\,t^{14}+\cdots+\mathfrak{a}_{3,0}, \qquad A_{2}(t)=\mathfrak{a}_{2,15}\,t^{15}+\cdots+\mathfrak{a}_{2,1}\,t,\\ & A_{1}(t)=\mathfrak{a}_{1,16}\,t^{16}+\cdots+\mathfrak{a}_{1,0}, \qquad A_{0}(t)=\mathfrak{a}_{0,17}\,t^{17}+\cdots+\mathfrak{a}_{0,7}\,t^{7}. \end{split} \tag{2.24}\]
We also mention that \(\mathfrak{a}_{0,2l-3}=0\) for \(l\in\{1,2,3,4\}\). Thus, in this case one can write
\[\sum_{k=1}^{\min\{10-l,7\}}\mathfrak{a}_{k,k+2l-3}(2p-k-2l+1)_{k+2l-3} \tag{2.25}\]
in the inner summation of (1.16).
As before, let us discuss the extremal cases \(\tau=0,1\), both of which exhibit notable simplifications.
**Example 2.3**.: We have the following.
* The GinOE case (\(\tau=0\)). In this case, we have \[\begin{split}& A_{7}(t)=\tfrac{1}{2}B_{4}(t),\qquad A_{6}(t)=-\tfrac{1}{2}\,t\,B_{4}(t)+\tfrac{1}{2}B_{3}(t),\\ & A_{5}(t)=-\tfrac{1}{2}(N+5)B_{4}(t)-\tfrac{1}{2}\,t\,B_{3}(t)+\tfrac{1}{2}B_{2}(t),\\ & A_{4}(t)=-\tfrac{1}{2}(N+4)B_{3}(t)-\tfrac{1}{2}\,t\,B_{2}(t)+\tfrac{1}{2}B_{1}(t),\\ & A_{3}(t)=-\tfrac{1}{2}(N+3)B_{2}(t)-\tfrac{1}{2}\,t\,B_{1}(t)+\tfrac{1}{2}B_{0}(t),\\ & A_{2}(t)=-\tfrac{1}{2}(N+2)B_{1}(t)-\tfrac{1}{2}\,t\,B_{0}(t),\qquad A_{1}(t)=-\tfrac{1}{2}(N+1)B_{0}(t),\qquad A_{0}(t)=0.\end{split}\] (2.26)
* The GOE case (\(\tau=1\)). In this case, we have \[\begin{split}& A_{7}(t)=A_{6}(t)=A_{5}(t)=0,\qquad A_{4}(t)=t\,B_{ 2}(t),\qquad A_{3}(t)=2B_{2}(t)+t\,B_{1}(t),\\ & A_{2}(t)=-2(2\,t^{2}+2N-1)\,t\,B_{2}(t)+B_{1}(t)+t\,B_{0}(t),\\ & A_{1}(t)=-2(12\,t^{2}+4N-2)B_{2}(t)-2(2\,t^{2}+2N-1)\,t\,B_{1}( t),\\ & A_{0}(t)=-24\,t\,B_{2}(t)-2(6\,t^{2}+2N-1)B_{1}(t)-2(2\,t^{2}+2 N-1)\,t\,B_{0}(t).\end{split}\] (2.27)
We mention that Examples 2.1 and 2.3 immediately imply Corollaries 1.2 and 1.5.
## 3. Proof of main results
In this section, we provide an outline and the proofs of the main theorems, postponing the proofs of some key ingredients (Propositions 3.4, 3.5, 3.7 and Lemma 3.9) to the next section.
The starting point of the proof is the integrable structure and skew-orthogonal polynomial representation of the \(1\)-point function \(R_{N}\) of real eigenvalues. This is defined by its characteristic property
\[\mathbb{E}\biggl{[}\sum_{j=1}^{\mathcal{N}_{\tau}}f(x_{j})\biggr{]}=\int_{ \mathbb{R}}f(x)\,R_{N}(x)\,dx, \tag{3.1}\]
where \(f\) is a given test function. Note that the moment generating function (1.20) and the spectral moments (1.11) can be written in terms of the \(1\)-point function \(R_{N}\) as
\[M(t):=\int_{\mathbb{R}}e^{tx}R_{N}(x)\,dx,\qquad M_{p}:=\int_{\mathbb{R}}x^{p }\,R_{N}(x)\,dx. \tag{3.2}\]
By using the skew-orthogonal polynomial formalism, the \(1\)-point function \(R_{N}\) can be written in terms of the Hermite polynomials
\[H_{k}(x):=(-1)^{k}e^{x^{2}}\frac{d^{k}}{dx^{k}}e^{-x^{2}}. \tag{3.3}\]
To be more precise, let us formulate [43, Eq. (6.11)] in the following lemma.
**Lemma 3.1** (**Expression of the \(1\)-point function**).: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have_
\[R_{N}(x)=R_{N}^{(1)}(x)+R_{N}^{(2)}(x), \tag{3.4}\]
_where_
\[R_{N}^{(1)}(x):=\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\sum_{k=0}^{N-2} \frac{(\tau/2)^{k}}{k!}H_{k}\Bigl{(}\frac{x}{\sqrt{2\tau}}\Bigr{)}^{2}, \tag{3.5}\]
\[R_{N}^{(2)}(x):=\frac{(\tau/2)^{N-\frac{3}{2}}}{1+\tau}\frac{1}{(N-2)!}\frac{e^{- \frac{x^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} \int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2}\Big{(}\frac{u}{\sqrt{2\tau}} \Big{)}\,du. \tag{3.6}\]
See Appendix A for further details and discussions on Lemma 3.1.
**Remark 3.2**.: We stress that \(R_{N}^{(1)}\) indeed corresponds to the density of the elliptic GinUE (with \(N\mapsto N-1\)), restricted to the real axis, see [16, Section 2.3]. We also mention that by [18, Lemma 4.3], one can rewrite it as
\[\begin{split} R_{N}^{(1)}(x)&=R_{N}^{(1)}(0)-\sqrt{ \frac{2}{\pi}}\frac{(\tau/2)^{N-\frac{3}{2}}}{1+\tau}\frac{1}{(N-2)!}\int_{0}^ {x}e^{-\frac{u^{2}}{1+\tau}}H_{N-2}\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}H_{N-1 }\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\\ &=\sqrt{\frac{2}{\pi}}\frac{(\tau/2)^{N-\frac{3}{2}}}{1+\tau} \frac{1}{(N-2)!}\int_{x}^{\infty}e^{-\frac{u^{2}}{1+\tau}}H_{N-2}\Big{(}\frac{ u}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du.\end{split} \tag{3.7}\]
Such a formula plays an important role in the asymptotic analysis of the elliptic Ginibre ensembles, see [14, 15, 60].
**Example 3.3**.: For the extremal cases, we have the following.
* The GinOE case (\(\tau=0\)). It follows from (3.8) \[\Big{(}\frac{\tau}{2}\Big{)}^{k/2}H_{k}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} \to x^{k},\qquad\tau\to 0\] that for \(\tau=0\), we have (3.9) \[R_{N}^{(1)}(x):=\frac{e^{-x^{2}}}{\sqrt{2\pi}}\sum_{k=0}^{N-2} \frac{x^{2k}}{k!}=\frac{1}{\sqrt{2\pi}}\frac{1}{(N-2)!}\Gamma(N-1,x^{2}),\] (3.10) \[R_{N}^{(2)}(x):=\frac{1}{(N-2)!}\frac{e^{-\frac{x^{2}}{2}}}{ \sqrt{2\pi}}x^{N-1}\int_{0}^{x}e^{-\frac{u^{2}}{2}}u^{N-2}\,du=\frac{2^{(N-3)/ 2}}{(N-2)!}\frac{e^{-\frac{x^{2}}{2}}}{\sqrt{2\pi}}|x|^{N-1}\gamma\Big{(}\frac {N-1}{2},\frac{x^{2}}{2}\Big{)}.\] Here \(\Gamma(n,x)\) and \(\gamma(n,x)\) are the incomplete gamma functions. Note that by (3.9) and (3.10), the rescaled density \(x\mapsto\sqrt{N}x\) is given by (3.11) \[R_{N}(\sqrt{N}x)=\frac{1}{\sqrt{2\pi}}\frac{1}{(N-2)!}\Gamma(N-1,Nx^{2})+ \frac{2^{(N-3)/2}}{(N-2)!}\frac{e^{-\frac{Nx^{2}}{2}}}{\sqrt{2\pi}}|\sqrt{N}x |^{N-1}\gamma\Big{(}\frac{N-1}{2},\frac{Nx^{2}}{2}\Big{)}.\] This formula appears in [30, Corollary 4.3]. See [5] for a recent work on the use of this formula in the context of the counting statistics.
* The GOE case (\(\tau=1\)). In this case, it is immediate to see that (3.12) \[\begin{split} R_{N}(x)&=\frac{e^{-\frac{x^{2}}{2}}}{ \sqrt{2\pi}}\sum_{k=0}^{N-2}\frac{1}{2^{k}\,k!}H_{k}\Big{(}\frac{x}{\sqrt{2}} \Big{)}^{2}\\ &\qquad+\frac{1}{2^{N-1/2}}\frac{1}{(N-2)!}\frac{e^{-\frac{x^{2}} {4}}}{\sqrt{2\pi}}H_{N-1}\Big{(}\frac{x}{\sqrt{2}}\Big{)}\int_{0}^{x}e^{-\frac {u^{2}}{4}}H_{N-2}\Big{(}\frac{u}{\sqrt{2}}\Big{)}\,du.\end{split}\] This is the classical GOE density, see e.g. [36, Section 6.4]. Notice here that the first line of (3.12) coincides with the density of the GUE of size \(N-1\). We also stress that (3.12) can be
rewritten as
\[\begin{split}& R_{N}(x)=\frac{e^{-\frac{x^{2}}{2}}}{\sqrt{2\pi}} \sum_{k=0}^{N-1}\frac{1}{2^{k}\,k!}H_{k}\Big{(}\frac{x}{\sqrt{2}}\Big{)}^{2}\\ &+\frac{1}{2^{N+1/2}}\frac{1}{(N-1)!}\frac{e^{-\frac{x^{2}}{4}}}{ \sqrt{2\pi}}H_{N-1}\Big{(}\frac{x}{\sqrt{2}}\Big{)}\bigg{(}\sqrt{\pi}\,2^{N/2} (N-1)!!-\int_{x}^{\infty}e^{-\frac{u^{2}}{4}}H_{N-2}\Big{(}\frac{u}{\sqrt{2}} \Big{)}\,du\bigg{)}.\end{split} \tag{3.13}\]
We mention that the use of the formula (3.13) was made in [59]. Note also that the first line of (3.13) corresponds to the density of the GUE of size \(N\). We refer to [1, 76] for a comprehensive discussion on such a relation between the correlation functions of unitary and orthogonal ensembles.
We shall analyse the integrals of \(R_{N}^{(1)}\) and \(R_{N}^{(2)}\) separately. For this purpose, let
\[u(t):=\int_{\mathbb{R}}e^{tx}\,R_{N}^{(1)}(x)\,dx,\qquad v(t):=\int_{\mathbb{R }}e^{tx}\,R_{N}^{(2)}(x)\,dx. \tag{3.14}\]
Note that by (3.2) and (3.4), we have
\[M(t)=\int_{\mathbb{R}}e^{tx}R_{N}(x)\,dx=u(t)+v(t). \tag{3.15}\]
We first derive alternative representations of \(u(t)\), which allow us to perform further analysis.
**Proposition 3.4** (**Representations of the MGF of the elliptic GinUE part**).: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have the integral representation_
\[u(t)=\frac{2}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_{\mathbb{R}} \frac{e^{tx}}{t}\,H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx. \tag{3.16}\]
_Furthermore, it can also be written as_
\[u(t)=\sqrt{\frac{2}{1+\tau}}e^{\frac{t^{2}(1+\tau)}{4}}\sum_{k=0}^{N-2}\frac{ 1}{k!}\binom{N-1}{k+1}\,\tau^{N-k-2}\Big{(}\frac{\tau-1}{4}\Big{)}^{k+\frac{1 }{2}}H_{2k+1}\Big{(}\frac{1+\tau}{2\sqrt{\tau-1}}\,t\Big{)}\frac{1}{t}. \tag{3.17}\]
Let us mention that the discrete representation (3.17) will not play a role in the further analysis. Nonetheless, it can provide an alternative proof of Proposition 3.5 below. Furthermore, we stress that the expression (3.17) is particularly useful for numerical verifications, as in the following sketch.
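Concretely, one may compare the three expressions (3.14), (3.16) and (3.17) for \(u(t)\) at a given point. A sketch assuming NumPy and SciPy; the complex powers in (3.17) are taken on the principal branch (any consistent branch choice gives the same real value):

```python
# Numerical comparison of (3.14), (3.16) and (3.17) for u(t); a sketch.
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.special import factorial, binom
from scipy.integrate import quad

def H(n, x):  # physicists' Hermite polynomial as a one-term Hermite series
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c)

N, tau, t = 4, 0.5, 0.8

def R1(x):  # density (3.5)
    s = sum((tau / 2) ** k / factorial(k) * H(k, x / np.sqrt(2 * tau)) ** 2
            for k in range(N - 1))
    return np.exp(-x ** 2 / (1 + tau)) / np.sqrt(2 * np.pi) * s

u_def = quad(lambda x: np.exp(t * x) * R1(x), -np.inf, np.inf)[0]  # (3.14)

pref = 2 / (1 + tau) * (tau / 2) ** (N - 1.5) / factorial(N - 2)
u_int = pref / t * quad(lambda x: np.exp(t * x)                    # (3.16)
                        * H(N - 2, x / np.sqrt(2 * tau)) * H(N - 1, x / np.sqrt(2 * tau))
                        * np.exp(-x ** 2 / (1 + tau)) / np.sqrt(2 * np.pi),
                        -np.inf, np.inf)[0]

z = (1 + tau) * t / (2 * np.sqrt(complex(tau - 1)))
u_sum = (np.sqrt(2 / (1 + tau)) * np.exp(t ** 2 * (1 + tau) / 4) / t  # (3.17)
         * sum(binom(N - 1, k + 1) / factorial(k) * tau ** (N - k - 2)
               * (complex(tau - 1) / 4) ** (k + 0.5) * H(2 * k + 1, z)
               for k in range(N - 1))).real

print(u_def, u_int, u_sum)  # all three values should agree
```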
Next, we derive a differential equation for \(u(t)\), which plays a key role in Theorem 1.4. For this, we shall make use of the integral representation (3.16).
**Proposition 3.5** (**Differential equation for the MGF of the elliptic GinUE part**).: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), the function \(u(t)\) satisfies the third-order differential equation:_
\[a_{3}(t)\,u^{\prime\prime\prime}(t)+a_{2}(t)\,u^{\prime\prime}(t)+a_{1}(t)\,u ^{\prime}(t)+a_{0}(t)\,u(t)=0, \tag{3.18}\]
_where_
\[a_{3}(t) =\frac{1-\tau}{2}, \tag{3.19}\] \[a_{2}(t) =-\frac{(1-4\tau+\tau^{2})(1+\tau)}{4}\,t+\frac{1-\tau}{t}, \tag{3.20}\] \[a_{1}(t) =-(1+\tau)\bigg{[}\frac{3\tau(1-\tau^{2})}{8}\,t^{2}+\frac{1-\tau^{2}}{2}(N-1)+\frac{1-5\tau+\tau^{2}}{2}\bigg{]}-\frac{1-\tau}{t^{2}}, \tag{3.21}\] \[a_{0}(t) =-\bigg{[}\frac{\tau^{2}(1+\tau)}{8}\,t^{2}+\frac{\tau(1+\tau)}{2}\,(N-1)+\frac{5\tau(1-\tau)}{8}\bigg{]}(1+\tau)^{2}\,t. \tag{3.22}\]
**Remark 3.6** (**MGF of the GUE**).: Recall that the confluent hypergeometric functions \(M(a,b,z)\) and \(U(a,b,z)\) solve the Kummer's differential equation
\[z\frac{d^{2}w}{dz^{2}}+(b-z)\frac{dw}{dz}-aw=0, \tag{3.23}\]
see e.g. [67, Chapter 13]. Then it follows from (3.17), by letting \(\tau\uparrow 1\), that for \(\tau=1\) the expression (3.16) reduces to
\[\begin{split} u(t)&=e^{\frac{t^{2}}{2}}\sum_{k=0}^{ N-2}\frac{1}{k!}\binom{N-1}{k+1}\,t^{2k}\\ &=(N-1)e^{\frac{t^{2}}{2}}M(2-N,2,-t^{2})=\frac{1}{(N-2)!}e^{ \frac{t^{2}}{2}}U(2-N,2,-t^{2}).\end{split} \tag{3.24}\]
We mention that the last expressions in (3.24) were used in [56]. On the other hand, for \(\tau=1\), the linear coefficients \(a_{k}\) in Proposition 3.5 simplify to
\[a_{3}(t)=0,\qquad a_{2}(t)=t,\qquad a_{1}(t)=3,\qquad a_{0}(t)=-t\Big{(}t^{2}+ 4(N-1)\Big{)}. \tag{3.25}\]
Then the resulting differential equation coincides with the differential equation derived in [59, Eq. (18)] with \(N\mapsto N-1\), cf. (3.12) and (3.13). One can also use (3.23) and (3.24) to directly verify the differential equation (3.18) for \(\tau=1\).
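Indeed, with the simplified coefficients (3.25), this verification takes a few lines in SymPy; a sketch using the series form of \(u\) in (3.24):

```python
# Direct check of (3.18) at tau = 1, using (3.24) and (3.25); a sketch.
import sympy as sp

t = sp.symbols('t')
N = 6  # any even N >= 2
u = sp.exp(t ** 2 / 2) * sum(sp.binomial(N - 1, k + 1) / sp.factorial(k)
                             * t ** (2 * k) for k in range(N - 1))
expr = t * sp.diff(u, t, 2) + 3 * sp.diff(u, t) - t * (t ** 2 + 4 * (N - 1)) * u
print(sp.simplify(expr))  # 0
```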
We now formulate a differential equation that links the MGF \(M(t)\) of the elliptic GinOE with the MGF \(u(t)\) of the elliptic GinUE, restricted to the real axis.
**Proposition 3.7** (**Differential equation for the mixed MGFs**).: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have_
\[\begin{split}&\frac{1-\tau}{2}M^{\prime\prime\prime}(t)-\frac{(1+ \tau)(\tau^{2}-3\tau+1)}{2}\,t\,M^{\prime\prime}(t)\\ &-\frac{1-\tau^{2}}{2}\Big{(}2\tau(1+\tau)\,t^{2}+(N-1)(1+\tau)+ 2-\tau\Big{)}M^{\prime}(t)\\ &-\frac{\tau(1+\tau)^{2}}{2}\Big{(}\tau(1+\tau)\,t^{2}+(N-1)(1+ \tau)+3-2\tau\Big{)}\,t\,M(t)\\ &=\frac{1-\tau}{2}u^{\prime\prime\prime}(t)-\frac{(1+\tau)(1-4 \tau+\tau^{2})}{4}\,t\,u^{\prime\prime}(t)\\ &\quad-(1+\tau)\Big{(}\frac{3\tau(1-\tau^{2})}{8}\,t^{2}+\frac{ 1-\tau^{2}}{2}(N-1)+\frac{1-\tau}{2}\Big{)}u^{\prime}(t)\\ &\quad-\tau(1+\tau)^{2}\Big{(}\frac{\tau(1+\tau)}{8}\,t^{2}+\frac {1+\tau}{2}(N-1)+\frac{5+\tau}{8}\Big{)}\,t\,u(t).\end{split} \tag{3.26}\]
**Remark 3.8**.: The differential equation in Proposition 3.7 can be used to derive a recurrence relation for the mixed moments of the elliptic GinOE and elliptic GinUE. However, its statistical interpretation is somewhat ambiguous, especially regarding the "real moments" of the elliptic GinUE. Nonetheless, in the Hermitian limit \(\tau=1\), the statistical interpretation of the differential equation (3.26) becomes clear, cf. [59, Proposition 4]. In particular, it gives rise to the recurrence relation (1.8) for the mixed spectral moments of the GOE and GUE.
We now define
\[\begin{split} V(t)&:=\frac{1-\tau}{2}M^{\prime\prime \prime}(t)-\frac{(1+\tau)(\tau^{2}-3\tau+1)}{2}\,t\,M^{\prime\prime}(t)\\ &\quad-\frac{1-\tau^{2}}{2}\Big{(}2\tau(1+\tau)\,t^{2}+(N-1)(1+ \tau)+2-\tau\Big{)}M^{\prime}(t)\\ &\quad-\frac{\tau(1+\tau)^{2}}{2}\Big{(}\tau(1+\tau)\,t^{2}+(N-1 )(1+\tau)+3-2\tau\Big{)}\,t\,M(t).\end{split} \tag{3.27}\]
Note that this is the right-hand side of the equation (3.26). In the following lemma, we relate the functions \(u(t)\) and \(V(t)\).
**Lemma 3.9** (**Expression of \(u\) in terms of \(V\)**).: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have_
\[u(t)=\alpha(t)V(t)+\beta(t)V^{\prime}(t)+\gamma(t)V^{\prime\prime}(t), \tag{3.28}\]
_where_
\[\alpha(t)=\frac{a(t)}{\delta(t)},\qquad\beta(t)=\frac{b(t)}{\delta(t)},\qquad \gamma(t)=\frac{c(t)}{\delta(t)} \tag{3.29}\]
_and_
\[\delta(t):=-N\tau^{2}(1+\tau)^{5}\,d(t). \tag{3.30}\]
_Here \(a(t),b(t),c(t)\) and \(d(t)\) are given by (2.1), (2.2), (2.3) and (2.4)._
We are now ready to prove Theorem 1.4.
Proof of Theorem 1.4.: It suffices to show that
\[B_{4}(t)V^{(4)}(t)+B_{3}(t)V^{\prime\prime\prime}(t)+B_{2}(t)V^{\prime\prime}(t )+B_{1}(t)V^{\prime}(t)+B_{0}(t)V(t)=0, \tag{3.31}\]
where \(B_{k}\)'s (\(k=0,1,\dots,4\)) are given in Subsection 2.1. Then the theorem follows by substituting (3.27) into (3.31). By Proposition 3.7 and (3.27), we have
\[\begin{split} V(t)&=\frac{1-\tau}{2}u^{\prime \prime\prime}(t)-\frac{(1+\tau)(1-4\tau+\tau^{2})}{4}\,t\,u^{\prime\prime}(t) \\ &\quad-(1+\tau)\Big{(}\frac{3\tau(1-\tau^{2})}{8}\,t^{2}+\frac{1- \tau^{2}}{2}(N-1)+\frac{1-\tau}{2}\Big{)}u^{\prime}(t)\\ &\quad-\tau(1+\tau)^{2}\Big{(}\frac{\tau(1+\tau)}{8}\,t^{2}+ \frac{1+\tau}{2}(N-1)+\frac{5+\tau}{8}\Big{)}\,t\,u(t).\end{split} \tag{3.32}\]
Furthermore by using Proposition 3.5, we have
\[4(1-\tau)\,t\,u^{\prime\prime}(t)+\Big{(}2\tau(1+\tau)(4-\tau)\,t^{2}-4(1- \tau)\Big{)}\,u^{\prime}(t)+3\tau^{2}(1+\tau)^{2}\,t^{3}\,u(t)+4t^{2}\,V(t)=0. \tag{3.33}\]
Then the desired formula (3.31) follows from long but straightforward computations using Lemma 3.9 and (3.33).
By means of Theorem 1.4, we now complete the proof of Theorem 1.1.
Proof of Theorem 1.1.: Note that by (3.2), we have
\[M_{\tau}^{(k)}(t)=\sum_{j=k}^{\infty}\frac{t^{j-k}}{(j-k)!}M_{j,\tau}=\sum_{j =0}^{\infty}\frac{t^{j}}{j!}M_{j+k,\tau}. \tag{3.34}\]
We substitute this expression into (1.21) and collect the coefficient of \(t^{j}\). This gives
\[A_{k}(t)M_{\tau}^{(k)}(t) =\sum_{l=0}^{\lfloor(17-k)/2\rfloor}\mathfrak{a}_{k,17-k-2l}\,t^{1 7-k-2l}\,\sum_{s=0}^{\infty}\frac{M_{s+k,\tau}}{s!}\,t^{s}\] \[=\sum_{j=0}^{\infty}\bigg{(}\sum_{l=0}^{\lfloor(17-k)/2\rfloor} \mathfrak{a}_{k,17-k-2l}\frac{M_{j+2k+2l-17}}{(k+j+2l-17)!}\bigg{)}\,t^{j}\] \[=\sum_{j=0}^{\infty}\bigg{(}\sum_{l=0}^{\lfloor(17-k)/2\rfloor} \mathfrak{a}_{k,17-k-2l}(j+k+2l-16)_{17-k-2l}\,M_{j+2k+2l-17}\bigg{)}\,\frac{t ^{j}}{j!},\]
where \((a)_{n}=a(a+1)\ldots(a+n-1)\) is the Pochhammer symbol. Then it follows from (1.21) that
\[\sum_{k=0}^{7}\sum_{l=0}^{\lfloor(17-k)/2\rfloor}\mathfrak{a}_{k,17-k-2l}(j+k +2l-16)_{17-k-2l}\,M_{j+2k+2l-17}=0. \tag{3.35}\]
In particular, after some computations, it follows that the coefficient of \(M_{j+3}\) is given by
\[\mathfrak{a}_{3,0}+\mathfrak{a}_{4,1}j+\mathfrak{a}_{5,2}(j-1)_{ 2}+\mathfrak{a}_{6,3}(j-2)_{3}+\mathfrak{a}_{7,4}(j-3)_{4}\] \[=-64(j-5)(j-3)(j-1)(j+4)(1-\tau)^{6}\] \[\quad\times\Big{(}1+6\tau+2N(1-\tau^{2})\Big{)}\Big{(}4+17\tau+6 \tau^{2}+2N(1-\tau^{2})\Big{)}^{2}.\]
By letting \(j=2p-3\), we conclude that for any integer \(p\geq 10\),
\[\begin{split}& 2(2p+1)(1-\tau)^{6}\Big{(}1+6\tau+2N(1-\tau^{2}) \Big{)}\Big{(}4+17\tau+6\tau^{2}+2N(1-\tau^{2})\Big{)}^{2}M_{2p}\\ &=\sum_{l=1}^{10}\bigg{[}\sum_{k=0}^{\min\{10-l,7\}}\frac{(2p-k-2 l+1)_{k+2l-3}}{256(p-4)(p-3)(p-2)}\,\mathfrak{a}_{k,k+2l-3}\bigg{]}M_{2p-2l}. \end{split} \tag{3.36}\]
This completes the proof.
## 4. Derivations of linear differential equations
In this section, we provide remaining proofs of Propositions 3.4, 3.5, 3.7 and Lemma 3.9. In Subsection 4.1, we first establish Proposition 3.4 and Lemma 3.9. Subsections 4.2 and 4.3 are devoted to the proofs of Propositions 3.5 and 3.7 respectively, the main steps for completing the proof of Theorem 1.4.
### Proofs of Proposition 3.4 and Lemma 3.9
We begin by proving Proposition 3.4. The challenge arises due to the absence of the classical Christoffel-Darboux formula for analyzing \(R_{N}^{(1)}\) in (3.4), which is applicable only in the Hermitian case (\(\tau=1\)). Our main idea is to utilize the formula (4.3) below.
Proof of Proposition 3.4.: For any \(\tau\in(0,1]\), let
\[F_{N}(x):=\sum_{k=0}^{N-2}\frac{(\tau/2)^{k}}{k!}H_{k}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}^{2}. \tag{4.1}\]
Note here that by (3.5), we have
\[R_{N}^{(1)}(x)=\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,F_{N}(x). \tag{4.2}\]
Then for any \(N\geq 2\), it follows from [18, Lemma 4.2] that
\[\frac{1+\tau}{2}\,F_{N}^{\prime}(x)=x\,F_{N}(x)-\frac{(\tau/2)^{N-\frac{3}{2}} }{(N-2)!}H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}. \tag{4.3}\]
This equation is known as the generalised Christoffel-Darboux formula, which first appeared in [60].
In the sequel, we shall frequently use the Gaussian integration by parts: for a differentiable function \(f\) with polynomial growth,
\[\int_{\mathbb{R}}xf(x)\,e^{-\frac{x^{2}}{1+\tau}}\,dx=\frac{1+\tau}{2}\int_{ \mathbb{R}}f^{\prime}(x)\,e^{-\frac{x^{2}}{1+\tau}}\,dx. \tag{4.4}\]
By differentiating (3.14), we obtain
\[u^{\prime}(t) =\int_{\mathbb{R}}x\,e^{tx}\,R_{N}^{(1)}(x)\,dx=\int_{\mathbb{R} }x\,e^{tx}\,F_{N}(x)\,\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\] \[=\frac{1+\tau}{2}\int_{\mathbb{R}}\Big{[}t\,e^{tx}\,F_{N}(x)+e^{ tx}\,F_{N}^{\prime}(x)\Big{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\] \[=\frac{(1+\tau)\,t}{2}\,u(t)+\frac{1+\tau}{2}\int_{\mathbb{R}}e ^{tx}\,F_{N}^{\prime}(x)\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx, \tag{4.5}\]
where we have used (4.1), (4.2) and (4.4). Furthermore, it follows from (4.3) that
\[\frac{1+\tau}{2}\int_{\mathbb{R}}e^{tx}\,F_{N}^{\prime}(x)\frac{ e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx =\int_{\mathbb{R}}e^{tx}\,\bigg{[}x\,F_{N}(x)-\frac{(\tau/2)^{N- \frac{3}{2}}}{(N-2)!}H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx\] \[=u^{\prime}(t)-\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_{ \mathbb{R}}e^{tx}\,H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\]
Combining the above equations, we conclude the desired identity (3.16).
The second expression (3.17) follows from (3.16) and the integral
\[\begin{split}&\int_{\mathbb{R}}e^{-(x-y)^{2}}H_{m}(\alpha x)H_{n}( \alpha x)\,dx\\ &=\sqrt{\pi}\sum_{k=0}^{\min\{m,n\}}2^{k}\,k!\binom{m}{k}\binom{n }{k}(1-\alpha^{2})^{\frac{m+n}{2}-k}H_{m+n-2k}\Big{(}\frac{\alpha y}{\sqrt{1- \alpha^{2}}}\Big{)}\end{split} \tag{4.6}\]
that can be found in [53, (7.374-9)].
Next, we prove Lemma 3.9, which relies on straightforward computations using Proposition 3.5.
Proof of Lemma 3.9.: Dividing (3.33) by \(4t\) and taking the derivative, we have
\[V(t)+t\,V^{\prime}(t) =-(1-\tau)u^{\prime\prime\prime}(t)-\Big{(}\frac{\tau(1+\tau)(4-\tau)}{2}\,t-\frac{1-\tau}{t}\Big{)}\,u^{\prime\prime}(t)\] \[\quad-\Big{(}\frac{3\tau^{2}(1+\tau)^{2}}{4}\,t^{2}+\frac{\tau(1+\tau)(4-\tau)}{2}+\frac{1-\tau}{t^{2}}\Big{)}\,u^{\prime}(t)-\frac{3\tau^{2}(1+\tau)^{2}}{2}\,t\,u(t).\]
Then by using Proposition 3.5,
\[V(t)+t\,V^{\prime}(t)=\Big{(}2a_{3}(t)-(1-\tau)\Big{)}u^{\prime \prime\prime}(t)+\left[2a_{2}(t)-\Big{(}\frac{\tau(1+\tau)(4-\tau)}{2}\,t-\frac{ 1-\tau}{t}\Big{)}\right]u^{\prime\prime}(t)\] \[+\left[2a_{1}(t)-\Big{(}\frac{3\tau^{2}(1+\tau)^{2}}{4}\,t^{2}+ \frac{\tau(1+\tau)(4-\tau)}{2}+\frac{1-\tau}{t^{2}}\Big{)}\right]u^{\prime}(t) +\left[2a_{0}(t)-\frac{3\tau^{2}(1+\tau)^{2}}{2}\,t\right]\!u(t).\]
This gives rise to
\[V(t)+t\,V^{\prime}(t)=\bigg{[}-\frac{1+\tau}{2}\,t+\frac{3(1- \tau)}{t}\,\bigg{]}u^{\prime\prime}(t)\] \[+\bigg{[}-(1+\tau)\Big{(}\frac{3\tau(1+\tau)}{4}\,t^{2}+(1-\tau^{ 2})(N-1)+\frac{2-6\tau+\tau^{2}}{2}\Big{)}-\frac{3(1-\tau)}{t^{2}}\bigg{]}\,u^ {\prime}(t)\] \[-\bigg{[}\frac{\tau(1+\tau)}{4}\,t^{2}+(1+\tau)\,(N-1)+\frac{5+ \tau}{4}\bigg{]}\tau(1+\tau)^{2}\,t\,u(t). \tag{4.7}\]
Taking one more derivative and using Proposition 3.5 once again, after simplifications as above, we obtain
\[2V^{\prime}(t)+t\,V^{\prime\prime}(t)\] \[=\bigg{[}-\frac{(1+\tau)^{3}(1-2\tau)}{4(1-\tau)}\,t^{2}+(1+\tau) \Big{(}1-3\tau+\tau^{2}-(N-1)(1-\tau^{2})\Big{)}-\frac{12(1-\tau)}{t^{2}} \bigg{]}\,u^{\prime\prime}(t)\] \[\quad-\bigg{[}\frac{\tau(1+\tau)^{3}(3+2\tau)}{8}\,t^{3}+\frac{(1 +\tau)^{2}}{2(1-\tau)}\Big{(}1-4\tau+5\tau^{2}-5\tau^{3}+(N-1)(1-\tau^{2})(1+2 \tau)\Big{)}\,t\] \[\quad-(1+\tau)\Big{(}2-15\tau+3\tau^{2}+3(N-1)(1-\tau^{2})\Big{)} \frac{1}{t}-\frac{12(1-\tau)}{t^{3}}\bigg{]}\,u^{\prime}(t)\] \[\quad+\bigg{[}-\frac{\tau(1+\tau)^{2}}{8(1-\tau)}\,t^{4}-\frac{1+ \tau}{8(1-\tau)}\Big{(}5(1-\tau)+4(N-1)(1+\tau)\Big{)}t^{2}\] \[\quad+2(N-1)(1+\tau)+\frac{5}{2}-4\tau\bigg{]}\tau(1+\tau)^{2}\,u (t). \tag{4.8}\]
Note that if \(u\) is of the form (3.28), we have
\[t^{2}u(t) =\Big{(}\alpha(t)t^{2}-\beta(t)t+2\gamma(t)\Big{)}V(t)\] \[\quad+\Big{(}\beta(t)\,t-2\gamma(t)\Big{)}\Big{(}V(t)+tV^{\prime} (t)\Big{)}+\gamma(t)t\Big{(}2V^{\prime}(t)+tV^{\prime\prime}(t)\Big{)}. \tag{4.9}\]
We now substitute (3.32), (4.7) and (4.8) into (4.9). Then by comparing the coefficients of the \(u\), \(u^{\prime}\), and \(u^{\prime\prime}\) terms, we derive a system of algebraic equations for \(\alpha,\beta\), and \(\gamma\). Solving this system involves lengthy yet straightforward computations, ultimately yielding the explicit formulas for \(\alpha\), \(\beta\), and \(\gamma\) in (3.29). This completes the proof.
### Proof of Proposition 3.5
In this subsection, we prove Proposition 3.5. For this, we shall utilize the integral representation (3.16) of \(u(t)\). We first derive a differential equation for a related function \(\sigma\) that is simpler to analyse, obtained by replacing the integrand \(H_{N-2}H_{N-1}\) in (3.16) with \(H_{N-1}^{2}\).
**Lemma 4.1**.: _For even integer \(N\geq 2\) and \(\tau\in[0,1]\), let_
\[\sigma(t):=\frac{2}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,\int_{ \mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2}\frac{e^{- \frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx. \tag{4.10}\]
_Then we have_
\[\begin{split}\frac{4(1-\tau)}{1+\tau}\,\sigma^{\prime\prime\prime}(t )&=2(1-4\tau+\tau^{2})\,t\,\sigma^{\prime\prime}(t)\\ &\quad+\bigg{[}3\tau(1-\tau^{2})\,t^{2}+4\Big{(}(N-1)(1-\tau^{2}) +1-2\tau\Big{)}\bigg{]}\sigma^{\prime}(t)\\ &\quad+\bigg{[}\tau^{2}(1+\tau)\,t^{2}+4(N-1)\tau(1+\tau)+\tau( 5-\tau)\bigg{]}(1+\tau)\,t\,\sigma(t).\end{split} \tag{4.11}\]
Proof.: Differentiating (4.10) and applying the Gaussian integration by parts (4.4), we have
\[\begin{split}\sigma^{\prime}(t)&=\frac{2}{1+\tau} \frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,\int_{\mathbb{R}}x\,e^{tx}\,H_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt {2\pi}}\,dx\\ &=\frac{1+\tau}{2}\,t\,\sigma(t)+\frac{(\tau/2)^{N-2}}{(N-2)!} \,\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1} ^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}} {\sqrt{2\pi}}\,dx.\end{split} \tag{4.12}\]
Taking one more derivative and using (4.4) once again, we have
\[\begin{split}&\sigma^{\prime\prime}(t)=\frac{1+\tau}{2}\sigma(t) +\frac{1+\tau}{2}t\,\sigma^{\prime}(t)+\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{ \mathbb{R}}x\,e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}^{ \prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{ \sqrt{2\pi}}\,dx\\ &=\frac{1+\tau}{2}\sigma(t)+\frac{1+\tau}{2}t\,\sigma^{\prime}(t )+\frac{1+\tau}{2}\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}\frac{d}{dx} \Big{[}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}^{\prime} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\Big{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{ \sqrt{2\pi}}\,dx.\end{split}\]
Combining the above equations, we obtain
\[\begin{split}&\sigma^{\prime\prime}(t)-(1+\tau)t\,\sigma^{ \prime}(t)+\Big{[}\Big{(}\frac{1+\tau}{2}t\Big{)}^{2}-\frac{1+\tau}{2}\Big{]} \sigma(t)\\ &=\frac{1+\tau}{2\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\, \int_{\mathbb{R}}e^{tx}\,\bigg{[}H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}^{2}+H_{N-1}^{\prime\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{ \sqrt{2\pi}}\,dx.\end{split} \tag{4.13}\]
Next, we show the following assertions.
* **(Claim 1)** We have (4.14) \[\begin{split}&\frac{1+\tau}{2\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N- 2)!}\,\int_{\mathbb{R}}e^{tx}\,H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}^{2}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=(1+\tau)\Big{(}\frac{1+\tau}{4}\,t^{2}+\frac{1+\tau}{2\tau}(N-1 )+\frac{1-\tau}{4\tau}\Big{)}\sigma(t)+\frac{(1+\tau)(1-3\tau)}{4\tau}\,t\, \sigma^{\prime}(t)+\frac{\tau-1}{2\tau}\sigma^{\prime\prime}(t).\end{split}\]
* **(Claim 2)** Let (4.15) \[h(t):=\frac{1+\tau}{2\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_{\mathbb{ R}}e^{tx}\,H_{N-1}^{\prime\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi }}\,dx.\] Then we have (4.16) \[\begin{split}& h(t)+\frac{1-\tau}{\tau(1+\tau)}\,\frac{h^{\prime }(t)}{t}=\bigg{[}\frac{(1+\tau)^{2}}{4}\,t^{2}+(N-1)\frac{(1+\tau)^{2}}{2 \tau}+\frac{(1+\tau)(2-\tau)}{2\tau}\bigg{]}\sigma(t)\\ &\quad+\bigg{[}\frac{(1-2\tau)(1+\tau)}{2\tau}\,t+\bigg{(}(N-1) \frac{1-\tau^{2}}{2\tau^{2}}+\,\frac{1-4\tau+\tau^{2}}{2\tau^{2}}\bigg{)} \frac{1}{t}\bigg{]}\sigma^{\prime}(t)\\ &\quad+\frac{(1-\tau)(1-5\tau)}{4\tau^{2}}\,\sigma^{\prime\prime} (t)-\frac{1}{t}\,\frac{(1-\tau)^{2}}{2\tau^{2}(1+\tau)}\sigma^{\prime\prime \prime}(t).\end{split}\]
We first complete the proof of Lemma 4.1 using (4.14) and (4.16). By combining (4.13) and (4.14), we have
\[h(t)=\frac{1+\tau}{2\tau}\sigma^{\prime\prime}(t)-\frac{(1+\tau)^{2}}{4\tau}\,t \,\sigma^{\prime}(t)-\Big{(}\frac{(1+\tau)^{2}}{2\tau}(N-1)+\frac{(1+\tau)^{2} }{4\tau}\Big{)}\sigma(t). \tag{4.17}\]
Then the lemma follows by substituting (4.17) into (4.16).
It remains to prove (4.14) and (4.16). Let us first show (4.14). For this, recall that the Hermite polynomial satisfies the relation
\[H_{k}^{\prime\prime}(x)-2xH_{k}^{\prime}(x)=-2kH_{k}(x). \tag{4.18}\]
Using this and (4.10), we have
\[\begin{split}&(N-1)\sigma(t)=\frac{2(N-1)}{1+\tau}\frac{(\tau/2)^ {N-\frac{3}{2}}}{(N-2)!}\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{ 2\tau}}\Big{)}^{2}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=-\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_ {\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[}H_{N-1} ^{\prime\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}-\sqrt{\frac{2}{\tau}}\,x \,H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{ x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.19}\]
We shall often utilize the following version of the Gaussian integration by parts associated with the Ornstein-Uhlenbeck operator: for differentiable functions \(f\) and \(g\) with polynomial growth,
\[\int_{\mathbb{R}}f(x)\,\Big{(}g^{\prime\prime}(x)-\frac{2}{1+\tau}\,x\,g^{ \prime}(x)\Big{)}e^{-\frac{x^{2}}{1+\tau}}\,dx=-\int_{\mathbb{R}}f^{\prime}(x )g^{\prime}(x)e^{-\frac{x^{2}}{1+\tau}}\,dx. \tag{4.20}\]
In order to apply this formula to (4.19), we decompose the integral as
\[\begin{split}&\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\bigg{[}H_{N-1}^{\prime\prime}\Big{(}\frac{x}{\sqrt{2\tau }}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}\bigg{[}H_{N-1}^{\prime\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}- \frac{2\sqrt{2\tau}}{1+\tau}\,x\,H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau} }\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &\quad+\Big{(}\frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau} }\Big{)}\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\, x\,H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx.\end{split}\]
Then it follows from (4.20) that
\[\begin{split}&-\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N -2)!}\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[} H_{N-1}^{\prime\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}-\sqrt{\frac{2}{ \tau}}\,x\,H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^ {-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\sqrt{ 2\tau}\,t\,\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\\ &\quad+\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!} \int_{\mathbb{R}}e^{tx}H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2} \frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &\quad-\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!} \Big{(}\frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}}\Big{)}\int_{ \mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,x\,H_{N-1}^{ \prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{ \sqrt{2\pi}}\,dx.\end{split}\]
Note that by (4.12), we have
\[\sigma^{\prime}(t)-\frac{1+\tau}{2}\,t\,\sigma(t)=\frac{(\tau/2)^{N-2}}{(N-2)!} \int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}^{ \prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{ \sqrt{2\pi}}\,dx. \tag{4.21}\]
Thus it follows that
\[\begin{split}&(N-1)\sigma(t)=\frac{\tau}{1+\tau}\,t\,\Big{(}\sigma ^{\prime}(t)-\frac{1+\tau}{2}\,t\,\sigma(t)\Big{)}\\ &+\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,\int_{ \mathbb{R}}e^{tx}H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2}\frac {e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &-\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\Big{(} \frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}}\Big{)}\int_{\mathbb{R}}x\, e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.22}\]
Furthermore, by differentiating (4.21), we have
\[\begin{split}&\sigma^{\prime\prime}(t)-\frac{1+\tau}{2}\sigma(t)- \frac{1+\tau}{2}t\,\sigma^{\prime}(t)\\ &=\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}x\,e^{tx}\,H_{N- 1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.23}\]
Combining (4.21), (4.22) and (4.23), we obtain (4.14).
Next, we prove (4.16). By using (4.12) and (4.18), we have
\[\begin{split}&(N-1)\Big{(}\sigma^{\prime}(t)-\frac{1+\tau}{2}\,t \,\sigma(t)\Big{)}\\ &=(N-1)\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}e^{tx}\,H_{N- 1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=-\frac{1}{2}\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}e^{ tx}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[}H^{\prime \prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^ {\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^ {2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.24}\]
As before, we use a similar decomposition and (4.20) to derive
\[\begin{split}&\int_{\mathbb{R}}e^{tx}\,H^{\prime}_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x} {\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx \\ &=-\sqrt{2\tau}\,t\,\int_{\mathbb{R}}e^{tx}\,H^{\prime}_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2}\frac{e^{-\frac{x^{2}}{1+\tau}}}{ \sqrt{2\pi}}\,dx-\int_{\mathbb{R}}e^{tx}\,H^{\prime\prime}_{N-1}\Big{(}\frac{ x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} \frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &\quad+\Big{(}\frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}} \Big{)}\int_{\mathbb{R}}e^{tx}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}^{2}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split}\]
Then by (4.24), we have
\[\begin{split}&(N-1)\Big{(}\sigma^{\prime}(t)-\frac{1+\tau}{2}\,t \,\sigma(t)\Big{)}\\ &=\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,t\,\int_{\mathbb{R}}e^{ tx}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2}\frac{e^{-\frac{x^{2}}{1 +\tau}}}{\sqrt{2\pi}}\,dx\\ &\quad+\frac{1}{2}\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}e ^{tx}\,H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N- 1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx\\ &\quad-\frac{1}{2}\frac{(\tau/2)^{N-2}}{(N-2)!}\Big{(}\frac{2 \sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}}\Big{)}\int_{\mathbb{R}}e^{tx}\,x\,H^{ \prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.25}\]
On the other hand, by differentiating (4.14), we have
\[\begin{split}&\tau\,t\,\sigma(t)+\Big{(}N-1+\frac{1-2\tau}{1+\tau}+ \frac{\tau}{2}\,t^{2}\Big{)}\sigma^{\prime}(t)+\frac{1-3\tau}{2(1+\tau)}\,t\, \sigma^{\prime\prime}(t)+\frac{\tau-1}{(1+\tau)^{2}}\sigma^{\prime\prime \prime}(t)\\ &=\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_{ \mathbb{R}}e^{tx}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}^{2} \frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.26}\]
By combining (4.14), (4.25) and (4.26), we obtain
\[\begin{split}&\frac{1}{2}\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{ \mathbb{R}}e^{tx}\,H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H ^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\\ &=-t\,\bigg{[}\frac{1+\tau}{2}\Big{(}3N-3+\frac{1-\tau}{1+\tau} +\tau\,t^{2}\Big{)}+(1-\tau)\bigg{]}\sigma(t)\\ &\quad+\bigg{[}(N-1)\frac{2\tau-1}{\tau}+t^{2}(2\tau-1)-\frac{(1- \tau)(1-2\tau)}{\tau(1+\tau)}\bigg{]}\sigma^{\prime}(t)\\ &\quad-\frac{(1-\tau)(1-5\tau)}{2(1+\tau)\tau}\,t\,\sigma^{\prime \prime}(t)+\frac{(1-\tau)^{2}}{\tau(1+\tau)^{2}}\sigma^{\prime\prime\prime}(t).\end{split} \tag{4.27}\]
Note that by the well-known differentiation rule
\[H^{\prime}_{k}(x)=2k\,H_{k-1}(x) \tag{4.28}\]
of the Hermite polynomials, we have
\[\begin{split}(N-2)H^{\prime}_{N-1}(x)&=2(N-2)(N-1 )H_{N-2}(x)=-(N-1)\Big{(}H^{\prime\prime}_{N-2}(x)-2xH^{\prime}_{N-2}(x)\Big{)} \\ &=-\frac{1}{2}\Big{(}H^{\prime\prime\prime}_{N-1}(x)-2xH^{\prime \prime}_{N-1}(x)\Big{)}.\end{split} \tag{4.29}\]
Using this together with (4.12) and (4.18), it follows that
\[\begin{split}&(N-2)\Big{(}\sigma^{\prime}(t)-\frac{1+\tau}{2} \,t\,\sigma(t)\Big{)}=(N-2)\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}e^{ tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=-\frac{1}{2}\frac{(\tau/2)^{N-2}}{(N-2)!}\int_{\mathbb{R}}e^{ tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[}H^{\prime\prime\prime}_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^{\prime \prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2 }}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split}\]
Here, by using the Gaussian integration by parts (4.20), after a similar decomposition as above, we obtain
\[\begin{split}&\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\bigg{[}H^{\prime\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{ 2\tau}}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^{\prime\prime}_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx \\ &=-\sqrt{2\tau}\,t\,\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac {x}{\sqrt{2\tau}}\Big{)}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx-\int_{\mathbb{R}}e^ {tx}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime\prime}_{N-1 }\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx\\ &\quad+\Big{(}\frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}} \Big{)}\int_{\mathbb{R}}e^{tx}\,x\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H ^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1 +\tau}}}{\sqrt{2\pi}}\,dx.\end{split}\]
Combining these equations, after long but straightforward computations, we obtain
\[\begin{split}&\frac{1+\tau}{2\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N -2)!}\int_{\mathbb{R}}e^{tx}\,H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau }}\Big{)}H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\\ &+\frac{1}{t}\,\frac{1-\tau}{2\tau^{2}}\,\frac{(\tau/2)^{N-\frac {3}{2}}}{(N-2)!}\int_{\mathbb{R}}e^{tx}\,x\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau }}\Big{)}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{- \frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\\ &=\frac{1+\tau}{2\tau}(N-2)\Big{(}\frac{\sigma^{\prime}(t)}{t}- \frac{1+\tau}{2}\,\sigma(t)\Big{)}\\ &\quad-\frac{1}{t}\,\frac{1+\tau}{4\tau}\frac{(\tau/2)^{N-2}}{(N -2)!}\int_{\mathbb{R}}e^{tx}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{- \frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx.\end{split} \tag{4.30}\]
Now the desired equation (4.16) follows from (4.27), (4.30) and
\[\frac{1-\tau}{\tau(1+\tau)}\,h^{\prime}(t)=\frac{1-\tau}{2\tau^{2}}\frac{(\tau /2)^{N-\frac{3}{2}}}{(N-2)!}\int_{\mathbb{R}}e^{tx}\,x\,H^{\prime\prime}_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} \frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx. \tag{4.31}\]
This completes the proof.
We are now ready to prove Proposition 3.5.
Proof of Proposition 3.5.: Let us define
\[\rho(t):=\frac{2}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_{\mathbb{ R}}e^{tx}\,H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx. \tag{4.32}\]
Notice that by definition, we have
\[t\,u(t)=\rho(t). \tag{4.33}\]
We also write
\[\widetilde{\rho}(t):=(N-1)\sqrt{\frac{2}{\tau}}(1+\tau)\rho(t). \tag{4.34}\]
Then by (4.21) and (4.28), we have
\[\widetilde{\rho}(t)=2\sigma^{\prime}(t)-(1+\tau)\,t\,\sigma(t). \tag{4.35}\]
By taking derivatives, we have
\[\begin{split}&\quad-\frac{2(1-\tau)}{(1+\tau)}\widetilde{\rho}^{ \prime\prime}(t)-\tau(3-\tau)\,t\,\widetilde{\rho}^{\prime}(t)-\Big{(}\tau^{2 }(1+\tau)t^{2}-2(N-1)(1-\tau^{2})+2\tau\Big{)}\widetilde{\rho}(t)\\ &=-\frac{4(1-\tau)}{(1+\tau)}\,\sigma^{\prime\prime\prime}(t)+2 (1-4\tau+\tau^{2})\,t\,\sigma^{\prime\prime}(t)\\ &\quad+\bigg{[}3\tau(1-\tau^{2})\,t^{2}+4\Big{(}(N-1)(1-\tau^{2 })+1-2\tau\Big{)}\bigg{]}\sigma^{\prime}(t)\\ &\quad+\bigg{[}\tau^{2}(1+\tau)t^{2}-2(N-1)(1-\tau^{2})+\tau(5- \tau)\bigg{]}(1+\tau)\,t\,\sigma(t).\end{split} \tag{4.36}\]
Combining this with Lemma 4.1, we obtain
\[\begin{split}&\quad 2(N-1)(1+\tau)^{3}\,t\,\sigma(t)\\ &=\frac{2(1-\tau)}{(1+\tau)}\widetilde{\rho}^{\prime\prime}(t)+ \tau(3-\tau)\,t\,\widetilde{\rho}^{\prime}(t)+\Big{(}\tau^{2}(1+\tau)\,t^{2} -2(N-1)(1-\tau^{2})+2\tau\Big{)}\widetilde{\rho}(t).\end{split} \tag{4.37}\]
Differentiating this expression, we obtain
\[\begin{split}& 2(N-1)(1+\tau)^{3}\,\Big{(}\sigma(t)+t\,\sigma^{ \prime}(t)\Big{)}\\ &=\frac{2(1-\tau)}{(1+\tau)}\widetilde{\rho}^{\prime\prime\prime }(t)+\tau(3-\tau)\,t\,\widetilde{\rho}^{\prime\prime}(t)\\ &\quad+\Big{(}\tau^{2}(1+\tau)\,t^{2}-2(N-1)(1-\tau^{2})+\tau(5- \tau)\Big{)}\widetilde{\rho}^{\prime}(t)+2\tau^{2}(1+\tau)\,t\,\widetilde{ \rho}(t).\end{split} \tag{4.38}\]
Then by taking a suitable linear combination of (4.37) and (4.38), together with the definition of \(\widetilde{\rho}\) in (4.35), we conclude
\[\begin{split} 0&=\frac{4(1-\tau)}{(1+\tau)}\,t\, \rho^{\prime\prime\prime}(t)-\Big{(}2(1-4\tau+\tau^{2})\,t^{2}+\frac{4(1-\tau) }{(1+\tau)}\Big{)}\,\rho^{\prime\prime}(t)\\ &\quad-\bigg{[}3\tau(1-\tau^{2})\,t^{2}+4(N-1)(1-\tau^{2})-4 \tau\bigg{]}t\,\rho^{\prime}(t)\\ &\quad-\bigg{[}\tau^{2}(1+\tau)^{2}\,t^{4}+2\tau(1-\tau^{2})\,t^ {2}+4\tau+4(1+\tau)\Big{(}\tau(1+\tau)\,t^{2}-(1-\tau)\Big{)}(N-1)\bigg{]}\rho( t).\end{split} \tag{4.39}\]
Now the proposition follows from the relation (4.33).
### Proof of Proposition 3.7
In this subsection, we prove Proposition 3.7. Recall that by (3.15), we have \(M(t)=u(t)+v(t)\). Therefore, it suffices to show the following equivalent proposition.
**Proposition 4.2**.: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have_
\[\begin{split}& 4(1-\tau)v^{\prime\prime\prime}(t)-4(1+\tau)( \tau^{2}-3\tau+1)\,t\,v^{\prime\prime}(t)\\ &\quad-4(1-\tau^{2})\Big{(}2\tau(1+\tau)\,t^{2}+(N-1)(1+\tau)+2- \tau\Big{)}v^{\prime}(t)\\ &\quad-4\tau(1+\tau)^{2}\Big{(}\tau(1+\tau)\,t^{2}+(N-1)(1+\tau) +3-2\tau\Big{)}\,t\,v(t)\\ &=2(1-\tau)^{2}(1+\tau)\,t\,u^{\prime\prime}(t)+(1-\tau^{2}) \Big{(}5\tau(1+\tau)\,t^{2}+4(1-\tau)\Big{)}u^{\prime}(t)\\ &\quad+\tau(1+\tau)^{2}\Big{(}3\tau(1+\tau)\,t^{2}+7-9\tau\Big{)} \,t\,u(t).\end{split} \tag{4.40}\]
For this, we first prove the following lemma.
**Lemma 4.3**.: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have_
\[\begin{split}&\frac{2}{1+\tau}v^{\prime\prime}(t)-2(1-\tau)\,t\,v^ {\prime}(t)-2\Big{(}\tau(1+\tau)\,t^{2}+(1+\tau)(N-1)+1\Big{)}v(t)\\ &=\rho^{\prime}(t)+\tau(1+\tau)\,t\,\rho(t)-\frac{(\tau/2)^{N- \frac{3}{2}}}{(N-2)!}\sqrt{2\tau}\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2 }}{1+\tau}}}{\sqrt{2\pi}}H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,H^{ \prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,dx,\end{split} \tag{4.41}\]
_where \(\rho\) is given by (4.32)._
Proof.: Note that by (3.6), we have
\[v(t)=\frac{(\tau/2)^{N-\frac{3}{2}}}{1+\tau}\frac{1}{(N-2)!}\int_{\mathbb{R}} e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}H_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\,\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2} \Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx. \tag{4.42}\]
Therefore we have
\[v^{\prime}(t)=\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int_{\mathbb{R}}\frac{d }{dx}\bigg{[}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}H_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}} H_{N-2}\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx.\]
Here, by using the definition of \(\rho\) in (4.32), we have
\[\begin{split} v^{\prime}(t)&=(1+\tau)\,t\,v(t)+\frac{1+ \tau}{2}\rho(t)\\ &\quad+\,\frac{1}{\sqrt{2\tau}}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N- 2)!}\int_{\mathbb{R}}e^{tx}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}e^{-\frac{x^{2}}{2(1+\tau)}}\bigg{[}\int_{0}^{x}H_{N-2}\Big{(}\frac{u}{ \sqrt{2\tau}}\Big{)}\,\frac{e^{-\frac{u^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}\,du \bigg{]}\,dx.\end{split} \tag{4.43}\]
By differentiating this, we also obtain
\[\begin{split} v^{\prime\prime}(t)&=(1+\tau)\,v(t)+( 1+\tau)\,t\,v^{\prime}(t)+\frac{1+\tau}{2}\rho^{\prime}(t)\\ &\quad+\frac{1}{\sqrt{2\tau}}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N- 2)!}\int_{\mathbb{R}}x\,e^{tx}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}e^{-\frac{x^{2}}{2(1+\tau)}}\bigg{[}\int_{0}^{x}H_{N-2}\Big{(}\frac{u}{ \sqrt{2\tau}}\Big{)}\,\frac{e^{-\frac{u^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}\,du \bigg{]}\,dx.\end{split} \tag{4.44}\]
We now make use of the recurrence relation (4.18) to write
\[\begin{split}&(N-1)v(t)=-\frac{1}{2}\frac{(\tau/2)^{N-\frac{3}{2}} }{1+\tau}\frac{1}{(N-2)!}\\ &\quad\times\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+ \tau)}}}{\sqrt{2\pi}}\bigg{[}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau }}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}\bigg{]}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2} \Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx.\end{split} \tag{4.45}\]
As before, we write
\[\begin{split}&\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+ \tau)}}}{\sqrt{2\pi}}\bigg{[}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau }}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}\bigg{]}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2} \Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx\\ &=\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{ \sqrt{2\pi}}\bigg{[}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} -\frac{\sqrt{2\tau}}{1+\tau}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}\bigg{]}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2}\Big{(} \frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx\\ &\quad-\frac{1}{1+\tau}\sqrt{\frac{2}{\tau}}\int_{\mathbb{R}}e^{ tx}\,\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}\,x\,H^{\prime}_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+ \tau)}}H_{N-2}\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx.\end{split}\]
Then by applying (4.20) with
\[f(x)=e^{tx}\,\int_{0}^{x}\frac{e^{-\frac{u^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}H_{N- 2}\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du,\qquad g(x)=2\tau\,H_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)},\]
it follows that
\[\begin{split}&\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+ \tau)}}}{\sqrt{2\pi}}\bigg{[}H^{\prime\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau }}\Big{)}-\sqrt{\frac{2}{\tau}}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2 \tau}}\Big{)}\bigg{]}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2} \Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx\\ &=-\sqrt{2\tau}\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+ \tau)}}}{\sqrt{2\pi}}H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,H^{\prime}_{ N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,dx\\ &\quad-\sqrt{2\tau}\,t\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2 }}{2(1+\tau)}}}{\sqrt{2\pi}}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2}\Big{(}\frac{u}{ \sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx\\ &\quad-\frac{1}{1+\tau}\sqrt{\frac{2}{\tau}}\int_{\mathbb{R}}e^{ tx}\,\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{\sqrt{2\pi}}\,x\,H^{\prime}_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+ \tau)}}H_{N-2}\Big{(}\frac{u}{\sqrt{2\tau}}\Big{)}\,du\bigg{]}\,dx.\end{split}\]
Furthermore, by using (4.43) and (4.45), we obtain
\[\Big{(}2\tau(1+\tau)\,t^{2}+2(1+\tau)(N-1)\Big{)}v(t)-2\tau\,t\,v^{ \prime}(t)+\tau(1+\tau)\,t\,\rho(t)\] \[=\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\sqrt{2\tau}\int_{\mathbb{ R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}H_{N-2}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,dx\] \[\quad+\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\frac{1}{1+\tau} \sqrt{\frac{2}{\tau}}\int_{\mathbb{R}}e^{tx}\,\frac{e^{-\frac{x^{2}}{2(1+\tau) }}}{\sqrt{2\pi}}\,x\,H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} \bigg{[}\int_{0}^{x}e^{-\frac{u^{2}}{2(1+\tau)}}H_{N-2}\Big{(}\frac{u}{\sqrt{ 2\tau}}\Big{)}\,du\bigg{]}\,dx. \tag{4.46}\]
Combining this with (4.44), the lemma follows.
We now complete the proof of Proposition 4.2.
Proof of Proposition 4.2.: By taking the derivatives of (4.32) and using the Gaussian integration by parts (4.4), we have
\[\rho^{\prime}(t)-\frac{1+\tau}{2}\,t\,\rho(t)\] \[=\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,\frac{1}{\sqrt{2\tau}} \int_{\mathbb{R}}e^{tx}\,\bigg{[}H^{\prime}_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}+H_{N-2}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]} \frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx \tag{4.47}\]
and
\[2\tau\rho^{\prime\prime}(t)-\tau(1+\tau)\,t\,\rho^{\prime}(t)- \tau(1+\tau)\rho(t)\] \[=\frac{2(\tau/2)^{N-1}}{(N-2)!}\int_{\mathbb{R}}x\,e^{tx}\,\bigg{[} H^{\prime}_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}+H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{\prime}_{N -1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau} }}{\sqrt{2\pi}}\,dx. \tag{4.48}\]
Furthermore, by using (4.18) and a similar decomposition as above, we have
\[(N-1)\rho(t) =\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\sqrt{2 \tau}\,t\,\int_{\mathbb{R}}e^{tx}\,H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\] \[\quad+\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int _{\mathbb{R}}e^{tx}H^{\prime}_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{ \prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\] \[\quad-\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!} \Big{(}\frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}}\Big{)}\int_{\mathbb{ R}}e^{tx}\,H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,x\,H^{\prime}_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx\]
and
\[(N-2)\rho(t) =\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\sqrt{2 \tau}\,t\,\int_{\mathbb{R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)} H^{\prime}_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\] \[\quad+\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\int _{\mathbb{R}}e^{tx}H^{\prime}_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H^{ \prime}_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+ \tau}}}{\sqrt{2\pi}}\,dx\] \[\quad-\frac{1}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!} \Big{(}\frac{2\sqrt{2\tau}}{1+\tau}-\sqrt{\frac{2}{\tau}}\Big{)}\int_{\mathbb{ R}}e^{tx}\,H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\,x\,H^{\prime}_{N-2} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx.\]
Subtracting these equations, we obtain
\[\rho(t)=\frac{\sqrt{2\tau}}{1+\tau}\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,t\, \int_{\mathbb{R}}e^{tx}\,\bigg{[}H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N -1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}-H_{N-2}^{\prime}\Big{(}\frac{x }{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{ e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\]
Combining this expression with (4.47), we obtain
\[\tau\,\rho^{\prime}(t)-\frac{\tau(1+\tau)}{2}\,t\,\rho(t)+\frac{1+ \tau}{2}\frac{\rho(t)}{t}\] \[=\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\sqrt{2\tau}\,\int_{ \mathbb{R}}e^{tx}\,H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}^{\prime} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx\] \[\quad+\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\,\frac{1}{\sqrt{2 \tau}}\frac{1-\tau}{1+\tau}\,\frac{1}{t}\] \[\quad\times\int_{\mathbb{R}}x\,e^{tx}\,\bigg{[}H_{N-2}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}-H_{N-2}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx. \tag{4.49}\]
Then it follows from Lemma 4.3 and (4.49) that
\[\frac{2}{1+\tau}v^{\prime\prime}(t)-2(1-\tau)\,t\,v^{\prime}(t)-2 \Big{(}\tau(1+\tau)\,t^{2}+(1+\tau)(N-1)+1\Big{)}v(t)\] \[=(1-\tau)\rho^{\prime}(t)+\frac{1+\tau}{2}\Big{(}3\tau\,t-\frac{1 }{t}\Big{)}\rho(t)\] \[\quad+\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\frac{1}{\sqrt{2 \tau}}\frac{1-\tau}{1+\tau}\,\frac{1}{t}\] \[\quad\times\int_{\mathbb{R}}x\,e^{tx}\,\bigg{[}H_{N-2}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}-H_{N-2}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1}\Big{(} \frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2 \pi}}\,dx. \tag{4.50}\]
On the other hand, by differentiating (4.41) in Lemma 4.3 and using (4.48), we obtain
\[-\frac{(\tau/2)^{N-\frac{3}{2}}}{(N-2)!}\sqrt{2\tau}\int_{\mathbb{ R}}x\,e^{tx}\,\bigg{[}H_{N-2}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}H_{N-1} \Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}-H_{N-2}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}H_{N-1}^{\prime}\Big{(}\frac{x}{\sqrt{2\tau}}\Big{)}\bigg{]}\frac{e^{- \frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\,dx\] \[=2(1-\tau)\rho^{\prime\prime}(t)+3\tau(1+\tau)\,t\,\rho^{\prime}( t)+3\tau(1+\tau)\rho(t)\] \[\quad-\frac{4}{1+\tau}v^{\prime\prime\prime}(t)+4(1-\tau)\,t\,v^ {\prime\prime}(t)+4\Big{(}\tau(1+\tau)\,t^{2}+(1+\tau)(N-1)+2-\tau\Big{)}v^{ \prime}(t)+8\tau(1+\tau)\,t\,v(t).\]
Substituting this into (4.50), after simplifications, we obtain
\[4(1-\tau)v^{\prime\prime\prime}(t)-4(1+\tau)(\tau^{2}-3\tau+1)\,t \,v^{\prime\prime}(t)\] \[\quad-4(1-\tau^{2})\Big{(}2\tau(1+\tau)\,t^{2}+(N-1)(1+\tau)+2- \tau\Big{)}v^{\prime}(t)\] \[\quad-4\tau(1+\tau)^{2}\Big{(}\tau(1+\tau)\,t^{2}+(N-1)(1+\tau)+3- 2\tau\Big{)}\,t\,v(t)\] \[=2(1-\tau)^{2}(1+\tau)\rho^{\prime\prime}(t)+5\tau(1-\tau)(1+\tau )^{2}\,t\,\rho^{\prime}(t)+\tau(1+\tau)^{2}\Big{(}3\tau(1+\tau)\,t^{2}+2-4\tau \Big{)}\rho(t). \tag{4.51}\]
Finally, the desired equation (4.40) follows from (4.33).
## Appendix A Integrable structure of the elliptic GinOE
In this appendix, we give a brief exposition of the integrable structure of real eigenvalues of the elliptic GinOE established by Forrester and Nagao in [43], along with additional details on certain computations. In addition to this, we prove (1.41) and (1.44) in Remark 1.7.
Let us write
(A.1) \[C_{k}(x):=\Big{(}\frac{\tau}{2}\Big{)}^{k/2}H_{k}\Big{(}\frac{x}{\sqrt{2\tau}} \Big{)}\]
for the scaled monic Hermite polynomials. Note that
(A.2) \[\frac{d}{dx}C_{n}(x)=nC_{n-1}(x),\qquad x\,C_{n}(x)=C_{n+1}(x)+n\tau\,C_{n-1}( x).\]
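Both identities in (A.2) are direct consequences of the standard Hermite relations; for the reader's convenience, an illustrative SymPy sketch checking them for small \(n\) (an addition here, not part of the original exposition):

```python
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)

def C(k):  # scaled monic Hermite polynomial from (A.1)
    return (tau/2)**sp.Rational(k, 2)*sp.hermite(k, x/sp.sqrt(2*tau))

for n in range(1, 6):
    assert sp.simplify(sp.diff(C(n), x) - n*C(n - 1)) == 0       # derivative rule
    assert sp.simplify(x*C(n) - C(n + 1) - n*tau*C(n - 1)) == 0  # three-term recurrence
print("recurrences (A.2) verified for n = 1,...,5")
```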
It was shown in [43, Theorem 1] that
(A.3) \[p_{2k}(x):=C_{2k}(x),\qquad p_{2k+1}(x):=C_{2k+1}(x)-2k\,C_{2k-1}(x)\]
form skew-orthogonal polynomials associated with real eigenvalues of the elliptic GinOE. This is indeed one of the very few examples of an explicit construction of skew-orthogonal polynomials for asymmetric random matrices in the symmetry class of the GinOE. (See [7] for another example in the context of the chiral GinOE.) We also denote
(A.4) \[\begin{split}\Phi_{k}(x)&:=\int_{\mathbb{R}}\,\operatorname {sgn}(x-y)\,p_{k}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy\\ &=\int_{\mathbb{R}}p_{k}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy-2 \int_{x}^{\infty}p_{k}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy.\end{split}\]
Then one can notice that
(A.5) \[\frac{d}{dx}\Phi_{k}(x)=2\,p_{k}(x)e^{-\frac{x^{2}}{2(1+\tau)}}.\]
From the general theory of Pfaffian point processes, it follows that all correlation functions of real eigenvalues of the elliptic GinOE can be expressed in terms of the Pfaffian of a skew-kernel constructed using (A.3) and (A.4). In particular, the density of real eigenvalues is given by
(A.6) \[R_{N}(x)=\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{2\sqrt{2\pi}(1+\tau)}\,\sum_{k=0 }^{N/2-1}\frac{1}{(2k)!}\Big{(}\Phi_{2k}(x)p_{2k+1}(x)-\Phi_{2k+1}(x)p_{2k}(x) \Big{)},\]
see [43, Eq. (6.6)]. Each summation in this expression can be computed as follows; see [43, Eqs. (6.7) and (6.11)]. This leads to the expression of \(R_{N}\) in (3.4).
**Proposition A.1**.: _For any even integer \(N\geq 2\) and \(\tau\in[0,1]\), we have_
(A.7) \[-\sum_{k=0}^{N/2-1}\frac{\Phi_{2k+1}(x)p_{2k}(x)}{(2k)!}=2(1+\tau) e^{-\frac{x^{2}}{2(1+\tau)}}\sum_{k=0}^{N/2-1}\frac{C_{2k}(x)^{2}}{(2k)!},\] (A.8) \[\sum_{k=0}^{N/2-1}\frac{\Phi_{2k}(x)p_{2k+1}(x)}{(2k)!}=2(1+\tau) e^{-\frac{x^{2}}{2(1+\tau)}}\sum_{k=0}^{N/2-2}\frac{C_{2k+1}(x)^{2}}{(2k+1)!}+ \frac{C_{N-1}(x)\Phi_{N-2}(x)}{(N-2)!}.\]
_In particular, we have_
(A.9) \[R_{N}(x)=\frac{e^{-\frac{x^{2}}{1+\tau}}}{\sqrt{2\pi}}\sum_{k=0}^{N-2}\frac{ C_{k}(x)^{2}}{k!}+\frac{e^{-\frac{x^{2}}{2(1+\tau)}}}{2\sqrt{2\pi}(1+\tau)}\frac{C_{ N-1}(x)\Phi_{N-2}(x)}{(N-2)!}\]
_and_
(A.10) \[\int_{\mathbb{R}}R_{N}(x)\,dx=\sqrt{\frac{2}{\pi}}\sum_{k=0}^{N/2-1}\frac{(\tau/2 )^{2k}}{(2k)!}\int_{\mathbb{R}}e^{-\frac{x^{2}}{1+\tau}}H_{2k}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}^{2}\,dx.\]
Proof.: Using (A.2), one can observe that
(A.11) \[p_{2k+1}(x)=-(1+\tau)e^{\frac{x^{2}}{2(1+\tau)}}\frac{d}{dx}\Big{[}e^{-\frac{x ^{2}}{2(1+\tau)}}C_{2k}(x)\Big{]},\]
and
(A.12) \[p_{2k+2}(x)-(2k+1)p_{2k}(x)=-(1+\tau)e^{\frac{x^{2}}{2(1+\tau)}}\frac{d}{dx} \Big{[}e^{-\frac{x^{2}}{2(1+\tau)}}C_{2k+1}(x)\Big{]}.\]
Then by (A.11), we have
\[\Phi_{2k+1}(x)p_{2k}(x) =\int_{\mathbb{R}}\operatorname{sgn}(x-y)p_{2k+1}(y)p_{2k}(x)e^{- \frac{y^{2}}{2(1+\tau)}}\,dy\] \[=-(1+\tau)C_{2k}(x)\bigg{(}\int_{-\infty}^{x}\frac{d}{dy}\Big{[}e ^{-\frac{y^{2}}{2(1+\tau)}}C_{2k}(y)\Big{]}\,dy-\int_{x}^{\infty}\frac{d}{dy} \Big{[}e^{-\frac{y^{2}}{2(1+\tau)}}C_{2k}(y)\Big{]}\,dy\bigg{)}\] \[=-2(1+\tau)e^{-\frac{x^{2}}{2(1+\tau)}}C_{2k}(x)^{2},\]
which gives (A.7). On the other hand, since
\[\Phi_{2k}(x)p_{2k+1}(x) =\int_{\mathbb{R}}\operatorname{sgn}(x-y)p_{2k}(y)p_{2k+1}(x)e^{- \frac{y^{2}}{2(1+\tau)}}\,dy\] \[=\Big{(}C_{2k+1}(x)-2kC_{2k-1}(x)\Big{)}\int_{\mathbb{R}} \operatorname{sgn}(x-y)C_{2k}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy,\]
we have
\[\sum_{k=0}^{N/2-1}\frac{\Phi_{2k}(x)p_{2k+1}(x)}{(2k)!} =\sum_{k=0}^{N/2-1}\frac{C_{2k+1}(x)-2kC_{2k-1}(x)}{(2k)!}\int_{ \mathbb{R}}\operatorname{sgn}(x-y)C_{2k}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy\] \[=\sum_{k=0}^{N/2-1}\frac{C_{2k+1}(x)}{(2k)!}\int_{\mathbb{R}} \operatorname{sgn}(x-y)C_{2k}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy\] \[\quad-\sum_{k=0}^{N/2-2}\frac{C_{2k+1}(x)}{(2k+1)!}\int_{\mathbb{R }}\operatorname{sgn}(x-y)C_{2k+2}(y)e^{-\frac{y^{2}}{2(1+\tau)}}\,dy.\]
It now follows from (A.12) that
\[\sum_{k=0}^{N/2-1}\frac{\Phi_{2k}(x)p_{2k+1}(x)}{(2k)!}-\frac{C_ {N-1}(x)\Phi_{N-2}(x)}{(N-2)!}\] \[=-\sum_{k=0}^{N/2-2}\frac{C_{2k+1}(x)}{(2k+1)!}\int_{\mathbb{R}} \operatorname{sgn}(x-y)\Big{(}C_{2k+2}(y)-(2k+1)C_{2k}(y)\Big{)}e^{-\frac{y^{2 }}{2(1+\tau)}}\,dy\] \[=(1+\tau)\sum_{k=0}^{N/2-2}\frac{C_{2k+1}(x)}{(2k+1)!}\bigg{(} \int_{-\infty}^{x}\frac{d}{dy}\Big{[}e^{-\frac{y^{2}}{2(1+\tau)}}C_{2k+1}(y) \Big{]}\,dy-\int_{x}^{\infty}\frac{d}{dy}\Big{[}e^{-\frac{y^{2}}{2(1+\tau)}}C_{ 2k+1}(y)\Big{]}\,dy\bigg{)}.\]
This gives rise to (A.8). Now the expression (A.9) immediately follows from (A.6), (A.7) and (A.8). Note that by (A.5), we have
(A.13) \[2e^{-\frac{x^{2}}{2(1+\tau)}}\Big{(}\Phi_{2k}(x)p_{2k+1}(x)+\Phi_{2k+1}(x)p_{2k} (x)\Big{)}=\frac{d}{dx}\Big{[}\Phi_{2k}(x)\Phi_{2k+1}(x)\Big{]}.\]
Therefore, we have
(A.14) \[\sum_{k=0}^{N/2-1}\int_{\mathbb{R}}e^{-\frac{x^{2}}{2(1+\tau)}}\frac{\Phi_{2k} (x)p_{2k+1}(x)}{(2k)!}\,dx=-\sum_{k=0}^{N/2-1}\int_{\mathbb{R}}e^{-\frac{x^{2} }{2(1+\tau)}}\frac{\Phi_{2k+1}(x)p_{2k}(x)}{(2k)!}\,dx,\]
which gives (A.10).
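As an added numerical cross-check of (A.9) and (A.10) (illustrative, and not part of the original computation), one can evaluate both sides by quadrature for a small even \(N\); the nested integrals make this slow but adequate as a sanity check.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from math import factorial, pi

tau, N = 0.5, 4  # arbitrary test values; N even

def H(k, u):  # physicists' Hermite polynomial H_k(u)
    return hermval(u, [0.0]*k + [1.0])

def C(k, x):  # scaled monic Hermite polynomial (A.1)
    return (tau/2)**(k/2)*H(k, x/np.sqrt(2*tau))

def Phi(k, x):  # (A.4) with p_k = C_k for even k, cf. (A.3)
    f = lambda y: C(k, y)*np.exp(-y**2/(2*(1 + tau)))
    total = quad(f, -np.inf, np.inf)[0]
    tail = quad(f, x, np.inf)[0]
    return total - 2*tail

def R(x):  # density of real eigenvalues (A.9)
    s = sum(C(k, x)**2/factorial(k) for k in range(N - 1))
    first = np.exp(-x**2/(1 + tau))/np.sqrt(2*pi)*s
    second = (np.exp(-x**2/(2*(1 + tau)))/(2*np.sqrt(2*pi)*(1 + tau))
              *C(N - 1, x)*Phi(N - 2, x)/factorial(N - 2))
    return first + second

lhs = quad(R, -np.inf, np.inf)[0]  # expected number of real eigenvalues
rhs = sum(np.sqrt(2/pi)*(tau/2)**(2*k)/factorial(2*k)
          *quad(lambda x, k=k: np.exp(-x**2/(1 + tau))*H(2*k, x/np.sqrt(2*tau))**2,
                -np.inf, np.inf)[0]
          for k in range(N//2))
print(lhs, rhs)  # both sides of (A.10) agree up to quadrature error
```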
**Remark A.2**.: We also note that the expression (1.13) follows from (A.10) and
(A.15) \[\int_{\mathbb{R}}e^{-\frac{x^{2}}{1+\tau}}H_{2k}\Big{(}\frac{x}{ \sqrt{2\tau}}\Big{)}^{2}\,dx=\sqrt{2\tau}\int_{\mathbb{R}}e^{-\frac{2\tau}{1+ \tau}u^{2}}H_{2k}(u)^{2}\,du\] \[=\sqrt{2\tau}\,2^{2k-\frac{1}{2}}\Big{(}\frac{\tau}{1+\tau} \Big{)}^{-2k-\frac{1}{2}}\Big{(}\frac{1-\tau}{1+\tau}\Big{)}^{2k}\Gamma\Big{(} 2k+\frac{1}{2}\Big{)}_{2}F_{1}\Big{(}-2k,-2k;\frac{1}{2}-2k;-\frac{\tau}{1- \tau}\Big{)}\] \[=\Big{(}\frac{1+\tau}{1-\tau}\Big{)}^{\frac{1}{2}}\Big{(}\frac{ \tau}{2}\Big{)}^{-2k}\Gamma\Big{(}2k+\frac{1}{2}\Big{)}_{2}F_{1}\Big{(}\frac{ 1}{2},\frac{1}{2};\frac{1}{2}-2k;-\frac{\tau}{1-\tau}\Big{)},\]
where we have used [53, (7.374-5)] and Euler's transformation [67, Eq. (15.8.1)].
Now we prove (1.41) and (1.44). Before the proof, we mention that the function \(M_{0}^{\rm w}\) also appears in the seemingly different context of the number variance of the GinUE; see [5, Proposition 2.4]. For the derivations, we make use of the power series expansion of the error function
(A.16) \[\operatorname{erf}(z)=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{ n!\,(2n+1)}z^{2n+1},\]
see [67, Eq. (7.6.1)]. By combining (A.16) with the Euler beta integral
(A.17) \[\int_{0}^{1}s^{2p}\,(1-s^{2})^{n+\frac{1}{2}}\,ds=\frac{\Gamma(n+\frac{3}{2}) \Gamma(p+\frac{1}{2})}{2\,\Gamma(n+p+2)},\]
we obtain
\[\frac{1}{2\alpha\sqrt{\pi}}\int_{-2}^{2}x^{2p}\,\operatorname{ erf}\Big{(}\frac{\alpha}{2}\sqrt{4-x^{2}}\Big{)}\,dx =\frac{1}{\alpha\pi}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!\,(2n+1 )}\Big{(}\frac{\alpha}{2}\Big{)}^{2n+1}\int_{-2}^{2}x^{2p}\,(4-x^{2})^{n+ \frac{1}{2}}\,dx\] \[=\frac{1}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^{n}\alpha^{2n}2^{2p+ 1}}{n!\,(2n+1)}\frac{\Gamma(n+\frac{3}{2})\Gamma(p+\frac{1}{2})}{\Gamma(n+p+2 )}.\]
This gives rise to (1.41). On the other hand, (1.44) can be derived using the expansions
(A.18) \[I_{0}\Big{(}\frac{\alpha^{2}}{2}\Big{)}e^{-\frac{\alpha^{2}}{2}} =\sum_{k=0}^{\infty}\frac{(2k-1)!!}{(k!)^{2}}(-1)^{k}\Big{(}\frac {\alpha^{2}}{2}\Big{)}^{k},\] (A.19) \[I_{1}\Big{(}\frac{\alpha^{2}}{2}\Big{)}e^{-\frac{\alpha^{2}}{2}} =\sum_{k=0}^{\infty}\frac{(2k-1)!!}{(k-1)!(k+1)!}(-1)^{k+1}\Big{(} \frac{\alpha^{2}}{2}\Big{)}^{k},\]
both of which follow from the definition (1.43) of the modified Bessel function. |
2304.00074 | Bayesian Clustering via Fusing of Localized Densities | Bayesian clustering typically relies on mixture models, with each component
interpreted as a different cluster. After defining a prior for the component
parameters and weights, Markov chain Monte Carlo (MCMC) algorithms are commonly
used to produce samples from the posterior distribution of the component
labels. The data are then clustered by minimizing the expectation of a
clustering loss function that favours similarity to the component labels.
Unfortunately, although these approaches are routinely implemented, clustering
results are highly sensitive to kernel misspecification. For example, if
Gaussian kernels are used but the true density of data within a cluster is even
slightly non-Gaussian, then clusters will be broken into multiple Gaussian
components. To address this problem, we develop Fusing of Localized Densities
(FOLD), a novel clustering method that melds components together using the
posterior of the kernels. FOLD has a fully Bayesian decision theoretic
justification, naturally leads to uncertainty quantification, can be easily
implemented as an add-on to MCMC algorithms for mixtures, and favours a small
number of distinct clusters. We provide theoretical support for FOLD including
clustering optimality under kernel misspecification. In simulated experiments
and real data, FOLD outperforms competitors by minimizing the number of
clusters while inferring meaningful group structure. | Alexander Dombowsky, David B. Dunson | 2023-03-31T18:47:22Z | http://arxiv.org/abs/2304.00074v3 | # Bayesian Clustering via Fusing of Localized Densities
###### Abstract
Bayesian clustering typically relies on mixture models, with each component interpreted as a different cluster. After defining a prior for the component parameters and weights, Markov chain Monte Carlo (MCMC) algorithms are commonly used to produce samples from the posterior distribution of the component labels. The data are then clustered by minimizing the expectation of a clustering loss function that favours similarity to the component labels. Unfortunately, although these approaches are routinely implemented, clustering results are highly sensitive to kernel misspecification. For example, if Gaussian kernels are used but the true density of data within a cluster is even slightly non-Gaussian, then clusters will be broken into multiple Gaussian components. To address this problem, we develop Fusing of Localized Densities (FOLD), a novel clustering method that melds components together using the posterior of the kernels. FOLD has a fully Bayesian decision theoretic justification, naturally leads to uncertainty quantification, can be easily implemented as an add-on to MCMC algorithms for mixtures, and favours a small number of distinct clusters. We provide theoretical support for FOLD including clustering optimality under kernel misspecification. In simulated experiments and real data, FOLD outperforms competitors by minimizing the number of clusters while inferring meaningful group structure.
_Keywords--_ Bayes; Clustering; Decision theory; Kernel misspecification; Mixture models; Statistical distances
## 1 Introduction
Clustering data into groups of relatively similar observations is a canonical task in exploratory data analysis. Algorithmic clustering methods, such as k-means, k-medoids, and hierarchical clustering, rely on dissimilarity metrics, an approach which is often heuristic but may perform well in practice; see Hastie et al. (2009), Jain (2010), and Kiselev et al. (2019) for an overview. In comparison, model-based clustering methods utilize mixtures of probability kernels to cluster data, ordinarily by inferring the component labels (Fraley and Raftery, 2002). Choices of kernel depend on the type of data being considered, with the Gaussian mixture model (GMM) particularly popular for continuous data. A conceptual advantage of model-based methods is the ability to express uncertainty in clustering. For example, the expectation-maximization (EM) algorithm uses maximum likelihood estimates of the component-specific parameters and weights to calculate each observations' posterior component allocation probabilities, which are interpreted as a measure of clustering uncertainty (Bensmail et al., 1997). As estimation of the weights and component-specific parameters is crucial for model-based clustering, there is a wide literature on quantifying rates of convergence for various mixture models, which is usually expressed in terms of the mixing measure (see Guha et al., 2021 and references therein).
Bayesian mixture models have received increased attention as a clustering method in recent years. The Bayesian framework can account for uncertainty in the number of components and incorporate prior information on the component-specific parameters, with Markov chain Monte Carlo (MCMC) algorithms employed to generate posterior samples for the mixture weights, kernel parameters, and component labels for each data point. Based on posterior samples of the component labels, one can obtain Monte Carlo approximations to Bayes clustering estimators. For \(d\)-dimensional data \(\mathbf{X}=(X_{1},\ldots,X_{n})\), the Bayes estimator \(\mathbf{c}^{*}\) is the minimizer of an expected loss function conditional on \(\mathbf{X}\) over all possible clusterings \(\mathbf{c}=(c_{1},\ldots,c_{n})\). There are several popular choices of loss, including Binder's (Binder, 1978) and Variation of Information (VI) (Meila, 2007), which are distance metrics over the space of partitions and favour clusterings that are similar to the component labels. Further details on set partitions and their role in Bayesian clustering can be found in Meila (2007), Wade and Ghahramani (2018), and Paganin et al. (2021). Several measures of uncertainty in clustering with Bayesian mixtures exist, including the posterior similarity matrix and credible balls of partitions (Wade and Ghahramani, 2018).

Figure 1: An example of over-clustering. A Bayesian Gaussian mixture model with 10 components and Dirichlet prior concentration parameter equal to 1/10 is fit to data generated from a mixture of skew Gaussian distributions with overlapping densities.
However, model-based clustering approaches, including Bayesian implementations, can be brittle in applications due to unavoidable impacts of kernel misspecification. Mixtures will often induce over-clustering in this setting by dividing dense groups of observations whose empirical density cannot be approximated by members of the assumed kernel family. An example of this phenomenon is shown in Figure 1, where a 10-component Bayesian GMM is fit to data generated from a mixture of bivariate skew Gaussian distributions. Despite using a concentration parameter of 1/10 in the symmetric Dirichlet prior to induce a small number of clusters (Rousseau and Mengersen, 2011), the GMM allocates the data into 10 poorly defined groups. A group of observations in the bottom third of the sample space is split into two, while several observations are placed into their own groups despite being near dense collections of data.
Several approaches have been proposed to address the issue of over-clustering due to kernel misspecification. A natural solution is to define a flexible class of kernels, exemplified by the mixtures in Karlis and Santourian (2009), Juarez and Steel (2010), O'Hagan et al. (2016), Tortora et al. (2019), and Dang et al. (2023). To increase flexibility further, Rodriguez and Walker (2014) propose a mixture of nonparametric unimodal kernels. Similarly, Bartolucci (2005), Li (2005), Di Zio et al. (2007), and Malsiner-Walli et al. (2017) use carefully chosen mixtures of Gaussians to characterize the data within each cluster. However, there is an unfortunate pitfall with the general strategy of using flexible families of kernels. In particular, as the flexibility of the kernel increases, identifiability and optimal estimation rates for inferring the mixing measure tend to weaken, especially when the true number of mixture components is unknown (Nguyen, 2013; Ho and Nguyen, 2016; Heinrich and Kahn, 2018). Even the transition from location Gaussian kernels with known covariance to location-scale Gaussian kernels can have substantial consequences on the convergence rate of the component means (Manole and Ho, 2022). Such problems motivated Ho et al. (2020) to propose an alternative estimator of the mixing measure that is more robust than maximum likelihood and Bayesian approaches and can achieve optimal convergence rates.
Alternatively, one can develop generalized Bayesian methods of clustering that avoid defining a fully generative probabilistic model for the data. For example, Duan and Dunson (2021) propose to conduct model-based clustering based on a pairwise distance matrix instead of the data directly to reduce sensitivity to kernel misspecification. Rigon et al. (2023) instead define a Gibbs posterior for clustering incorporating a clustering loss function in place of a likelihood function, completely bypassing modeling of the data. An alternative that maintains a fully generative model while robustifying inferences to misspecification is to use a coarsened posterior (Miller and Dunson, 2019; Gorsky et al., 2023). This coarsening approach often has good practical performance in reducing over-clustering due to kernel misspecification.
Unfortunately, these approaches for accommodating kernel misspecification have various drawbacks. Along with slower convergence rates, flexible kernels typically require a large number of parameters, worsening the already burdensome computational cost of Bayesian
clustering. The generalized Bayes approaches can perform well in certain settings. However, both Gibbs posteriors and coarsened posteriors are highly sensitive to key tuning parameters, which can be difficult to choose objectively in practice.
A key advance is given in Aragam et al. (2020), which attempts to solve the problem of clustering based on a mixture model by merging components in a Gaussian mixture. They rely on a two-stage procedure that lacks uncertainty quantification and assumes that the true number of kernels is known. However, the approach of viewing clusters as arising from merging closely overlapping kernels is promising. Related merging ideas have been implemented in both frequentist and Bayesian approaches in a variety of settings, and several algorithms exist for deciding how and when to combine components together (Chan et al., 2008; Baudry et al., 2010; Hennig, 2010; Melnykov, 2016; Guha et al., 2021; Manole and Khalili, 2021).
In this paper, we propose a novel decision theoretic method for Bayesian clustering that mitigates the effects of model misspecification. Let \(X_{1},\ldots,X_{n}\sim f_{0}\), where \(f_{0}\) is an unknown true probability density. Suppose we model the data with a Bayesian mixture model, with \(L\) components, component labels \(\mathbf{s}=(s_{1},\ldots,s_{n})\), component-specific atoms \(\theta_{1},\ldots,\theta_{L}\), and kernels \(g(\theta_{1}),\ldots,g(\theta_{L})\). Rather than focusing on \(\mathbf{s}\), we compute clusters with the localized densities \(\left\{g(\theta_{s_{i}})\right\}_{i=1}^{n}\). We define a loss function for any clustering \(\widehat{\mathbf{c}}\) that favours allocating \(i\) and \(j\) to the same cluster when the Hellinger distance between the densities \(g(\theta_{s_{i}})\) and \(g(\theta_{s_{j}})\) is small, encouraging grouping observations with overlapping component kernels. We cluster \(X_{1},\ldots,X_{n}\) with a Bayes estimator, interpreted as a Fusing of Localized Densities (FOLD). Our method has a fully decision theoretic justification, leads to interpretable uncertainty quantification, and can be readily implemented using the output of existing MCMC algorithms for mixtures.
Though previous methods have utilized merging kernels to account for kernel misspecification, to our knowledge none have a formal Bayesian decision theoretic justification. Although FOLD requires combinatorial optimization, we employ a method for speeding up computation based on hierarchical clustering of the Hellinger distances between the localized densities. We show concentration of the joint posterior for the weights and atoms as a byproduct of contraction of the mixing measure (Nguyen, 2013; Ho and Nguyen, 2016; Guha et al., 2021). This allows us to show clustering optimality of FOLD in misspecified and well-specified kernel regimes.
In Section 2, we explain our clustering method from a Bayesian decision theoretic perspective, provide a framework for uncertainty quantification, and demonstrate how to implement FOLD in practice. In Section 3, we show asymptotic concentration of FOLD for misspecified kernel regimes. We evaluate FOLD against other model-based clustering methods on simulated examples in Section 4, and consider an application to a cell line data set in Section 5. Finally, we provide concluding remarks and some extensions in Section 6.
## 2 Clustering with Localized Densities
### Notation and Setup
Let \(X_{i}=(X_{i1},\ldots,X_{id})\in\mathbb{R}^{d}\) be multivariate observations, collected into \(\mathbf{X}=(X_{1},\ldots,X_{n})\). Assume that \(X_{1},\ldots,X_{n}\) are generated from an unknown mixture model: \(X_{i}\sim f_{0}\), where \(f_{0}=\sum_{k=1}^{K_{0}}\pi_{0k}\tau_{0k}\), \(\pi_{0k}>0\) for \(k=1,\ldots,K_{0}\), \(\sum_{k=1}^{K_{0}}\pi_{0k}=1\), and \(K_{0}<\infty\). Since \(f_{0}\) is a mixture, the data generating process can be equivalently stated with the addition of latent variables \(s_{0i}\in\{1,\ldots,K_{0}\}\), so that \((X_{i}\mid s_{0i})\sim\tau_{0s_{0i}}\) for all \(i=1,\ldots,n\). \(\mathbb{P}_{0}\) refers to probability and convergence statements with respect to the true data generating process, with \(\mathbb{P}_{0}(A)=\int_{A}f_{0}(x)dx\) for any set \(A\) and \(\mathbb{E}_{0}\left\{\phi(X)\right\}=\int_{\mathbb{R}^{d}}\phi(x)f_{0}(x)dx\) for any function \(\phi\).
Let \(\Theta\) denote a parameter space and \(\mathcal{G}=\left\{g(\theta):\theta\in\Theta\right\}\) be a family of continuous parametric probability distributions with supports on \(\mathbb{R}^{d}\). Suppose a Bayesian mixture model over \(\mathcal{G}\) is fit to \(\mathbf{X}\): \(X_{i}\sim f\), where \(f=\sum_{l=1}^{L}a_{l}g(\theta_{l})\) and \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{L})\). We set \(\lambda=\sum_{l=1}^{L}a_{l}\delta_{\theta_{l}}\) as the mixing measure associated with \(f\), with \(\delta_{\theta}\) a degenerate measure concentrated at the atom \(\theta\), and will occasionally use the notation \(f^{(\lambda)}\) instead of \(f\) to reflect this dependence. For all parameters, \(\Pi(\cdot)\) is the prior distribution and \(\Pi(\cdot\mid\mathbf{X})\) is the posterior distribution. We are primarily motivated to address the misspecified kernel case, in which \(\tau_{0k}\neq g(\theta_{l})\) for any choice of \(\theta_{l}\). However, our method can be applied to the well-specified case as well.
Let \(\mathbf{s}=(s_{1},\ldots,s_{n})\), \(s_{i}\in\{1,\ldots,L\}\), be component labels, inducing the partition \(S=\{S_{1},\ldots,S_{L}\}\) of \(\{1,\ldots,n\}\). Exchangeable priors on partitions are referred to as Exchangeable Partition Probability Functions (EPPFs) (Pitman, 1995). If \(\Pi(\mathbf{s})\) is an EPPF then only the sizes \(|S_{1}|,\ldots,|S_{L}|\) and number of non-empty partition sets impact the prior probability \(\Pi(\mathbf{s})\). Assuming the prior for the atoms is independent of the labels, we obtain the joint posterior,
\[\Pi(\mathbf{s},\mathbf{\theta}\mid\mathbf{X})\propto\Pi(\mathbf{s})\Pi(\mathbf{\theta})\prod_{i=1 }^{n}g(X_{i};\theta_{s_{i}}). \tag{1}\]
There is a rich literature on MCMC algorithms for sampling from (1). Many choices of \(\Pi(\mathbf{s})\) can be used in practice, ranging from the EPPF corresponding to a finite symmetric Dirichlet prior to more elaborate choices such as the Pitman-Yor process. Regardless of the choice of \(\Pi(\mathbf{s})\), when the prior distribution on the atoms is conjugate to \(\mathcal{G}\), Gibbs sampling tends to be the most popular choice for posterior sampling.
To cluster \(\mathbf{X}\), we typically calculate the Bayes estimator with respect to a clustering loss function that favours similarity to \(\mathbf{s}\). A decision theoretic approach to Bayesian clustering was proposed in Binder (1978), which introduced Binder's loss function. This loss penalizes allocating two observations to the same cluster when they are generated from different mixture components (and vice versa), and is thoroughly examined in Lau and Green (2007). Wade and Ghahramani (2018) advocate for the Variation of Information (VI) loss (Meila, 2007), which is motivated by information theory and often produces a smaller number of clusters than Binder's loss. Generalized versions of Binder's and VI loss are proposed in Dahl et al. (2022), and tend to improve performance. Various statistical software packages exist to obtain Bayesian clustering estimators under these losses using samples from \(\Pi(\mathbf{s}\mid\mathbf{X})\) as input, including mcclust (Fritsch, 2022), mcclust.ext (Wade, 2015), and salso (Dahl et al., 2020).
### Decision Theory Formulation
We aim to estimate a clustering \(\widehat{\mathbf{c}}=(\hat{c}_{1},\ldots,\hat{c}_{n})\) of \(\mathbf{X}\) and partition \(\widehat{\mathbf{C}}=\left\{\hat{C}_{1},\ldots,\hat{C}_{\hat{K}_{n}}\right\}\) based on merging components from the posterior (1) that have similar kernels \(g(\theta_{l})\). The motivation here is to remedy the cluster splitting that occurs when multiple parametric kernels \(g(\theta_{l})\) are used to represent each 'true' kernel \(\tau_{0k}\). We refer to \(\left\{g(\theta_{s_{i}})\right\}_{i=1}^{n}\) as _localized densities_, corresponding to the density of each data point, \((X_{i}\mid s_{i},\mathbf{\theta})\sim g(\theta_{s_{i}})\), under the assumed mixture model being fit to the data.
To counteract cluster splitting, we will define the loss of assigning two observations to the same or to different groups as a function of the statistical distance between their localized densities. This loss is motivated by the notion that data generated from the same \(\tau_{0k}\) will tend to be assigned overlapping kernels under the misspecified mixture. Figure 2 shows the behaviour of the posterior distributions of the localized densities when the model \(f=\sum_{l=1}^{30}a_{l}\mathcal{N}_{2}(\theta_{l},0.02\mathbf{I}_{2})\) is fit to a version of the moons data. The red points plot \(\mathbb{E}_{\Pi}(\theta_{s_{i}}\mid\mathbf{X})\) for each \(i=1,\ldots,n\), with gray circles representing the 95% high density regions of a \(\mathcal{N}_{2}(\mathbb{E}_{\Pi}(\theta_{s_{i}}\mid\mathbf{X}),0.02\mathbf{I}_{2})\) distribution. The crescent clusters are split into multiple overlapping Gaussian kernels, but are recovered by fusing these kernels together.
We define the loss of any clustering \(\widehat{\mathbf{c}}\) resulting from the component labels \(\mathbf{s}\) of a mixture over \(\mathcal{G}\) to be
\[\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})=\sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\mathcal{H}_{ij}+\omega\mathbf{1}_{\hat{c}_{i}\neq\hat{c}_{j}}\left(1-\mathcal{H}_{ij}\right)\right\}, \tag{2}\]
where \(\omega>0\) and \(\mathcal{H}_{ij}=h\left\{g(\theta_{s_{i}}),g(\theta_{s_{j}})\right\}\) is the Hellinger distance (Hellinger, 1909) between \(g(\theta_{s_{i}})\) and \(g(\theta_{s_{j}})\). Observe that \(\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) is non-negative for any \(\widehat{\mathbf{c}}\) since \(\mathcal{H}_{ij}\in[0,1]\). The loss of assigning \(\hat{c}_{i}=\hat{c}_{j}\) is \(\mathcal{H}_{ij}\). If \(s_{i}=s_{j}\), then there is no loss incurred. When \(s_{i}\neq s_{j}\), the loss will remain small when the kernels \(g(\theta_{s_{i}}),g(\theta_{s_{j}})\) are similar under the Hellinger distance. Conversely, allocating \(i\) and \(j\) to different clusters results in a loss of \(\omega(1-\mathcal{H}_{ij})\). If \(\mathcal{H}_{ij}=1\), which occurs when the supports of \(g(\theta_{s_{i}})\) and \(g(\theta_{s_{j}})\) are disjoint, then setting \(\hat{c}_{i}\neq\hat{c}_{j}\) accumulates zero loss. Otherwise, the loss depends on the Hellinger separation between the localized densities and the loss parameter \(\omega\).
Figure 2: A Bayesian location Gaussian mixture is fit to a version of the moons data set from the RSSL package (Krijthe and Loog, 2015; Krijthe, 2016). The localized densities are inferred using an MCMC algorithm, then melded together into clusters based on their pairwise Hellinger distance.
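To make the loss (2) concrete, the following sketch evaluates it for a candidate clustering given the component labels and the pairwise Hellinger distances between component kernels. This is an illustration added here, not code from the original paper; the function and argument names (fold_loss, hell) are hypothetical.

```python
import numpy as np

def fold_loss(c_hat, s, hell, omega):
    """Evaluate the loss in (2).

    c_hat : length-n array of candidate cluster labels
    s     : length-n array of mixture component labels
    hell  : L x L matrix with hell[l, m] = Hellinger distance between
            g(theta_l) and g(theta_m)
    omega : separation parameter, omega > 0
    """
    n, loss = len(c_hat), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            H_ij = hell[s[i], s[j]]  # distance between localized densities
            loss += H_ij if c_hat[i] == c_hat[j] else omega*(1.0 - H_ij)
    return loss
```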
Our loss \(\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) is invariant to permutations of the data indices and of the labels in either \(\widehat{\mathbf{c}}\) or \(\mathbf{s}\), which is desirable for clustering losses (Binder, 1978). In addition, \(\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) is a continuous relaxation of Binder's loss,
\[\mathcal{L}_{\mathrm{B}}(\widehat{\mathbf{c}},\mathbf{s})=\sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\mathbf{1}_{s_{i}\neq s_{j}}+\omega\mathbf{1}_{\hat{c}_{i}\neq\hat{c}_{j}}\mathbf{1}_{s_{i}=s_{j}}\right\}. \tag{3}\]
A key property of Binder's loss is that it is a quasimetric over the space of partitions (Wade and Ghahramani, 2018; Dahl et al., 2022). \(\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) is not a quasimetric in general, but instead aims to mitigate the problem of cluster splitting arising under kernel misspecification. One can show that \(\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) can be rewritten as the sum of \(\mathcal{L}_{\mathrm{B}}(\widehat{\mathbf{c}},\mathbf{s})\) and a remainder term \(\mathcal{B}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s})\), and that \(\mathcal{L}_{\mathcal{G}}(\mathbf{s},\mathbf{s})=0\) only when the components of \(f\) are completely separated. The closed form of the remainder \(\mathcal{B}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s})\) is given in Appendix A.1, and is interpreted as an added cost to Binder's loss that encourages merging components.
**Proposition 1**.: Let \(\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) be the loss function defined in (2) and \(\mathcal{L}_{\mathrm{B}}(\widehat{\mathbf{c}},\mathbf{s})\) be the form of Binder's loss in (3). Then,
\[\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})=\mathcal{L}_{\mathrm{B}}(\widehat{\mathbf{c}},\mathbf{s})+\mathcal{B}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s}), \tag{4}\]
where \(\mathcal{B}_{\mathcal{G}}(\mathbf{s},\mathbf{s})=0\) if and only if \(h(g(\theta_{m}),g(\theta_{l}))=1\) for all component pairs \(m,l=1,\ldots,L\).
The parameter \(\omega\) on the second term in our loss calibrates separation of the clusters. For example, suppose we compare the clustering \(\widehat{\mathbf{c}}_{1}\), which includes clusters \(\hat{C}_{h}\) and \(\hat{C}_{k}\), with the clustering \(\widehat{\mathbf{c}}_{2}\), which is equivalent to \(\widehat{\mathbf{c}}_{1}\) but now contains the merged cluster \(\hat{C}_{h}\cup\hat{C}_{k}\). The difference in their losses is
\[\mathcal{L}_{\mathcal{G}}(\widehat{\mathbf{c}}_{1},\mathbf{s})-\mathcal{L}_{\mathcal{G }}(\widehat{\mathbf{c}}_{2},\mathbf{s})=\omega\sum_{i\in\hat{C}_{h},j\in\hat{C}_{k}} \left(1-\mathcal{H}_{ij}\right)-\sum_{i\in\hat{C}_{h},j\in\hat{C}_{k}}\mathcal{ H}_{ij}.\]
The loss of \(\widehat{\mathbf{c}}_{2}\) is less than that of \(\widehat{\mathbf{c}}_{1}\) when
\[\frac{1}{|\hat{C}_{h}||\hat{C}_{k}|}\sum_{i\in\hat{C}_{h},j\in\hat{C}_{k}} \mathcal{H}_{ij}<\gamma:=\frac{\omega}{1+\omega}. \tag{5}\]
This implies that large values of \(\omega\) promote fusing clusters, while smaller values lead to more clusters with less heterogeneity between them. When \(\widehat{\mathbf{c}}_{1}=\mathbf{s}\), it is clear that a smaller loss can be attained by merging components with average pairwise Hellinger distance less than \(\gamma\). Trivial clusterings are favoured when \(\omega\) is taken to its lower and upper limits, similar to both Binder's loss and the VI loss (Wade and Ghahramani, 2018; Dahl et al., 2022). As \(\omega\to 0\), our loss is minimized by placing each observation in its own cluster, while, as \(\omega\to\infty\), all observations are placed in a single cluster.
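In code, the comparison in (5) reduces to a threshold check at \(\gamma=\omega/(1+\omega)\). The sketch below is illustrative (the names prefer_merge and hell_obs are hypothetical); replacing the Hellinger distances \(\mathcal{H}_{ij}\) by their posterior expectations \(\Delta_{ij}\) gives the analogous rule at the level of the risk introduced next.

```python
import numpy as np

def prefer_merge(Ch, Ck, hell_obs, omega):
    """Return True when merging clusters Ch and Ck (index sets) lowers the
    loss, i.e., when the average pairwise Hellinger distance between the
    localized densities, hell_obs[i, j], falls below gamma = omega/(1 + omega),
    as in (5)."""
    gamma = omega/(1.0 + omega)
    avg = np.mean([hell_obs[i, j] for i in Ch for j in Ck])
    return avg < gamma
```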
The risk of any clustering \(\widehat{\mathbf{c}}\) is the posterior expectation of its loss, which integrates over the uncertainty in the labels and atoms after observing the data;
\[\mathcal{R}_{\mathcal{G}}(\widehat{\mathbf{c}}\mid\mathbf{X})=\mathbb{E} _{\Pi}\left\{\mathcal{L}_{g}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\mid\mathbf{X}\right\}\] \[=\sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\Delta_{ij}+ \omega\mathbf{1}_{\hat{c}_{i}\neq\hat{c}_{j}}\left(1-\Delta_{ij}\right)\right\}, \tag{6}\]
where \(\Delta_{ij}=\mathbb{E}_{\Pi}(\mathcal{H}_{ij}\mid\mathbf{X})\) is the expected Hellinger distance between the localized densities \(g(\theta_{s_{i}})\) and \(g(\theta_{s_{j}})\) for observations \(i\) and \(j\), respectively. The point estimator of the clustering of \(\mathbf{X}\) is given by the Bayes estimator, denoted \(\mathbf{c}_{\mathrm{FOLD}}\), which is the minimizer of (6) over the space of all possible cluster allocations of \(\{1,\ldots,n\}\),
\[\mathbf{c}_{\mathrm{FOLD}}=\underset{\widehat{\mathbf{c}}}{\mathrm{argmin}}\ \mathcal{R}_{\mathcal{G}}(\widehat{\mathbf{c}}\mid\mathbf{X}). \tag{7}\]
The expected Hellinger distance terms in the risk (6) have several appealing properties that distinguish FOLD from quasimetric loss approaches. We collect these terms into a matrix and show that they exhibit bounded and metric properties in the following proposition.
**Proposition 2**.: Let \(\Delta=(\Delta_{ij})_{i,j}\), where \(\Delta_{ij}=\mathbb{E}_{\Pi}(\mathcal{H}_{ij}\mid\mathbf{X})\). Then \(\Delta\) satisfies the following with \(\mathbb{P}_{0}\)-probability equal to \(1\) for all \(i,j,h=1,\ldots,n\):
1. \(0\leq\Delta_{ij}\leq 1\).
2. \(\Delta_{ii}=0\), \(\Delta_{ji}=\Delta_{ij}\), and \(\Delta_{ih}\leq\Delta_{ij}+\Delta_{jh}\).
3. \(\Delta_{ij}\leq\Pi(s_{i}\neq s_{j}\mid\mathbf{X})\).
The boundedness and symmetry of the Hellinger distance are preserved after taking the posterior expectation, implying that \(\mathcal{R}_{\mathcal{G}}(\widehat{\mathbf{c}}\mid\mathbf{X})\) is non-negative for any clustering and that the sum in (6) may be taken over the indices \(i<j\). We can also deduce from these properties that FOLD will induce coarser clusterings than Binder's loss. Similar to (5), one can show that Binder's loss prefers assigning \(i\) and \(j\) to the same cluster when \(\Pi(s_{i}\neq s_{j}\mid\mathbf{X})<\gamma\). If \(\Delta_{ij}<\gamma<\Pi(s_{i}\neq s_{j}\mid\mathbf{X})\), FOLD will disagree with usual Bayesian model-based clustering under Binder's loss, with FOLD preferring to merge \(i\) and \(j\) into the same cluster.
### Uncertainty Quantification
In the previous section, we discussed obtaining an estimated clustering \(\mathbf{c}_{\mathrm{FOLD}}\) by minimizing the expectation of \(\mathcal{L}_{g}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})\) over \(\Pi(\mathbf{s},\mathbf{\theta}\mid\mathbf{X})\), and contrasted our approach with the clusterings given by Binder's and the VI loss. Under these latter loss functions, one implicitly assumes that the best clustering of the data is given by the component labels \(\mathbf{s}\). Hence, the value of a Bayes estimator \(\mathbf{c}^{*}\) corresponding to these losses can be interpreted as capturing some notion of centrality for the posterior \(\Pi(\mathbf{s}\mid\mathbf{X})\), akin to minimizing the squared error loss when estimating a real-valued parameter. In applications of Bayesian clustering to real data, we often find that there is substantial uncertainty in the component labels a posteriori. To avoid overstating the significance of \(\mathbf{c}^{*}\), it is thus important to express this uncertainty in a clear and interpretable manner.
A standard choice of uncertainty quantification is the posterior similarity matrix (PSM) \(P=\left\{\Pi(s_{i}=s_{j}\mid\mathbf{X})\right\}_{i,j}\), with entries ordered by the labels in \(\mathbf{c}^{*}\) and visualized as a heatmap. The PSM measures the posterior probability of two observations being allocated to the same mixture component. When ordered by \(\mathbf{c}^{*}\), this gives an idea of the uncertainty of clustering assignment within and between clusters. However, \(P\) can under-represent uncertainty in \(\mathbf{c}^{*}\)(Wade and Ghahramani, 2018) and differs from typical notions of uncertainty quantification in Bayesian inference, such as credible intervals for real-valued parameters. One can also
quantify uncertainty with a set of high probability groupings that are close to \(\mathbf{c}^{*}\) under a partition distance \(D(\cdot,\cdot)\). This notion is formalized by creating a 95% credible ball around \(\mathbf{c}^{*}\), defined by the smallest radius \(\epsilon\geq 0\) with \(\Pi\left\{D(\mathbf{c}^{*},\mathbf{s})\leq\epsilon\mid\mathbf{X}\right\}\geq 0.95\)(Wade and Ghahramani, 2018). Since \(\mathbf{c}^{*}\) roughly corresponds to the centre of \(\Pi(\mathbf{s}\mid\mathbf{X})\), the credible ball is effectively a central region of the posterior distribution for \(\mathbf{s}\). In light of these measures for expressing uncertainty under Binder's and the VI loss, we propose analogs of credible balls and the PSM that are unique to FOLD.
We focus on the clustering that minimizes (2),
\[\mathbf{c}_{\mathcal{G}}=\underset{\widehat{\mathbf{c}}}{\text{argmin}}\ \mathcal{L}_{g}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta}). \tag{8}\]
In Proposition 1, we showed that \(\mathbf{c}_{\mathcal{G}}\) is only equal to \(\mathbf{s}\) if the entries of \(\mathbf{\theta}\) are perfectly separated via the Hellinger distance. An implication of (8) is that \(\mathbf{c}_{\mathcal{G}}\) depends on \(\omega\), while the component labels clearly do not. To see this, we recall (5), which shows that we can acquire a clustering with loss smaller than that of \(\mathbf{s}\) by merging any two components with Hellinger distance below \(\gamma\). Thus, we formulate our measures of uncertainty with respect to the FOLD posterior \(\Pi(\mathbf{c}_{\mathcal{G}}\mid\mathbf{X})\), instead of \(\Pi(\mathbf{s}\mid\mathbf{X})\).
Accordingly, the 95% credible ball around \(\mathbf{c}_{\text{FOLD}}\) is defined as
\[B_{D}(\mathbf{c}_{\text{FOLD}})=\{\mathbf{c}:D(\mathbf{c}_{\text{FOLD}},\mathbf{c})\leq \epsilon_{\text{FOLD}}\}\]
where \(\epsilon_{\text{FOLD}}\geq 0\) is the smallest radius such that \(\Pi\left\{D(\mathbf{c}_{\text{FOLD}},\mathbf{c}_{\mathcal{G}})\leq\epsilon_{\text{FOLD}}\mid\mathbf{X}\right\}\geq 0.95\). We interpret \(B_{D}(\mathbf{c}_{\text{FOLD}})\) as a neighbourhood of clusterings centred at our Bayes estimator with posterior probability mass of at least 0.95. Larger \(\epsilon_{\text{FOLD}}\) means that the FOLD posterior distributes its mass across a larger set around \(\mathbf{c}_{\text{FOLD}}\), implying that we are more uncertain about the cluster allocations of the point estimator. In practice, we characterize the credible ball with bounds that give a sense of the clusterings contained within it (Wade and Ghahramani, 2018), much like we would for an interval on the real line. The horizontal bounds of the credible ball consist of the clusterings \(\mathbf{c}\in B_{D}(\mathbf{c}_{\text{FOLD}})\) for which \(D(\cdot,\mathbf{c}_{\text{FOLD}})\) attains its maximum value. We can also impose further restrictions on our bounds, such as requiring that they contain the minimum or maximum number of clusters over \(B_{D}(\mathbf{c}_{\text{FOLD}})\) while also maximizing \(D(\cdot,\mathbf{c}_{\text{FOLD}})\) amongst groupings with the same number of clusters, giving rise to the notions of vertical upper and vertical lower bounds, respectively.

Figure 3: A clustering \(\mathbf{c}_{\text{FOLD}}\) of the moons data, with \(\omega=118.2\), is shown in (a). (b)-(d) display the horizontal, vertical lower, and vertical upper bounds of the 95% credible ball, with \(D(\cdot,\cdot)\) as the VI loss.
Figure 3 shows the 95% credible ball of a clustering point estimator calculated using the same location Gaussian mixture fit in Figure 2. Here, we set \(\omega=118.2\), leading to three clusters in the point estimator. The bounds of the credible ball convey our uncertainty in \(\mathbf{c}_{\text{FOLD}}\), particularly in the region of the sample space where the tips of the crescents are closest to each other. The vertical lower bound communicates that this region could be split into multiple clusters, while the horizontal and vertical upper bounds merge two of the clusters in \(\mathbf{c}_{\text{FOLD}}\), corresponding exactly to the crescents.
Alternatively, one can display the PSM for \(\mathbf{c}_{\mathcal{G}}\), \(\mathcal{P}=\{\Pi(c_{i\mathcal{G}}=c_{j\mathcal{G}}\mid\mathbf{X})\}_{i,j}\), as a heatmap, with the entries ordered by \(\mathbf{c}_{\text{FOLD}}\). In general, \(\mathcal{P}\neq P\), further demonstrating the distinction between \(\Pi(\mathbf{c}_{\mathcal{G}}\mid\mathbf{X})\) and \(\Pi(\mathbf{s}\mid\mathbf{X})\). \(\mathcal{P}\) is useful for visualization, as the uncertainty in allocating two observations to the same or different cluster is given directly by inspection of the FOLD posterior.
### Implementation with MCMC Output
Generally, \(\Pi(\mathbf{s},\mathbf{\theta}\mid\mathbf{X})\) will not be available in closed form, so Markov chain Monte Carlo (MCMC) algorithms are used to generate posterior samples \(\mathbf{s}^{(t)}\) and \(\mathbf{\theta}^{(t)}\) for \(t=1,\ldots,T\) iterations. There is a vast literature proposing a rich variety of MCMC algorithms for mixture models, ranging from mixtures of finite mixtures (Miller and Harrison, 2018) to models allowing infinitely many components, such as Dirichlet process mixtures (Neal, 2000; Jain and Neal, 2004, 2007) and Pitman-Yor process mixtures (Ishwaran and James, 2001, 2003).

Figure 4: Candidate clusterings of the moons data under a location Gaussian mixture model, generated from average linkage hierarchical clustering on \(\Delta\).
The FOLD methodology relies on samples from \(\Pi(\mathbf{s},\mathbf{\theta}\mid\mathbf{X})\), which are used to estimate pairwise Hellinger distances,
\[\Delta_{ij}\approx\frac{1}{T}\sum_{t=1}^{T}\mathcal{H}_{ij}^{(t)}, \tag{9}\]
where \(\mathcal{H}_{ij}^{(t)}=h\left\{g(\theta_{s_{i}}^{(t)}),g(\theta_{s_{j}}^{(t)})\right\}\), and \(\theta_{s_{i}}^{(t)},\theta_{s_{j}}^{(t)}\) are computed directly from \(\mathbf{s}^{(t)}\) and \(\mathbf{\theta}^{(t)}\). For many families of kernels, such as the Gaussian family, \(h\left\{g(\theta_{s_{i}}),g(\theta_{s_{j}})\right\}\) is available in closed form. The label-switching problem, which results from the inherent non-identifiability of mixture models with any permutation of the labels yielding the same likelihood (Redner and Walker, 1984), has no impact on FOLD as \(\Delta_{ij}\) is invariant to labeling. FOLD can be easily applied to virtually any mixture model for which samples from \(\Pi(\mathbf{s},\mathbf{\theta}\mid\mathbf{X})\) can be obtained and Hellinger distances between kernels can be estimated.
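As an illustration, here is a minimal sketch of the estimator (9) for a location Gaussian kernel with common spherical covariance \(\sigma^{2}\mathbf{I}\), for which the Hellinger distance has the closed form \(h^{2}=1-\exp\{-\|\mu_{1}-\mu_{2}\|^{2}/(8\sigma^{2})\}\) (cf. (23)); the array layout is our own convention:

```python
import numpy as np

def estimate_delta(labels, atoms, sigma2):
    # Monte Carlo estimate (9): labels is (T, n) holding the component draws
    # s^{(t)}, atoms is (T, L, d) holding the atom draws theta^{(t)};
    # returns the (n, n) matrix of estimated expected Hellinger distances.
    T, n = labels.shape
    Delta = np.zeros((n, n))
    for t in range(T):
        theta = atoms[t, labels[t]]                  # (n, d) localized atoms
        diff = theta[:, None, :] - theta[None, :, :]  # pairwise differences
        sq = np.sum(diff ** 2, axis=-1)
        # Closed-form Hellinger distance for N(mu1, sigma2*I) vs N(mu2, sigma2*I)
        Delta += np.sqrt(1.0 - np.exp(-sq / (8.0 * sigma2)))
    return Delta / T
```

Because the Hellinger distance depends only on the localized atoms, permuting the component labels within any draw leaves the estimate unchanged, which is the invariance noted above.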
In practice, \(\mathbf{c}_{\text{FOLD}}\) is obtained by minimizing (6) over a tree of clusterings produced by average linkage hierarchical clustering on \(\Delta\). This procedure is motivated by the appearance of average linkage as a criterion for merging clusters in (5). Figure 4 displays several clusterings of the moons data that result from average linkage clustering on \(\Delta\). The true grouping of the data is present amongst the candidates. Since these groupings are hierarchical, Figure 4 can be interpreted as an enumeration of clusterings favoured by \(\mathcal{R}_{\mathcal{G}}(\widehat{\mathbf{c}}\mid\mathbf{X})\), arranged by increasing values of \(\omega\). A similar method with single-linkage is defined in Aragam et al. (2020), and Medvedovic and Sivaganesan (2002) and Fritsch and Ickstadt (2009) also use hierarchical clustering to approximately minimize clustering loss functions. Recent Bayesian clustering algorithms for minimizing loss functions, such as SALSO (Dahl et al., 2020), can also be applied to FOLD.
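A sketch of this procedure, using SciPy's average-linkage routine in place of the R implementations cited above; candidate clusterings come from cutting the tree at every level and the risk (6) is evaluated at each:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cut_tree
from scipy.spatial.distance import squareform

def fold_point_estimate(Delta, omega):
    # Approximate (7): build candidate clusterings by average-linkage
    # hierarchical clustering on Delta, then return the candidate that
    # minimizes the risk (6).
    n = Delta.shape[0]
    Z = linkage(squareform(Delta, checks=False), method="average")
    iu = np.triu_indices(n, k=1)
    best_c, best_risk = None, np.inf
    for k in range(1, n + 1):
        c = cut_tree(Z, n_clusters=k).ravel()
        same = (c[:, None] == c[None, :])[iu]
        risk = np.sum(np.where(same, Delta[iu], omega * (1.0 - Delta[iu])))
        if risk < best_risk:
            best_c, best_risk = c, risk
    return best_c
```

This brute-force sweep over all tree levels is adequate for moderate \(n\); dedicated search algorithms such as SALSO would be preferable at scale.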
To express uncertainty in \(\mathbf{c}_{\text{FOLD}}\), we rely on samples from \(\Pi(\mathbf{c}_{\mathcal{G}}\mid\mathbf{X})\). Given \(\mathbf{s}^{(t)}\) and \(\mathbf{\theta}^{(t)}\), FOLD generates an approximate sample \(\mathbf{c}_{\mathcal{G}}^{(t)}\) with
\[\mathbf{c}_{\mathcal{G}}^{(t)}=\operatorname*{argmin}_{\widehat{\mathbf{c}}\in\mathcal{ C}(\mathcal{H}^{(t)})}\sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}} \mathcal{H}_{ij}^{(t)}+\omega\mathbf{1}_{\hat{c}_{i}\neq\hat{c}_{j}}\left(1- \mathcal{H}_{ij}^{(t)}\right)\right\}, \tag{10}\]
where \(\mathcal{C}(\mathcal{H}^{(t)})\) is a list of clusterings generated by average-linkage clustering on \(\mathcal{H}^{(t)}=(\mathcal{H}_{ij}^{(t)})_{i,j}\). Using these samples, \(B_{D}(\mathbf{c}_{\text{FOLD}})\) and \(\mathcal{P}\) can be estimated. The posterior probability of a credible ball of radius \(\epsilon\) is approximated by
\[\Pi\left\{D(\mathbf{c}_{\mathcal{G}},\mathbf{c}_{\text{FOLD}})\leq\epsilon\mid\mathbf{X} \right\}\approx\frac{1}{T}\sum_{t=1}^{T}\mathbf{1}_{D(\mathbf{c}_{\mathcal{G}}^{(t )},\mathbf{c}_{\text{FOLD}})\leq\epsilon}, \tag{11}\]
and FOLD computes \(\epsilon_{\text{FOLD}}\) by incrementally increasing the value of \(\epsilon\) over a grid. The entries of \(\mathcal{P}\) are estimated via
\[\mathcal{P}_{ij}=\Pi(c_{i\mathcal{G}}=c_{j\mathcal{G}}\mid\mathbf{X})\approx\frac{ 1}{T}\sum_{t=1}^{T}\mathbf{1}_{c_{i\mathcal{G}}^{(t)}=c_{j\mathcal{G}}^{(t)}}. \tag{12}\]
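Given draws \(\mathbf{c}_{\mathcal{G}}^{(t)}\) stored as rows of an array, the estimates (11) and (12) reduce to simple averages; a sketch follows, where the partition distance dist is supplied by the user (e.g. an implementation of the VI):

```python
import numpy as np

def fold_psm(cg_draws):
    # Estimate (12): cg_draws is (T, n) with row t holding c_G^{(t)};
    # returns the FOLD posterior similarity matrix.
    T, n = cg_draws.shape
    P = np.zeros((n, n))
    for c in cg_draws:
        P += (c[:, None] == c[None, :])
    return P / T

def credible_ball_radius(cg_draws, c_fold, dist, level=0.95):
    # Estimate epsilon_FOLD via (11): the smallest radius whose ball around
    # c_fold holds at least `level` of the posterior mass.
    d = np.array([dist(c_fold, c) for c in cg_draws])
    for eps in np.sort(np.unique(d)):
        if np.mean(d <= eps) >= level:
            return eps
    return d.max()
```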
To ensure efficient computation, the 95% credible ball around \(\mathbf{c}_{\text{FOLD}}\) and the posterior similarity matrix are obtained by applying the corresponding R functions in mcclust.ext (Wade and Ghahramani, 2018) and mcclust (Fritsch, 2022), respectively, to these samples. FOLD summarizes the credible ball by supplying the user with the vertical and horizontal bounds of \(B_{D}(\mathbf{c}_{\text{FOLD}})\), which can then be plotted in the same manner as Figure 3. \(D(\cdot,\cdot)\) can be set to either the VI loss or Binder's loss, with VI as the default.
The loss parameter \(\omega\) controls the separation among the inferred clusters, and impacts not only \(\mathbf{c}_{\rm FOLD}\) but also \(B_{D}(\mathbf{c}_{\rm FOLD})\) and \(\mathcal{P}\). Similarly to common practice in choosing key tuning parameters in other loss-based clustering contexts, we rely on an elbow plot diagnostic. At a grid of possible \(\omega\) values, we calculate
\[r_{\omega}=\frac{\sum_{h=1}^{K_{\omega}}\sum_{i,j\in C_{\omega,h}}\Delta_{ij}}{ \sum_{i<j}\Delta_{ij}}, \tag{13}\]
where \(C_{\omega,h}\) is the \(h\)th cluster and \(K_{\omega}\) is the number of clusters in the minimizer of (6). The numerator in (13) estimates the total within-cluster Hellinger distance between localized densities for a particular \(\omega\),
\[\sum_{h=1}^{K_{\omega}}\sum_{i,j\in C_{\omega,h}}\mathcal{H}_{ij}=\sum_{i<j} \mathcal{H}_{ij}-\sum_{h<k}\sum_{\begin{subarray}{c}i\in C_{\omega,h}\\ j\in C_{\omega,k}\end{subarray}}\mathcal{H}_{ij}, \tag{14}\]
and the denominator normalizes so that \(r_{\omega}\in[0,1]\) for all \(\omega\).
Observe that \(r_{0}=0\), \(r_{\infty}=1\), and \(r_{\omega}\) is an increasing function of \(\omega\). Since we optimize (7) over a set of candidate groupings produced by hierarchical clustering, as \(\omega\) increases clusters will gradually be merged together. The value \(r_{\omega}\) can be interpreted as the normalized Hellinger cost of pooling two candidate clusters together. For small \(\omega\), observations whose localized densities are largely overlapping will be allocated to the same group. However, as \(\omega\) increases, eventually observations with limited overlap in their localized densities will be clustered together, leading to a sharp increase in the within-cluster Hellinger distance. We propose to increase \(\omega\) until this sharp increase occurs, and then choose the clustering obtained just before it, at the elbow of the plot of \(r_{\omega}\).
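A sketch of the elbow diagnostic: since each candidate clustering on the average-linkage tree corresponds to a range of \(\omega\) values, we compute the ratio (13) for every candidate and plot it against the number of clusters:

```python
import numpy as np

def elbow_ratios(Delta, candidates):
    # For each candidate clustering, compute r in (13): the total
    # within-cluster expected Hellinger distance, normalized by the
    # sum of Delta over all pairs i < j.
    n = Delta.shape[0]
    iu = np.triu_indices(n, k=1)
    total = Delta[iu].sum()
    ratios = []
    for c in candidates:
        same = (c[:, None] == c[None, :])[iu]
        ratios.append(Delta[iu][same].sum() / total)
    return np.array(ratios)
```

Plotting these ratios against the number of clusters and locating the elbow then identifies the clustering just before the sharp increase.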
Alternatively, one can use the default value \(\omega^{\rm AVG}=\gamma^{\rm AVG}/(1-\gamma^{\rm AVG})\), with \(\gamma^{\rm AVG}=\frac{2}{n(n-1)}\sum_{i<j}\Delta_{ij}\). Under this choice of \(\omega\), candidate clusters are combined when the average of \(\Delta_{ij}\) between the clusters is smaller than the average of \(\Delta_{ij}\) across the entire sample. \(\gamma^{\rm AVG}\) estimates a weighted sum of the total pairwise Hellinger distances between the kernels in \(f\),
\[\bar{\mathcal{H}}=\frac{1}{\binom{n}{2}}\sum_{i<j}\mathcal{H}_{ij}=\sum_{l<m} \frac{|S_{l}||S_{m}|}{\binom{n}{2}}h\left\{g(\theta_{l}),g(\theta_{m})\right\}. \tag{15}\]
Each component is weighted by the number of observations that are allocated to it by \(\mathbf{s}\). If we use \(\bar{\mathcal{H}}/(1-\bar{\mathcal{H}})\) in place of \(\omega\) in (2), (5) implies that FOLD will favour merging mixture components when \(h\left\{g(\theta_{l}),g(\theta_{m})\right\}<\bar{\mathcal{H}}\). Importantly, the decision to merge components will depend on how separated they are from the others, as well as the number of observations assigned to each component. To see this, consider the following example, in which we
fit a mixture with \(L=3\) components, where \(h\left\{g(\theta_{1}),g(\theta_{2})\right\}=\epsilon>0\), \(h\left\{g(\theta_{1}),g(\theta_{3})\right\}=h\left\{g(\theta_{2}),g(\theta_{3}) \right\}=\delta>0\), and \(|S_{1}|=|S_{2}|=|S_{3}|=n/3\). Then, as \(n\) grows large, \(\bar{\mathcal{H}}\rightarrow(2/9)\epsilon+(4/9)\delta\), and so FOLD will favour merging \(S_{1}\) and \(S_{2}\) into one cluster when \(\epsilon<\frac{4}{7}\delta\). The smaller \(\delta\) is, the smaller \(\epsilon\) must be in order to merge \(S_{1}\) and \(S_{2}\). Hence, \(\omega^{\text{AVG}}\) excels at problems in which \(f_{0}\) is composed of well-separated kernels that are approximated by multiple components in \(f\). We show that \(\omega^{\text{AVG}}\) performs very well in this setting in Section 4.
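The default is one line to compute, and the three-component example above can be checked numerically; a sketch with arbitrary illustrative values of \(\epsilon\) and \(\delta\):

```python
import numpy as np

def omega_avg(Delta):
    # Default loss parameter: gamma = mean of Delta over pairs i < j,
    # omega^AVG = gamma / (1 - gamma).
    iu = np.triu_indices(Delta.shape[0], k=1)
    gamma = Delta[iu].mean()
    return gamma / (1.0 - gamma)

# Numerical check of the three-component example: with equal weights and
# pairwise Hellinger distances (eps, delta, delta), the limiting average is
# (2/9) * eps + (4/9) * delta, so components 1 and 2 merge when eps < (4/7) * delta.
eps, delta = 0.2, 0.4
H_bar = (2 / 9) * eps + (4 / 9) * delta
print(eps < H_bar, eps < (4 / 7) * delta)   # both True: merging is favoured
```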
## 3 Asymptotic Analysis
Most of the study of large sample properties of Bayesian clustering has assumed the kernel is correctly specified, though even in that case problems can arise when using infinite mixtures while the truth is finite (Miller and Harrison, 2013, 2014; Cai et al., 2021). Results on the posterior contraction behaviour of the mixing measure (Nguyen, 2013; Ho and Nguyen, 2016; Guha et al., 2021) provide valuable insight into the large sample behaviour of FOLD. This is because the posterior distribution of the localized densities depends heavily on the mixing measure, which can be deduced by integrating over \(\lambda\),
\[\Pi(\theta_{s_{i}}\mid\mathbf{X})=\int_{\lambda}\frac{\lambda(\theta_{s_{i}})g(X_{ i};\theta_{s_{i}})}{f^{(\lambda)}(X_{i})}d\Pi(\lambda\mid\mathbf{X}). \tag{16}\]
We will show that concentration of \(\lambda\) causes \(\Delta\) to approach optimal values in misspecified regimes. Before we discuss this in full, we briefly summarize the conditions for our analysis. We collect all formal assumptions and proofs for our asymptotic results in the Supplementary Material, which also contains theoretical support for FOLD in the well-specified kernel case.
### General Assumptions
We assume the model satisfies some standard requirements: the parameter space \(\Theta\subset\mathbb{R}^{d}\) must be a compact set and \(f^{(\lambda)}\) must be identifiable with respect to the mixing measure, that is, \(f^{(\lambda_{1})}=f^{(\lambda_{2})}\) if and only if \(\lambda_{1}=\lambda_{2}\). Identifiability of the mixing measure is satisfied by location and location-scale GMMs, as well as various exponential family mixtures (Barndorff-Nielsen, 1965) and location family mixtures (Teicher, 1961). We opt for simplicity by focusing on overfitted mixtures that fix \(L<\infty\) and choose \(\Pi(\mathbf{a})\) as a symmetric Dirichlet prior with concentration parameter \(0<\alpha<1\). In addition, we assume the prior \(\Pi(\theta_{l})\) is absolutely continuous and with full support on \(\Theta\).
We assume second-order identifiability of the kernel family \(\mathcal{G}\)(Ho and Nguyen, 2016), requiring the density function and the first and second derivatives of \(g(x;\theta)\) with respect to \(\theta\) to be linearly independent. The location Gaussian family and certain other location families are second-order identifiable (Chen, 1995; Manole and Ho, 2022), but the location-scale Gaussian family is not (Ho and Nguyen, 2016). For continuity, we will require the second derivatives of \(g(x;\theta)\) satisfy an integral Lipschitz condition in \(\theta\) with respect to \(f_{0}\), as in Guha et al. (2021), which can be satisfied for location-Gaussians (Ho et al., 2020). These identifiability and continuity conditions will be useful for posterior contraction of \(\lambda\).
### KL-Oracle Concentration
When \(f_{0}\) is not a mixture over \(\mathcal{G}\), the posterior may assign samples to an exorbitant number of components as the sample size grows. To study this behaviour, we examine posterior contraction of the mixing measure. Guha et al. (2021) showed that \(\lambda\) contracts in the 2-Wasserstein metric to the Kullback-Leibler (KL) divergence minimizer,
\[\lambda^{*}=\operatorname*{argmin}_{\lambda\in\Omega(\Theta)}\,\text{KL}(f_{0},f^{(\lambda)}), \tag{17}\]
where \(\Omega(\Theta)\) is the set of all probability measures on \(\Theta\). We assume that \(\lambda^{*}\) exists, is unique, and has finite support, with \(\lambda^{*}=\sum_{k=1}^{K^{*}}a_{k}^{*}\delta_{\theta_{k}^{*}}\) and \(f^{*}=f^{(\lambda^{*})}=\sum_{k=1}^{K^{*}}a_{k}^{*}g(\theta_{k}^{*})\) for some \(K^{*}<L\). Depending on the degree of misspecification, \(K^{*}\) can be very large, making the component labels impractical for cluster analysis. Contraction of \(\lambda\) implies the following concentration behaviour for the weights and atoms.
**Proposition 3**.: Let \(\epsilon_{n}=(\log n/n)^{1/4}\), \(0<\delta<1\), and \(R^{*}=L-K^{*}\). For any \(h\in[K^{*}]\) and \(\mathcal{I}\subset[L]\), define the events
\[A_{h,\mathcal{I}}=\left\{\max_{l\in\mathcal{I}}\|\theta_{l}-\theta_{h}^{*}\|\lesssim\epsilon_{n},\ \left|\sum_{l\in\mathcal{I}}a_{l}-a_{h}^{*}\right|\lesssim(\epsilon_{n}\vee\epsilon_{n}^{2\delta})\right\},\qquad B_{\mathcal{I}}=\left\{\sum_{l\in\mathcal{I}}a_{l}\lesssim\epsilon_{n}^{2\delta}\right\}.\]
Then under conditions (A1)-(A4), as \(n\to\infty\),
\[\mathbb{E}_{0}\left[\Pi\left\{\left(\bigcap_{h=1}^{K^{*}}\bigcup_{|\mathcal{I }|\geq 1}A_{h,\mathcal{I}}\right)\cap\left(\bigcup_{|\mathcal{I}|\leq R^{*}}B_{ \mathcal{I}}\right)\,\bigg{|}\mathbf{X}\right\}\right]\to 1. \tag{18}\]
Proposition 3 provides the following conclusion on the limiting values of the atoms and weights as \(\lambda\) contracts to \(\lambda^{*}\). For any component \(h\) of \(\lambda^{*}\), there exists a subset of components in \(\lambda\) so that each atom is within an \(\epsilon_{n}\) neighbourhood of \(\theta_{h}^{*}\), and the cumulative weight of that subset is within an \((\epsilon_{n}\vee\epsilon_{n}^{2\delta})\) neighbourhood of \(a_{h}^{*}\). Any other components in \(\lambda\) have total weight that diminishes at a rate of \(\epsilon_{n}^{2\delta}\). The event in (18) highlights that both the atoms and weights are useful in merging mixture components, as there is no guarantee on the asymptotic behaviour of atoms in \(\lambda\) that are not within \(\epsilon_{n}\) neighbourhoods of the atoms of \(\lambda^{*}\). Conveniently, these potentially problematic components are made redundant by their diminishing weight. The behaviour of the components in (18) is a direct consequence of 2-Wasserstein concentration in \(\lambda\), and has been exploited for mixing measure estimation in well-specified regimes in the Merge-Truncate-Merge algorithm (Guha et al., 2021).
Proposition 3 can be used to show that \(\Delta\) will concentrate on optimal values with respect to our model class. We define the KL-oracle clustering as \(\mathbf{c}_{\text{FOLD}}\) when \(\lambda^{*}\) and \(f^{*}\) are known,
\[\mathbf{c}_{\text{FOLD}}^{\text{KL}}=\operatorname*{argmin}_{\widehat{\mathbf{c}}}\ \sum_{i<j}\left\{\mathbf{1}_{\widehat{c}_{i}=\hat{c}_{j}}\Delta_{ij}^{*}+\omega\bm {1}_{\widehat{c}_{i}\neq\hat{c}_{j}}(1-\Delta_{ij}^{*})\right\}, \tag{19}\]
where \(\Delta^{*}\) has entries \(\Delta^{*}_{ij}=\mathbb{E}_{\Pi}(\mathcal{H}_{ij}\mid\lambda^{*},\mathbf{X})=\sum_{k< h}h\left\{g(\theta^{*}_{k}),g(\theta^{*}_{h})\right\}q^{kh*}_{ij}\),
\[q^{kh*}_{ij}=\Pi\{(s_{i},s_{j}\in S_{k}\cup S_{h})\cap(s_{i}\neq s_{j})\mid\mathbf{ X},\lambda^{*}\}.\]
As \(n\) grows arbitrarily large, the cluster allocations in \(\mathbf{c}_{\mathrm{FOLD}}\) and \(\mathbf{c}_{\mathrm{FOLD}}^{\mathrm{KL}}\) coincide. This limiting behaviour follows directly from (18), as concentration of \(\mathbf{\theta}\) and \(\mathbf{a}\) can be used to show that \(\Delta_{ij}\) and \(\Delta^{*}_{ij}\) become arbitrarily close.
In order to prove this formally, we need to make two further assumptions on \(\mathcal{G}\). First, we assume that \(\mathcal{G}\) is a location family, i.e., \(g(x;\theta)=\tilde{g}(x-\theta)\) for some probability density function \(\tilde{g}\). Second, we require that \(\tilde{g}(z)\) is \(\zeta\)-Hölder continuous for some \(\zeta>0\). It can be verified that location Gaussian kernels satisfy these assumptions with \(\zeta=1\).
**Theorem 1**.: Let \(\epsilon_{n}=(\log n/n)^{1/4}\), \(0<\delta<1\), and assume conditions (A1)-(A6). Then, for all \(n\geq N\) for some \(N\in\mathbb{N}\) and each pair of observations \(i<j\), there exists a constant \(M>0\) and random variables \(\Delta^{*-}_{ij}=\Delta^{*}_{ij}-o_{\mathbb{P}_{0}}(1)\) and
\[\Delta^{*+}_{ij}=\Delta^{*}_{ij}+o_{\mathbb{P}_{0}}(1)+\frac{M\left\{\binom{L} {2}-\binom{K^{*}}{2}\right\}\epsilon_{n}^{2\delta}}{f^{*}(X_{i})f^{*}(X_{j})-o _{\mathbb{P}_{0}}(1)} \tag{20}\]
so that with \(\mathbb{P}_{0}\)-probability equal to \(1\),
\[\Delta^{*-}_{ij}(1-o_{\mathbb{P}_{0}}(1))+o_{\mathbb{P}_{0}}(1)\leq\Delta_{ij} \leq\Delta^{*+}_{ij}(1-o_{\mathbb{P}_{0}}(1))+o_{\mathbb{P}_{0}}(1). \tag{21}\]
Pragmatically, Proposition 3 and Theorem 1 give a concrete notion of the limiting values of \(\mathbf{c}_{\mathrm{FOLD}}\). As we observe more data, minimizing (6) becomes equivalent to the KL-oracle clustering, in which we group observations generated from \(f_{0}\) using our knowledge of the best approximation \(f^{*}\) to the true state of nature. Also, the tightness of the bounds in (21) provides some understanding of the effects on \(\mathbf{c}_{\mathrm{FOLD}}\) of overfitting finite mixtures and observing outliers. \(|\Delta^{*+}_{ij}-\Delta^{*}_{ij}|\) is smaller when the remainder term in (20) is small. The larger \(L\) is relative to \(K^{*}\), the larger this remainder becomes. Likewise, observations that are given low density by \(f^{*}\) also inflate \(|\Delta^{*+}_{ij}-\Delta^{*}_{ij}|\). In this case, provided that \(\int_{\mathbb{R}^{d}}\frac{f_{0}(x)}{f^{*}(x)}dx<\infty\), the remainder will be small on average as \(n\) grows, since
\[\mathbb{E}_{0}\left\{\frac{M\left\{\binom{L}{2}-\binom{K^{*}}{2}\right\} \epsilon_{n}^{2\delta}}{f^{*}(X_{i})f^{*}(X_{j})}\right\}\lesssim(\log n/n)^{ \delta/2}. \tag{22}\]
### Location Gaussian Mixtures
When \(\mathcal{G}=\{\mathcal{N}_{d}(\theta,\Sigma):\theta\in\Theta\}\) for some covariance matrix \(\Sigma\), the Hellinger distance between the kernels in \(f^{*}\) is directly related to the Mahalanobis distance between their means,
\[1-h^{2}\left\{g(\theta^{*}_{k}),g(\theta^{*}_{h})\right\}\propto \exp\left(-\frac{1}{8}\mathcal{M}^{2}_{kh}\right), \tag{23}\] \[\mathcal{M}_{kh}=\sqrt{(\theta^{*}_{k}-\theta^{*}_{h})^{T}\Sigma ^{-1}(\theta^{*}_{k}-\theta^{*}_{h})}. \tag{24}\]
Intuitively, this tells us that in the large sample limit, if two observations are best modeled as being generated from separate components in \(f^{*}\), FOLD will favour allocating them to the
same cluster when the means of their associated components are close under the Mahalanobis distance. Hence, the role of \(\Sigma\) is very important for merging mixture components. To see this better, consider the case where \(\Sigma=\sigma^{2}\mathbf{I}_{d}\). We can rewrite the Mahalanobis distance as a weighted version of the Euclidean norm, \(\mathcal{M}_{kh}=\left\|\theta_{k}^{*}-\theta_{h}^{*}\right\|/\sigma\). It follows that
\[\lim_{\sigma\to 0}\Delta_{ij}^{*}=\Pi(s_{i}\neq s_{j}\mid\mathbf{X}, \lambda^{*}), \lim_{\sigma\rightarrow\infty}\Delta_{ij}^{*}=0. \tag{25}\]
The latter result is expected, as higher variance in the kernels will effectively fit one large Gaussian kernel to the entire dataset. At the other extreme, small variance causes FOLD to converge to KL-oracle values of Binder's loss. This behaviour is reminiscent of Proposition 1 because the kernels in \(f^{*}\) will be perfectly separated for very small \(\sigma\).
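The identity behind (23) is easy to verify numerically in one dimension by comparing quadrature for the Bhattacharyya coefficient with the closed form; a sketch with arbitrary illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Check (23) for univariate kernels N(t1, s^2) and N(t2, s^2):
# 1 - h^2 = exp(-(t1 - t2)^2 / (8 s^2)), i.e. exp(-M^2 / 8) with the
# Mahalanobis distance M = |t1 - t2| / s.
t1, t2, s = 0.0, 1.5, 0.8
bc, _ = quad(lambda x: np.sqrt(norm.pdf(x, t1, s) * norm.pdf(x, t2, s)),
             -np.inf, np.inf)                 # Bhattacharyya coefficient
closed_form = np.exp(-((t1 - t2) / s) ** 2 / 8.0)
print(np.isclose(bc, closed_form))            # True up to quadrature error
```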
For applications, we recommend giving \(\Sigma\) a weakly informative prior distribution, allowing the covariance to be informed by the data rather than selected by the user. We also find that location-scale Gaussian mixtures perform well with FOLD, in part due to their added flexibility that generally leads to fewer mixture components. However, as mentioned previously, location-scale Gaussians fail the second-order identifiability requirement of our asymptotic theory. Theoretical guarantees for location-scale Gaussian mixtures and other weakly identifiable mixtures remain an open area of research, but recent progress has been made for maximum likelihood estimates (Ho and Nguyen, 2016; Manole and Ho, 2022).
## 4 Simulation Studies
In this section, we demonstrate with several numerical examples that FOLD is robust to a diverse range of cluster shapes despite the data being modeled with a Gaussian mixture. Code for the simulated data and computation of \(\mathbf{c}_{\text{FOLD}}\) is available on the GitHub page of the first author. See the Supplementary Material for further details on all simulations and an additional simulation study focused on credible balls.
### Effect of Increasing \(n\) on Model-Based Clustering
In light of the asymptotic results shown in the previous section, we examine the effect of increasing \(n\) on FOLD and other model-based clustering methods. We repeatedly simulate observations with varying \(n\in\{100,500,1000,2500\}\) for 100 replications each. We compare FOLD with \(\omega=\omega^{\text{AVG}}\) to clusterings returned by minimizing Binder's loss and the VI loss using the packages mcclust and mcclust.ext, respectively. Along with these Bayesian approaches, we also cluster the data using mclust (Scrucca et al., 2016), which groups observations based on the EM algorithm for a Gaussian mixture.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
 & \(n\) & FOLD & Oracle FOLD & VI & Binder’s & Mclust \\ \hline
No. of Clusters & 100 & 2.93 (0.383) & 3.00 (0.00) & 8.02 (5.714) & 19.76 (0.818) & 3.10 (0.414) \\
 & 500 & 3.06 (0.278) & 3.00 (0.00) & 3.53 (0.926) & 19.89 (0.852) & 3.00 (0.00) \\
 & 1000 & 3.05 (0.219) & 3.00 (0.00) & 3.30 (0.541) & 19.96 (0.400) & 3.01 (0.100) \\
 & 2500 & 3.05 (0.219) & 3.00 (0.00) & 3.08 (0.273) & 20.00 (0.00) & 3.00 (0.00) \\ \hline
Adj. Rand Index & 100 & 0.903 (0.087) & 0.985 (0.021) & 0.852 (0.088) & 0.822 (0.061) & 0.962 (0.052) \\
 & 500 & 0.979 (0.013) & 0.998 (0.008) & 0.974 (0.014) & 0.947 (0.012) & 0.987 (0.008) \\
 & 1000 & 0.985 (0.006) & 0.987 (0.006) & 0.982 (0.007) & 0.968 (0.007) & 0.985 (0.024) \\
 & 2500 & 0.986 (0.004) & 0.987 (0.004) & 0.984 (0.004) & 0.979 (0.005) & 0.987 (0.004) \\ \hline
\end{tabular}
\end{table}
Table 1: Averages and standard deviations (in parentheses) for the number of clusters and adjusted Rand index with \(\mathbf{s}_{0}\) on 100 replications from a mixture of bivariate Gaussian kernels.
For each replication and \(n\), we run the EM algorithm and fit a Bayesian location-scale GMM,
\[X_{i}\mid\mathbf{a},\mathbf{\mu},\mathbf{\Sigma}\sim\sum_{l=1}^{L}a_{l} \mathcal{N}_{2}(\mu_{l},\Sigma_{l}); \tag{26}\] \[(\mu_{l},\Sigma_{l})\sim\mathcal{N}-\mathcal{IW}(\mu,\kappa,\nu, \Psi),\,\,l=1,\ldots,L;\] (27) \[\mathbf{a}\sim\text{Dirichlet}\left((\alpha,\ldots,\alpha)\right). \tag{28}\]
We then compute clusterings with FOLD, VI, and Binder's loss using the posterior samples from the Bayesian model. For each clustering, we save the number of clusters and the adjusted Rand index (Rand, 1971) with \(\mathbf{s}_{0}\). The hyperparameters are set at \(L=30\), \(\alpha=1/2\), and \(\mu=0_{d}\), \(\kappa=1\), \(\nu=d+2\), and \(\Psi=\mathbf{I}_{d}\). We run a Gibbs sampler for \(9,000\) iterations with a burn-in of \(1,000\), then use every third iteration for computing the Bayesian clusterings. For mclust, the number of clusters is automatically selected each replication by minimizing the Bayesian information criterion (BIC).
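For reference, the following is a minimal sketch of a blocked Gibbs sampler for the model (26)-(28); the function gibbs_gmm and its defaults are ours, the conjugate Normal-Inverse-Wishart updates are standard, and a production implementation would be considerably more careful about efficiency:

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal

def gibbs_gmm(X, L=30, alpha=0.5, kappa=1.0, iters=9000, burn=1000, thin=3, seed=0):
    # Blocked Gibbs sampler for a Bayesian location-scale GMM with a symmetric
    # Dirichlet prior on the weights and N-IW priors on the atoms.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu0, nu0, Psi0 = np.zeros(d), d + 2, np.eye(d)
    a = np.full(L, 1.0 / L)
    mus = rng.standard_normal((L, d))
    Sigmas = np.stack([np.eye(d)] * L)
    labels_out, atoms_out = [], []
    for it in range(iters):
        # 1. Labels: s_i ~ Categorical with p(l) proportional to
        #    a_l * N(x_i; mu_l, Sigma_l), sampled via the Gumbel-max trick.
        logp = np.log(a) + np.column_stack(
            [multivariate_normal.logpdf(X, mus[l], Sigmas[l]) for l in range(L)])
        s = np.argmax(logp + rng.gumbel(size=logp.shape), axis=1)
        # 2. Weights: a ~ Dirichlet(alpha + counts).
        counts = np.bincount(s, minlength=L)
        a = rng.dirichlet(alpha + counts)
        # 3. Atoms: conjugate Normal-Inverse-Wishart updates, componentwise.
        for l in range(L):
            Xl = X[s == l]
            m = len(Xl)
            if m > 0:
                xbar = Xl.mean(axis=0)
                S = (Xl - xbar).T @ (Xl - xbar)
                kap_n = kappa + m
                mu_n = (kappa * mu0 + m * xbar) / kap_n
                Psi_n = Psi0 + S + (kappa * m / kap_n) * np.outer(xbar - mu0, xbar - mu0)
            else:  # empty component: draw from the prior
                kap_n, mu_n, Psi_n = kappa, mu0, Psi0
            Sigmas[l] = invwishart.rvs(df=nu0 + m, scale=Psi_n, random_state=rng)
            mus[l] = rng.multivariate_normal(mu_n, Sigmas[l] / kap_n)
        if it >= burn and (it - burn) % thin == 0:
            labels_out.append(s.copy())
            atoms_out.append((mus.copy(), Sigmas.copy()))
    return np.array(labels_out), atoms_out
```

The retained label and atom draws feed directly into the Monte Carlo estimate of \(\Delta\) in (9).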
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
 & \(n\) & FOLD & VI & Binder’s & Mclust \\ \hline
No. of Clusters & 100 & 3.02 (0.141) & 3.16 (0.395) & 13.84 (6.080) & 3.14 (0.377) \\
 & 500 & 3.03 (0.171) & 3.42 (0.755) & 6.15 (2.258) & 3.23 (0.529) \\
 & 1000 & 3.20 (0.449) & 3.53 (0.881) & 9.32 (3.816) & 4.33 (1.429) \\
 & 2500 & 3.19 (0.443) & 3.40 (0.682) & 8.93 (2.409) & 7.99 (0.959) \\ \hline
Adj. Rand Index & 100 & 0.992 (0.016) & 0.987 (0.021) & 0.915 (0.043) & 0.989 (0.030) \\
 & 500 & 0.999 (0.003) & 0.998 (0.003) & 0.992 (0.006) & 0.980 (0.049) \\
 & 1000 & 0.999 (0.002) & 0.997 (0.007) & 0.990 (0.009) & 0.866 (0.149) \\
 & 2500 & 0.999 (0.001) & 0.990 (0.020) & 0.952 (0.014) & 0.576 (0.071) \\ \hline
\end{tabular}
\end{table}
Table 2: Averages and standard deviations (in parentheses) for the number of clusters and adjusted Rand index with \(\mathbf{s}_{0}\) on 100 replications from a mixture of bivariate skew Gaussian kernels.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
 & \(n\) & FOLD & VI & Binder’s & Mclust \\ \hline
No. of Clusters & 100 & 3.25 (0.500) & 3.89 (1.014) & 12.62 (4.292) & 3.56 (0.925) \\
 & 500 & 3.23 (0.601) & 3.64 (0.894) & 13.94 (4.552) & 4.17 (0.779) \\
 & 1000 & 3.10 (0.302) & 4.83 (1.025) & 15.73 (3.795) & 5.08 (0.442) \\
 & 2500 & 3.10 (0.302) & 4.58 (0.843) & 18.99 (2.634) & 6.20 (0.876) \\ \hline
Adj. Rand Index & 100 & 0.982 (0.029) & 0.974 (0.026) & 0.800 (0.094) & 0.884 (0.177) \\
 & 500 & 0.995 (0.032) & 0.921 (0.136) & 0.799 (0.147) & 0.715 (0.159) \\
 & 1000 & 0.975 (0.085) & 0.691 (0.057) & 0.665 (0.037) & 0.574 (0.054) \\
 & 2500 & 0.967 (0.098) & 0.679 (0.012) & 0.639 (0.027) & 0.512 (0.056) \\ \hline
\end{tabular}
\end{table}
Table 3: Averages and standard deviations (in parentheses) for the number of clusters and adjusted Rand index with \(\mathbf{s}_{0}\) on 100 replications from the skew-symmetric mixture.

In the first scenario, we let \(f_{0}(x)=\sum_{k=1}^{3}\pi_{0k}\mathcal{N}_{2}(x;\mu_{0k},\Sigma_{0k})\), with weights \(\mathbf{\pi}_{0}=(0.45,0.25,0.3)\) and atoms \(\mu_{01}=(6.5,5)\), \(\Sigma_{01}=\mathbf{I}_{2}\), \(\mu_{02}=(0,0)\), \(\Sigma_{02}=\text{diag}(5,2)\), and \(\mu_{03}=(-5,-5)\), \(\Sigma_{03}=\text{diag}(3,1)\). As shown in Figure 11(a) in the Supplementary Material, the three kernels are well separated. Since the data generating process is a Gaussian mixture, we have access to the oracle FOLD clusters,
\[\mathbf{c}_{\text{FOLD}}^{\text{oracle}}=\underset{\widehat{\mathbf{c}}}{\text{argmin} }\ \sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\Delta_{ij}^{0}+\omega \mathbf{1}_{\hat{c}_{i}\neq\hat{c}_{j}}(1-\Delta_{ij}^{0})\right\}, \tag{29}\]
with \(\Delta_{ij}^{0}=\mathbb{E}_{\Pi}(\mathcal{H}_{ij}\mid\mathbf{X},\lambda_{0})\) and \(\omega=\omega^{\text{AVG}}\). Similar to the KL-oracle in misspecified regimes, we expect \(\mathbf{c}_{\text{FOLD}}\) and \(\mathbf{c}_{\text{FOLD}}^{\text{oracle}}\) to become increasingly similar as \(n\) grows due to posterior contraction of \(\lambda\) (Nguyen, 2013; Guha et al., 2021). The averages and standard deviations for the number of clusters and adjusted Rand index are given in Table 1, with boxplots of these benchmarks displayed in Figure 5. FOLD, oracle FOLD, and mclust achieve high adjusted Rand index with increasing sample size. Oracle FOLD recovers the true number of clusters exactly, and mclust and FOLD are nearly as accurate. Furthermore, the number of clusters and adjusted Rand index for \(\mathbf{c}_{\text{FOLD}}\) gradually approach the values produced by \(\mathbf{c}_{\text{FOLD}}^{\text{oracle}}\). The VI loss improves in the number of clusters as \(n\) increases, but Binder's loss falls short, consistently producing 20 clusters with diminishing variation across replications. VI and Binder's loss produce high adjusted Rand indices that improve with \(n\).
Figure 5: Comparison of the number of clusters and the adjusted Rand index with \(\mathbf{s}_{0}\) on 100 replications from a mixture of well-separated multivariate Gaussian kernels.

Next, we let \(f_{0}(x)=\sum_{k=1}^{3}\pi_{0k}\mathcal{S}\mathcal{N}_{2}(x;\mu_{0k},\Sigma_{0k},\psi_{0k})\), where \(\mathcal{S}\mathcal{N}_{2}(x;\cdot,\cdot,\cdot)\) denotes the PDF of a bivariate skew Gaussian distribution (Azzalini and Valle, 1996; Azzalini and Capitanio, 1999) and \(\mathbf{\pi}_{0}=(0.45,0.25,0.3)\). These kernels have the same location and scale parameters as the bivariate Gaussian mixture in simulation case 1, and skewness parameters \(\psi_{01}=(1,1)\), \(\psi_{02}=(-10,15)\), and \(\psi_{03}=(4,-17)\). Figure 11(b) in the Supplementary Material shows a contour plot of \(f_{0}\). Results across the replications are reported in Figure 6 and Table 2. For all values of \(n\), FOLD attains high accuracy with a small number of clusters. As \(n\) increases, all four clustering methods achieve high levels of the adjusted Rand index, though FOLD results in values close to \(1\) while the VI loss and Binder's loss decline slightly. FOLD tends to produce between \(3\) and \(4\) clusters for each \(n\) and reports less than or equal to the number of clusters induced by the VI loss in \(94.5\%\) of all instances.
In the final simulation case, we take \(f_{0}\) to be a bivariate mixture of three kernels with weights \(\mathbf{\pi}_{0}=(0.55,0.3,0.15)\). \(\tau_{01}\) is a mixture of one skew Gaussian kernel and two Gaussian kernels, \(\tau_{02}\) is a skew Gaussian kernel, and \(\tau_{03}\) is a Gaussian kernel. We refer to \(f_{0}\) in this case as a skew-symmetric mixture. As can be seen in Figure 11(c) in the Supplementary, \(\tau_{01}\) is a non-Gaussian, multimodal density. Similarly, data generated from \(\tau_{02}\), the oblong kernel in the lower third of the sample space, will most likely be approximated by multiple components, despite only constituting one cluster. The results of the simulations are summarized in Table 3 and Figure 7. FOLD generally allocates observations to \(3\) clusters, but the VI loss favours between \(3\) and \(6\) clusters. In \(96.5\%\) of the replicates, the number of clusters for FOLD is less than or equal to the number for VI. As in simulation case \(2\), Binder's loss and mclust frequently return a larger number of clusters than the truth for larger \(n\). The adjusted Rand index between \(\mathbf{c}_{\text{FOLD}}\) and \(\mathbf{s}_{0}\) is close to \(1\) across all sample sizes. In contrast, the three other methods achieve high adjusted Rand index for small \(n\), but these values sharply drop for \(n\geq 1000\). This is usually the result of splitting \(\tau_{01}\) or \(\tau_{02}\) into multiple clusters.
Figure 6: Comparison of the number of clusters and the adjusted Rand index with \(\mathbf{s}_{0}\) on \(100\) replications from a mixture of multivariate skew Gaussian distributions.

## 5 Example: GSE81861 Cell Line Dataset

We apply FOLD to the GSE81861 cell line dataset (Li et al., 2017), which measures single cell counts in \(57,241\) genes from \(630\) single-cell transcriptomes. There are \(7\) distinct cell lines present in the dataset, so we compare FOLD and other model-based clustering methods to the true cell line labels as a performance benchmark. We first discard cells with low read counts, giving \(n=519\) total cells. We then normalize the data using SCRAN (Lun et al., 2016) and select informative genes with M3Drop (Andrews and Hemberg, 2019). We use principal component analysis (PCA) for dimension reduction by taking \(\mathbf{X}\) to be the projection of the normalized cell counts onto the first \(d=5\) principal components, then scale the projections to have zero mean and unit variance.
We fit a \(d\)-dimensional Bayesian location-scale Gaussian mixture to \(\mathbf{X}\) with \(L=50\) components, Dirichlet concentration parameter \(\alpha=1/2\), and a Normal-Inverse-Wishart prior with parameters \(\mu=0_{d}\), \(\kappa=1\), \(\nu=d+2\), and \(\Psi=\mathbf{I}_{d}\). We generate \(25,000\) posterior samples after a burn-in of \(1,000\). We thin the chain, retaining every fourth iteration, and estimate \(\Delta\) with the remaining \(6,000\) samples. We choose \(\mathbf{c}_{\text{FOLD}}\) using an elbow plot, and calculate clusterings with the VI loss and mclust. The clusterings of both FOLD and the VI are computed using the same MCMC samples.
For FOLD, we create candidate clusterings by average-linkage clustering on \(\Delta\), then choose \(\omega\) by consulting the elbow plot in Figure 8. The plot suggests 6 clusters, which corresponds to \(\omega=25\). As in our simulations, we choose the number of clusters in mclust using BIC. Figure 9 shows the UMAP plots (McInnes et al., 2018) of the original normalized count data along with colours indicating the true cell types and the clusterings from the three methods. The adjusted Rand indices with the true cell types for \(\mathbf{c}_{\text{FOLD}}\), the VI clusters, and the mclust clusters are \(0.995\), \(0.915\), and \(0.854\), respectively. Both the VI loss and mclust split the GM12878 cell type into two clusters, though \(\mathbf{c}_{\text{FOLD}}\) keeps that type as one cluster. Similarly, the H1 cell type is split into multiple clusters by both the VI loss and mclust, but kept as a single cluster by \(\mathbf{c}_{\text{FOLD}}\). Cluster splitting of the GM12878 and H1 types is expected with this dataset, as these cell types were each sequenced in two separate batches (Li et al., 2017). However, \(\mathbf{c}_{\text{FOLD}}\) underestimates the number of groups by combining the H1437 and IMR90 cell types. The other methods merge these types as well, with the VI loss producing 9 clusters and mclust giving 7 clusters.
Figure 7: Comparison of the number of clusters and the adjusted Rand index with \(\mathbf{s}_{0}\) on \(100\) replications from the skew-symmetric mixture.

Figure 8: Elbow plot for choosing the number of clusters in \(\mathbf{c}_{\text{FOLD}}\) with the cell line dataset. We choose 6 clusters, corresponding to \(\omega=25\).

Figure 9: UMAP plots of the cell line dataset with colours corresponding to the true cell types, \(\mathbf{c}_{\text{FOLD}}\), VI, and mclust, respectively.

We express uncertainty in \(\mathbf{c}_{\text{FOLD}}\) with a 95% credible ball using \(D(\cdot,\cdot)\) as the VI, and display the associated bounds in Figure 10. The credible ball communicates that there is substantial uncertainty in cell types where batch effects occur. The horizontal and vertical upper bounds effectively merge the GM12878 type with the H1437 cell type, which is likely due to the proximity of these two types in the sample space. Conversely, the vertical lower bound splits the GM12878 cell type into two clusters. We are also uncertain in our classification of the H1 cell type, which is similarly split into multiple clusters by the vertical lower bound because of separate batching. Interestingly, in all bounds, the H1437 and IMR90 cell types are allocated to the same cluster. This type merging could be explained by the fact that IMR90 consists of a small number of cells or that both types are isolated from the lung (Li et al., 2017).

Figure 10: The clustering \(\mathbf{c}_{\text{FOLD}}\) along with the horizontal, vertical lower, and vertical upper bounds for the cell line dataset. Here, \(D(\cdot,\cdot)\) is the VI.
## 6 Discussion
Fusing of Localized Densities (FOLD) is a Bayesian method for cluster analysis that characterizes clusters as possibly containing multiple mixture components. We first detailed the decision theoretic justification behind our approach, in which we obtain a point estimate of the clustering by minimizing a novel loss function. Our loss function has several appealing properties, such as favouring the merging of overlapping kernels, simplification to Binder's loss when all the mixture components are well separated, and invariance to permutations of the labels. Uncertainty in cluster allocations is expressed with a credible ball and posterior similarity matrix. We have given concrete guidance on tuning the loss parameter, including an elbow plot method and a default value that performs well across simulated examples.
Throughout the article, we have primarily focused on the Gaussian mixture model because of its ubiquity in the literature and useful theoretical properties. However, FOLD can be applied to any parametric mixture in which the Hellinger distance between kernels is simple to compute, which includes the beta, exponential, gamma, and Weibull families. A near-identical approach can be applied to discrete families, where localized mass functions would replace the role of the localized densities.
Building on asymptotic theory from Guha et al. (2021), we have shown that our clustering point estimate is equivalent to a KL-oracle in the large sample limit. Our proof relied on consistency of the mixture parameters at the KL-minimizer (Guha et al., 2021). For an overfitted misspecified mixture, a subset of the atoms will concentrate near the atoms of the KL-minimizer, and the remaining atoms will have diminishing weights. This suggests that some of the atoms can be merged together, while other atoms can be discarded.
Though we focused on the Hellinger distance because of its boundedness and simple closed form in many cases, other distribution metrics, such as the total variation distance and the Wasserstein metric, could be used instead. Clusters can be created by treating \(\Delta\) as a distance matrix and applying k-medoids, hierarchical clustering, or other distance-based methods. An even simpler variant is a distance matrix of the localized atoms \(\theta_{s_{i}}\) under an appropriate metric. However, the boundedness of the Hellinger distance is crucial for our loss function, and so deriving the form of a coherent loss for these variants provides an intriguing avenue for future research.
In our theory, we required that the fitted mixture model has a finite number of components, satisfies \(L>K^{*}\), and has kernels belonging to a family that is second-order identifiable. In particular, the latter assumption includes location Gaussian mixtures, but excludes location-scale Gaussian mixtures (Ho and Nguyen, 2016), though the location-scale family is first-order identifiable. Furthermore, modeling the data with an infinite mixture, in which underfitting the number of components has non-zero mass a priori and the KL-minimizer may have infinite components, would markedly impact the concentration of the entries of \(\Delta\). An interesting extension of our results is examining the asymptotic behaviour of FOLD for first-order identifiable families and infinite mixtures.
## Appendix A Basic Properties of FOLD
### Proof of Proposition 1
Proof.: We proceed as in Dahl et al. (2022) and decompose the loss function into separate terms that relate to the contingency table between \(\widehat{\mathbf{c}}\) and \(\mathbf{s}\). For any \(m,l\in\{1,\ldots,L\}\), let \(\eta_{ml}=h(g(\theta_{m}),g(\theta_{l}))\). For groups \(\hat{C}_{k}\) and \(S_{l}\), set \(n_{k\cdot}=|\hat{C}_{k}|\), \(n_{\cdot l}=|S_{l}|\), and \(n_{kl}=|\hat{C}_{k}\cap S_{l}|\).
\[\mathcal{L}_{g}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})=\sum_{i<j} \left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\mathcal{H}_{ij}+\omega\mathbf{1}_{ \hat{c}_{i}\neq\hat{c}_{j}}\left(1-\mathcal{H}_{ij}\right)\right\}\] \[=\sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\left(1- \left(1-\mathcal{H}_{ij}\right)\right)+\omega\left(1-\mathbf{1}_{\hat{c}_{i}= \hat{c}_{j}}\right)\left(1-\mathcal{H}_{ij}\right)\right\}\] \[=\sum_{i<j}\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}+\omega\sum_{i<j} \left(1-\mathcal{H}_{ij}\right)-\left(1+\omega\right)\sum_{i<j}\mathbf{1}_{ \hat{c}_{i}=\hat{c}_{j}}\left(1-\mathcal{H}_{ij}\right)\] \[=\sum_{k=1}^{\hat{K}}\binom{n_{k\cdot}}{2}+\omega\sum_{l=1}^{L} \binom{n_{\cdot l}}{2}+\omega\sum_{m<l}n_{\cdot m}n_{\cdot l}\left(1-\eta_{ml }\right)\] \[-(1+\omega)\sum_{k=1}^{\hat{K}}\sum_{l=1}^{L}\binom{n_{kl}}{2 }-(1+\omega)\sum_{k=1}^{\hat{K}}\sum_{m<l}n_{km}n_{kl}\left(1-\eta_{ml} \right).\]
Consequently, this shows that the loss is related to Binder's loss with unit costs \(a=1\) and \(b=\omega\),

\[\mathcal{L}_{g}(\widehat{\mathbf{c}},\mathbf{s};\mathbf{\theta})=\mathcal{L}_{ \mathrm{B}}(\widehat{\mathbf{c}},\mathbf{s})+\mathcal{B}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s})\]
where
\[\mathcal{B}_{\mathcal{G}}(\widehat{\mathbf{c}},\mathbf{s}) =\omega\sum_{m<l}n_{\cdot m}n_{\cdot l}\left(1-\eta_{ml}\right) \tag{30}\] \[-(1+\omega)\sum_{k=1}^{\hat{K}}\sum_{m<l}n_{km}n_{kl}\left(1-\eta _{ml}\right). \tag{31}\]
It is clear that
\[\mathcal{B}_{\mathcal{G}}(\mathbf{s},\mathbf{s})=\omega\sum_{m<l}n_{\cdot m }n_{\cdot l}\left(1-\eta_{ml}\right) \tag{32}\]
which is equal to zero if and only if \(\eta_{ml}=1\) for all \(m,l\) pairs.
### Proof of Proposition 2
Proof.: We proceed by showing that the bounded metric properties of the Hellinger distance are preserved by taking the posterior expectation.
1. For each \(i\leq j\), \(0\leq h(g(\theta_{s_{i}}),g(\theta_{s_{j}}))\leq 1\) almost surely, and so \(0\leq\Delta_{ij}\leq 1\).
2. We next verify the metric axioms on the entries of \(\Delta\): (a) for each \(i=1,\ldots,n\), \(h(g(\theta_{s_{i}}),g(\theta_{s_{i}}))=0\) almost surely, and so \(\Delta_{ii}=0\); (b) \(\Delta_{ij}=\Delta_{ji}\) since the Hellinger distance is symmetric and \((\theta_{s_{1}},\ldots,\theta_{s_{n}})\) are exchangeable conditional on \(\mathbf{X}\); (c) \(\Delta_{ih}\leq\Delta_{ij}+\Delta_{jh}\) since the triangle inequality for the Hellinger distance holds almost surely for \(\theta_{s_{i}},\theta_{s_{j}},\theta_{s_{h}}\) conditional on \(\mathbf{X}\).
3. Observe that \[\Delta_{ij} =\mathbb{E}_{\Pi}[\mathbb{E}_{\Pi}[h(g(\theta_{s_{i}}),g(\theta_{ s_{j}}))\mid\lambda,\mathbf{X}]\mid\mathbf{X}]\] \[\qquad=\int_{\lambda}\sum_{m<l}\eta_{ml}^{(\lambda)}q_{ij}^{( \lambda)ml}\,d\Pi(\lambda\mid\mathbf{X}),\] where \[q_{ij}^{(\lambda)ml} =\Pi(\{s_{i}=l,s_{j}=m\}\cup\{s_{i}=m,s_{j}=l\}\mid\lambda,\mathbf{X}),\] \[\eta_{ml}^{(\lambda)} =h(g(\theta_{m}^{(\lambda)}),g(\theta_{l}^{(\lambda)})),\] for all \(1\leq m<l\leq L\). Since \(\eta_{ml}^{(\lambda)}\leq 1\), \[\Delta_{ij} \leq\int_{\lambda}\sum_{m<l}q_{ij}^{(\lambda)ml}d\Pi(\lambda\mid \mathbf{X})=\Pi(s_{i}\neq s_{j}\mid\mathbf{X}).\]
## Appendix B Supplementary Material
### Preliminaries for Asymptotic Results
Our proofs build on the mixing measure concentration results in Nguyen (2013), Ho and Nguyen (2016), and Guha et al. (2021). We refer the reader to these papers for more information on many of the concepts and conditions we will discuss below. Our general strategy is as follows. First, we extrapolate convergence in the mixing measure to convergence of the weights and atoms. We then translate these convergence rates to the entries of \(\Delta\) by exploiting the dependence of \(\theta_{s_{i}}\) on \(\lambda\).
#### B.1.1 Additional Notation
Given \(x=(x_{1},\ldots,x_{d})^{T}\in\mathbb{R}^{d}\), let \(\|x\|=\sqrt{\sum_{s=1}^{d}x_{s}^{2}}\). We will only consider mixtures having identifiable mixing measures. That is, if \(f^{(\lambda_{1})}(x)=f^{(\lambda_{2})}(x)\) for almost all \(x\), then \(\lambda_{1}=\lambda_{2}\). This identifiability condition is satisfied by Gaussian mixtures, along with several exponential family mixtures (Barndorff-Nielsen, 1965) and location family mixtures (Teicher, 1961). Any parameters and quantities related to a mixture will be denoted in expanded notation to highlight the dependence on the mixing measure. For any \(L\in\mathbb{N}\), let
\[\Lambda_{L}(\Theta)=\left\{\lambda=\sum_{l=1}^{L}a_{l}^{(\lambda)}\delta_{ \theta_{l}^{(\lambda)}}:\theta_{l}^{(\lambda)}\in\Theta,a_{l}^{(\lambda)}>0 \;\forall l,\sum_{l=1}^{L}a_{l}^{(\lambda)}=1\right\},\]
and, for any \(\lambda\in\Lambda_{L}(\Theta)\), let \(f^{(\lambda)}=\sum_{l=1}^{L}a_{l}^{(\lambda)}g(\theta_{l}^{(\lambda)})\). We will also denote \(\Omega_{L}(\Theta)=\bigcup_{l=1}^{L}\Lambda_{l}(\Theta)\), which is the set of mixing measures with at most \(L\) distinct components. For \(p\geq 1\), the \(p\)-th order Wasserstein distance between the mixing measures \(\lambda=\sum_{l=1}^{L}a_{l}^{(\lambda)}\delta_{\theta_{l}^{(\lambda)}}\) and \(\tilde{\lambda}=\sum_{m=1}^{\tilde{L}}\tilde{a}_{m}^{(\tilde{\lambda})}\delta_{\tilde {\theta}_{m}^{(\tilde{\lambda})}}\) is
\[W_{p}(\lambda,\tilde{\lambda})=\inf_{\mathbf{b}\in\mathcal{C}(\mathbf{a}^{(\lambda)}, \tilde{\mathbf{a}}^{(\tilde{\lambda})})}\left(\sum_{l,m}b_{lm}\left\|\theta_{l}^{( \lambda)}-\tilde{\theta}_{m}^{(\tilde{\lambda})}\right\|^{p}\right)^{1/p},\]
where \(\mathcal{C}(\mathbf{a}^{(\lambda)},\tilde{\mathbf{a}}^{(\tilde{\lambda})})\) is the set of all couplings with marginal distributions \(\mathbf{a}^{(\lambda)}\) and \(\tilde{\mathbf{a}}^{(\tilde{\lambda})}\). That is, for all \(l=1,\ldots,L\), \(\sum_{m=1}^{\tilde{L}}b_{lm}=a_{l}^{(\lambda)}\) and, for all \(m=1,\ldots,\tilde{L}\), \(\sum_{l=1}^{L}b_{lm}=\tilde{a}_{m}^{(\tilde{\lambda})}\).
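For intuition, \(W_{p}\) between two finitely supported mixing measures is a small linear program over couplings; the sketch below solves it with SciPy (in practice a dedicated optimal-transport library would be preferable, and the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_mixing(a, theta, a_t, theta_t, p=2):
    # p-Wasserstein distance between sum_l a_l * delta_{theta_l} and
    # sum_m a_t_m * delta_{theta_t_m}, as an optimal-transport LP over
    # couplings b with the given marginals.
    L, M = len(a), len(a_t)
    cost = np.linalg.norm(theta[:, None, :] - theta_t[None, :, :], axis=-1) ** p
    A_eq, b_eq = [], []
    for l in range(L):                      # row sums: sum_m b_lm = a_l
        row = np.zeros((L, M)); row[l, :] = 1
        A_eq.append(row.ravel()); b_eq.append(a[l])
    for m in range(M):                      # column sums: sum_l b_lm = a_t_m
        col = np.zeros((L, M)); col[:, m] = 1
        A_eq.append(col.ravel()); b_eq.append(a_t[m])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun ** (1.0 / p)
```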
Recall that \(\mathbb{P}_{0}\) refers to probability statements with respect to the true data generating process, with \(\mathbb{P}_{0}(A)=\int_{A}f_{0}(x)dx\). Positive constants that do not depend on \(n\) will typically be denoted by \(M\). For any sequences \(y_{n}\) and \(z_{n}\), \(y_{n}\lesssim z_{n}\) if there exist an \(N\in\mathbb{N}\) and a constant \(B>0\) so that \(|y_{n}|\leq B|z_{n}|\) for all \(n\geq N\); \(y_{n}\asymp z_{n}\) if \(y_{n}\lesssim z_{n}\) and \(z_{n}\lesssim y_{n}\); and \(y_{n}\sim z_{n}\) if \(y_{n}/z_{n}\to 1\). A random variable \(Z_{n}=o_{\mathbb{P}_{0}}(1)\) if \(Z_{n}\to 0\) in \(\mathbb{P}_{0}\)-probability.
### Strong Identifiability and Lipschitz Conditions
As stated in Section 3, Wasserstein contraction of \(\lambda\) will require that \(\mathcal{G}\) is strongly identifiable, referring to a linear independence condition on the derivatives of \(g(x;\theta)\). We formally define
these concepts below. Recall that \(\Theta\subset\mathbb{R}^{d}\) is assumed to be compact and \(f^{(\lambda)}\) to be identifiable in \(\lambda\). In these definitions, \(\nabla_{g}(\theta)\) and \(\mathbf{H}_{g}(\theta)\) denote the gradient and Hessian of \(g(x;\theta)\) with respect to \(\theta\).
**Definition B.1**.: The family \(\mathcal{G}=\{g(\theta):\theta\in\Theta\}\) is first order identifiable if \(g(x;\theta)\) is differentiable in \(\theta\) and, given \(r\) atoms \(\theta_{1},\ldots,\theta_{r}\in\Theta\), if there exist \(\beta_{1},\ldots,\beta_{r}\in\mathbb{R}\) and \(\rho_{1},\ldots,\rho_{r}\in\mathbb{R}^{d}\) so that for all \(x\)
\[\sum_{s=1}^{r}\big{\{}\beta_{s}g(x;\theta_{s})+\rho_{s}^{T}\nabla_{g}(\theta_ {s})\big{\}}=0, \tag{33}\]
then \(\beta_{s}=0\) and \(\rho_{s}=\mathbf{0}\) for all \(s=1,\ldots,r\).
**Definition B.2**.: The family \(\mathcal{G}=\{g(\theta):\theta\in\Theta\}\) is second order identifiable if \(g(x;\theta)\) is twice differentiable in \(\theta\) and, given \(r\) atoms \(\theta_{1},\ldots,\theta_{r}\in\Theta\), if there exist \(\beta_{1},\ldots,\beta_{r}\in\mathbb{R}\), \(\rho_{1},\ldots,\rho_{r}\in\mathbb{R}^{d}\), and \(\nu_{1},\ldots,\nu_{r}\in\mathbb{R}^{d}\) so that for all \(x\)
\[\sum_{s=1}^{r}\big{\{}\beta_{s}g(x;\theta_{s})+\rho_{s}^{T}\nabla_{g}(\theta_ {s})+\nu_{s}^{T}\mathbf{H}_{g}(\theta_{s})\nu_{s}\big{\}}=0, \tag{34}\]
then \(\beta_{s}=0\) and \(\rho_{s}=\nu_{s}=\mathbf{0}\) for all \(s=1,\ldots,r\).
It is clear from the two definitions that second-order identifiability implies first-order identifiability. These notions are commonly used in the literature because they bridge the connection between convergence from \(f\) to \(f_{0}\) (or \(f^{*}\)) in the Hellinger distance and convergence from \(\lambda\) to \(\lambda_{0}\) (or \(\lambda^{*}\)) in the Wasserstein distance. This will be useful for showing concentration of the weights and atoms, as they are straightforward to extract from convergence of the mixing measure. A canonical example of a second-order identifiable family is the set of Gaussian location kernels with a known covariance matrix \(\Sigma\), i.e. \(\mathcal{G}=\{\mathcal{N}_{d}(\theta,\Sigma):\theta\in\Theta\}\).
Recall that we also impose Lipschitz conditions on the derivatives of \(g(x;\theta)\). The strongest condition we make is the second order integral Lipschitz property, as introduced in Guha et al. (2021).
**Definition B.3**.: \(\mathcal{G}\) satisfies the integral Lipschitz property of the second order with respect to the mixing measures \(\lambda_{0}\) and \(\lambda_{*}\) if \(g(x;\theta)\) is twice differentiable in \(\theta\) and for all \(x\in\chi\),
\[|\zeta^{T}(\mathbf{H}_{g}(\theta_{1})-\mathbf{H}_{g}(\theta_{2}))\zeta|\leq M (x)\left\|\theta_{1}-\theta_{2}\right\|^{\xi}\left\|\zeta\right\|^{2} \tag{35}\]
for any \(\zeta\in\mathbb{R}^{d}\) and for some \(\xi>0\) that is independent of \(x\) and \(\theta_{1},\theta_{2}\), where \(M(x)\) is a function of \(x\) so that \(\mathbb{E}_{0}[M(X)/f^{*}(X)]<\infty\).
Note that this is not only a condition on \(\mathcal{G}\), but also on \(\lambda_{0}\) and \(f_{0}\). The inequality in (35) is effectively a smoothness condition on the Hessian of \(g(x;\theta)\), where the Lipschitz constant is a function of \(x\). We will need to assume that the integral Lipschitz property holds in order to invoke concentration of the mixing measure for the misspecified regime, which is instrumental for proving Proposition 3. Related to the integral Lipschitz property is the uniform Lipschitz property, which was introduced in Ho and Nguyen (2016).
**Definition B.4**.: \(\mathcal{G}\) satisfies the uniform Lipschitz property of the second order if \(g(x;\theta)\) is twice differentiable in \(\theta\) and for all \(x\in\chi\),
\[|\zeta^{T}(\mathbf{H}_{g}(\theta_{1})-\mathbf{H}_{g}(\theta_{2}))\zeta|\leq M \left\|\theta_{1}-\theta_{2}\right\|^{\xi}\left\|\zeta\right\|^{2} \tag{36}\]
for any \(\zeta\in\mathbb{R}^{d}\) and for some \(\xi,M>0\) that are independent of \(x\) and \(\theta_{1},\theta_{2}\).
The uniform Lipschitz property will be useful when we consider the well-specified regime. As in (35), it is a Lipschitz condition on the Hessian of \(g(x;\theta)\). The uniform Lipschitz property of the second order is satisfied by the location-scale Gaussian family and the location-scale Student's t family (Ho et al., 2020).
Lipschitz conditions on the second derivatives of a probability density function are frequently assumed in the literature on mixing measure concentration. See, for example, Lemma 2 in Chen (1995) and Theorem 1 in Nguyen (2013). While it is difficult to characterize which choices of \(f_{0}\) and \(\mathcal{G}\) admit the integral Lipschitz property, the property will hold if \(\mathcal{G}\) is uniformly Lipschitz and \(\int_{\mathbb{R}^{d}}\frac{f_{0}(x)}{f^{*}(x)}dx<\infty\). Hence, for uniformly Lipschitz families, such as location Gaussians, the integral Lipschitz property can be reformulated as a tail condition on \(f^{*}\) and \(f_{0}\).
### Conditions and Wasserstein Concentration
For the overfitted and misspecified case, we require the following conditions.
**Assumption 1**.: Suppose that the following conditions are satisfied by \(\mathcal{G}\), \(\Pi\), \(f_{0}\), and \(\lambda_{0}\).
(A1): The KL minimizer \(\lambda^{*}\) exists and is unique, with \(K^{*}<L<\infty\).

(A2): The family \(\mathcal{G}\) is second-order identifiable and admits the integral Lipschitz property up to the second order. Additionally, for any \(x\in\mathbb{R}^{d}\), \(g(x;\theta)\) is continuous in \(\theta\), and there exists a constant \(M_{0}>0\) so that \(\mathbb{P}_{0}\left(\max_{\theta\in\Theta}g(X;\theta)\leq M_{0}\right)=1\). For any \(\theta^{\prime}\in\Theta\), \(h(g(\theta),g(\theta^{\prime}))\) is continuous at \(\theta=\theta^{\prime}\).

(A3): There exists an \(\epsilon>0\) so that \(\mathbb{E}_{0}\left\{f^{*}(X)/f^{(\lambda)}(X)\right\}\leq R^{*}(\epsilon)\) when \(W_{1}(\lambda,\lambda_{*})\leq\epsilon\) for any \(\lambda\in\Omega_{K^{*}}(\Theta)\), where \(R^{*}(\epsilon)\) is a function of \(\epsilon\) that only depends on \(\lambda_{*}\), \(\lambda_{0}\), and \(\Theta\).

(A4): \(\Pi(\mathbf{a})\) is a symmetric Dirichlet prior, with concentration parameter \(\alpha<1\). \(\Pi(\theta)\) is absolutely continuous with respect to the Lebesgue measure with probability density function \(p(\theta)\), which satisfies \(\int_{\Theta}p(\theta)d\theta=1\) and \(\min_{\theta\in\Theta}p(\theta)>0\).

(A5): \(\mathcal{G}=\left\{\tilde{g}(x-\theta):\theta\in\Theta\right\}\) for some probability density function \(\tilde{g}\). Also, there exists a \(\zeta>0\) so that \(\tilde{g}(z)\) is \(\zeta\)-Hölder continuous.

(A6): There exists a constant \(\epsilon^{*}>0\) so that \(\mathbb{P}_{0}(f^{*}(X)\geq\epsilon^{*})=1\).
Condition (A2) is satisfied by location Gaussian mixtures when \(\int_{\mathbb{R}^{d}}\frac{f_{0}(x)}{f^{*}(x)}dx<\infty\). Condition (A3) can be interpreted as a continuity condition on \(\lambda\) for convergence to \(f^{*}\) while averaging over \(f_{0}\). Condition (A4) is a standard condition that is often imposed in Bayesian finite mixtures. Similar to Rousseau and Mengersen (2011), we also have a threshold on \(\alpha\) that determines the asymptotic behaviour of the weights; in contrast, our threshold is independent of dimension. Condition (A6) essentially requires that \(f^{*}\) is a good approximation and that \(f_{0}\) has bounded support, which will be useful for several of the bounds we show. In fact, (A6) is generally not needed, as \(\mathbb{P}_{0}(f^{*}(X)\geq\epsilon_{n})\to 1\) for any \(\epsilon_{n}\to 0\) by the dominated convergence theorem. This would change Theorem 1 to a high-probability statement, rather than a probability-one statement, and lead to a slightly different form of the remainder in \(\Delta_{ij}^{*+}\) (e.g. see Theorem 4). We next state a general result from Guha et al. (2021) on Wasserstein convergence of the mixing measure under these conditions.
**Theorem 2**.: Let (A1)-(A4) hold. Then,
\[\Pi\left\{\lambda\in\Lambda_{L}(\Theta):W_{2}(\lambda,\lambda^{*})\underset{ \sim}{>}(\log n/n)^{1/4}\mid\mathbf{X}\right\}\xrightarrow{\mathbb{P}_{0}}0. \tag{37}\]
Proof.: The proof is nearly identical to the proof of Theorem 4.3 from Guha et al. (2021), in which the conditions of Theorem C.2 from that article are verified. For the sake of brevity and completeness, we do not redefine all the notation and concepts from their paper, but instead outline the modification of their proof to our case. We use the sieve \(G_{n}=\Lambda_{L}(\Theta)\), and let \(\epsilon_{n}\) denote a sequence such that \(\epsilon_{n}\to 0\) and \(\frac{n}{(\log(1/\epsilon_{n}))^{2}}\) is bounded away from \(0\). Conditions (A2) and (A3) allow us to lower bound the prior probability of a generalized Kullback-Leibler neighbourhood of \(\lambda^{*}\) with \(\Pi(\lambda\in\Lambda_{L}(\Theta):W_{1}(\lambda,\lambda^{*})\underset{\sim}{<}\epsilon_{n}^{2})\). Under condition (A4), we invoke the proof of Theorem 3.1 in Guha et al. (2021) and the tail bounds on the Dirichlet distribution given in Lemma G.13 of Ghosal and Van der Vaart (2017) to show that
\[\Pi\left\{\lambda\in\Lambda_{L}(\Theta):W_{1}(\lambda,\lambda^{*})\underset{ \sim}{<}\epsilon_{n}^{2}\right\}\underset{\sim}{>}\epsilon_{n}^{2(c_{\Pi}+L \alpha)} \tag{38}\]
where \(c_{\Pi}>0\) depends on the prior \(p(\theta)\). Finally, for the Hellinger information of \(G_{n}\), \(\bar{\Psi}_{G_{n}}(r)\geq\bar{\Psi}_{\Omega_{L}(\Theta)}(r)\underset{\sim}{ >}r^{4}\), implying that many of the derivations shown in the proof of Theorem 4.3 of Guha et al. (2021) apply to our case as well. In particular, we can verify Step 1 by setting \(\epsilon_{n}=1/n\) and \(M_{n}=An^{3/4}(\log n)^{1/4}\), where \(A>0\) is a sufficiently large constant. Step 2 naturally follows from our choice of sieves, as \(\Pi(G_{n})=1\). Finally, using condition (A3) and the fact that \(\bar{\Psi}_{G_{n}}(r)\underset{\sim}{>}r^{4}\), Step 3 is also verified by the proof of Theorem 4.3 from Guha et al. (2021).
### Proof of Proposition 3
Proof.: We show concentration of the weights and atoms using a technique from Nguyen (2013). Let \(\epsilon_{n}:=(\log n/n)^{1/4}\), \(0<\delta<1\), \(a_{\min}^{*}=\min_{h}a_{h}^{*}>0\), and fix \(1\leq h\leq K^{*}\).
\[M^{2}\epsilon_{n}^{2}>W_{2}^{2}(\lambda,\lambda_{*})\geq\sum_{l=1 }^{L}b_{lh}^{(\lambda)}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\] \[\geq a_{h}^{*}\min_{l}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{* }\right\|^{2}\geq a_{\min}^{*}\min_{l}\left\|\theta_{l}^{(\lambda)}-\theta_{h} ^{*}\right\|^{2}. \tag{39}\]
For large enough \(n\), \(a_{\min}^{*}\geq M^{2\delta}\epsilon_{n}^{2\delta}\), implying
\[W_{2}^{2(1-\delta)}(\lambda,\lambda_{*})\geq\min_{l}\left\|\theta_{l}^{(\lambda) }-\theta_{h}^{*}\right\|^{2}. \tag{40}\]
Next, let
\[\mathcal{I}^{(\lambda)}=\left\{l=1,\ldots,L:\exists h\in\{1,\ldots,K^{*}\}\text{ such that }\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\leq W_{2}^{2(1-\delta)}(\lambda,\lambda_{*})\right\}, \tag{41}\]
and
\[\mathcal{I}_{h}^{(\lambda)}=\left\{l=1,\ldots,L:\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\leq W_{2}^{2(1-\delta)}(\lambda,\lambda_{*})\right\}. \tag{42}\]
It follows that
\[W_{2}^{2}(\lambda,\lambda_{*}) \geq\sum_{l\not\in\mathcal{I}^{(\lambda)}}a_{l}^{(\lambda)}\min_ {h}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}>W_{2}^{2(1- \delta)}(\lambda,\lambda_{*})\sum_{l\not\in\mathcal{I}^{(\lambda)}}a_{l}^{( \lambda)} \tag{43}\] \[\implies\sum_{l\not\in\mathcal{I}^{(\lambda)}}a_{l}^{(\lambda)}< W_{2}^{2\delta}(\lambda,\lambda^{*})<M^{2\delta}\epsilon_{n}^{2\delta}. \tag{44}\]
Now, we will show concentration of the weights. For fixed \(h\),
\[M^{2}\epsilon_{n}^{2}>W_{2}^{2}(\lambda,\lambda_{*})\geq\sum_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}b_{lh}^{(\lambda)}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\geq\min_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\sum_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}b_{lh}^{(\lambda)}\] \[=\min_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\left(a_{h}^{*}-\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}b_{lh}^{(\lambda)}-\sum_{l\not\in\mathcal{I}^{(\lambda)}}b_{lh}^{(\lambda)}\right)\] \[\geq\min_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\left(a_{h}^{*}-\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}a_{l}^{(\lambda)}-\sum_{l\not\in\mathcal{I}^{(\lambda)}}b_{lh}^{(\lambda)}\right)\] \[\geq\min_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\left(\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}b_{lh}^{(\lambda)}-\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}a_{l}^{(\lambda)}\right)\] \[=-\min_{l\in\mathcal{I}^{(\lambda)}\backslash\mathcal{I}_{h}^{(\lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}\sum_{k\neq h}b_{lk}^{(\lambda)}.\]
For any \(k\neq h\), \(m\in\mathcal{I}_{k}^{(\lambda)}\), and \(l\in\mathcal{I}_{h}^{(\lambda)}\),
\[\min_{l\in\mathcal{I}^{(\lambda)}\setminus\mathcal{I}_{h}^{(\lambda )}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|\leq\left\|\theta_{m}^{ (\lambda)}-\theta_{h}^{*}\right\|\] \[\leq\left\|\theta_{m}^{(\lambda)}-\theta_{k}^{*}\right\|+\left\| \theta_{l}^{(\lambda)}-\theta_{k}^{*}\right\|+\left\|\theta_{l}^{(\lambda)}- \theta_{h}^{*}\right\|\leq M\epsilon_{n}+\left\|\theta_{l}^{(\lambda)}- \theta_{k}^{*}\right\|.\]
Since \(\Theta\) is compact, this shows that
\[-\min_{l\in\mathcal{I}^{(\lambda)}\setminus\mathcal{I}_{h}^{( \lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\sum_{l\in \mathcal{I}_{h}^{(\lambda)}}\sum_{k\neq h}b_{lk}^{(\lambda)}\] \[\geq-M\epsilon_{n}-\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}\sum_{ k\neq h}b_{lk}^{(\lambda)}\left\|\theta_{l}^{(\lambda)}-\theta_{k}^{*}\right\|^{2} \geq-M\epsilon_{n}-W_{2}(\lambda,\lambda_{*})^{2}.\]
Therefore, there is a positive constant \(M>0\) so that
\[\min_{l\in\mathcal{I}^{(\lambda)}\setminus\mathcal{I}_{h}^{(\lambda)}}\left\| \theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\left|a_{h}^{*}-\sum_{l\in \mathcal{I}_{h}^{(\lambda)}}a_{l}^{(\lambda)}-\sum_{l\not\in\mathcal{I}^{( \lambda)}}b_{lh}^{(\lambda)}\right|\leq M\epsilon_{n}. \tag{45}\]
Observe that there exists \(k\neq h\) and constant \(M^{\prime}>0\) so that
\[\min_{l\in\mathcal{I}^{(\lambda)}\setminus\mathcal{I}_{h}^{(\lambda)}}\left\|\theta_{l}^{(\lambda)}-\theta_{h}^{*}\right\|^{2}\geq\left\|\theta_{h}^{*}-\theta_{k}^{*}\right\|^{2}-M^{\prime}\epsilon_{n}. \tag{46}\]
The above derivation implies that there exists constants \(M,M^{\prime}\), and \(M^{\prime\prime}>0\) so that
\[\left|a_{h}^{*}-\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}a_{l}^{(\lambda)}-\sum_{l\not\in\mathcal{I}^{(\lambda)}}b_{lh}^{(\lambda)}\right| \leq\frac{M\epsilon_{n}}{\left\|\theta_{h}^{*}-\theta_{k}^{*}\right\|^{2}-M^{\prime}\epsilon_{n}} \tag{47}\] \[\implies\left|a_{h}^{*}-\sum_{l\in\mathcal{I}_{h}^{(\lambda)}}a_{l}^{(\lambda)}\right|\leq\nu_{n}:=\frac{M\epsilon_{n}}{\left\|\theta_{h}^{*}-\theta_{k}^{*}\right\|^{2}-M^{\prime}\epsilon_{n}}+(L-K^{*})M^{\prime\prime}\epsilon_{n}^{2\delta}\asymp\max(\epsilon_{n},\epsilon_{n}^{2\delta}). \tag{48}\]
Finally, let
\[\mathcal{S}_{n}=\bigcap_{h=1}^{K^{*}}\bigcup_{\emptyset\neq\mathcal{I}\subset[L]}\left\{\max_{l\in\mathcal{I}}\left\|\theta_{l}-\theta_{h}^{*}\right\|\underset{\sim}{<}\epsilon_{n},\left|\sum_{l\in\mathcal{I}}a_{l}-a_{h}^{*}\right|\underset{\sim}{<}\max(\epsilon_{n},\epsilon_{n}^{2\delta})\right\}, \tag{49}\] \[\mathcal{T}_{n}=\bigcup_{\begin{subarray}{c}\mathcal{I}\subset[L]\\ |\mathcal{I}|\leq R^{*}\end{subarray}}\left\{\sum_{l\in\mathcal{I}}a_{l}\underset{\sim}{<}\epsilon_{n}^{2\delta}\right\}. \tag{50}\]
We have shown that for all \(n\geq N\), \(\Pi(\mathcal{S}_{n}\cap\mathcal{T}_{n}\mid\boldsymbol{X})\geq\Pi(W_{2}( \lambda,\lambda^{*})<M\epsilon_{n}\mid\boldsymbol{X})\), and so, by Theorem 2, \(\Pi(\mathcal{S}_{n}\cap\mathcal{T}_{n}\mid\boldsymbol{X})\overset{\mathbb{P}_{ 0}}{\longrightarrow}1\). Proposition 3 follows by applying the dominated convergence theorem.
### Proof of Theorem 1
Proof.: Let \(\epsilon_{n}=(\log n/n)^{1/4}\). Under (A1)-(A4), Theorem 2 implies that for large enough \(n\),
\[\Delta_{ij}=\int_{B^{(n)}_{W_{2}}(\lambda_{*})}\sum_{m<l}\eta^{(\lambda)}_{ml}\;q ^{ml(\lambda)}_{ij}d\Pi(\lambda\mid\mathbf{X})+o_{\mathbb{P}_{0}}(1), \tag{51}\]
where \(B^{(n)}_{W_{2}}(\lambda_{*})=\{\lambda\in\Lambda_{L}(\Theta):W_{2}(\lambda,\lambda_{*})<M\epsilon_{n}\}\) for some constant \(M>0\). Observe that we can decompose the integrand in (51):
\[\sum_{m<l}\eta^{(\lambda)}_{ml}q^{ml(\lambda)}_{ij} \tag{52}\] \[=\sum_{l,m\in\mathcal{I}^{(\lambda)}}\eta^{(\lambda)}_{ml}q^{ml( \lambda)}_{ij}+\sum_{(l\not\in\mathcal{I}^{(\lambda)})\text{ or }(m\not\in\mathcal{I}^{( \lambda)})}\eta^{(\lambda)}_{ml}q^{ml(\lambda)}_{ij}\] (53) \[=\sum_{h=1}^{K^{*}}\sum_{l,m\in\mathcal{I}^{(\lambda)}_{h}}\eta^ {(\lambda)}_{ml}q^{ml(\lambda)}_{ij}+\sum_{h<k}\sum_{\begin{subarray}{c}l\in \mathcal{I}^{(\lambda)}_{h}\\ m\in\mathcal{I}^{(\lambda)}_{k}\end{subarray}}\eta^{(\lambda)}_{ml}q^{ml( \lambda)}_{ij}+\sum_{(l\not\in\mathcal{I}^{(\lambda)})\text{ or }(m\not\in \mathcal{I}^{(\lambda)})}\eta^{(\lambda)}_{ml}q^{ml(\lambda)}_{ij}. \tag{54}\]
For brevity, let \(H(\theta,\theta^{\prime}):=h(g(\theta),g(\theta^{\prime}))\) and \(\varphi^{*}_{h}(\theta):=H(\theta,\theta^{*}_{h})\). By (A2), Lemma 1, and Remark 1, for any \(1\leq h\leq K^{*}\),
\[\max_{l\in\mathcal{I}^{(\lambda)}_{h}}H(\theta^{(\lambda)}_{l},\theta^{*}_{h} )\leq\kappa_{nh}=\max_{\theta:\left\|\theta-\theta^{*}_{h}\right\|\leq M \epsilon_{n}}\varphi^{*}_{h}(\theta) \tag{55}\]
and \(\kappa_{nh}\to 0\). Let \(\kappa_{n}:=\max_{h=1,\ldots,K^{*}}\kappa_{nh}\). Then, for any \(l,m\in\mathcal{I}^{(\lambda)}_{h}\),
\[H(\theta^{(\lambda)}_{l},\theta^{(\lambda)}_{m})\leq 2\kappa_{n}. \tag{56}\]
and for any \(l\in\mathcal{I}^{(\lambda)}_{h},m\in\mathcal{I}^{(\lambda)}_{k}\),
\[H(\theta^{*}_{h},\theta^{*}_{k})-2\kappa_{n}\leq H(\theta^{(\lambda)}_{l}, \theta^{(\lambda)}_{m})\leq H(\theta^{*}_{h},\theta^{*}_{k})+2\kappa_{n}. \tag{57}\]
(A5) implies similar concentration for the densities at \(X_{i}\) and \(X_{j}\). For \(X\sim f_{0}\),
\[|g(X;\theta^{(\lambda)}_{l})-g(X;\theta^{*}_{h})|=|\tilde{g}(X-\theta^{(\lambda)}_{l})-\tilde{g}(X-\theta^{*}_{h})|\] \[\leq M\left\|(X-\theta^{(\lambda)}_{l})-(X-\theta^{*}_{h})\right\|^{\zeta}=M\left\|\theta^{(\lambda)}_{l}-\theta^{*}_{h}\right\|^{\zeta}\leq M\epsilon^{\zeta}_{n}.\]
Let \(\xi_{n}=M\epsilon^{\zeta}_{n}\). It follows that,
\[\bigg{|}\sum_{h=1}^{K^{*}}\sum_{l\in\mathcal{I}^{(\lambda)}_{h}}a^{(\lambda)}_ {l}g(X_{i};\theta^{(\lambda)}_{l})-f^{*}(X_{i})\bigg{|}\underset{\sim}{<}\max( \xi_{n},\nu_{n}) \tag{58}\]
where \(\nu_{n}\) is defined in (48). Condition (A6) implies that for large \(n\), the following holds with \(\mathbb{P}_{0}\)-probability 1,
\[0\leq f^{(n)}_{-}(X_{i})\leq f^{(\lambda)}(X_{i})\leq f^{(n)}_{+}(X_{i})+M \epsilon^{2\delta}_{n}, \tag{59}\]
where
\[f_{-}^{(n)}(X_{i}) =f^{*}(X_{i})-M\max(\xi_{n},\nu_{n}); \tag{60}\] \[f_{+}^{(n)}(X_{i}) =f^{*}(X_{i})+M\max(\xi_{n},\nu_{n}). \tag{61}\]
We can now begin to bound some of the terms in (54). For any \(m,l\not\in\mathcal{I}^{(\lambda)}\),
\[0\leq q_{ij}^{ml(\lambda)} \tag{62}\] \[=a_{l}^{(\lambda)}a_{m}^{(\lambda)}\frac{g(X_{i};\theta_{l}^{(\lambda)})g(X_{j};\theta_{m}^{(\lambda)})+g(X_{i};\theta_{m}^{(\lambda)})g(X_{j};\theta_{l}^{(\lambda)})}{f^{(\lambda)}(X_{i})f^{(\lambda)}(X_{j})}\] (63) \[\leq\frac{M\epsilon_{n}^{4\delta}}{f_{-}^{(n)}(X_{i})f_{-}^{(n)}(X_{j})}. \tag{64}\]
Similarly, for any \(l\not\in\mathcal{I}^{(\lambda)}\) and \(m\in\mathcal{I}^{(\lambda)}\),
\[0\leq q_{ij}^{ml(\lambda)}\leq\frac{M\epsilon_{n}^{2\delta}}{f_{-}^{(n)}(X_{i })f_{-}^{(n)}(X_{j})}. \tag{65}\]
Next, if \(l,m\in\mathcal{I}_{h}^{(\lambda)}\), by (56),
\[0\leq\eta_{ml}^{(\lambda)}q_{ij}^{ml(\lambda)}\leq 2\kappa_{n}. \tag{66}\]
For any \(h,k\),
\[\left|\begin{array}{c}\sum_{\begin{subarray}{c}l\in\mathcal{I}_{h}^{( \lambda)}\\ m\in\mathcal{I}_{k}^{(\lambda)}\end{subarray}}a_{l}^{(\lambda)}a_{m}^{( \lambda)}g(X_{i};\theta_{l}^{(\lambda)})g(X_{j};\theta_{m}^{(\lambda)})-a_{h}^ {*}a_{k}^{*}g(X_{i};\theta_{h}^{*})g(X_{j};\theta_{k}^{*})\right|\underset{ \sim}{\sim}\max(\xi_{n},\nu_{n}) \tag{67}\]
Therefore,
\[(\eta_{hk}^{*}-2\kappa_{n})R_{ij}^{hk} \leq\sum_{\begin{subarray}{c}l\in\mathcal{I}_{h}^{(\lambda)}\\ m\in\mathcal{I}_{k}^{(\lambda)}\end{subarray}}\eta_{ml}^{(\lambda)}q_{ij}^{ml (\lambda)}\leq(\eta_{hk}^{*}+2\kappa_{n})S_{ij}^{hk}; \tag{68}\] \[R_{ij}^{hk} =\frac{a_{h}^{*}a_{k}^{*}\left\{g(X_{i};\theta_{h}^{*})g(X_{j}; \theta_{k}^{*})+g(X_{j};\theta_{h}^{*})g(X_{i};\theta_{k}^{*})\right\}-M\max( \xi_{n},\nu_{n})}{(f_{+}^{(n)}(X_{i})+M\epsilon_{n}^{2\delta})(f_{+}^{(n)}(X_ {j})+M\epsilon_{n}^{2\delta})};\] (69) \[S_{ij}^{hk} =\frac{a_{h}^{*}a_{k}^{*}\left\{g(X_{i};\theta_{h}^{*})g(X_{j}; \theta_{k}^{*})+g(X_{j};\theta_{h}^{*})g(X_{i};\theta_{k}^{*})\right\}+M\max( \xi_{n},\nu_{n})}{f_{-}^{(n)}(X_{i})f_{-}^{(n)}(X_{j})}. \tag{70}\]
Finally, this gives us the following bound on (54) for all \(\lambda\in B_{W_{2}}^{(n)}(\lambda_{*})\),
\[\Delta_{ij}^{*-}\leq\sum_{m<l}\eta_{ml}^{(\lambda)}q_{ij}^{ml(\lambda)}\leq \Delta_{ij}^{*+}, \tag{71}\]
where
\[\Delta_{ij}^{*-}:=\sum_{h<k}(\eta_{hk}^{*}-2\kappa_{n})R_{ij}^{hk}; \tag{73}\]
\[\Delta_{ij}^{*+}:=K^{*}(R^{*}+1)R^{*}\kappa_{n}+\sum_{h<k}(\eta_{hk}^{*}+2\kappa _{n})S_{ij}^{hk}+\frac{M\left\{\binom{L}{2}-\binom{K^{*}}{2}\right\}\epsilon_{n }^{2\delta}}{f_{-}^{(n)}(X_{i})f_{-}^{(n)}(X_{j})}. \tag{74}\]
**Lemma 1**.: Let \(\chi\subset\mathbb{R}^{d}\) be compact and \(f:\chi\rightarrow\mathbb{R}\), where \(f\) is bounded on \(\chi\) and continuous at a point \(y\in\chi\). Let \(\delta_{n}\) be a sequence with \(\lim_{n\rightarrow\infty}\delta_{n}=0\) and \(B_{\delta_{n}}(y)=\{x\in\chi:\|x-y\|\leq\delta_{n}\}\). Then, there exists a sequence \(\epsilon_{n}\) that depends only on \(y\), with \(\lim_{n\rightarrow\infty}\epsilon_{n}=0\), so that for all \(x\in B_{\delta_{n}}(y)\),
\[|f(x)-f(y)|\leq\epsilon_{n}. \tag{75}\]
Proof.: Consider sequences of within-neighbourhood maximizers and minimizers of \(f\),
\[x_{n} =\underset{x\in B_{\delta_{n}}(y)}{\operatorname{argmax}}f(x); \tag{76}\] \[z_{n} =\underset{x\in B_{\delta_{n}}(y)}{\operatorname{argmin}}f(x). \tag{77}\]
In the case where there are multiple minimizers or maximizers, simply choose one of these points as \(x_{n}\) or \(z_{n}\). Since \(f\) is bounded and \(B_{\delta_{n}}(y)\) is a compact set, such extremizers exist with \(x_{n},z_{n}\in B_{\delta_{n}}(y)\). Because \(\delta_{n}\to 0\), it follows that \(x_{n}\to y\) and \(z_{n}\to y\), which implies \(f(x_{n})\to f(y)\) and \(f(z_{n})\to f(y)\) by continuity of \(f\) at \(y\). Therefore, for any \(x\in B_{\delta_{n}}(y)\),
\[|f(x)-f(y)|\leq\epsilon_{n}:=|f(x_{n})-f(z_{n})|\to 0. \tag{78}\]
**Remark 1**.: Observe that if \(y=\underset{x\in\chi}{\operatorname{argmin}}f(x)\), the proof of Lemma 1 implies that we can instead set \(\epsilon_{n}=|f(x_{n})-f(y)|\).
### The Well-Specified Case
When the model is well-specified, we assume that \(f_{0}=\sum_{h=1}^{K_{0}}a_{h}^{0}g(\theta_{h}^{0})\) for some \(\theta_{1}^{0},\ldots,\theta_{K_{0}}^{0}\in\Theta\) and \(a_{h}^{0}>0\), \(\sum_{h=1}^{K_{0}}a_{h}^{0}=1\). In turn, the posterior distribution of \(\lambda\) will contract towards \(\lambda_{0}=\sum_{h=1}^{K_{0}}a_{h}^{0}\delta_{\theta_{h}^{0}}\). For finite mixtures, the disparity \(R_{0}=L-K_{0}\) between the number of components in the true state of nature \(f_{0}\) and the number of components in \(f\) will affect the contraction rate. Under an _exact-fitted_ mixture, where \(K_{0}=L\), the contraction rate is given by \((\log n/n)^{1/2}\) (Nguyen, 2013; Ho and Nguyen, 2016). When \(K_{0}<L\), the model is overfitted, and \(\lambda\) contracts to \(\lambda_{0}\) at the slower rate of \((\log n/n)^{1/4}\) (Nguyen, 2013; Ho and Nguyen, 2016; Guha et al., 2021). Observe that this rate is identical to the contraction rate of \(\lambda\) to \(\lambda^{*}\) in the misspecified case.
Our conditions for the well-specified case are similar to the misspecified case. In particular, we will still need second-order identifiability and a Lipschitz condition.
**Assumption 2**.: We impose the following conditions on \(\mathcal{G}\), \(\Pi\), \(f_{0}\), and \(\lambda_{0}\).
(C1): \(f_{0}\) is a finite mixture, with \(K_{0}<L<\infty\).
(C2): The family \(\mathcal{G}\) is second-order identifiable and admits the uniform Lipschitz property up to the second order. Additionally, for any \(x\in\mathbb{R}^{d}\), \(g(x;\theta)\) is continuous in \(\theta\), and there exists a constant \(M_{0}>0\) so that \(\mathbb{P}_{0}\left(\max_{\theta\in\Theta}g(X;\theta)\leq M_{0}\right)=1.\) For any \(\theta^{\prime}\in\Theta\), \(h(g(\theta),g(\theta^{\prime}))\) is continuous at \(\theta=\theta^{\prime}\).
(C3): \(\Pi(\mathbf{a})\) and \(\Pi(\theta)\) satisfy condition (A4) in Assumption 1.
(C4): \(\mathcal{G}\) satisfies condition (A5) in Assumption 1.
As in the misspecified case, the canonical example for which conditions (C1)-(C4) apply is a mixture of location Gaussian kernels. We can now state the formal result on posterior contraction of the mixing measure for the well-specified case.
**Theorem 3**.: Let (C1)-(C4) hold. Then,
\[\Pi\left\{\lambda\in\Lambda_{L}(\Theta):W_{2}(\lambda,\lambda_{0})\underset{\sim}{>}(\log n/n)^{1/4}\mid\mathbf{X}\right\}\overset{\mathbb{P}_{0}}{\longrightarrow}0. \tag{79}\]
Proof.: See Nguyen (2013) and Guha et al. (2021).
We then reformulate Theorem 3 in terms of the atoms and weights so that we can show concentration of \(\Delta\).
**Proposition 4**.: Let \(\epsilon_{n}=(\log n/n)^{1/4}\) and \(0<\delta<1\). For any \(h\in[K_{0}]\) and \(\mathcal{I}\subset[L]\), define the events
\[A_{h,\mathcal{I}}=\left\{\max_{l\in\mathcal{I}}\left\|\theta_{l}-\theta_{h}^{ 0}\right\|\underset{\sim}{<}\epsilon_{n},\bigg{|}\sum_{l\in\mathcal{I}}a_{l}- a_{h}^{0}\bigg{|}\underset{\sim}{<}(\epsilon_{n}\vee\epsilon_{n}^{2\delta}) \right\},\quad B_{\mathcal{I}}=\left\{\sum_{l\in\mathcal{I}}a_{l}\underset{ \sim}{<}\epsilon_{n}^{2\delta}\right\}. \tag{80}\]
Then under conditions (C1)-(C4), as \(n\rightarrow\infty\),
\[\mathbb{E}_{0}\left[\Pi\left\{\left(\bigcap_{h=1}^{K_{0}}\bigcup_{|\mathcal{I}|\geq 1}A_{h,\mathcal{I}}\right)\cap\left(\bigcup_{|\mathcal{I}|\leq R_{0}}B_{\mathcal{I}}\right)\,\middle|\,\mathbf{X}\right\}\right]\to 1. \tag{81}\]
Proof.: The result can be shown using the same technique as the proof for Proposition 3, but \(a_{h}^{*}\) and \(\theta_{h}^{*}\) are exchanged with \(a_{h}^{0}\) and \(\theta_{h}^{0}\).
In contrast to the misspecified regime, \(\mathbf{c}_{\text{FOLD}}\) will contract towards the oracle FOLD clusters. We define these as
\[\mathbf{c}_{\text{FOLD}}^{\text{oracle}}=\operatorname*{argmin}_{\widehat{\mathbf{c}}}\ \sum_{i<j}\left\{\mathbf{1}_{\hat{c}_{i}=\hat{c}_{j}}\Delta_{ij}^{0}+\omega\mathbf{1}_{\hat{c}_{i}\neq\hat{c}_{j}}(1-\Delta_{ij}^{0})\right\}, \tag{82}\]
where \(\Delta_{ij}^{0}=\mathbb{E}_{\Pi}[\mathcal{H}_{ij}\mid\mathbf{X},\lambda_{0}]\). That is, the oracle FOLD clusters arise when the true data generating process is known. We can then rewrite \(\Delta_{ij}^{0}\) as
\[\Delta_{ij}^{0}=\sum_{h<k}h\left\{g(\theta_{h}^{0}),g(\theta_{k}^{0})\right\} \frac{a_{h}^{0}a_{k}^{0}\left\{g(X_{i};\theta_{h}^{0})g(X_{j};\theta_{k}^{0})+g (X_{i};\theta_{k}^{0})g(X_{j};\theta_{h}^{0})\right\}}{f_{0}(X_{i})f_{0}(X_{j })}. \tag{83}\]
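As a concrete illustration of (83), the sketch below evaluates the oracle similarity matrix for a location Gaussian mixture; a common covariance \(\Sigma\) across components is assumed purely to keep the component Hellinger distances in closed form, and all names are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def hellinger_gauss(mu1, mu2, Sigma):
    """Hellinger distance between N(mu1, Sigma) and N(mu2, Sigma)."""
    d = mu1 - mu2
    bc = np.exp(-0.125 * d @ np.linalg.solve(Sigma, d))  # Bhattacharyya coefficient
    return np.sqrt(1.0 - bc)

def oracle_delta(X, weights, means, Sigma):
    """Oracle FOLD matrix Delta^0 from (83) for a shared-covariance mixture."""
    n, K = X.shape[0], len(weights)
    G = np.column_stack([multivariate_normal.pdf(X, mean=means[h], cov=Sigma)
                         for h in range(K)])   # n x K component densities
    f0 = G @ np.asarray(weights)               # mixture density f_0(X_i)
    Delta = np.zeros((n, n))
    for h in range(K):
        for k in range(h + 1, K):
            hell = hellinger_gauss(means[h], means[k], Sigma)
            cross = np.outer(G[:, h], G[:, k]) + np.outer(G[:, k], G[:, h])
            Delta += hell * weights[h] * weights[k] * cross
    return Delta / np.outer(f0, f0)
```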
Each \(\Delta^{0}_{ij}\) is a weighted sum of the pairwise Hellinger distances between the component kernels in \(f_{0}\). We can now state our main result on contraction to the oracle clusters.
**Theorem 4**.: Let \(\epsilon_{n}=(\log n/n)^{1/4}\), \(0<\delta<1\), and assume conditions (C1)-(C4) hold. Then, for all \(n\geq N\) for some \(N\in\mathbb{N}\) and fixed \(i<j\), there exists a constant \(M>0\) and random variables \(\Delta^{0-}_{ij}=\Delta^{0}_{ij}-o_{\mathbb{P}_{0}}(1)\) and
\[\Delta^{0+}_{ij}=\Delta^{0}_{ij}+o_{\mathbb{P}_{0}}(1)+\frac{M\left\{\binom{L }{2}-\binom{K_{0}}{2}\right\}\epsilon_{n}^{2\delta}}{f_{0}(X_{i})f_{0}(X_{j}) -o_{\mathbb{P}_{0}}(1)}+\frac{o_{\mathbb{P}_{0}}(1)}{f_{0}(X_{i})f_{0}(X_{j}) -o_{\mathbb{P}_{0}}(1)} \tag{84}\]
so that with \(\mathbb{P}_{0}\)-probability tending to \(1\),
\[\Delta^{0-}_{ij}(1-o_{\mathbb{P}_{0}}(1))+o_{\mathbb{P}_{0}}(1)\leq\Delta_{ij} \leq\Delta^{0+}_{ij}(1-o_{\mathbb{P}_{0}}(1))+o_{\mathbb{P}_{0}}(1). \tag{85}\]
**Remark 2**.: Note that we state (85) as a high-probability statement, not an almost-sure statement. Pragmatically, this is because of (59), as we will need to show
\[f_{0}(X)\geq M\max(\epsilon_{n},\nu_{n}) \tag{86}\]
in order for us to construct \(\Delta^{0-}_{ij}\) and \(\Delta^{0+}_{ij}\) without dealing with negative numbers that could change the direction of the inequalities. It is simple to verify that the probability of (86) goes to \(1\) for any probability density function \(f_{0}\) by the dominated convergence theorem.
Proof.: The proof is nearly identical to that of Theorem 1, with two exceptions. First, (59) is not satisfied with \(\mathbb{P}_{0}\)-probability equal to one, but rather \(\mathbb{P}_{0}\)-probability tending to \(1\). More formally, \(\mathbb{P}_{0}(\Delta_{ij}\leq\Delta^{0+}_{ij})\to 1\), so the \(\mathbb{P}_{0}\)-probability of (85) goes to \(1\) as \(n\to\infty\) as well. Second, we must be a little more careful in this case when expanding the middle term in (74), as we cannot necessarily assume that there is some positive \(\epsilon\) with \(\mathbb{P}_{0}(f_{0}(X_{i})\geq\epsilon)=1\). Hence, we have an extra remainder term in \(\Delta^{0+}_{ij}\), though as before, we expect this remainder term to be arbitrarily small in the large sample limit.
**Remark 3**.: A special case is the location mixture of Gaussians. Here, we have that
\[\mathbb{P}_{0}\left\{\mathcal{N}_{d}(X;\theta^{0}_{h},\Sigma)\geq M\max(\epsilon_{n},\nu_{n})\mid s_{0}=h\right\} \tag{87}\] \[=\mathbb{P}_{0}\left[\left\|\Sigma^{-1/2}(X-\theta^{0}_{h})\right\|^{2}\leq-2\log\left\{(2\pi)^{d/2}\text{det}(\Sigma)^{1/2}M\max(\epsilon_{n},\nu_{n})\right\}\mid s_{0}=h\right]\] (88) \[\geq 1+\frac{d}{2\log\left\{(2\pi)^{d/2}\text{det}(\Sigma)^{1/2}M\max(\epsilon_{n},\nu_{n})\right\}} \tag{89}\]
by Markov's inequality as \(\left\|\Sigma^{-1/2}(X-\theta^{0}_{h})\right\|^{2}\sim\chi_{d}^{2}\). If we set \(a^{0}_{\min}=\min_{h}a^{0}_{h}\), we then have that for any \(M>0\),
\[\mathbb{P}_{0}\left\{f_{0}(X)\geq M\max(\epsilon_{n},\nu_{n})\right\}=\sum_{h=1}^{K_{0}}a^{0}_{h}\mathbb{P}_{0}\left\{f_{0}(X)\geq M\max(\epsilon_{n},\nu_{n})\mid s_{0}=h\right\} \tag{90}\] \[\geq\sum_{h=1}^{K_{0}}a^{0}_{h}\mathbb{P}_{0}\left\{\mathcal{N}_{d}(X;\theta^{0}_{h},\Sigma)\geq\frac{M}{a^{0}_{\min}}\max(\epsilon_{n},\nu_{n})\mid s_{0}=h\right\} \tag{91}\] \[\geq 1+\frac{d}{2\log\left\{(2\pi)^{d/2}\text{det}(\Sigma)^{1/2}(M/a^{0}_{\min})\max(\epsilon_{n},\nu_{n})\right\}}, \tag{92}\]
because the lower bound in (89) does not depend on \(h\). This tells us that the \(\mathbb{P}_{0}\)-probability of (86) goes to \(1\) at a rate of order \(1/\log\left(n/\log n\right)\).
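As a quick numerical sanity check of the Markov bound in (89), the snippet below compares it to the exact \(\chi^{2}_{d}\) probability for \(d=2\), \(\Sigma=\mathbf{I}\), and the illustrative value \(M\max(\epsilon_{n},\nu_{n})=0.05\); the numerical value is an arbitrary choice for demonstration.

```python
import numpy as np
from scipy.stats import chi2

t = -2 * np.log(2 * np.pi * 0.05)   # (2*pi)^{d/2} det(Sigma)^{1/2} = 2*pi for d = 2
exact = chi2.cdf(t, df=2)           # exact probability, approx 0.686
bound = 1 + 2 / (2 * np.log(2 * np.pi * 0.05))  # Markov bound (89), approx 0.136
print(exact, bound)                 # the bound indeed lies below the exact value
```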
### Additional Details on Credible Balls
We express uncertainty in \(\mathbf{c}_{\text{FOLD}}\) using a \(95\%\) credible ball. For visualizations, we rely on the notions of horizontal and vertical bounds, as introduced in Wade and Ghahramani (2018). For completeness, we include the formal definitions of these bounds below. The horizontal bounds are given by the clusterings in \(B_{D}(\mathbf{c}_{\text{FOLD}})\) that are furthest from \(\mathbf{c}_{\text{FOLD}}\) as measured by \(D(\cdot,\cdot)\).
**Definition B.5**.: The horizontal bound(s) of \(B_{D}(\mathbf{c}_{\text{FOLD}})\) are
\[\text{H}(\mathbf{c}_{\text{FOLD}})=\{\mathbf{c}\in B_{D}(\mathbf{c}_{\text{FOLD}}):D(\mathbf{c },\mathbf{c}_{\text{FOLD}})\geq D(\mathbf{c}^{\prime},\mathbf{c}_{\text{FOLD}})\,\forall \mathbf{c}^{\prime}\in B_{D}(\mathbf{c}_{\text{FOLD}})\}.\]
For any clustering \(\mathbf{c}\), let \(K_{\mathbf{c}}\) be the number of clusters. The vertical bounds also consider clusterings in \(B_{D}(\mathbf{c}_{\text{FOLD}})\) that are far from \(\mathbf{c}_{\text{FOLD}}\), but impose additional constraints on the number of clusters.
**Definition B.6**.: The vertical upper bound(s) of \(B_{D}(\mathbf{c}_{\text{FOLD}})\) are
\[\text{VU}(\mathbf{c}_{\text{FOLD}}) =\{\mathbf{c}\in B_{D}(\mathbf{c}_{\text{FOLD}}):K_{\mathbf{c}}\leq K_{\mathbf{c} ^{\prime}}\,\forall\mathbf{c}^{\prime}\in B_{D}(\mathbf{c}_{\text{FOLD}}),\] \[D(\mathbf{c},\mathbf{c}_{\text{FOLD}}) \geq D(\mathbf{c}^{\prime},\mathbf{c}_{\text{FOLD}})\,\forall\mathbf{c}^{ \prime}\in B_{D}(\mathbf{c}_{\text{FOLD}})\text{ with }K_{\mathbf{c}}=K_{\mathbf{c}^{\prime}}\}.\]
**Definition B.7**.: The vertical lower bound(s) of \(B_{D}(\mathbf{c}_{\text{FOLD}})\) are
\[\text{VL}(\mathbf{c}_{\text{FOLD}}) =\{\mathbf{c}\in B_{D}(\mathbf{c}_{\text{FOLD}}):K_{\mathbf{c}}\geq K_{\mathbf{c} ^{\prime}}\,\forall\mathbf{c}^{\prime}\in B_{D}(\mathbf{c}_{\text{FOLD}}),\] \[D(\mathbf{c},\mathbf{c}_{\text{FOLD}}) \geq D(\mathbf{c}^{\prime},\mathbf{c}_{\text{FOLD}})\,\forall\mathbf{c}^{ \prime}\in B_{D}(\mathbf{c}_{\text{FOLD}})\text{ with }K_{\mathbf{c}}=K_{\mathbf{c}^{\prime}}\}.\]
In practice, we compute the credible balls using the R function credibleball() (Wade, 2015). The distance \(D(\cdot,\cdot)\) can be either the Variation of Information distance or Binder's loss.
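A simplified Monte Carlo sketch of these computations in Python is given below; it takes clusterings encoded as integer label arrays, uses the VI distance, and defines the ball radius via the \(95\%\) quantile of posterior distances, which approximates the smallest-radius construction of Wade and Ghahramani (2018).

```python
import numpy as np

def variation_of_information(c1, c2):
    """VI distance between two clusterings (integer label arrays)."""
    n, vi = len(c1), 0.0
    for a in np.unique(c1):
        for b in np.unique(c2):
            n_ab = np.sum((c1 == a) & (c2 == b))
            if n_ab == 0:
                continue
            p_ab, p_a, p_b = n_ab / n, np.mean(c1 == a), np.mean(c2 == b)
            vi -= p_ab * (np.log(p_ab / p_a) + np.log(p_ab / p_b))
    return vi

def credible_ball_bounds(c_point, draws, level=0.95):
    """Radius and horizontal bound(s) of the credible ball around c_point,
    estimated from posterior clustering draws."""
    dists = np.array([variation_of_information(c_point, c) for c in draws])
    radius = np.quantile(dists, level)
    inside = [(c, d) for c, d in zip(draws, dists) if d <= radius]
    d_max = max(d for _, d in inside)
    horizontal = [c for c, d in inside if d == d_max]
    return radius, horizontal
```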
### Additional Details on Simulations
The goal of our simulations is to show that FOLD can perform reliably as \(n\) increases, counteracting the fact that a larger sample size will result in more and more non-empty components in our mixture. We set \(L=30\), the Normal-Inverse-Wishart prior parameters to be \(\mu=0_{d}\), \(\kappa=1\), \(\nu=d+2\), and \(\Psi=\mathbf{I}_{d}\), and the Dirichlet concentration parameter for \(\mathbf{a}\) to be \(\alpha=1/2\). The rationale behind setting \(\alpha=1/2\) is to keep the number of non-empty components in \(f\) large, ensuring that we are approximating \(f_{0}\) with \(f\), while also trying to recreate the settings of our theoretical results. We additionally hope to validate \(\omega^{\text{AVG}}\) as a reasonable loss parameter for Gaussian mixtures.
#### b.8.1 Mixture of Bivariate Gaussian Kernels
We sample from a mixture of \(K_{0}=3\) bivariate Gaussian kernels;
\[f_{0}(x)=0.45\,\tau_{01}(x)+0.25\,\tau_{02}(x)+0.3\,\tau_{03}(x); \tag{94}\]
Figure 11: Contour plots of the (a) Gaussian mixture, (b) skew Gaussian mixture and (c) skew-symmetric mixture.
where
\[\tau_{01}=\mathcal{N}_{d}\left(\begin{pmatrix}6.5\\ 5\end{pmatrix},\mathbf{I}_{d}\right); \tag{95}\] \[\tau_{02}=\mathcal{N}_{d}\left(\begin{pmatrix}0\\ 0\end{pmatrix},\text{diag}(5,2)\right);\] (96) \[\tau_{03}=\mathcal{N}_{d}\left(\begin{pmatrix}-5\\ -5\end{pmatrix},\text{diag}(3,1)\right). \tag{97}\]
using the mvtnorm package (Genz and Bretz, 2009; Genz et al., 2021). A contour plot of \(f_{0}\) is given in Figure 11(a). The kernel parameters are chosen so that the components of \(f_{0}\) are well separated in \(\mathbb{R}^{2}\). Given a sample size \(n\), we sample \(X_{1},\ldots,X_{n}\sim f_{0}\), then fit a location-scale Gaussian mixture to the data. The fundamental idea behind this example is to show that FOLD performs well in the best of scenarios: when the model is well-specified and the kernels of \(f_{0}\) do not overlap. Unlike the other examples we consider, we have access to the oracle FOLD clusters (82), which are computed in every replication along with \(\mathbf{c}_{\text{FOLD}}\), the VI clusters, Binder's clusters, and the mclust clusters.
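A minimal Python sampler for this data-generating process is sketched below, assuming the component specification in (94)-(97).

```python
import numpy as np

def sample_gaussian_mixture(n, weights, means, covs, rng):
    """Draw n observations from the mixture along with true labels s0."""
    s0 = rng.choice(len(weights), size=n, p=weights)
    X = np.stack([rng.multivariate_normal(means[h], covs[h]) for h in s0])
    return X, s0

rng = np.random.default_rng(0)
weights = [0.45, 0.25, 0.30]
means = [np.array([6.5, 5.0]), np.array([0.0, 0.0]), np.array([-5.0, -5.0])]
covs = [np.eye(2), np.diag([5.0, 2.0]), np.diag([3.0, 1.0])]
X, s0 = sample_gaussian_mixture(2000, weights, means, covs, rng)
```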
#### b.8.2 Mixture of Bivariate Skew Gaussian Kernels
We sample from a mixture of \(K_{0}=3\) multivariate skew normal kernels;
\[f_{0}(x)=0.45\cdot\tau_{01}(x)+0.25\cdot\tau_{02}(x)+0.3\cdot\tau_{03}(x); \tag{98}\]
where
\[\tau_{01}=\mathcal{S}\mathcal{N}_{d}\left(\begin{pmatrix}6.5\\ 5\end{pmatrix},\mathbf{I}_{d},\begin{pmatrix}1\\ 1\end{pmatrix}\right); \tag{99}\] \[\tau_{02}=\mathcal{S}\mathcal{N}_{d}\left(\begin{pmatrix}0\\ 0\end{pmatrix},\text{diag}(5,2),\begin{pmatrix}-10\\ 15\end{pmatrix}\right);\] (100) \[\tau_{03}=\mathcal{S}\mathcal{N}_{d}\left(\begin{pmatrix}-5\\ -5\end{pmatrix},\text{diag}(3,1),\begin{pmatrix}4\\ -17\end{pmatrix}\right); \tag{101}\]
using the sn package (Azzalini, 2022). A contour plot of \(f_{0}\) in this case is given in Figure 11(b). The kernel parameters are chosen so that \(\tau_{01}\), \(\tau_{02}\), and \(\tau_{03}\) are well-separated in \(\mathbb{R}^{2}\), but diverge from Gaussianity. As before, we sample \(X_{1},\ldots,X_{n}\sim f_{0}\), then centre and scale \(\mathbf{X}\) and fit a Gaussian mixture. The motivation behind using skew Gaussian kernels is to show how even a relatively small amount of model misspecification can lead to poor clustering results, especially with mclust and Binder's loss. We find that both FOLD and VI are more robust to this perturbation from Gaussianity, consistently computing a small number of clusters with high adjusted Rand index with \(\mathbf{s}_{0}\).
#### b.8.3 Skew-Symmetric Mixture
Finally, we sample from a mixture of \(K_{0}=3\) kernels. We formulate \(f_{0}\) as
\[f_{0}(x)=0.55\cdot\tau_{01}(x)+0.3\cdot\tau_{02}(x)+0.15\cdot\tau_{03}(x); \tag{102}\]
where
\[\tau_{01}=0.364\cdot\mathcal{S}\mathcal{N}_{d}\left(\begin{pmatrix}2.50 \\ 3.50\end{pmatrix},\mathbf{I}_{d},\begin{pmatrix}-10\\ 15\end{pmatrix}\right)+ \tag{103}\] \[0.212\cdot\mathcal{N}_{d}\left(\begin{pmatrix}2.325\\ 4.381\end{pmatrix},\mathrm{diag}(0.20,0.80)\right)+\] (104) \[0.424\cdot\mathcal{N}_{d}\left(\begin{pmatrix}1.085\\ 2.009\end{pmatrix},\mathrm{diag}(0.70,0.60)\right);\] (105) \[\tau_{02}=\mathcal{S}\mathcal{N}_{d}\left(\begin{pmatrix}0\\ -3.50\end{pmatrix},\mathrm{diag}(5,2),\begin{pmatrix}4\\ -17\end{pmatrix}\right);\] (106) \[\tau_{03}=\mathcal{N}_{d}\left(\begin{pmatrix}-4\\ -2.50\end{pmatrix},\begin{pmatrix}0.50&0.50\\ 0.50&2.50\end{pmatrix}\right). \tag{107}\]
The kernel parameters are chosen so that the data can be allocated into 3 clusters, with one of the clusters being a mixture of one skew Gaussian kernel and two Gaussian kernels. See Figure 11(c) for a contour plot of \(f_{0}\). Our intention with this example is to show that FOLD will tend to select 3-4 clusters despite one cluster itself being a mixture. As with the previous examples, we sample \(X_{1},\ldots,X_{n}\sim f_{0}\), then centre and scale the data.
#### b.8.4 Validation of Credible Balls
In this section, we focus on the 95% credible ball around \(\mathbf{c}_{\mathrm{FOLD}}\). We simulate from a notably difficult scenario for clustering, the spirals dataset, as generated from the KODAMA package (Cacciatore and Tenori, 2022).
Figure 12: A sample of \(n=300\) observations with the spirals() function, with colours corresponding to \(\mathbf{s}_{0}\). The thin, interlocking nature of the spirals makes this dataset a difficult problem for clustering algorithms.
Figure 12 shows a scatterplot of one example of the spirals data. The data is partitioned into \(K_{0}=3\) groups, each represented as interlocked spirals. Clustering this data is very difficult because the true clusters have long, thin, non-elliptical shapes, and are quite close to each other in the sample space. Our main interest here is whether \(B_{D}(\mathbf{c}_{\text{FOLD}})\) includes the true grouping of the data, analogous to coverage of a true parameter by a credible interval.
For each replication, we sample \(n=300\) observations using the spirals() function, where each spiral comprises \(100\) observations. We fit a Bayesian location-scale Gaussian mixture, with \(L=30\), \(\mu=0_{d}\), \(\kappa=0.5/2\), \(\nu=d+2\), and \(\Psi=0.5\mathbf{I}_{d}\). The Dirichlet concentration parameter for \(\mathbf{a}\) is \(\alpha=1/2\) and we use \(\omega=\omega^{\text{AVG}}\) to compute \(\mathbf{c}_{\text{FOLD}}\) and generate samples from \(\Pi(\mathbf{c}_{\mathcal{G}}\mid\mathbf{X})\). In total, we simulate \(R=100\) replications. In each replication, we run a Gibbs sampler for \(15,000\) MCMC iterations, discard \(1,000\) burn-in iterations, and keep every fourth MCMC draw. We compute the credible ball by setting \(D(\cdot,\cdot)\) to be the VI.
For these simulations, our main quantity of interest is whether \(\mathbf{s}_{0}\in B_{D}(\mathbf{c}_{\text{FOLD}})\). To assess this, we simply evaluate whether the horizontal bounds cover \(\mathbf{s}_{0}\), that is, whether \(\text{VI}(\mathbf{s}_{0},\mathbf{c}_{\text{FOLD}})\leq\text{VI}(\mathbf{c}_{H},\mathbf{c}_{\text{FOLD}})\), where \(\mathbf{c}_{H}\) is any clustering in \(\text{H}(\mathbf{c}_{\text{FOLD}})\). We also record the number of clusters in \(\mathbf{c}_{\text{FOLD}}\) and the adjusted Rand index between \(\mathbf{c}_{\text{FOLD}}\) and \(\mathbf{s}_{0}\). A sketch of this coverage check is given below.
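The sketch reuses the VI and credible ball helpers from the previous section; the variables c_fold, posterior_draws, and s0 are hypothetical names for the point-estimate clustering, the posterior clustering draws, and the true partition.

```python
# Does the credible ball around c_FOLD contain the true partition s0?
radius, horizontal = credible_ball_bounds(c_fold, posterior_draws, level=0.95)
covered = variation_of_information(s0, c_fold) <= max(
    variation_of_information(cH, c_fold) for cH in horizontal
)
```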
The results are given in Table 4. In 89 of the 100 replications, \(\mathbf{s}_{0}\) is contained in \(B_{D}(\mathbf{c}_{\text{FOLD}})\), despite \(\mathbf{c}_{\text{FOLD}}\) consistently scoring low on the adjusted Rand index. This latter phenomenon comes about because \(E_{\Pi}(\Sigma_{l})=0.5\mathbf{I}_{d}\), which means that the model is likely to fit Gaussian kernels that span multiple spirals, rather than approximating each spiral with multiple Gaussian kernels. Interestingly, the average number of clusters achieved by \(\mathbf{c}_{\text{FOLD}}\) is \(3.120\), which is very close to the truth. We can then conclude that, even when the fitted model makes accurately clustering the data difficult, \(\Pi(\mathbf{c}_{\mathcal{G}}\mid\mathbf{X})\) can still achieve high coverage rates of the truth.
|
2309.09912 | Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for
Preference-Aligned Path Planning | Autonomous mobility tasks such as last-mile delivery require reasoning about
operator-indicated preferences over terrains on which the robot should navigate
to ensure both robot safety and mission success. However, coping with out-of-distribution
data from novel terrains or appearance changes due to lighting
variations remains a fundamental problem in visual terrain-adaptive navigation.
Existing solutions either require labor-intensive manual data re-collection and
labeling or use hand-coded reward functions that may not align with operator
preferences. In this work, we posit that operator preferences for visually
novel terrains, which the robot should adhere to, can often be extrapolated
from established terrain preferences within the inertial, proprioceptive, and
tactile domain. Leveraging this insight, we introduce Preference extrApolation
for Terrain-awarE Robot Navigation, PATERN, a novel framework for extrapolating
operator terrain preferences for visual navigation. PATERN learns to map
inertial, proprioceptive, tactile measurements from the robot's observations to
a representation space and performs nearest-neighbor search in this space to
estimate operator preferences over novel terrains. Through physical robot
experiments in outdoor environments, we assess PATERN's capability to
extrapolate preferences and generalize to novel terrains and challenging
lighting conditions. Compared to baseline approaches, our findings indicate
that PATERN robustly generalizes to diverse terrains and varied lighting
conditions, while navigating in a preference-aligned manner. | Haresh Karnan, Elvin Yang, Garrett Warnell, Joydeep Biswas, Peter Stone | 2023-09-18T16:24:26Z | http://arxiv.org/abs/2309.09912v1 | Wait, That Feels Familiar: Learning to Extrapolate Human Preferences for Preference-Aligned Path Planning
###### Abstract
Autonomous mobility tasks such as last-mile delivery require reasoning about operator-indicated preferences over terrains on which the robot should navigate to ensure both robot safety and mission success. However, coping with _out of distribution_ data from novel terrains or appearance changes due to lighting variations remains a fundamental problem in visual terrain-adaptive navigation. Existing solutions either require labor-intensive manual data re-collection and labeling or use hand-coded reward functions that may not align with operator preferences. In this work, we posit that operator preferences for visually novel terrains, which the robot should adhere to, can often be extrapolated from established terrain preferences within the _inertial-proprioceptive-tactile_ domain. Leveraging this insight, we introduce _Preference extrApolation for Terrain-awarE Robot Navigation_ (PATERN), a novel framework for extrapolating operator terrain preferences for visual navigation. PATERN learns to map inertial-proprioceptive-tactile measurements from the robot's observations to a representation space and performs nearest-neighbor search in this space to estimate operator preferences over novel terrains. Through physical robot experiments in outdoor environments, we assess PATERN's capability to extrapolate preferences and generalize to novel terrains and challenging lighting conditions. Compared to baseline approaches, our findings indicate that PATERN robustly generalizes to diverse terrains and varied lighting conditions, while navigating in a preference-aligned manner.
## I Introduction
To ensure the safety, mission success, and efficiency of autonomous mobile robots in outdoor settings, the ability to visually discern distinct terrain features is paramount. This necessity stems not only from direct implications for robot functionality but also from the operator-indicated terrain preferences that the robot must adhere to. Often, these preferences are motivated by the desire to protect delicate landscapes, such as flower beds, or to mitigate potential wear and tear on the robot by avoiding hazardous surfaces. However, during autonomous operations, ground robots frequently face unfamiliar terrains [1, 2] and dynamic real-world conditions, such as varied lighting, that lie outside the distribution of visually recognized terrains where operator preferences have been pre-defined. This mismatch presents significant challenges for vision-based outdoor navigation [3].
Equipping robots with the capability to handle novel terrain conditions for preference-aligned path planning is a challenging problem in visual navigation. Prior approaches to address this problem include collecting more expert demonstrations [4, 5, 6], labeling additional data [7, 8, 9], and utilizing hand-coded reward functions to assign traversability costs [10, 11, 12]. While these approaches have been successful at visual navigation, collecting more expert demonstration data and labeling may be labor-intensive and expensive, and utilizing hand-coded reward functions may not always align with operator preferences. We posit that in certain cases, while the terrain may look visually distinct in comparison to prior experience, similarities in the inertial-proprioceptive-tactile space may be leveraged to extrapolate operator preferences over such terrains, which the robot must adhere to. For instance, assuming a robot has experienced
Fig. 1: An illustration of the intuition behind preference extrapolation in PATERN. Operator preferences of the three known terrains are marked numerically, with 1 being the most preferred and 3 being the least preferred. In the pre-adaptation stage, a novel terrain (pebble pavement) is encountered and the preference order of its nearest neighbor (concrete) inferred from proprioceptive representations is transferred (extrapolated) to the corresponding samples in the visual representation space. The extrapolated preference order is used to update both the visual representations and the visual preference function. The post-adaptation stage shows extrapolated preferences in the updated visual representation space for the novel terrain.
concrete pavement and marble rocks, and prefers the former over the latter (as expressed by the operator), then when the robot experiences a visually novel terrain such as pebble pavement, which feels inertially similar to traversing concrete pavement, it is likely that the operator would also prefer pebble pavement over marble rocks. While it is not possible to know the operator's true preferences without querying them, we submit that, in cases where the operator is unavailable, hypothesizing preferences through extrapolation in the inertial-proprioceptive-tactile space is a plausible way to estimate traversability preferences for novel terrains.
Leveraging the intuition of extrapolating operator preferences for visually distinct terrains that are familiar in the inertial-proprioceptive-tactile space (collectively known as _proprioceptive_ for brevity), we introduce _Preference extrApolation for Terrain-awarE Robot Navigation_ (PATERN) 1, a novel framework for extrapolating operator terrain preferences for visual navigation. PATERN learns a proprioceptive latent representation space from the robot's prior experience and uses nearest-neighbor search in this space to estimate operator preferences for visually novel terrains. Fig. 1 provides an illustration of the intuition behind preference extrapolation in PATERN. We conduct extensive physical robot experiments on the task of preference-aligned off-road navigation, evaluating PATERN against state-of-the-art approaches, and find that PATERN is empirically successful with respect to preference alignment and in adapting to novel terrains and lighting conditions seen in the real world.
Footnote 1: A preliminary version of this work was presented at the PT4R workshop at ICRA 2023 [13].
## II Related Work
In this section, we review related work in visual off-road navigation, with a focus on preference-aligned path planning.
### _Supervised Methods_
To learn terrain-aware navigation behaviors, several prior methods use supervised learning on large curated datasets [8, 9] to segment terrains pixel-wise [7]. Guan et al. [7] propose a transformer-based architecture (GANav) to segment terrains and manually assign traversability costs for planning. While successful at preference-aligned navigation, fully supervised methods suffer from domain shift on novel terrains and may require additional labeling.
### _Self-Supervised Methods_
To alleviate the need for large-scale datasets for visual navigation, several self-supervised learning methods have been proposed that learn from data collected on the robot [14]. Specifically, prior methods in this category have explored using inertial Fourier features [10], contact vibrations [15], proprioceptive feedback [16], odometry errors [12], future predictive models [11], acoustic features [17], and trajectory features [18] to learn traversability costs for visual navigation. While successful in several visual navigation tasks such as _comfort-aware navigation_ [10], such methods use a hand-coded reward/cost model to solve a specific task and do not reason about operator preferences over terrains. In contrast with prior methods, PATERN utilizes the prior experience of the robot and extrapolates operator preferences to novel terrains.
Sikand et al. propose VRL-PAP [6], in which both a visual representation and a visual preference cost are learned for preference-aligned navigation. Similarly, STERLING [19] introduces a self-supervised approach for learning visual terrain representations. However, a limitation of both VRL-PAP and STERLING is their dependence on additional human feedback when dealing with novel terrains; such feedback might not consistently be available during deployment. Distinct from VRL-PAP and STERLING, PATERN focuses on extrapolating operator preferences from known terrains to visually novel terrains.
## III Preliminaries
We formulate preference-aligned planning as a local path-planning problem in a state space \(\mathcal{S}\), with an associated action space \(\mathcal{A}\). The forward kino-dynamic transition function is denoted as \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) and we assume that the robot has a reasonable model of \(\mathcal{T}\) (_e.g.,_ using parametric system identification [20] or a learned kino-dynamic model [21, 1, 22]), and that the robot can execute actions in \(\mathcal{A}\) with reasonable precision. For ground vehicles, a common choice for \(\mathcal{S}\) is \(\mathrm{SE}(2)\), which represents the robot's x and y position on the ground plane, as well as its orientation \(\theta\).
The objective of the path-planning problem can be expressed as finding the optimal trajectory \(\Gamma^{*}=\operatorname*{arg\,min}_{\Gamma}\ J(\Gamma,G)\) to the goal \(G\), using any planner (e.g. a sampling-based motion planner like DWA [23]), while minimizing an objective function \(J(\Gamma,G)\), \(J:(\mathcal{S}^{N},\mathcal{S})\rightarrow\mathbb{R}^{+}\). Here, \(\Gamma=\{s_{1},s_{2},\ldots,s_{N}\}\) denotes a sequence of states. The sequence of states in the optimal trajectory \(\Gamma^{*}\) is then translated into a sequence of actions, using a 1-D time-optimal controller, to be executed on the robot. For operator preference-aligned planning, the objective function \(J\) is articulated as,
\[J(\Gamma,G)=J_{G}(\Gamma(N),G)+J_{P}(\Gamma), \tag{1}\]
Here, \(J_{G}\) denotes a cost based on the proximity of the robot's state to the goal \(G\), while \(J_{P}\) imparts a cost based on terrain preference. Crucially, \(J_{P}\) is designed to capture operator preferences over different terrains; less preferred terrains incur a higher cost. Though earlier studies leverage human feedback to ascertain \(J_{P}\) for unfamiliar terrains [6, 19], in this work, we hypothesize that in certain situations, operator preferences for novel terrains can be extrapolated from known terrains, obviating operator dependency during real-world deployment. Thus, our novel contribution is a self-supervised framework for extrapolating \(J_{P}\) from known terrains to visually novel terrains by leveraging the inertial-proprioceptive-tactile observations of a robot, without requiring additional human feedback.
## IV Approach
In this section, we present _Preference extrApolation for Terrain-awarE Robot Navigation_ (PATERN), a novel framework for extrapolating operator preferences for preference-aligned navigation. We first detail an existing framework for terrain-preference-aligned visual navigation. We then introduce PATERN for self-supervised extrapolation of operator preferences from known terrains to visually novel terrains by leveraging proprioceptive feedback.
### _A Two-Step Framework for Preference-Aligned Planning_
For real-time preference-aligned planning, inspired by earlier studies [6, 19], we postulate that \(J_{P}(\Gamma)\) can be estimated in a two-step approach from visual observations of patches of terrain at \(s\in\mathcal{S}\) along \(\Gamma\). Let \(O\in\mathcal{O}\) represent these observations. We denote \(\Pi\) as a projection operator that extracts the visual observation \(O\) of terrain at \(s\) by yielding image patches from homography-transformed bird's eye view images [6, 19]. First, a visual encoder, denoted as \(f_{vis}\), maps \(O\) from the RGB space to a latent vector \(\phi_{vis}\in\Phi_{vis}\) such that observations from identical terrains cluster closely in \(\Phi_{vis}\) and are distinct from those of differing terrains. Next, a real-valued preference utility is estimated from \(\phi_{vis}\) using a learned preference utility function \(u_{vis}:\Phi_{vis}\rightarrow\mathbb{R}^{+}\) trained with ranked preferences of terrains, derived either from demonstrations [6] or by active querying [19]. Adopting the popular formulation of Zucker et al. [24], we train the utility function with the margin-based ranking loss [25]. To estimate \(J_{P}(\Gamma)\) during planning, we employ an exponential cost formulation given by \(J_{P}(\Gamma)=\sum_{s\in\Gamma}e^{-u_{vis}[f_{vis}(\Pi(s))]}\), which we find works well in practice. This two-step framework for estimating \(J_{P}(\Gamma)\) has been utilized successfully in recent works [6, 19] for operator preference-aligned off-road navigation. Training details of the visual encoder and the utility function are provided in Section V.
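A minimal sketch of this cost computation is given below; the function names mirror the text, the terrain patches are assumed to have been extracted by \(\Pi\) beforehand, and the Euclidean goal-proximity term is an illustrative choice for \(J_{G}\) rather than the exact implementation.

```python
import numpy as np

def preference_cost(patches, f_vis, u_vis):
    """J_P(Gamma) = sum over states of exp(-u_vis(f_vis(patch)))."""
    utilities = u_vis(f_vis(patches))   # one utility per state along Gamma
    return float(np.sum(np.exp(-utilities)))

def total_cost(traj_states, patches, goal, f_vis, u_vis):
    """J(Gamma, G) = J_G + J_P, with J_G taken as distance-to-goal."""
    j_goal = np.linalg.norm(np.asarray(traj_states[-1][:2]) - np.asarray(goal[:2]))
    return j_goal + preference_cost(patches, f_vis, u_vis)

# A sampling-based planner then scores candidate rollouts, e.g.
# best = min(rollouts, key=lambda r: total_cost(r.states, r.patches, goal, f_vis, u_vis))
```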
While the above two-step framework effectively handles known terrains with pre-defined preferences, it faces challenges when the robot encounters visually novel terrains that lie beyond the training distribution of \(f_{vis}\) and \(u_{vis}\). Towards addressing this problem, the primary contribution of our work is a self-supervised framework that extrapolates operator preferences to novel terrains and adapts \(f_{vis}\) and \(u_{vis}\) to ensure successful preference alignment.
### _Extrapolating Preferences for Visually Novel Terrains_
Leveraging the intuition that, in addition to visual appearance, operator preferences over terrains are likely also based on the "feel" of the underlying terrain, such as bumpiness, stability, or traction, we posit that in many situations, operator preferences for novel terrains can be deduced by relating the proprioceptive modality to known terrains. Utilizing these rich, alternate data sources offers deeper insight into terrain properties, enabling us to extrapolate terrain preferences when direct operator feedback is unavailable. We refer to the stage when a novel terrain is first encountered, before any adaptation has occurred, as the _pre-adaptation_ phase. During this phase, the visual encoder and utility function operate based on previously known operator preferences. Once preferences are extrapolated and the visual encoder and utility function are subsequently retrained, the system progresses to the _post-adaptation_ phase, as shown in Fig. 1.
**Inertial-Proprioceptive-Tactile Encoder and Utility Function:** In PATERN, in addition to the visual encoder, we introduce a non-visual encoder that independently processes the inertial, proprioceptive (joint angles and velocities), and tactile feet data (collectively referred to as _proprioception_ for brevity) observed by the robot as it traverses a terrain. This encoder maps proprioception observations into a proprioceptive representation space \(\Phi_{pro}\), such that representations \(\phi_{pro}\in\Phi_{pro}\) of the same terrain are closely clustered whereas those of distinct terrains are farther apart. Additionally, a utility function \(u_{pro}:\Phi_{pro}\rightarrow\mathbb{R}^{+}\) maps the proprioceptive representation vector \(\phi_{pro}\in\Phi_{pro}\) to a real-valued preference utility, similar to the visual utility function. Note that, to estimate \(J_{P}(\Gamma)\) during deployment, we only use \(u_{vis}\) and not \(u_{pro}\), since we cannot observe the proprioceptive components of a future state without traversing the terrain first.
**Pre-Adaptation Phase:** While traversing known terrains that are in-distribution, the visual and proprioceptive utility values tend to align closely. However, for visually novel terrains, discrepancies often emerge between the utility values predicted from the visual and proprioceptive modalities. In patern, we utilize the mean-squared error between the predicted utilities as a signal to detect visually novel, out-of-distribution terrains. Although any novelty detection mechanism can be integrated within patern, such as the unimodal approach by Burda et al. [26], our primary focus is on a framework that extrapolates operator preferences for novel terrains. Moreover, any foundational approach employing the two-step framework for preference cost estimation [6, 19, 27], as elaborated in Subsection IV-A, can be utilized in the pre-adaptation phase. For clarity, we use the notation patern\({}^{-}\) to represent the baseline algorithm in its unadapted state and patern\({}^{+}\) to indicate the updated model in the post-adaptation phase.
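As a hedged illustration of this detection signal (the threshold below is ours, not a value from the experiments), the check reduces to a mean-squared-error test over paired utility predictions:

```python
import numpy as np

def is_visually_novel(u_vis_vals, u_pro_vals, threshold=0.5):
    """Flag a terrain segment as out-of-distribution when paired visual
    and proprioceptive utility predictions disagree; `threshold` is an
    illustrative assumption, not a value specified in the paper."""
    mse = float(np.mean((np.asarray(u_vis_vals) - np.asarray(u_pro_vals)) ** 2))
    return mse > threshold
```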
**Extrapolating Operator Preferences:** Given a novel terrain segment for which operator preferences are unknown, we propose to self-supervise preference assignment by first clustering its proprioceptive representations \(\phi_{pro}\) and then associating it with the closest known existing cluster in \(\Phi_{pro}\), assigning it the same operator preference as that known cluster, as illustrated in Fig. 1. Following this self-supervised preference assignment, the visual encoder and visual utility function for novel terrain segments are finetuned by aggregating newly gathered experience with existing data.
## V Implementation Details
In this section, we describe the implementation details of patern. We first describe data pre-processing, followed by training details in the pre-adaptation phase. Finally, we describe adapting the visual encoder and utility function using the extrapolated preferences in patern.
### _Data Pre-Processing_
In tandem with the visual patch extraction process used in the projection operator \(\Pi\) as in prior methods [6, 19], for every state \(s\), we also extract a 2-second history of time-series inertial (angular velocities along the x and y-axes and linear acceleration along the z-axis), proprioceptive (joint angles and velocities), and tactile (feet depth penetration estimates) data. To ensure the resulting input data representation for training is independent of the length and phase of the signals, we compute statistical measures of center and spread as well as the power spectral density, and use these as the input, as sketched below. All the visual patches extracted with the projection operator \(\Pi\) and the non-visual data for each state \(s\) are then tagged with their corresponding terrain name, given that each trajectory uniquely contains a particular terrain type. In addition to processing the recorded data in the pre-adaptation phase, a human operator is queried for a full-order ranking of terrain preference labels.
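A minimal sketch of this featurization is given below; the sampling rate and Welch PSD settings are assumptions, not values specified above:

```python
import numpy as np
from scipy.signal import welch

def proprioception_features(signals, fs=200.0):
    """Length- and phase-invariant features from a 2 s sensor window.

    `signals` is an (n_channels, n_samples) array of inertial,
    proprioceptive, and tactile channels sampled at `fs` Hz.
    """
    feats = []
    for x in signals:
        # Statistical measures of center and spread.
        feats += [x.mean(), x.std(), np.median(x), x.min(), x.max()]
        # Power spectral density summarises the frequency content
        # independently of signal length and phase.
        _, psd = welch(x, fs=fs, nperseg=min(len(x), 128))
        feats.extend(psd)
    return np.asarray(feats)
```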
### _Pre-Adaptation Training_
We use a supervised contrastive learning formulation inspired by Sikand et al. [6] to train the baseline functions \(f_{vis}\) and \(u_{vis}\), represented as neural networks.
**Training the Encoders:** Given labeled visual patches and proprioception data, we generate triplets for contrastive learning such that for any anchor, the positive pair is chosen from the same label and the negative pair is sourced from another label. Given such triplets, we use the triplet loss [28] with a margin of \(1.0\) to independently train the visual and proprioception encoders through mini-batch gradient descent using the AdamW optimizer. For the visual encoder, we use a 3-layer CNN of \(5\times 5\) kernels, each followed by ReLU activations. This model, containing approximately 250k parameters, transforms \(64\times 64\) RGB image patches into an 8-dimensional vector representation \(\phi_{vis}\). Similarly, our proprioception encoder consists of a 3-layer MLP with ReLU activations, encompassing around 4k parameters, and maps proprioceptive inputs to an 8-dimensional vector \(\phi_{pro}\). To mitigate the risk of overfitting, data is partitioned into a 75-25 split for training and validation, respectively.
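A PyTorch sketch of this training step is shown below; the channel widths are assumptions and will not exactly reproduce the quoted parameter counts:

```python
import torch
import torch.nn as nn

# Visual encoder: three 5x5 conv layers with ReLU, mapping a 64x64 RGB
# patch to an 8-d embedding (channel widths are illustrative).
f_vis = nn.Sequential(
    nn.Conv2d(3, 16, 5), nn.ReLU(),
    nn.Conv2d(16, 32, 5), nn.ReLU(),
    nn.Conv2d(32, 32, 5), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, 8),
)
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.AdamW(f_vis.parameters(), lr=1e-3)

def train_step(anchor, positive, negative):
    """One mini-batch update on (B, 3, 64, 64) patch tensors, where
    anchor/positive share a terrain label and negative does not."""
    opt.zero_grad()
    loss = loss_fn(f_vis(anchor), f_vis(positive), f_vis(negative))
    loss.backward()
    opt.step()
    return loss.item()
```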
**Training the Utility Functions**: In our setup, the utility function is represented as a two-layer MLP with ReLU non-linearities and an output activation that maps an 8-dimensional vector to a non-negative real value. Given ranked operator preferences of the terrains, we follow Zucker et al. [24] and train the visual utility function \(u_{vis}\) using a margin-based ranking loss [25]. Furthermore, to ensure consistent predictions from \(u_{vis}\) and \(u_{pro}\) for visual and non-visual observations at identical locations, we update the parameters of \(u_{pro}\) using the loss \(\mathcal{L}_{\text{MSE}}(u_{\text{pro}})=\frac{1}{N}\sum_{i=1}^{N}\left(sg(u_{vis}(\phi_{vis}^{i}))-u_{\text{pro}}(\phi_{pro}^{i})\right)^{2}\). Here, \(sg(\cdot)\) denotes the stop-gradient operation, and \(\phi_{vis}^{i}\) and \(\phi_{pro}^{i}\) are the terrain representations from the \(i\)-th pair of visual and non-visual observations at the same location.
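In a framework with automatic differentiation, the stop-gradient is simply a detach; a minimal PyTorch sketch:

```python
import torch

def utility_consistency_loss(u_vis, u_pro, phi_vis, phi_pro):
    """L_MSE(u_pro): align the proprioceptive utility with the visual
    utility at paired locations. Detaching the target implements sg(.),
    so only u_pro's parameters receive gradients from this term."""
    target = u_vis(phi_vis).detach()  # sg(.)
    return torch.mean((target - u_pro(phi_pro)) ** 2)
```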
The functions \(f_{vis}\) and \(u_{vis}\) prior to adaptation are collectively termed patern\({}^{-}\), signifying their non-adapted state with respect to visually novel terrains. Although our implementation uses supervised contrastive learning, in instances where explicit terrain labels are absent, one can resort to self-supervised representation learning techniques, such as sterling[19], to derive \(f_{vis}\) and \(u_{vis}\). patern can be applied regardless of the specific representation learning approach used.
### _Preference Extrapolation Training_
During deployment, if the robot encounters a visually novel terrain, both visual and inertial-proprioceptive-tactile data are recorded to be used in the adaptation phase in patern, aiding in preference extrapolation and subsequent model adaptation. We refer to this collected data as the _adaptation-set_. We extract paired visual and non-visual observations at identical locations from the _adaptation-set_ and use the proprioception encoder \(f_{pro}\) to extract proprioceptive representations \(\phi_{pro}\). We cluster samples of \(\phi_{pro}\) and perform a nearest-neighbor search against existing terrain clusters from the pre-adaptation dataset, accepting a match within a threshold \(\mu\). We set this threshold to the triplet margin value of 1.0, which we find to work well in practice. This procedure seeks a known terrain that "feels" similar to the novel terrain; the novel terrain then inherits the preference of its closest match. Following this self-supervised preference extrapolation, the adaptation-set is aggregated with the pre-adaptation training set, and the visual encoder \(f_{vis}\) is retrained using the procedure described in V-B. Additionally, the visual utility function \(u_{vis}\) is retrained with the extrapolated preference for the novel terrain. The updated functions \(f_{vis}\) and \(u_{vis}\) are collectively referred to as patern\({}^{+}\). Figure 2 illustrates retraining and preference extrapolation as described above.
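The nearest-cluster matching step can be summarised by the following sketch, assuming known clusters are stored as (centroid, utility) pairs:

```python
import numpy as np

def extrapolate_preference(novel_phi, known_clusters, mu=1.0):
    """Inherit a preference utility for a novel terrain from the nearest
    known cluster in the proprioceptive representation space.

    `novel_phi`: (N, d) representations of the novel terrain;
    `known_clusters`: dict mapping terrain name -> (centroid, utility).
    Returns (name, utility), or None when no centroid lies within the
    distance threshold `mu` (tied here to the triplet margin of 1.0).
    """
    centroid = novel_phi.mean(axis=0)
    dists = {name: np.linalg.norm(centroid - c)
             for name, (c, _) in known_clusters.items()}
    best = min(dists, key=dists.get)
    if dists[best] > mu:
        return None  # nothing "feels" similar enough to extrapolate
    return best, known_clusters[best][1]
```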
## VI Experiments
In this section, we describe the physical robot experiments conducted to evaluate patern against other state-of-the-art visual off-road navigation algorithms. Specifically, our experiments are designed to explore the following questions:
1. Is patern capable of extrapolating operator preferences accurately to novel terrains?
Fig. 2: An illustration of the training setup for preference extrapolation proposed in patern. We utilize two encoders to map visual and inertial-proprioceptive-tactile samples to \(\Phi_{vis}\) and \(\Phi_{pro}\) respectively. For a visually novel terrain, a preference is hypothesized and extrapolated from \(\Phi_{pro}\), following which the visual encoder and utility function are retrained.
2. How effectively does patern navigate under challenging lighting scenarios such as nighttime conditions?
3. How well does patern perform in large-scale real-world off-road conditions?
To study \(Q_{1}\) and \(Q_{2}\), we execute a series of experiments consisting of short-scale outdoor navigation tasks. We then conduct a large-scale autonomous robot deployment along a 3-mile outdoor trail to qualitatively evaluate \(Q_{3}\). We use the legged Boston Dynamics Spot robot for all experiments, equipped with a VectorNav VN100 IMU, front-facing Kinect RGB camera, Velodyne VLP-16 LiDAR for geometric obstacle detection, and a GeForce RTX 3060 mobile GPU. For local planning, we use an open-source sampling-based motion planner called graphnav [29] and augment its sample evaluation function with the inferred preference cost \(J_{P}(\Gamma)\). For real-time planning, we run \(f_{vis}\) and \(u_{vis}\) on the onboard GPU.
### _Data Collection_
To collect labeled data for training patern\({}^{-}\), we manually teleoperated the robot across the UT Austin campus, gathering 8 distinct trajectories each across 3 terrains: concrete, grass, and marble rocks. Each trajectory, five minutes long, is exclusive to a single terrain type for ease of labeling and evaluation. The pre-adaptation training data was collected in the daytime under sunny conditions. Our evaluations then centered on two preference extrapolation scenarios: one, extending to new terrains such as pebble-pavement, concrete-with-shadows, and bushes, all experienced under varying daylight conditions ranging from bright sunlight to overcast skies, and two, adapting to nighttime illumination for familiar terrains that appear visually different. In our experiments, we use the preference order concrete\(\succ\)grass\(\succ\)marble rocks.
### _Quantitative Short-Scale Experiments_
We evaluate patern in three environments with a variety of terrains within the UT Austin campus. We also test under two different lighting conditions, as shown in Fig. 3. The primary task for evaluation is preference-aligned visual off-road navigation, in which the robot is tasked with reaching a goal, while adhering to operator preferences over terrains.
To evaluate the effectiveness of patern, we compare it against five state-of-the-art baseline and reference approaches: **Geometric-Only**[29], a purely geometric obstacle-avoidant planner; **RCA**[10], a self-supervised traversability estimation algorithm based on ride-comfort; **GANav**\({}^{2}\), a semantic segmentation method trained on the RUGD dataset [8]; **Fully-Supervised**, an approach that utilizes a visual terrain cost function comprehensively learned using supervised costs drawn from operator preferences; and lastly, **Human Reference**, which offers a preference-aligned reference trajectory where the robot is teleoperated by a human expert. To train the RCA and Fully-Supervised baselines, in addition to the entirety of the pre-adaptation dataset on the 3 known terrains, we additionally collect 8 trajectories each on the novel terrains during daytime.
Footnote 2: [https://github.com/rayguan97/GANav-offroad](https://github.com/rayguan97/GANav-offroad)
In each environment, we perform five trials of each method to ensure consistency in our evaluation. For each trial, the robot is relocalized in the environment, and the same goal location \(G\) is fed to the robot. In the environments where patern\({}^{-}\) fails to navigate in a preference-aligned manner, we run five trials of the self-supervised patern\({}^{+}\) instance that uses experience gathered in these environments to extrapolate preferences to the novel terrains or novel lighting conditions. Fig. 3 shows the qualitative results of trajectories traced by each method in the outdoor experiments. Only one trajectory is shown for each method unless there is significant variation between trials. Table I shows quantitative results using the mean Hausdorff distance between a human reference trajectory and evaluation trajectories of each method. Table II shows quantitative results for the mean percentage of preference-aligned distance traversed in each trajectory. Note that both the reported metrics may be high if a method does not reach the goal but stays on operator-preferred terrain.

Fig. 3: Trajectories traced by patern and baseline approaches across three different environments and varied lighting conditions within the UT Austin campus. Note the drastic changes in the appearance of the terrain between day and night, which pose a significant challenge for visual navigation. In environments where patern\({}^{-}\) fails to generalize, patern\({}^{+}\) successfully extrapolates and reaches the goal in a preference-aligned manner.
From the quantitative results, we see that, as expected, the \(\mathtt{patern}^{-}\) approach is able to successfully navigate in an operator-preference-aligned manner in Env. 1, which did not contain any novel terrain types. However, \(\mathtt{patern}^{-}\) fails to consistently reach the goal and/or navigate in alignment with operator preferences in the remaining environments. In the daytime experiments, Env. 2 contains a novel terrain (pebble pavement) absent from training data for \(\mathtt{patern}^{-}\), while Env. 3 contains both a novel terrain type (bush) and novel visual terrain appearances caused by tree shadows. In the nighttime experiments, all terrains contain novel visual appearances. Following deployments in Envs. 2 and 3, \(\mathtt{patern}\) extrapolates terrain preferences for new visual data using the corresponding proprioceptive data to retrain environment-specific \(\mathtt{patern}^{+}\) instances. In each environment that the \(\mathtt{patern}^{-}\) model fails, the self-supervised \(\mathtt{patern}^{+}\) model is able to successfully navigate to the respective goal in a preference-aligned manner, without requiring any additional operator feedback during deployment, addressing \(Q_{1}\) and \(Q_{2}\). While the fully-supervised baseline more closely resembles the human reference trajectory compared to \(\mathtt{patern}^{+}\) during the day in Envs. 2 and 3, unlike the fully-supervised approach, \(\mathtt{patern}^{+}\) does not require operator preferences over all terrains and is capable of extrapolating to visually novel terrains.
### _Qualitative Large-Scale Experiment_
To investigate \(Q_{3}\), we execute a large-scale autonomous deployment of \(\mathtt{patern}\) along a challenging 3-mile off-road trail\({}^{3}\). The robot's objective is to navigate in a terrain-aware manner on the trail by preferring \(\mathtt{dirt}\), \(\mathtt{gravel}\) and \(\mathtt{concrete}\) over \(\mathtt{bush}\), \(\mathtt{mulch}\), and \(\mathtt{rocks}\). Failure to navigate in a preference-aligned manner may cause catastrophic effects such as falling into the river next to the trail. An operator is allowed to temporarily take manual control of the robot only to prevent such catastrophic effects, adjust the robot's heading for forks in the trail, or yield to pedestrians and cyclists. The \(\mathtt{patern}^{-}\) model used for short-scale experiments is augmented with approximately five minutes of combined additional data for \(\mathtt{dirt}\), \(\mathtt{bush}\), and \(\mathtt{mulch}\) terrains commonly seen on the trail. Following this preference extrapolation, the \(\mathtt{patern}^{+}\) model is able to successfully navigate the 3-mile trail while requiring only one human intervention. Fig. 4 shows the trajectory of the robot and a number of settings along the trail, including the single unexpected terrain-related intervention in the lower right corner, for the hour-long deployment. Additionally, we attach a video recording of the robot deployment\({}^{4}\). This large-scale study addresses \(Q_{3}\) by qualitatively demonstrating the effectiveness of \(\mathtt{patern}\) in scaling to real-world off-road conditions.
Footnote 3: Ann and Roy Butler trail, Austin, TX, USA
Footnote 4: \(\mathtt{patern}\) deployed in the trail: [https://youtu.be/j7159pE0u6s](https://youtu.be/j7159pE0u6s)
## VII Limitations and Future Work
\(\mathtt{patern}\) uses similarities between novel and known terrains in its learned proprioception representation space to extrapolate preferences. Thus, \(\mathtt{patern}\) needs to have had experiences with terrains bearing close inertial-proprioceptive-tactile resemblances for successful extrapolation. A noticeable limitation is that if a terrain evoking similar proprioceptive features has not been encountered before, \(\mathtt{patern}\) might not be able to extrapolate preferences. Additionally, \(\mathtt{patern}\) utilizes non-visual observations that require a robot to physically drive over terrains, which may be unsafe or infeasible in certain cases. Extending \(\mathtt{patern}\) with depth sensors to handle non-flat terrains is a promising direction for future work.
## VIII Conclusion
In this work, we present _Preference extrApolation for Terrain-awarE Robot Navigation_ (\(\mathtt{patern}\)), a novel approach to extrapolate human preferences for novel terrains in visual off-road navigation. \(\mathtt{patern}\) learns an inertial-proprioceptive-tactile representation space to detect similarities between visually novel terrains and the set of known terrains. Through this self-supervision, \(\mathtt{patern}\) successfully extrapolates operator preferences for visually novel terrain segments, without requiring additional human feedback. Through extensive physical robot experiments in challenging outdoor environments under varied lighting conditions, we find that \(\mathtt{patern}\) successfully extrapolates preferences for visually novel terrains and is scalable to real-world off-road conditions.
Fig. 4: Trajectory trace of a large-scale qualitative deployment of \(\mathtt{patern}^{+}\) along a 3-mile segment of the Ann and Roy Butler trail located in Austin, Texas. With only five minutes of supplementary data, \(\mathtt{patern}\) required only one manual intervention to stay on the trail and successfully completed the hike, demonstrating robustness and adaptability to real-world off-road conditions.
2309.16832 | ALMA Lensing Cluster Survey: average dust, gas, and star formation
properties of cluster and field galaxies from stacking analysis | We develop new tools for continuum and spectral stacking of ALMA data, and
apply these to the ALMA Lensing Cluster Survey (ALCS). We derive average dust
masses, gas masses and star formation rates (SFR) from the stacked observed
260~GHz continuum of 3402 individually undetected star-forming galaxies, of
which 1450 are cluster galaxies and 1952 field galaxies, over three redshift
and stellar mass bins (over $z = 0$-1.6 and log $M_{*} [M_{\odot}] = 8$-11.7),
and derive the average molecular gas content by stacking the emission line
spectra in a SFR-selected subsample. The average SFRs and specific SFRs of both
cluster and field galaxies are lower than those expected for Main Sequence (MS)
star-forming galaxies, and only galaxies with stellar mass of log $M_{*}
[M_{\odot}] = 9.35$-10.6 show dust and gas fractions comparable to those in the
MS. The ALMA-traced average `highly obscured' SFRs are typically lower than the
SFRs observed from optical to near-IR spectral analysis. Cluster and field
galaxies show similar trends in their contents of dust and gas, even when field
galaxies were brighter in the stacked maps. From spectral stacking we find a
potential CO ($J=4\to3$) line emission (SNR $\sim4$) when stacking cluster and
field galaxies with the highest SFRs. | Andrea Guerrero, Neil Nagar, Kotaro Kohno, Seiji Fujimoto, Vasily Kokorev, Gabriel Brammer, Jean-Baptiste Jolly, Kirsten Knudsen, Fengwu Sun, Franz E. Bauer, Gabriel B. Caminha, Karina Caputi, Gerald Neumann, Gustavo Orellana-González, Pierluigi Cerulo, Jorge González-López, Nicolas Laporte, Anton M. Koekemoer, Yiping Ao, Daniel Espada, Alejandra M. Muñoz Arancibia | 2023-09-28T20:24:48Z | http://arxiv.org/abs/2309.16832v1 | ALMA Lensing Cluster Survey: average dust, gas, and star formation properties of cluster and field galaxies from stacking analysis
###### Abstract
We develop new tools for continuum and spectral stacking of ALMA data, and apply these to the ALMA Lensing Cluster Survey (ALCS). We derive average dust masses, gas masses and star formation rates (SFR) from the stacked observed 260 GHz continuum of 3402 individually undetected star-forming galaxies, of which 1450 are cluster galaxies and 1952 field galaxies, over three redshift and stellar mass bins (over \(z=0\)-1.6 and log \(M_{*}[M_{\odot}]=8\)-11.7), and derive the average molecular gas content by stacking the emission line spectra in a SFR-selected subsample. The average SFRs and specific SFRs of both cluster and field galaxies are lower than those expected for Main Sequence (MS) star-forming galaxies, and only galaxies with stellar mass of log \(M_{*}[M_{\odot}]=9.35\)-10.6 show dust and gas fractions comparable to those in the MS. The ALMA-traced average 'highly obscured' SFRs are typically lower than the SFRs observed from optical to near-IR spectral analysis. Cluster and field galaxies show similar trends in their contents of dust and gas, even when field galaxies were brighter in the stacked maps. From spectral stacking we find a potential CO (\(J=4\to 3\)) line emission (SNR \(\sim 4\)) when stacking cluster and field galaxies with the highest SFRs.
keywords: galaxies: star formation - galaxies: evolution - submillimeter: galaxies - radio continuum: galaxies - radio lines: galaxies
## 1 Introduction
Understanding how gas reservoirs and star formation rates (SFR) change with stellar mass and environment density, and evolve over
cosmic time, is crucial to understand galaxy evolution. Since millimetre (mm) and submillimetre (sub-mm) observations can directly (e.g. via the CO rotational transitions) and indirectly (the sub-mm continuum) trace the gas reservoir and also the SFR (e.g. Carilli & Walter, 2013; Scoville, 2013; Scoville et al., 2014, 2016a; Villanueva et al., 2017; Magnelli et al., 2020; Suzuki et al., 2021), the Atacama Large Millimeter/submillimeter Array (ALMA) is a powerful tool in this area.
The mm and sub-mm continuum trace the optically-thin Rayleigh-Jeans tail of dust emission (for typical dust temperatures of \(\sim\)18-50 K). The sub-mm flux can be used to estimate the dust mass assuming, e.g. grey-body dust emission with two parameters: the dust emissivity index \(\beta\) (in the Rayleigh-Jeans limit the grey-body flux varies as \(S_{\nu}\propto\nu^{2+\beta}\)) and the dust temperature \(T\). Values of \(\beta\) are \(1.5-2\) for galaxies at \(z\lesssim 0.1\) (e.g. Clements et al., 2010) and are consistent with \(\beta=1.8\) for galaxies at \(z\sim 2\)-3 (e.g. Chapin et al., 2009). The sub-mm flux can also be used to estimate the molecular gas mass (M\({}_{\rm mol}\)) and the mass of the interstellar medium (M\({}_{\rm ISM}\); atomic plus molecular gas mass; Scoville et al., 2016a). Finally, the sub-mm flux can be used to estimate the infrared luminosity (e.g. Orellana et al., 2017), and thus the SFR, though this conversion is more uncertain than the conversions to dust and ISM masses, since it is more sensitive to the assumed values of the dust temperature(s) and \(\beta\).
Star formation rates, stellar masses, and molecular gas reservoirs of galaxy populations at different redshifts help to constrain the evolution of the main sequence (MS) of star formation - the dependence of SFR on stellar mass (\(M_{*}\)) - and the star formation efficiency (SFE = SFR / M\({}_{\rm{mol}}\)). At a given redshift, higher stellar mass galaxies in the MS have higher star-formation rates, and the median ratio of SFR to stellar mass (the specific SFR, or sSFR) of MS galaxies increases with redshift (e.g. Elbaz et al., 2011; Rodighiero et al., 2011; Speagle et al., 2014).
Galaxies in clusters and in the field show significant differences (e.g. see reviews by Boselli & Gavazzi, 2006, Boselli & Gavazzi, 2014). At low redshift, cluster galaxies tend to be more massive, older, and passive, as compared to field galaxies. At redshifts \(z<1\), cluster galaxies have lower mean SFRs at fixed stellar mass when compared to field galaxies at the same redshift (e.g. \(0.04<z<0.07\), Paccagnella et al., 2016; \(0.4<z<0.8\), Vulcani et al., 2010). Low redshift cluster galaxies also have lower dust-to-stellar mass ratios than field galaxies (Bianconi et al., 2020, \(z\sim 0.2\)).
Low redshift cluster galaxies typically exhibit lower molecular gas fractions than field galaxies (e.g. Zabel et al., 2019, Morokuma-Matsui et al., 2021), though in some cases they can have a comparable molecular gas content (Cairns et al., 2019). As redshift increases, there is an increase of blue, star-forming and spiral galaxies in clusters (the Butcher-Oemler effect, e.g. Butcher & Oemler, 1978, 1984; Pimbblet, 2003). At redshifts \(z\gtrsim 1\), mean SFRs increase with environmental density in both groups (e.g. Elbaz et al., 2007) and clusters (e.g. Popesso et al., 2011).
While large-area or deep pencil beam continuum and spectral line surveys from ALMA are increasingly available (e.g. Oteo et al., 2016, Walter et al., 2016, Gonzalez-Lopez et al., 2017, Franco et al., 2018), each has revealed relatively few individual detections of dust continuum and/or line emission. A promising avenue to better exploit these data is through _stacking_ analysis. Stacking averages the data of \(N\) sources (the noise decreases by a factor \(\sim\)\(\sqrt{N}\); e.g. Fabello et al., 2011, Delhaize et al., 2013) in order to detect the average signal of the stacked sources. Stacking thus allows us to combine the individually undetected signals of many galaxies to search for an average detection.
Stacked detections, as compared to statistics of a few individually detected sources, therefore better trace the average properties of a population. Finally, using stacking analysis in subsamples selected by stellar mass, redshift, and environmental density, even when individual galaxies are undetected, can provide unique constraints on the properties and evolution of the true underlying galaxy population, rather than only a few luminous or bright sources (e.g. Coppin et al., 2015; Simpson et al., 2019; Carvajal et al., 2020).
The ALMA Lensing Cluster Survey (ALCS) is the largest - in area - among the ALMA surveys targeting galaxy clusters. Combined with previous ALMA observations, it has completed observations of 33 massive galaxy clusters. All clusters have been previously imaged with the Hubble Space Telescope (HST), enabling accurate positions and other quantities derived from HST photometry. Initial results using the ALCS include an ALMA-Herschel study of star-forming galaxies at \(z\simeq 0.5-6\) (Sun et al., 2022); the discovery of a faint lensed galaxy at \(z\geq 6\) (Laporte et al., 2021); a spectral stacking analysis of the undetected [C ii] line in lensed galaxies at \(z\sim 6\) (Jolly et al., 2021); the analysis of bright [C ii] 158\(\mu\)m line detections of a Lyman-break lensed galaxy at \(z\sim 6\) (Fujimoto et al., 2021); and the discovery using the Multi Unit Spectroscopic Explorer (MUSE) of a galaxy group at \(z\sim 4.3\) lensed by the cluster ACT-CL J0102-4915, also called 'El Gordo' (Caputi et al., 2021).
In this work we apply newly developed continuum and spectral stacking tools to ALCS maps and datacubes, in order to contrast the average dust masses, gas masses, and star formation rates in cluster versus field galaxy subsamples at \(z\lesssim 1\). Given the large survey area, relatively short integration times (\(\sim\)5 min) per pointing, and the large number of known optical/IR counterparts, the ALCS is one of the best existing ALMA datasets for a stacking analysis to compare the average properties of cluster and field galaxies.
This paper is organised as follows: in Section 2 we briefly describe the ALCS data and the photometric and spectroscopic catalogues used for our stacking analysis; in Section 3 we describe our methods; in Section 4 we present the results of our stacking analysis, and (sub)mm-derived dust masses, gas masses and SFRs; in Section 5 and 6 we discuss and summarise our results, respectively.
Throughout the paper we assume a spatially flat \(\Lambda\)CDM cosmological model with H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}\) = 0.3 and \(\Omega_{\Lambda}\) = 0.7 and the stellar initial mass function (IMF) of Chabrier (2003).
## 2 Data and catalogues
We use data cubes and continuum (moment 0) maps from the ALCS (PI: K. Kohno; Kohno et al., 2023). The ALCS large program, with over 100 hours of integration, observed 33 massive clusters of galaxies at intermediate redshifts, between \(0.187<z_{\rm{spec}}<0.87\) (see Table 1). These clusters were selected from previous HST programs, including 5 galaxy clusters from the Hubble Frontier Fields (HFF; Lotz et al., 2017), 12 galaxy clusters from the Cluster Lensing and Supernova Survey with Hubble (CLASH; Postman et al., 2012) and 16 galaxy clusters from the Reionization Lensing Cluster Survey (RELICS; Coe et al., 2019). The ALMA survey covers a total area of 110 arcmin\({}^{2}\) (primary beam factor cut at 0.5) using a 15-GHz-wide spectral scan and reaching a typical depth of 70\(\mu\)Jy/beam (\(1\sigma\)) at 1.2 mm. For each cluster, ALMA mosaicked a region of about \(\sim\)\(2^{\prime}\times 2^{\prime}\) around the core of the cluster. Spectral scans were used to cover two frequency ranges of 250.1-257.5 GHz and 265.1-272.5 GHz. For the redshifts of our clusters, the resulting rest-frame spectra include the CO (\(J=3\to 2\)) or CO (\(J=4\to 3\)) line for the majority of the cluster galaxies. The wider redshift distribution of field galaxies
results in rest-frame frequencies of 250-1800 GHz, thus including more steps in the CO ladder (\(J=3\to 2\) and higher), and even the [C i] and [C ii] lines.
In this work, for spectral stacking we use dirty cubes (i.e. no cleaning was performed when making the cubes) with 60 km s\({}^{-1}\) channels and the intrinsic spatial resolution of \(\sim\)1-1.5\({}^{\prime\prime}\). For continuum stacking we use dirty maps (moment 0 images; created by summing all channels in the datacube) with a tapered resolution of 2\({}^{\prime\prime}\). All data were taken in phase-calibrated mode and using a nearby well studied phase calibrator with a highly accurate position. The positional information for any given spatial pixel in the dataset is thus accurate to \(\leq 0.1^{\prime\prime}\). These ALMA datasets are described in Fujimoto et al. (2023).
Since spectral stacking requires accurate positions and redshifts to correctly align stacked emission lines (e.g. Maddox et al., 2013; Elson et al., 2019), we have compiled a single ALCS spectroscopic catalogue by combining literature spectroscopic redshifts for the 25 clusters with available data, as summarised in Table 1. The majority of the redshifts are from catalogues which used VLT-MUSE datasets. In constructing these catalogues, the authors used positions and sizes from imaging data to extract aperture spectra, and then used manual fitting or cross correlated with templates to identify multiple emission lines, single strong emission lines, or well defined continuum features, in order to determine a reliable or likely redshift. From these original catalogues, we kept only meaningful extragalactic redshifts (\(z_{\rm spec}\geq 0\)), and eliminated redshifts which the authors flagged as unreliable or low quality (typically quality flag qf=1). This process resulted in a total of 9668 spectroscopic redshifts in our catalogue.
Of these, a total of 2461 galaxies at redshifts \(0.0<z_{\rm spec}<1.0\) and 1348 at redshifts \(1.0<z_{\rm spec}<6.6\) fall inside the ALMA maps of the clusters. This spectroscopic redshift subsample was used in our spectral stacking analysis. Most of the redshift compilations used here do not explicitly specify a redshift error (only a redshift quality flag). Redshifts from MUSE are expected to typically have errors of \(\delta z\lesssim 0.001\) (Karman et al., 2017), i.e. a velocity error of \(\lesssim 300\) km s\({}^{-1}\).
For continuum stacking of the ALCS ALMA datasets we use the version 1.0 (v1.0) photometric catalogues of Kokorev et al. (2022, hereafter K&B catalogues) which include all 33 ALCS clusters. The v1.0 K&B catalogues apply the EAZY code (Brammer et al., 2008) to HST and Spitzer photometry to derive photometric redshifts. The best-fitting templates to the observed-frame UV-to-near-IR photometry are also used to derive stellar masses and SFRs. The full photometric catalogue includes \(\sim\)200,000 sources at redshifts \(0<z_{\rm phot}<12\). The K&B catalogue photometry was typically extracted over \(0.7^{\prime\prime}\) to \(\sim 3.5^{\prime\prime}\) diameter apertures, roughly comparable to the 1\({}^{\prime\prime}\) resolution ALMA continuum images used by us and smaller than the 20\({}^{\prime\prime}\) stamp-size we use in our continuum stacking.
Using the K&B catalogue for continuum stacking, instead of the spectroscopic redshift catalogue, gives us two advantages: (a) a larger number of targets to stack and, therefore, a higher signal to noise ratio in the stacked images. Note that the photometric redshift errors are typically smaller than the redshift bins we use for stacking; (b) the stellar masses and SFRs derived in these catalogues allow us to stack in bins of stellar mass and SFR. Note that these quantities are derived from template fitting, and it is nontrivial to scale them to a different (spectroscopically derived) redshift.
The K&B catalogues provide magnification factors for lensed sources; their listed fluxes, SFRs, and stellar masses are not corrected for this magnification. We use these magnification factors to derive the demagnified stellar mass, SFR and flux for each object. Therefore, henceforth all physical properties from the K&B catalogues refer to the demagnified quantities.
We refer the reader to Kokorev et al. (2022) for a full comparison between their photometric redshifts and the \(\sim\)7000 spectroscopic redshifts available in their fields. They found good agreement between the two in \(\sim\)80% of their sample. The remaining \(\sim\)20% have large, sometimes catastrophic (\(|z_{\rm phot}-z_{\rm spec}|\sim 3\)), errors in photometric redshifts. These photometric redshift failures are mainly due to confusion of the Lyman, Balmer, and 4000 Å breaks in the template SEDs. The catastrophic redshift errors result in high redshift galaxies being assigned a photometric redshift \(z_{\rm phot}\sim 0\), and vice-versa.
Using the full K&B catalogue for our cluster and field galaxies produces noticeable 'striping' when plotting stellar mass as a function of redshift, likely due to erroneous photometric redshifts. To avoid galaxies with 'catastrophic redshift errors' from the K&B catalogue (which would produce erroneous stellar masses and SFRs), to more reliably separate cluster and field galaxies, to avoid strong line-emitting galaxies producing false continuum detections, and to eliminate passive galaxies, we apply the following filters to the catalogue:
1. Magnitude cutoff of 24 in the H-band, to minimise contamination caused by faint blue galaxies.
2. Discarding galaxies without IRAC photometry available (flagged as bad_phot = 1), to avoid selecting galaxies with low quality photometry.

\begin{table}
\begin{tabular}{c c c c}
\hline HST field & Cluster & \(z_{\rm spec}\) & References \\
\hline
 & Abell 2744 & 0.308 & Richard et al. (2021) \\
 & Abell S1063 & 0.348 & Karman et al. (2017); Mercurio et al. (2021) \\
HFF & Abell 370 & 0.375 & Richard et al. (2021) \\
 & MACS J0416.1-2403 & 0.396 & Richard et al. (2021) \\
 & MACS J1149.5+2223 & 0.543 & Grillo et al. (2016); Treu et al. (2016) \\
\hline
 & Abell 383 & 0.187 & Geller et al. (2014) \\
 & Abell 209 & 0.206 & Annunziatella et al. (2016) \\
 & RXJ2129.7+0005 & 0.234 & Jauzac et al. (2021) \\
 & MACS1931.8-2635 & 0.352 & Caminha et al. (2019) \\
 & MACS1115.9+0129 & 0.352 & Caminha et al. (2019) \\
 & MACS0429.6-0253 & 0.399 & Caminha et al. (2019) \\
CLASH & MACS1206.2-0847 & 0.440 & Richard et al. (2021) \\
 & MACS0329.7-0211 & 0.450 & Richard et al. (2021) \\
 & RXJ1347.5-1145 & 0.451 & Richard et al. (2021) \\
 & MACS1311.0-0310 & 0.494 & Caminha et al. (2019) \\
 & MACS1423.8+2404 & 0.545 & Treu et al. (2015); Schmidt et al. (2014) \\
 & MACS2129.4-0741 & 0.570 & Jauzac et al. (2021) \\
\hline
 & Abell 2163 & 0.2030 & Rescigno et al. (2020) \\
 & PLCK G171.9-40.7 & 0.2700 & - \\
 & Abell 2537 & 0.2966 & Foëx et al. (2017) \\
 & Abell S295 & 0.3000 & Bayliss et al. (2016) \\
 & MACSJ0035.4-2015 & 0.3520 & - \\
 & RXC J0949.8+1707 & 0.3826 & - \\
 & SMACSJ0723.3-7327 & 0.3900 & - \\
RELICS & RXC J0032.1+1808 & 0.3956 & - \\
 & RXC J2211.7-0350 & 0.3970 & - \\
 & MACSJ0159.8-0849 & 0.4050 & Stern et al. (2010) \\
 & Abell 3192 & 0.4250 & - \\
 & MACSJ0553.4-3342 & 0.4300 & Ebeling et al. (2017) \\
 & MACSJ0417.5-1154 & 0.4430 & Jauzac et al. (2019) \\
 & RXC J0600.1-2007 & 0.4600 & - \\
 & MACSJ0257.1-2325 & 0.5049 & Stern et al. (2010) \\
 & ACT-CL J0102-4915 & 0.8700 & Sifón et al. (2013) \\
\hline
\end{tabular}
\end{table}
Table 1: Spectroscopic catalogues for ALCS clusters.
3. Selection of galaxies with \(|z_{\rm err}|/z_{\rm phot}<0.5\), where \(z_{\rm err}\) is the '16 percentile to 84 percentile error' of \(z_{\rm phot}\) listed in the catalogues. Here we do not use the typical \(|z_{\rm err}|\) / (1+\(z_{\rm phot}\)) criterion since we later require to reliably separate cluster and field galaxies.
4. Selection of galaxies with \(z_{\rm phot}>0.05\), to avoid \(z\sim 4\) galaxies for which \(z_{\rm phot}\sim 0\) (see above).
5. Using a lensing magnification cutoff of \(\mu<20\), to avoid galaxies that may have an erroneously large magnification factor.
6. Discarding galaxies with individual emission line detections (Fujimoto et al., 2023, Gonzalez et al., in prep.).
7. Selection of galaxies with \(M_{*}>1\times 10^{8}\)\(M_{\odot}\) to eliminate galaxies with erroneously low stellar masses due to erroneous redshifts.
8. Selection of galaxies with sSFR \(\geq\) (MS \(-\) 1 dex), i.e. higher than 1 dex below the expected MS, to eliminate passive galaxies and to eliminate galaxies with erroneously low SFR due to erroneous redshifts. The typical \(1\sigma\) spread of the Main Sequence is \(\pm 0.3\) dex (e.g. Elbaz et al., 2011). Given the likely larger errors in the K&B catalogue SFRs and stellar masses (due to these depending on photo-z determinations), and the clear clump of passive galaxies seen in the top left panel of Fig. 1, we use a larger spread of 1 dex (above the dashed red line) in order to capture most star-forming galaxies below the fiducial MS. As seen in the left panels of Fig. 1, this sSFR selection eliminates most passive galaxies in the cluster galaxy sample.
After applying these filters, the catalogue has \(\sim\)13000 sources, at redshifts \(0<z_{\rm phot}<12\).
We then divided the filtered photometric catalogue into cluster and field galaxy sub-catalogues at \(z_{\rm phot}\leq 1.6\). Cluster galaxies were selected as those with \(z_{\rm phot}\) within \(\pm 0.1\) of the cluster redshift, and field galaxies as those that do not fulfil this condition. The \(\pm 0.1\) range is used to avoid significant contamination of the galaxy cluster subsample, given that the photometric redshift errors of the filtered sample at \(z_{\rm phot}\leq 1.6\) are \(|z_{\rm err}|/z_{\rm phot}<0.1\).
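A minimal sketch of this membership criterion (illustrative only; the actual selection also applies the filters listed above):

```python
import numpy as np

def split_cluster_field(z_phot, z_cluster, width=0.1):
    """Return boolean masks (cluster, field): a galaxy is a cluster
    member when its photometric redshift lies within +/- `width` of
    the cluster redshift; all other galaxies are field galaxies."""
    z_phot = np.asarray(z_phot)
    is_cluster = np.abs(z_phot - z_cluster) <= width
    return is_cluster, ~is_cluster
```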
The final catalogue we use is thus left with 3321 cluster galaxies at \(0.0<z_{\rm phot}\leq 1.0\) and 7421 field galaxies at \(0.0<z_{\rm phot}\leq 1.6\). We define this filtered catalogue of star-forming galaxies as 'ALCS-SF'. The left panels of Fig. 1 show the distributions of sSFR for all galaxies in our sample (but not yet applying filter (viii) above), to clearly distinguish the star-forming galaxies we will use in the stacking analysis (above the dashed red line), the passive galaxy sample (between the two red lines), and the galaxies we do not use (below the dotted red line). The middle and right panels show the stellar mass and SFR as a function of photometric redshift, for the cluster (top) and field (bottom) galaxies selected after all filters (including filter (viii)) were applied, i.e. the 'ALCS-SF' sample.
In the 'ALCS-SF' sample, 1656 cluster galaxies and 2223 field galaxies fall inside the ALMA maps of the clusters. These ALMA-observed galaxies were further divided into stellar mass and redshift bins.
Details on this binning, and the final number of galaxies stacked, can be found in Section 4.3.1. A comparison of spectroscopic and photometric redshifts of the galaxies used in the stack gives a median error of \(|z_{\rm spec}-z_{\rm phot}|\) / \(z_{\rm spec}\) = 0.07, justifying our use of the \(z\pm 0.1\) criterion for cluster galaxies.
Our continuum-stacking results primarily contrast cluster and field galaxies at \(0.0<z_{\rm phot}\leq 1\); field galaxies at \(1<z_{\rm phot}\leq 1.6\) are only used to calibrate the redshift dependence of the scaling relations we use for converting stacked radio fluxes into physical quantities. A stacking analysis of sources at \(z_{\rm phot}\geq 1\) will be presented by Jolly et al. (in prep).
Figure 1: Left panels: Specific SFR (sSFR) as a function of photometric redshift for all cluster (top) and field (bottom) galaxies in our parent sample with all filters applied except for filter (viii) described in Sect. 2. The black line and cyan shaded region show the predicted redshift evolution of the main sequence (MS) of star formation from Elbaz et al. (2011), and its \(\sim\)0.3 dex dispersion. The red dashed lines trace 1 dex below the MS; galaxies above this line form our star-forming (‘ALCS-SF’) sample used in the stacking analysis. The dotted red lines trace 2 dex below the MS; galaxies below this line are not used in our analysis. Galaxies between the two red lines are considered passive galaxies. Middle and right panels: stellar masses (middle) and SFRs (right panels), as a function of photometric redshift, for the ‘ALCS-SF’ sample, i.e. galaxies above the dashed red line in the left panels.
## 3 Methods
### Stacking software
We have developed continuum and spectral stacking codes, which we make publicly available. Both must be executed within the Common Astronomy Software Applications package (CASA; McMullin et al., 2007); they rely primarily on Python packages included in CASA and are thus easy to run in current and future versions of CASA.
Publicly available stacking software packages for ALMA data include the continuum (in image and \(uv\)-domain) stacking package of Lindroos et al. (2015) and the spectral stacking package of Jolly et al. (2020). Our new stacking codes are based on the Lindroos et al. (2015) code - specifically, we use their functions to handle the input maps/cubes and the stacking positions - but with several modifications in order to optimise for the case of incremental stacking of very large datasets, where the stack must be rerun whenever one or a few new maps/cubes are added to the overall large set of input maps/cubes. This is handled by storing intermediate results in the stacking process; when the stacking is rerun, the user has the option to use these intermediate results together with the few new apertures or images to be extracted before the final stack. The stored intermediate results are useful for posterior analysis. For example, in the case of image stacking, all extracted sub-images are stored in an ALMA cube format. This allows the user to easily stack subparts of the 'cube' or use Monte Carlo (MC) analysis to test the robustness of the stacked result in case of e.g. position errors, or to test the effect of a single to a few subimages on the stacked result.
At the end of the stacking process, plotting scripts provide easily visualised plots and analyses of the intermediate and final extracted images or spectra. The stacking software is detailed in Appendix A and its documentation is available on GitHub1.
Footnote 1: [https://github.com/guerrero-andrea/stacking_codes](https://github.com/guerrero-andrea/stacking_codes)
### Star Formation Rates, Dust masses and ISM masses from ALMA 1.2 mm fluxes
We derive SFRs from the observed-frame 1.2 mm ALMA fluxes (individual fluxes of detected sources or fluxes in stacked maps). Fluxes in individual or stacked maps are measured using the source extraction software Blobcat (see Section 4.3.1). The measured flux is immediately corrected for lensing magnification using the magnification listed in the K&B catalogue for an individual source, or the mean magnification of all sources in the stack for stacked maps.
First, we use the photometric redshift (individually detected sources) or mean photometric redshift of the stacked sample (stacked detections) to derive a representative rest wavelength [1.2mm/(1+\(<\)z\(>\))] for the map and test if this is closer to 350\(\mu\)m or 850\(\mu\)m, the two wavelengths at which we have a reliable flux to SFR conversion from Orellana et al. (2017). The measured flux (the total flux from a 2D Gaussian fit, see Sect. 4.3.1) in the map is then extrapolated to the closest of the above two wavelengths assuming a grey body spectrum;
\[I_{\nu}\propto\nu^{\beta}\,\mathrm{B}_{\nu}(T_{\mathrm{d}}) \tag{1}\]
where we choose \(\beta=1.8\); this value of \(\beta\) is supported by several samples of low and high redshift galaxies (e.g. Scoville et al., 2016, 2017; Dunne et al., 2022). Using values of \(\beta=1.2\) instead would change our flux extrapolations by \(\sim\)5-30% over the redshift range we use. The luminosity-weighted dust temperature (T\({}_{\mathrm{d}}\)) is calibrated later in this subsection.
The extrapolated flux at 350\(\mu\)m or 850\(\mu\)m is then converted into a specific luminosity \(L_{\nu}\) [W Hz\({}^{-1}\)] at either \(\nu=857\) GHz or 353 GHz. This specific luminosity is, in turn, converted to an infrared luminosity \(L_{\mathrm{IR}}\), and then to SFR, using the equations of Orellana et al. (2017):
\[\log\left(\frac{L_{\rm IR}}{[L_{\odot}]}\right)=1.017\log\left(\frac{L_{350\mu m}}{[{\rm W\,Hz^{-1}}]}\right)+0.118\left(\frac{T_{\rm cold,\,dust}}{[K]}\right)-16.45 \tag{2}\]

\[\log\left(\frac{L_{\rm IR}}{[L_{\odot}]}\right)=1.01\log\left(\frac{L_{850\mu m}}{[{\rm W\,Hz^{-1}}]}\right)+0.15\left(\frac{T_{\rm cold,\,dust}}{[K]}\right)-15.93 \tag{3}\]

\[{\rm SFR}\,[M_{\odot}\,{\rm yr}^{-1}]=1.05\times 10^{-10}\,L_{\rm IR}\,[L_{\odot}]. \tag{4}\]
These Orellana et al. (2017) calibrations were based on a large sample of redshift zero galaxies with _Planck_ flux measurements. Our use of 1.2 mm observed frame fluxes, thus rest-frame fluxes at \(\geq\)600\(\mu\)m, justifies the use of T\({}_{\mathrm{d}}=\) T\({}_{\mathrm{cold,dust}}\) for our galaxies or stacks, though we caution the reader that these zero-redshift calibrations are being used here to derive SFRs out to redshift 1.6. A justification of the value of T\({}_{\mathrm{d}}\) used in these equations is provided further below in this section. We further note that Orellana et al. (2017) used the relationship of Kennicutt (1998) and a Salpeter (1955) IMF. We divided their equation by a factor of 1.7 (e.g. Genzel et al., 2010, Man et al., 2016, Lagani et al., 2016) in order to base our SFR (eqn. 4) on a Chabrier (2003) IMF.
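For reference, a sketch of this flux-to-SFR conversion (eqns. 1 to 4) is given below; it illustrates the method rather than reproducing our pipeline code, and details such as the exact observing frequency per map are simplified assumptions:

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM
from astropy.modeling.physical_models import BlackBody

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sfr_from_1p2mm(flux_mjy, z, T_d=23.0, beta=1.8, nu_obs_ghz=260.0):
    """'O17' SFR [M_sun/yr] from a demagnified observed-frame ~1.2 mm flux."""
    bb = BlackBody(temperature=T_d * u.K)
    nu_rest = nu_obs_ghz * (1 + z) * u.GHz
    lam_rest = nu_rest.to(u.um, equivalencies=u.spectral())
    # Pick the calibration wavelength (350 or 850 um) closest in log space.
    nu_tgt = (857.0 if abs(np.log(lam_rest.value / 350.0)) <
              abs(np.log(lam_rest.value / 850.0)) else 353.0) * u.GHz
    # Grey-body extrapolation: S_nu ~ nu^beta * B_nu(T_d)  (eqn. 1).
    scale = (nu_tgt / nu_rest) ** beta * (bb(nu_tgt) / bb(nu_rest))
    flux_tgt = flux_mjy * u.mJy * scale
    # Rest-frame specific luminosity at nu_tgt.
    L_nu = (4 * np.pi * cosmo.luminosity_distance(z) ** 2 *
            flux_tgt / (1 + z)).to(u.W / u.Hz)
    logL = np.log10(L_nu.value)
    if nu_tgt.value > 500:   # 350 um calibration, eqn. (2)
        logLIR = 1.017 * logL + 0.118 * T_d - 16.45
    else:                    # 850 um calibration, eqn. (3)
        logLIR = 1.01 * logL + 0.15 * T_d - 15.93
    return 1.05e-10 * 10 ** logLIR   # eqn. (4)
```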
Since we will use these equations on stacked fluxes, we tested whether the estimated SFR from stacked galaxies (i.e. using the mean flux and mean redshift to derive a mean/stacked SFR) is equivalent to computing the SFR per galaxy and then taking the mean (i.e. the true mean SFR). A full description of this test can be found in Appendix B. For this test, we use MC simulations to compare the derived SFR for a source with constant flux as a function of redshift, the derived mean SFR from the mean flux and redshift of random sources (stacked SFR), and the derived mean SFR from individual SFR measurements per random source (true mean SFR). For the last two cases, each source is simulated to have a random flux between 0 and 0.1 mJy, spanning typical noise values within the 1.2 mm continuum images. These scenarios are shown in Fig. 11 (top panel). These MC results can be compared to those for a source with a constant flux of 0.05 mJy (the mean value of the random fluxes used) over the full redshift range. As an additional comparison, we contrast the true mean SFR of individually detected galaxies from Fujimoto et al. (2023) with the SFR value derived from their stacked continuum image.

Figure 2: Ratio between SFRs derived from a single ALMA flux using our method (based on equations 1 to 4) and the SFRs listed in the K&B photometric catalogues, for individually detected sources from Fujimoto et al. (2023). The sample was divided into two bins and we find the best median agreement when using T \(=22.8\) K for sources at \(0.0<z\leq 0.6\) and T \(=22.4\) K for sources at \(0.6<z\leq 1.6\). The median value for each bin is shown with the red dashed lines.
In Fig. 11 we used a dust temperature that is constant with redshift. For this test, we used \(\rm T_{d}=22\) K, which is within the temperature range seen both in local star forming galaxies (Dunne et al., 2022) and in quiescent galaxies (Magdis et al., 2021), although any temperature in this range could have been used. Clearly, for \(z\gtrsim 1\), the estimated SFR (for the constant flux) is relatively independent of redshift: thus the true mean SFR and the stacked SFR are nearly interchangeable at these redshifts. However, at \(z\lesssim 1\), the SFR of a constant flux source varies significantly with redshift, so the mean SFR of several detected sources is not necessarily equivalent to the SFR obtained by first stacking the maps, and then using the average flux and redshift to obtain the stacked SFR. Nevertheless, we argue that this SFR derivation can also be used at \(z\lesssim 1\) for the following reasons.
The redshift distributions of each stellar mass sub-bin of each redshift bin are not significantly different in most bins (each bin description is found in Section 4.3.1). In the absence of strong systematic effects (evolution of temperature or SFR with redshift) we would expect that the process of stacking and then estimating the SFR via the mean redshift is not too different from first estimating individual SFRs and then later averaging. In fact, Fig. 11 shows that, for these redshift distributions, the mean SFR of the galaxies in the sample is fairly similar to the stack-derived mean SFR and remains within 3% for the simulated sources and 30% for detected galaxies, under the assumption of no systematic effects.
The remaining unknown is thus the (single dust component) temperature, \(\rm T_{d}\), to be used in equations 1, 2 and 3. To calibrate this temperature (and its variation with redshift) we compared our derived SFRs to those listed in the K&B catalogue, under the assumption that the unobscured and obscured star formation are correlated. For this we selected the continuum sources individually detected in the ALCS fields at signal to noise ratios (SNR) \(>\) 5 (Fujimoto et al., 2023) that are classified as 'ALCS-SF' galaxies in our sample, and used their fluxes to derive the 'highly obscured' SFR via our method (eqns. 1 to 4; hereafter the 'O17' method). We first searched for the galaxies that were also in the K&B catalogue, and found 53 sources. Then we selected only the ones with redshift \(z\leq 1.6\), resulting in 29 galaxies. We divided the sample into two redshift bins, \(0<z\leq 0.6\) and \(0.6<z\leq 1.6\). For each bin, we assume a single temperature and derive the O17 SFR for the galaxies. We varied \(\rm T_{d}\) in order to get a median ratio close to 1 for the O17 to K&B SFRs. This resulted in \(\rm T_{d}=22.8\) K for the lowest redshift bin, and \(\rm T_{d}=22.4\) K for the second redshift bin. Fig. 2 shows the ratio of our O17-derived SFRs (derived from a single observed 1.2 mm flux) to the K&B catalogue SFRs.
A detailed analysis of SF galaxies and SMGs (Dunne et al., 2022) found SF galaxies to have mass-weighted temperatures (\(\rm T_{mw}\)) in the range 20-25 K with a median of 23 K, a value slightly lower than the 25 K used by Scoville et al. (2016) in a mixed sample of SFs and SMGs. The luminosity weighted dust temperature (\(\rm T_{d}\)) is expected to be higher than the mass-weighted equivalent (\(\rm T_{mw}\)) by a few degrees (e.g. Scoville et al., 2016), but in the absence of detailed multi-component dust temperature fits, we assume that \(\rm T_{d}=\rm T_{mw}\). Hereafter we use a value of 23 K for all of \(\rm T_{d}\), \(\rm T_{cold,dust}\), and \(\rm T_{mw}\), and note that this temperature is slightly higher than the luminosity weighted temperatures (\(\sim\)19-21 K) seen in quiescent galaxies over
Figure 3: Left panels: continuum stacks of all undetected sources from the K&B catalogue. Right panel: spectral stack of all undetected sources from our compiled spectroscopic catalogue. The left upper (lower) panel shows the mean (median) continuum stack of the individually undetected sources. Above each panel, we show the number of sources stacked (N), the average rms of all input maps that were stacked (\(\sigma_{maps}\)) and the rms of the final stacked map (\(\sigma_{stack}\)). The black crosses show the centre of each map and the black circle shows the synthesised ALMA beam size. The black contours show the levels of [\(-3\sigma\), \(-2\sigma\), \(2\sigma\), \(3\sigma\), \(4\sigma\)], where \(\sigma\) is the standard deviation of the edge of the image (outside the black square). The right panel shows an excerpt of the frequency range from a mean spectral stack of all sources with redshifts in our compiled spectroscopic catalogue, which have no individually detected emission lines. The upper panel shows the stacked spectrum as a function of the rest-frame frequency. The dashed red line shows the mean standard deviation \(\sigma\) of the spectrum. The lower panel shows the number of objects stacked at each frequency.
this redshift range (e.g. Fig. 6 of Magdis et al., 2021). Varying T\({}_{\rm d}\) by \(\pm 2\) K results in +100% and \(-50\)% changes in the derived SFRs via eqns. 1 to 4.
We estimate dust masses using the following equations from Orellana et al. (2017) with T\({}_{\rm d}\) = 23 K:
\[\log\left(\frac{M_{\rm dust}}{[M_{\odot}]}\right)=0.940\log\left(\frac{L_{350\mu m}}{[{\rm W\,Hz^{-1}}]}\right)-0.0791\left(\frac{T_{\rm cold,\,dust}}{[K]}\right)-12.60 \tag{5}\]

\[\log\left(\frac{M_{\rm dust}}{[M_{\odot}]}\right)=0.993\log\left(\frac{L_{850\mu m}}{[{\rm W\,Hz^{-1}}]}\right)-0.054\left(\frac{T_{\rm cold,\,dust}}{[K]}\right)-13.310 \tag{6}\]
Varying T\({}_{\rm d}\) by \(\pm 2\) K would result in a \(\pm 30\)% change in the dust mass in the inverse sense.
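These relations reduce to simple linear forms in log space; a minimal sketch (inputs assumed to be already demagnified):

```python
def log_dust_mass(logL_nu, T_d=23.0, at_350um=True):
    """log10(M_dust / M_sun) from log10(L_nu / [W Hz^-1]) at rest-frame
    350 um (eqn. 5) or 850 um (eqn. 6)."""
    if at_350um:
        return 0.940 * logL_nu - 0.0791 * T_d - 12.60   # eqn. (5)
    return 0.993 * logL_nu - 0.054 * T_d - 13.310        # eqn. (6)
```

Note that shifting T\({}_{\rm d}\) by \(\pm 2\) K moves eqn. (5) by \(\mp 0.16\) dex, roughly the \(\sim\)30% level quoted above.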
We derived dust-corrected near ultraviolet (NUV) SFRs using NUV fluxes from the K&B catalogues and IR luminosities derived from the 1.2mm ALMA fluxes (eqns 2 and 3). For this we followed the method described by Hao et al. (2011). The corrected NUV luminosity, L(NUV)\({}_{\rm corr}\), is estimated as,
\[L({\rm NUV})_{\rm corr}\,[{\rm erg\,s^{-1}}]=L({\rm NUV})_{\rm obs}\,[{\rm erg\,s^{-1}}]+0.27\,L_{\rm IR}\,[{\rm erg\,s^{-1}}] \tag{7}\]

where \(L({\rm NUV})_{\rm obs}\,[{\rm erg\,s^{-1}}]\) is measured from the NUV flux densities using \(L=\nu L_{\nu}\). Then, the corrected SFR is measured as

\[\log\,{\rm SFR_{corr}}\,[M_{\odot}\,{\rm yr^{-1}}]=\log L({\rm NUV})_{\rm corr}-42.959 \tag{8}\]
Finally, the value of SFR\({}_{\rm corr}\) derived above is divided by a factor 1.7 in order to convert a Salpeter based SFR into a Chabrier (2003) based SFR.
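A sketch of this correction, with the solar luminosity conversion as the only added constant:

```python
import numpy as np

L_SUN = 3.826e33  # erg s^-1

def sfr_nuv_corrected(L_nuv_obs_erg_s, L_IR_lsun):
    """Dust-corrected NUV SFR [M_sun/yr] (eqns. 7-8). Inputs are the
    observed nu*L_nu NUV luminosity in erg/s and L_IR in L_sun; the
    final factor 1.7 converts from a Salpeter to a Chabrier IMF."""
    L_corr = L_nuv_obs_erg_s + 0.27 * L_IR_lsun * L_SUN   # eqn. (7)
    return 10 ** (np.log10(L_corr) - 42.959) / 1.7        # eqn. (8) + IMF
```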
This single-component dust temperature of 23 K is also used, together with the observed 1.2 mm flux, to estimate the molecular gas mass (\(M_{\rm mol}\)) using eqns. 6, A14 and A16 of Scoville et al. (2016a,b), and using \(\alpha_{850}=5.5\times 10^{19}\) erg s\({}^{-1}\) Hz\({}^{-1}\)\(M_{\odot}^{-1}\), the average value they obtain for SF galaxies. Note that Scoville et al. (2016) used \(\alpha_{\rm CO}\) = 6.5 M\({}_{\odot}\) pc\({}^{-2}\) (K km/s)\({}^{-1}\) to derive the molecular gas mass. The more recent exhaustive analysis of Dunne et al. (2022) finds that SF galaxies have a median \(\alpha_{850}=5.9\), coincidentally close to the value found by Scoville et al. (2016), even though Dunne et al. (2022) recommend the use of \(\alpha_{\rm CO}\) = 4.3. Similar to the test of a stacked-derived SFR versus the true mean of individual SFRs described above, we compare these two cases for dust masses and \(M_{\rm mol}\) masses in Appendix B and Fig. 11 (middle and bottom panels), and reach similar conclusions, i.e. we do not expect significant differences between deriving quantities from a stacked image, as compared to averaging individually measured values.
For individually detected or stacking detected CO emission lines, we convert the emission line flux into a molecular gas mass following Solomon et al. (1992), after using CO ladder luminosity ratios from Carilli and Walter (2013) to convert luminosities of higher CO rotational transitions into CO (\(J=1\to 0\)) luminosities, and \(\alpha_{\rm CO}\) = 4.3 M\({}_{\odot}\) pc\({}^{-2}\) (K km/s)\({}^{-1}\)(Dunne et al., 2022).
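As a sketch of this conversion, the following implements the line-luminosity relation of Solomon et al. (1992) (in the form \(L^{\prime}_{\rm CO}=3.25\times 10^{7}\,S\Delta v\,\nu_{\rm obs}^{-2}\,D_{L}^{2}\,(1+z)^{-3}\), with \(S\Delta v\) in Jy km/s, \(\nu_{\rm obs}\) in GHz and \(D_{L}\) in Mpc) together with the ladder correction. The \(J=4\to 3\) ratio of 0.17 is the MW value quoted later in the text; the \(J=3\to 2\) ratio of 0.27 is an assumed MW value from Carilli and Walter (2013).

```python
# Assumed MW CO ladder ratios r_J = L'_CO(J -> J-1) / L'_CO(1 -> 0).
LINE_RATIO_MW = {3: 0.27, 4: 0.17}

def molecular_gas_mass(S_dv, nu_obs_GHz, D_L_Mpc, z, J, alpha_CO=4.3):
    """M_mol [M_sun] from a CO(J -> J-1) line flux S_dv [Jy km/s]."""
    # Line luminosity in K km/s pc^2 (Solomon et al. 1992)
    L_prime = 3.25e7 * S_dv * nu_obs_GHz**-2 * D_L_Mpc**2 * (1.0 + z)**-3
    L_prime_10 = L_prime / LINE_RATIO_MW[J]  # correct to CO(1 -> 0)
    return alpha_CO * L_prime_10
```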
## 4 Results
Our stacking codes provide mean and median stacked results (for both continuum and spectral stacking). We performed the spectral stacking using a stamp-size of 1\(\arcsec\) and the continuum stacking with a stamp-size of 20\(\arcsec\times 20\arcsec\). The continuum stacking code also provides separate results for the full sample, only individually detected sources, and only individually undetected sources. Even though the number of individual continuum detections in the ALCS fields is relatively low (e.g. 95 of the SNR \(>\)5 continuum detections in the ALCS fields (Fujimoto et al., 2023) are in the K&B catalogues), the crowded and clustered nature of our source catalogue means that a few bright individually detected sources could bias both the detected central flux and the outskirts of the stacked maps. Thus, unless otherwise mentioned, we use and present results based on mean continuum stacking after eliminating stamps in which a central or peripheral source is individually detected with a peak flux \(>\) 5 times the rms of the stamp (see the sketch below). For spectral stacking we also show mean stacking results, unless otherwise stated. All stacks were performed without weighting.
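A minimal sketch of this stamp-rejection logic follows; the published stacking codes operate on calibrated ALMA maps and handle medians, weighting and astrometry, none of which is reproduced here.

```python
import numpy as np

def mean_stack(stamps, clip_snr=5.0):
    """Mean-stack 2D stamps, skipping any stamp whose peak flux exceeds
    clip_snr times that stamp's own rms (bright-source rejection)."""
    kept = [s for s in stamps if np.nanmax(s) < clip_snr * np.nanstd(s)]
    return np.nanmean(kept, axis=0), len(kept)
```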
### Full sample stacks
To illustrate the power, and reliability, of stacking in the ALCS, we stacked all sources - cluster and field - in our ALCS continuum maps using the K&B catalogues with the extra filters described in Sect. 2, but for the full redshift range of the catalogue, i.e. \(0<z<12\).
Figure 4: Spectral stacking results: The left panel shows the stacked CO (\(J=3\to 2\)) spectrum of all (5) sources for which redshifts are available in our compiled spectroscopic catalogue and in which this line is individually detected. The right panel shows the spectrum of the CO (\(J=4\to 3\)) line in a spectral stack of sources in our compiled spectroscopic catalogue which individually show no detected emission lines, and are in the uppermost 15-percentile SFR bin, where the SFRs were taken from the K&B catalogue. In both panels, the mean stacked profiles are shown in blue, the best-fitting Gaussian to this profile in red, and the median stack line profile in black. The bottom sub-panels show the number of objects stacked.
The left panel of Fig. 3 shows the continuum stacked maps of the individually non-detected sources (mean and median stacks). As expected, the rms of the stacked image decreases by a factor of \(\sim\)\(\sqrt{N}\), where N is the number of objects stacked. The mean stacked continuum map of the individual non-detections has an rms of \(\sigma_{stack}=1.4\mu\)Jy: equivalent to a \(\sim\)8 day integration with ALMA at 250 GHz.
An example spectral stacking result of all galaxies with accurate spectroscopic redshifts is shown in the right panel of Fig. 3; this figure shows an excerpt of the spectral stack of all sources in which no line was individually detected. While there is no clear line detection here (further below we present a potential CO (\(J=4\to 3\)) stacked line detection in a high-SFR subsample), the figure demonstrates the power of spectral stacking in ALMA datasets.
### Spectral stacking in subsamples
We performed spectral stacking in several subsamples within the compiled spectroscopic catalogue. Here we present results for the two subsamples in which stacked emission lines are detected. Note that, except for the results in Sect. 4.2.1, subsamples were chosen by stellar masses and SFRs from the K&B catalogues; i.e. derived using photometric, rather than spectroscopic, redshifts.
#### 4.2.1 Stack of detected CO (\(J=3\to 2\)) emission lines
A stacked spectrum of all 5 sources for which redshifts exist in our compiled spectroscopic catalogue and in which the CO (\(J=3\to 2\)) line is individually detected, is shown in the left panel of Fig. 4. These lines were detected in galaxies in the clusters Abell 370, Abell 2744 and MACS1931.8-2635. The stacked emission line profile has broader wings than a single Gaussian. While this could be a sign of outflows, it could be only an effect of redshift errors, so we do not further interpret the wings. The best-fit Gaussian to the mean stacked spectrum gives a peak flux of 8.5 \(\pm\)0.1 mJy, total flux of 2.84 \(\pm\) 0.05 Jy km/s, and FWHM of 293 \(\pm\)4 km/s. These five galaxies have \(z_{\rm spec}=0.293\)-0.359, and in the K&B catalogues, a log \(M_{*}=10\)-10.4 M\({}_{\odot}\) and SFR = 1-10.7 M\({}_{\odot}\)/yr, i.e. they are between normal star forming galaxies and Luminous Infrared Galaxies (LIRGs). Thus to convert the CO (\(J=3\to 2\)) flux to a molecular gas mass we use (a) the Milky Way (MW) value of \({\rm L}^{\prime}_{\rm CO(3\to 2)}/{\rm L}^{\prime}_{\rm CO(1\to 0)}\)
described in Sect. 2, since in this specific stacking analysis we do not use the photometric redshift, and our results are thus not strongly affected by SFR errors.
We then selected all sources in the uppermost 15-percentile of SFRs, with no individual line detections. This subsample includes \(\sim\)300 galaxies with SFRs of \(\sim\)0.5 to \(\sim\)2000 \(\,\mathrm{M}_{\odot}\)/yr. From that selection, only \(\sim\)20 of the galaxies had a rest-frame spectrum between \(\sim\)460 GHz and \(\sim\)463 GHz, thus contributing to the line, with SFRs of \(\sim\)1 to \(\sim\)25 \(\,\mathrm{M}_{\odot}\)/yr, and a mean of \(\sim\)6 \(\,\mathrm{M}_{\odot}\)/yr. Also, they have stellar masses of log \(M_{*}=\)8.3-10.9 \(\,\mathrm{M}_{\odot}\) and a mean of log \(M_{*}=\)10.1 \(\,\mathrm{M}_{\odot}\). The spectral stack of these sources resulted in an emission line at the expected location of the CO (\(J=4\to 3\)) line, detected with SNR = 4. This emission line is shown in the right panel of Fig. 4.
The best-fitting Gaussian to the mean stacked spectrum gives a peak flux, total flux, and width of \(0.22\pm 0.02\) mJy, \(45\pm 6\) mJy km/s and \(179\pm 20\) km/s, respectively. Since some of the galaxies in the stack were magnified, we divided this total flux by the mean magnification \(\mu\), which corresponds to \(\sim\)3.6. This yields a demagnified total flux of \(12.3\pm 1.6\) mJy km/s.
Following the same procedure as for the spectral stack of the individually detected emission line galaxies above, the implied mean molecular gas mass, using the MW value of L\({}_{\mathrm{CO}}^{\prime}\)(J=4\(\to\)3)/L\({}_{\mathrm{CO}}^{\prime}\)(J=1\(\to\)0) = 0.17 (Weiss et al., 2005; Carilli and Walter, 2013; Daddi et al., 2015), is \(M_{\mathrm{mol}}=5.9\times 10^{8}M_{\odot}\)(\(\alpha_{\mathrm{CO}}\)/4.3). Using the mean SFRs and stellar masses of the stacked galaxies from the K&B catalogues, this implies an average SFE \(=1.2\times 10^{-8}\)yr\({}^{-1}\) (4.3/\(\alpha_{\mathrm{CO}}\)), equivalent to \(\tau_{\mathrm{dep}}=0.09\) Gyr, and an average molecular gas to stellar mass ratio of 0.04 (\(\alpha_{\mathrm{CO}}\)/4.3). These molecular gas masses, gas fractions and depletion times are consistent with the sample of passive and star-forming local galaxies from Saintonge et al. (2017).
Of the 20 galaxies which contributed to the stacked CO (\(J=4\to 3\)) emission line (bottom right panel of Fig. 4), one galaxy from the cluster MACS0429.6-0253 has a potential individual detection of the CO (\(J=4\to 3\)) line, which was not identified in previous analyses. Eliminating this galaxy from the stack weakens the CO (\(J=4\to 3\)) stacked detection to SNR \(\sim 3.3\). In this case the best-fitting Gaussian to the mean stacked spectrum gives a peak flux, total flux, and width of \(0.2\pm 0.02\) mJy, 44 mJy km/s and \(195\pm 0.05\) km/s respectively. Although the fitted parameters of the emission line do not change significantly, the SNR is lowered primarily due to the increase in rms.
If we consider this latter line profile as a non-detection, the 3\(\sigma\) upper limits to the M\({}_{\mathrm{mol}}\) and molecular gas to stellar mass ratio are a factor of 2 higher than the values listed above, and the SFE lower limit is half the value of the SFE listed above.
### Continuum stacking in subsamples
Similarly to spectral stacking, continuum stacking was performed in different subsamples. Since the K&B catalogues provide physical properties, the galaxies in the catalogues were stacked in bins of redshift and (de-lensed) stellar mass. From these bins several physical properties are derived, including dust masses, gas masses, and SFRs.
#### 4.3.1 Fluxes, dust masses and ISM masses
We divided both the cluster and field catalogues into bins of redshift and then sub-bins of stellar mass. The sizes and ranges of the bins were driven by the requirement of having sufficient sources in each bin in order to obtain meaningful stacked detections in a significant number of bins. We also selected only individually undetected galaxies. The cluster catalogue was thus divided into two redshift bins, \(0.0<z_{1}\leq 0.4\) (633 galaxies) and \(0.4<z_{2}\leq 1.0\) (817 galaxies), and the field catalogue was divided into three bins, \(0.0<z_{1}\leq 0.4\) (304 galaxies), \(0.4<z_{2}\leq 1.0\) (1162 galaxies) and \(1.0<z_{3}\leq 1.6\) (486 galaxies). In total, this results in 1450 cluster galaxies and 1952 field galaxies. The highest redshift field bin was used primarily as a sanity check of our results at lower redshifts.
Figure 6: Left: The stacked dust to stellar mass ratio \(f_{\mathrm{dust}}=M_{\mathrm{dust}}/\,M_{*}\) (M\({}_{\mathrm{dust}}\) from the stacked 1.2 mm flux and the mean \(M_{*}\) (demagnified) from the K&B catalogues) as a function of redshift for all redshift and stellar mass bins and for both cluster (squares) and field (circles) galaxies. Colours distinguish stellar mass bins: purple, green, and red symbols are used for the lowest, middle, and highest stellar mass bins. Downward pointing triangles denote upper limits. The overlaid lines show the expected values (Liu et al., 2019, using a dust to gas ratio of 0.01) for galaxies in the MS (solid; SFR = SFR\({}_{\mathrm{MS}}\)) and 0.5 dex below the MS (dashed; SFR \(\sim 0.3\times\) SFR\({}_{\mathrm{MS}}\)) for galaxies with log \(M_{*}=11.0\) (red) and 9.9 (green), which correspond to the bin midpoints of our high and middle stellar mass bins, respectively. Right: As in the left panel, but for the ISM mass fraction (ISM mass to stellar plus ISM mass ratio) as a function of redshift. Black diamonds and their error bars show the gas mass fractions derived by Scoville et al. (2017) for star-forming log \(M_{*}\sim 11\) galaxies in the COSMOS field; these points are connected for easier visualisation.
Each redshift bin was then sub-divided into three stellar mass bins (in units of \(M_{\odot}\)): \(1\times 10^{8}<M_{1}\leq 2.25\times 10^{9}\), \(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\), and \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\).
Figures 1 and 2 show the stacked maps of all cluster and field galaxies, respectively, in the redshift and stellar mass bins mentioned above. In each stacked map, we used the source extraction software Blobcat (Hales et al., 2012), which detects sources via a flood-fill algorithm. When a blob is detected, flux densities of Gaussian fits to the blob are provided, along with measures of SNR, peak flux, and spatial position.
We consider a stacked source as a detection if: (a) a blob at SNR \(\geq 5\) is found, whose peak position falls in the central 1/3rd of the stamp; or (b) a blob at SNR \(\geq 4\) is found in the central 1/3rd of the stamp and no other Blobcat detection is obtained in the map.
For detected sources, we list the SNR corrected for peak bias and the integrated flux corrected for clean bias (see Hales et al. (2012) for details) reported by Blobcat, together with their associated errors. For non-detections, we calculated the mean rms (standard deviation) of the stacked map after excising the central 1/3rd of the map, and reported a \(5\sigma\) upper limit based on this rms (see the sketch below).
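A sketch of this upper-limit estimate, assuming square stamps, might look as follows:

```python
import numpy as np

def five_sigma_upper_limit(stack):
    """5-sigma flux upper limit for a non-detection: the rms of the
    stacked map after excising the central 1/3rd, where the stacked
    source would appear."""
    ny, nx = stack.shape
    outer = stack.copy()
    outer[ny // 3: 2 * ny // 3, nx // 3: 2 * nx // 3] = np.nan
    return 5.0 * np.nanstd(outer)
```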
The 1.2 mm fluxes (the total flux of the Gaussian fit) or upper limits for each redshift bin in these stacked maps, as a function of the stellar mass bin, are shown in the left panel of Fig. 5. In order to take into account the flux from lensed galaxies, the stacked flux per bin was divided by the mean lensing magnification factor \(\mu\) of the galaxies within that bin. The derived dust masses from these stacked fluxes are shown in the right panel of Fig. 5.
Since the conversion between the \(y\)-axes of these two panels is almost linear, the results are similar. For the first redshift bin (\(0<z_{1}\leq 0.4\)), few conclusions can be drawn since many of the bins are undetected. But it is notable that for field galaxies, dust masses increase from the lowest stellar mass bin (\(1\times 10^{8}<M_{1}\leq 2.25\times 10^{9}\)) to the middle stellar mass bin (\(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\)), by at least a factor of 2. The same pattern is seen for the next redshift bin (\(0.4<z_{2}\leq 1.0\)), for both cluster and field galaxies (factors \(\sim\)3 and \(\sim\)1.3, respectively), and for the highest (field-only) redshift bin (\(1.0<z_{3}\leq 1.6\); factor \(\sim\)4).
Dust masses in cluster galaxies are similar to those of field galaxies for the middle redshift bin (green) and the middle stellar mass bin (\(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\)), but are higher by a factor of \(\sim\)1.5 for field galaxies at \(0<z_{1}\leq 0.4\) in the middle stellar mass bin. In terms of redshift, dust masses increase by a factor of \(\sim\)1.5 in cluster galaxies between \(0.0<z_{1}\leq 0.4\) and \(0.4<z_{2}\leq 1.0\), while for field galaxies they increase by a factor of \(\sim\)2 between \(0.0<z\leq 1.0\) and \(1.0<z_{3}\leq 1.6\).
The left panel of Figure 6 shows the redshift evolution of \(f_{\rm dust}\) (the dust to stellar mass ratio) and compares it with the expected values from Liu et al. (2019), assuming a dust to gas ratio of 0.01. We show the expected values for MS galaxies of \(\log(M_{*}/M_{\odot})=11\), and the values for galaxies with a MS offset of 0.5 dex. Since this falls within the range of the highest stellar mass bin, \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\), the lines are shown with the same colour as the galaxies in that bin (red). Similarly, the expectations for MS galaxies of \(\log(M_{*}/M_{\odot})=9.9\) (and the ones with an offset of 0.5 dex from the MS) are shown in green, since they fall within the middle stellar mass bin \(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\).
Since the \(M_{1}\) (purple) and \(M_{3}\) (red) stellar mass bins have few to no detections, most conclusions can be drawn only for galaxies in the middle stellar mass bin (\(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\); green). For field galaxies in this bin, the mean \(f_{dust}\) appears near the expectation of MS galaxies only for the lowest redshift bin \(0.0<z_{1}\leq 0.4\); at higher redshifts, the dust fractions fall below these expectations by more than the 0.5 dex offset shown by the dashed lines. Cluster galaxies at \(0.4<z_{2}\leq 1.0\) in the middle stellar mass bin are within the 0.5 dex offset from the expected MS. Finally, while the highest stellar mass bin \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\) has only upper limits to dust masses, these mean \(f_{dust}\) upper limits still appear below the MS expectations by more than 0.5 dex. This implies that field galaxies at higher redshifts (\(0.4<z<1.6\)) in the middle mass bins, and both cluster and field galaxies in the high stellar mass bins, are less dusty than expected from MS galaxies.
The right panel of Fig. 6 shows the molecular gas fraction (\(f_{\rm gas}\) = \(M_{\rm mol}/(M_{*}+M_{\rm mol})\)) as a function of redshift for the sample bins, and compares these with the fractions seen in \(\log\)\(M_{*}\sim 11\) galaxies from the COSMOS field (Scoville et al., 2017).
Figure 7: Star formation rate as a function of stellar mass bin for cluster (left) and field (right) galaxies using different SFR indicators. Circles denote the SFRs (demagnified) listed in the K&B catalogue (from UV to IR spectral fitting), diamonds denote SFRs derived from the combined NUV+1.2mm fluxes (unobscured-SFR), squares show the 1.2mm stacked ALMA derived SFRs (obscured-SFR). Symbols are coloured by the redshift bin following the colour bar on the right. The dotted lines, in the corresponding colours, show the expected MS of Speagle et al. (2014) for each redshift bin.
It is important to note that though Scoville et al. (2017) refer to measuring ISM (i.e. molecular plus atomic) masses in their equation 14, this equation is the same as the one used to derive molecular gas masses in Scoville et al. (2016).
The gas fractions of the highest stellar mass bins (\(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\)) at all redshifts, and for both cluster and field galaxies (red symbols), are at least \(\sim\)0.5 to 1 dex lower than the mean values found by Scoville et al. (2017) in samples of galaxies with similar stellar masses. In contrast, for the intermediate mass bin \(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\), gas fractions are similar for both cluster and field galaxies, within uncertainties.
### Continuum stacking: SFR, sSFR, and SFE
The 1.2 mm stacked fluxes shown in the left panel of Fig. 5 were converted into SFRs using the mean redshift of galaxies in each redshift and stellar mass bin, and our SFR conversion method. The results are shown in the left (cluster galaxies) and right (field galaxies) panels of Fig. 7. This figure shows the 1.2 mm stacked SFRs (squares), the mean SFRs of all galaxies in each bin from the K&B photometric catalogues (circles), the corrected SFRs (diamonds) derived using the combined NUV and 1.2mm stacked fluxes, and the expected MS from Speagle et al. (2014) for each redshift bin. Since Speagle et al. (2014) assumes a Kroupa IMF, we convert it to a Chabrier IMF by dividing by 1.06 (Zahid et al., 2012). Since many points are shown per bin, a small offset in \(M_{*}\) was added per bin for illustrative purposes.
Overall, we see that the estimates of SFR from K&B (circles) are higher than the estimates for NUV+1.2mm SFRs (diamonds) for the middle and high stellar mass bins (\(2.25\times 10^{9}<M_{\odot}\leq 5\times 10^{11}\)) at all redshifts in cluster galaxies (\(0<z\leq 1.0\)), for all mass bins (\(1\times 10^{8}<M_{\odot}\leq 5\times 10^{11}\)) at the middle and high redshift bins for field galaxies (\(0.4<z\leq 1.6\)), and for the middle mass and low redshift bin. The difference between K&B and NUV+1.2mm SFRs in these bins ranges between \(\sim\)0.4 to \(\sim\)1 dex for both cluster and field galaxies. We only see NUV+1.2mm SFRs that are higher than, or similar (within the uncertainty) to, the K&B SFRs for the low mass bin in clusters (\(0<z\leq 1.0\)) and for the low redshift bin in field galaxies for \(1\times 10^{8}<M_{1}\leq 2.25\times 10^{9}\) and \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\).
Similarly, K&B estimates (circles) appear higher than the 1.2mm stacked highly obscured SFRs (squares), by at least \(\sim\)0.2 to \(\sim\)0.7 dex for cluster galaxies, except in the first stellar mass bin, in which it is not possible to reach a conclusion due to the upper limits. For field galaxies, conclusions are not possible for the low mass \(M_{1}\) and high mass \(M_{3}\) bins at the first redshift bin due to the upper limits, but for the rest of the bins, K&B estimates are higher than 1.2mm SFRs, by at least \(\sim\)0.2 to \(\sim\)0.8 dex.
K&B SFR estimates (circles) clearly increase with stellar mass for all bins in both cluster and field galaxies, with the only exception being the change from the middle (\(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\)) to the higher (\(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\)) stellar mass bin in field galaxies, in which the values are similar or higher within the uncertainty. NUV+1.2mm SFRs also seem to increase with stellar mass in all bins, with perhaps a less pronounced increase from the low (\(1\times 10^{8}<M_{1}\leq 2.25\times 10^{9}\)) to the middle mass bin for cluster galaxies. Lastly, for the 1.2mm SFRs, it is hard to draw conclusions on the evolution of SFR with stellar mass since many of the bins are upper limits.
In general, we see that the K&B (circles) SFR estimates are closer to the expected values for MS galaxies from Speagle et al. (2014) than the other SFR estimates. The high mass bin at \(0<z_{1}\leq 0.4\) for cluster galaxies and the middle mass bin at \(0<z_{1}\leq 0.4\) for field galaxies are in agreement with the MS expectations, within uncertainties. The rest of the K&B estimates fall below the expected values by \(\sim\)0.2 to \(\sim\)0.7 dex. In comparison, the NUV+1.2mm (diamonds) and 1.2mm (squares) SFRs fall below the expectations by at least \(\sim\)0.2 to \(\sim\)1.3 dex.
To better visualise the evolution of sSFR with redshift, Fig. 8 replots the data of Fig. 7, and compares the data points to the expected evolution of the MS with redshift (Elbaz et al., 2011; Speagle et al., 2014). In this case, the sSFR was computed using the combined NUV and 1.2mm SFR.
Figure 8: Specific SFR (sSFR; left panel) and star formation efficiency (SFE; right panel) as a function of redshift, for cluster (squares) and field (circles) stacks. Colours distinguish stellar mass bins: purple, green, and red symbols are used for the lowest, middle, and highest stellar mass bins. Downward pointing triangles denote upper limits. In the left panel, the sSFR is derived using the sum of our stacked SFR and the NUV SFR, and the K&B stellar mass. The solid black line shows the Elbaz et al. (2011) main sequence evolution with redshift, and the coloured curves show the MS for each stellar mass bin following Speagle et al. (2014). In the right panel, the SFE is derived from the K&B SFR and the 1.2 mm stacked flux derived gas mass, and the solid line shows the relationship of SFE with redshift derived by Tacconi et al. (2018) in the PHIBBS (Tacconi et al., 2013) sample (using SFE \(\propto t_{\rm dep}^{-1}\)). The cyan region shows a typical uncertainty of 0.3 dex.
Overall, the sSFRs of both cluster and field samples appear lower than the MS values at all redshifts. For cluster galaxies, they are lower by at least 0.2 dex for \(1\times 10^{8}<M_{1}\leq 2.25\times 10^{9}\), 0.5 dex for \(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\) and 0.7 dex for \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\). For field galaxies, they are lower by at least \(\sim\)0.2-0.6 dex for \(M_{1}\), \(\sim\)0.5-0.6 dex for \(M_{2}\) and \(\sim\)0.1-0.8 dex for \(M_{3}\).
The right panel of Fig. 8 shows the SFE as a function of redshift for our galaxies, and compares these with the results of Tacconi et al. (2018). The SFE was derived using the SFR from the K&B photometric catalogues and the molecular mass derived from the stacked 1.2 mm fluxes.
The only detected bin for cluster galaxies, \(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\) at \(0.4<z_{2}\leq 1.0\), shows an SFE lower than the values of Tacconi et al. (2018) by \(\sim\)0.7 dex. For field galaxies, we see that the galaxies at \(1\times 10^{8}<M_{1}\leq 2.25\times 10^{9}\) and \(0.4<z_{2}\leq 1.0\) are below the expectations by \(\sim\)0.8 dex; however, the middle mass bins (green) follow the expectations within the uncertainties at all redshifts. For the rest of the bins it is hard to reach any conclusions due to the upper limits, but we can say that field galaxies at \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\) and \(0.4<z_{2}\leq 1.0\) show an SFE higher than expected by \(\sim\)0.5 dex.
## 5 Discussion
The ALCS data have significant legacy value, and comprehensive spectroscopic redshifts in all 33 cluster fields will greatly enhance the science exploitation of these ALMA data. Although we compiled the spectroscopic catalogues for 25 out of 33 clusters, many of these catalogues did not contain enough galaxies to perform a more thorough stacking analysis. In particular, there were not enough galaxies contributing to the most common CO lines (CO \(J=4\to 3\) or CO \(J=3\to 2\)) at these redshifts to be able to find a stacked detection. Furthermore, the lack of physical properties derived from spectroscopic catalogues prevented us from using better redshift estimates in the interest of understanding galaxy evolution as a function of stellar mass or star formation rate. Nonetheless, we were able to find a stacked detection in 20 galaxies with the highest SFRs (SFRs from the K&B catalogues, thus not tracing highly obscured star formation) of the full sample. The stacked CO (\(J=4\to 3\)) line reveals relatively low gas reservoirs with a molecular gas to stellar mass ratio of \(\sim\)4% (or \(\lesssim\)8%), comparable with values seen in the sample of local galaxies from Saintonge et al. (2017). In contrast, stacking the individually detected CO lines gives very high values of \(M_{\rm mol}/M_{*}\) (\(\sim\)150%).
On the other hand, our continuum stacking analysis enables a glimpse into the average properties of faint undetected galaxies for both field and cluster samples.
In order to select an unbiased sample, we decided to include only star-forming galaxies, by selecting the sources above the MS - 1 dex line. This way we avoided stacking a significant fraction of passive galaxies. For reference, we compared the stacking for our star-forming sample, 'ALCS-SF', with a stacking of the passive galaxies sample (in Fig. 1 this would mean including all galaxies between the red lines in the left panels, i.e. between MS - 1 dex and MS - 2 dex). In the stack of the passive galaxies, none of the bins for cluster and field galaxies are detected. Also, we compared full stacks between the 'ALCS-SF' sample and a combined sample of 'ALCS-SF' plus the passive sample. For the former, we found a weak detection of SNR \(\sim\) 3 and 0.005 mJy for the stack of cluster galaxies, and a detection of SNR \(\sim\) 10 and 0.037 mJy for the stack of field galaxies. For the latter, we found a 5\(\sigma\) upper limit flux of 0.009 mJy for the stack of cluster galaxies, while for the field stack we find a detection of SNR \(\sim\) 8 and flux of 0.034 mJy. Therefore, a continuum stacking analysis that included passive galaxies in the sample would have brought the stacked fluxes down and biased the results.
Despite the fact that we stacked mainly star-forming galaxies, we found that the stacked galaxies fall short of the expected values for MS galaxies in SFR and sSFR (e.g. Elbaz et al., 2011; Speagle et al., 2014) when considering their 1.2mm SFRs. For the UV+IR derived SFRs from K&B, only a few bins are in agreement. Moreover, we did not find a large difference between field and cluster galaxies in terms of dust and gas evolution, but for field galaxies it was certainly easier to find stacked detections, as they generally showed brighter fluxes than cluster galaxies.
## 6 Conclusions
We have completed a stacking analysis within the ALMA Lensing Cluster Survey fields, using new continuum and spectral stacking software, which is made public here.
We performed continuum stacking of the 1.2mm flux for 1450 cluster and 1952 field galaxies (individually undetected and star-forming) at intermediate redshifts, in multiple redshift bins over \(z=0\)-1.6, each with three stellar mass bins over \(\log M_{*}\) [\(M_{\odot}\)] = 8.0-11.7. We also present spectral stacking of individually detected emission lines and of a high-SFR-selected subsample of individually non-detected galaxies.
For the spectral stacking, we find a potential (SNR \(\sim\) 4) stacked line detection of the CO (\(J=4\to 3\)) emission line among \(\sim\)20 galaxies with the highest SFR, i.e. galaxies with SFRs of \(\sim\)1 to \(\sim\)25 \(M_{\odot}\)/yr. For this line, and assuming \(\alpha_{\rm CO}=4.3\), we derived a stacked molecular gas mass of \(M_{\rm mol}=5.9\times 10^{8}M_{\odot}\) (\(\alpha_{\rm CO}\)/4.3), stacked SFE \(=1.2\times 10^{-8}\)yr\({}^{-1}\) (4.3/\(\alpha_{\rm CO}\)) and stacked gas-to-stellar mass ratio of 0.04. We also report the values if we were to consider this line as a 3\(\sigma\) upper limit: the \(M_{\rm mol}\) and molecular gas to stellar mass ratio limits are twice the values listed above, and the SFE lower limit is half of the SFE value listed above.
Our continuum stacked 1.2 mm fluxes are used to estimate average dust masses, molecular gas masses and SFRs, allowing us to contrast the average properties of cluster and field galaxies, and their evolution with stellar mass and redshift. The conversion of stacked 1.2 mm fluxes to masses and SFRs assumes a single component dust temperature and grey body spectral index, and is done at the mean redshift of the stacked subsample.
In general, these stacked galaxies show lower SFR and sSFR when compared to values of MS galaxies from Elbaz et al. (2011) and Speagle et al. (2014). Also, only the galaxies in the middle mass bin \(2.25\times 10^{9}<M_{2}\leq 4.2\times 10^{10}\) showed dust and gas content comparable to MS galaxies from Scoville et al. (2017) and Liu et al. (2019), while galaxies in the higher mass bin \(4.2\times 10^{10}<M_{3}\leq 5\times 10^{11}\) seem to have less dust and gas than MS galaxies. Something similar is seen for the SFE, in which only field galaxies in \(M_{2}\) agree with the values of Tacconi et al. (2018), while the cluster galaxies fall short. For the rest of the bins it is hard to reach a conclusion due to upper limits.
When comparing cluster versus field galaxies, we found that although field galaxies were usually brighter than cluster galaxies when trying to find a stacked detection, both showed similar trends in their SFRs and dust and gas contents.
While our results require confirmation with more comprehensive and accurate catalogues of redshift and stellar mass for the ALCS clusters, they already find lower average gas mass fractions, dust mass fractions, SFRs and sSFRs than the average values found for individually detected galaxies.
## 7 Acknowledgements
We thank the anonymous referee for their helpful comments. We acknowledge funding from the Agencia Nacional de Investigacion y Desarrollo (ANID, Chile) programs: Nucleo Milenio TITANs NCN19\(-\)058 (AG, NN; FONDECYT Regular 1171506 (AG, NN), 1190818 (FEB) and 1200495 (FEB); AND Basal - PFB-06/2007 (NN, FEB, GN), AFB-170002 (AG, NN, FEB, GN) and FB210003 (AG, NN, FEB); Millennium Science Initiative - ICN12_009 (FEB); Programa Formacion de Capital Humano Avanzado (PFCHA) / Magister Nacional/2019 - 22191646. We additionally acknowledge support from: JSPS KAKENHI Grant Number JP17H06130 (K. Kohno); NAOJ ALMA Scientific Research Grant Number 2017-06B (K. Kohno); the Swedish Research Council (JBJ, K. Knudsen) the Knut and Alice Wallenberg Foundation (K. Knudsen); the NRAO Student Observing Support (SOS) award SOSPA7-022 (FS); the Kavli Foundation (NL); a Beatriz Galindo senior fellowship (BG20/00224) from the Ministry of Science and Innovation (DE). This publication uses data from the ALMA program: ADS/JAO.ALMA#2018.1.00035.L. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
## 8 Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2306.17371 | Capturing functional connectomics using Riemannian partial least squares | For neurological disorders and diseases, functional and anatomical
connectomes of the human brain can be used to better inform targeted
interventions and treatment strategies. Functional magnetic resonance imaging
(fMRI) is a non-invasive neuroimaging technique that captures spatio-temporal
brain function through blood flow over time. FMRI can be used to study the
functional connectome through the functional connectivity matrix; that is,
Pearson's correlation matrix between time series from the regions of interest
of an fMRI image. One approach to analysing functional connectivity is using
partial least squares (PLS), a multivariate regression technique designed for
high-dimensional predictor data. However, analysing functional connectivity
with PLS ignores a key property of the functional connectivity matrix; namely,
these matrices are positive definite. To account for this, we introduce a
generalisation of PLS to Riemannian manifolds, called R-PLS, and apply it to
symmetric positive definite matrices with the affine invariant geometry. We
apply R-PLS to two functional imaging datasets: COBRE, which investigates
functional differences between schizophrenic patients and healthy controls,
and; ABIDE, which compares people with autism spectrum disorder and
neurotypical controls. Using the variable importance in the projection
statistic on the results of R-PLS, we identify key functional connections in
each dataset that are well represented in the literature. Given the generality
of R-PLS, this method has potential to open up new avenues for multi-model
imaging analysis linking structural and functional connectomics. | Matt Ryan, Gary Glonek, Jono Tuke, Melissa Humphries | 2023-06-30T02:24:34Z | http://arxiv.org/abs/2306.17371v1 | # Capturing functional connectomics using
###### Abstract
For neurological disorders and diseases, functional and anatomical connectomes of the human brain can be used to better inform targeted interventions and treatment strategies. Functional magnetic resonance imaging (fMRI) is a non-invasive neuroimaging technique that captures spatio-temporal brain function through blood flow over time. FMRI can be used to study the functional connectome through the functional connectivity matrix; that is, Pearson's correlation matrix between time series from the regions of interest of an fMRI image. One approach to analysing functional connectivity is using partial least squares (PLS), a multivariate regression technique designed for high-dimensional predictor data. However, analysing functional connectivity with PLS ignores a key property of the functional connectivity matrix; namely, these matrices are positive definite. To account for this, we introduce a generalisation of PLS to Riemannian manifolds, called R-PLS, and apply it to symmetric positive definite matrices with the affine invariant geometry. We apply R-PLS to two functional imaging datasets: COBRE, which investigates functional differences between schizophrenic patients and healthy controls; and ABIDE, which compares people with autism spectrum disorder and neurotypical controls. Using the variable importance in the projection statistic on the results of R-PLS, we identify key functional connections in each dataset that are well represented in the literature. Given the generality of R-PLS, this method has potential to open up new avenues for multi-modal imaging analysis linking structural and functional connectomics.
## Introduction
The functional and anatomical connections of the human brain form complex networks that link the infrastructure of our minds. Understanding these connectomes has the potential to provide insight into the effect of neurological diseases, which can be used to better inform targeted interventions and treatment strategies [1, 2]. In particular, the functional connectome can shed new light on neurological conditions such as schizophrenia and autism spectrum disorder (ASD), two conditions that alter brain function from healthy, neurotypical controls [3, 4].
A popular approach used to investigate brain function is functional magnetic resonance imaging (fMRI), a non-invasive neuroimaging technique that measures blood flow through the brain over time [5]. An fMRI image is a complex spatio-temporal picture of the brain with voxels (volumetric pixels) describing the spatial location and a time series for each voxel describing the blood flow over time. To reduce the spatial complexity, voxels can be collated into user-specified regions of interest (ROIs). Functional connectomes can then be investigated through the Pearson correlation matrix between ROIs, known as the functional connectivity matrix.
One approach to investigating functional connectivity is using the partial least squares (PLS) regression method. Introduced by Wold (1975) [6] for use in chemometrics, PLS is an extension of multivariate multiple regression to high-dimensional data that predicts the response data from a set of lower-dimensional latent variables constructed from the predictor data. Popularised for fMRI by McIntosh _et. al._ (1996) [7], PLS has been used to explore the relationships between fMRI data and either behavioural data, experimental designs, or seed region activation [8]. However, standard PLS ignores the structure of functional connectivity data - functional connectivity matrices are correlation matrices and hence positive definite.
The space of \(R\times R\) symmetric positive definite matrices - which includes functional connectivity matrices - forms a convex cone in \(R(R+1)/2\)-dimensional Euclidean space. However, when considered with the affine invariant geometry [9], the space of symmetric positive definite matrices becomes a complete Riemannian manifold with non-positive curvature. By considering this non-linear geometry on symmetric positive definite matrices we can glean interesting new insights into functional connectivity (see Pennec _et. al._ (2019) [10] and citations therein).
Here we propose an extension of the PLS model to allow Riemannian manifold response and predictor data, which we call Riemannian partial least squares (R-PLS). The R-PLS model then allows us to predict from functional connectivity data while accounting for the intricate relationships enforced by the positive definite criterion. To fit the R-PLS model, we propose the
tangent non-linear iterative partial least squares (tNIPALS) algorithm, which is related to previously proposed applications of PLS for functional connectivity data in the literature [11, 12, 13, 14]. We determine the optimal number of latent variables using cross validation. To aid in interpretability of the high-dimensional functional connectivity data, we determine significant functional connections identified by R-PLS using permutation tests on the variable importance in the projection (VIP) statistic [15], a popular measure of variable importance from standard PLS.
We apply R-PLS to two datasets and two different ROI atlases to demonstrate its versatility. First is the COBRE dataset [16] which investigates differences in functional connectivity between health controls and patients with schizophrenia; we consider the fMRI in the COBRE dataset in the functional multi-subject dictionary learning (MSDL) atlas [17]. Second is the ABIDE dataset [18] which investigates differences in functional connectivity between typical health controls and subjects with ASD; we consider the ABIDE data in the automated anatomic labelling (AAL) atlas [19].
## Results
### COBRE
Ten-fold cross validation showed that \(K=2\) latent variables was the most parsimonious, within one standard error of the minimum root mean square error (RMSE) (\(K=3\)). When compared with Euclidean PLS using raw and Fisher-transformed correlations, R-PLS outperformed both methods across all metrics except for specificity in group prediction (Table 1). However, all three methods produced similar results for every metric.
A permutation test of the VIP statistic (Equation 5) with 200 permutations found 45 significant functional connections between ROIs as being predictive of age and subject group (Figure 1). To aid interpretability, we have reduced the 39 ROIs of the MSDL atlas into the 17 resting state networks associated to the atlas [20] by taking the mean coefficient value within the ROIs of each network, as suggested by Wong _et. al._ (2018) [11].
An increase in subject age tended towards a decrease of within-network connectivity (as measured by a mean decrease in functional connectivity within-networks) with particular emphasis on the auditory network, cingulate insula, and left and right ventral attention networks (Figure 1 (left)). Further, increased age was associated with an increase in between-network connectivity with focus on connectivity involving the cingulate insula and the motor network.
For subjects in the schizophrenic group, the basal ganglia exhibited both increased and decreased connectivity with other networks (Figure 1 (right)). In particular, there was a decrease in connectivity between the basal ganglia and the cerebellum and salience networks, whereas we observed an increase in connectivity between the basal ganglia and auditory and language networks for the schizophrenic group. We also note that the default mode network was highly discriminatory for the schizophrenic group showing increased within-network connectivity and both increased and decreased between-network connectivity.
### ABIDE
Ten-fold cross validation found that \(K=3\) latent variables was the most parsimonious choice, within one standard error of the minimum RMSE (attained at \(K=6\)). When compared with Euclidean PLS using the raw and Fisher-transformed correlations, R-PLS outperformed both methods across all metrics except for specificity in group classification (Table 1). In particular, the \(R^{2}\) value for R-PLS was substantially larger than for the Euclidean methods.
A permutation test of the VIP statistic (Equation 5) with 200 permutations found 208 significant functional connections between ROIs as predictive of age, subject group, sex and eye status (Figure 2). We aid interpretability by associating the 116 ROIs of the AAL atlas with the seven resting-state networks suggested by Parente and Colosimo (2020) [21] and an eighth containing the cerebellum and vermis, which we call the cerebellum network.
In the ABIDE dataset, increased age was associated with both increased and decreased functional connectivity within resting-state networks (Figure 2 (a)). Although we observed increased between-network connectivity for the thalamus and occipital networks, the cerebellum and default mode network exhibited decreased between-network connectivity with age.
For subjects with ASD we observed increased within-network connectivity, with the exception of the limbic network and the thalamus (Figure 2 (b)). We also observed decreased between-network connectivity, particularly for the cerebellum and the limbic networks. We observed the same connectivity patterns for subject sex (Figure 2 (c)).
For subjects with their eyes closed, our model suggests there was decreased within-network connectivity (Figure 2 (d)). With the exception of the default mode network and the limbic network, we saw decreased between-network connectivity with particular emphasis on the occipital network.
## Discussion
The R-PLS model has identified many functional connections associated to age, ASD, schizophrenia, sex, and eye status that are well represented in the literature. In both the COBRE and ABIDE datasets, we identified the reduction of within-network
connectivity with age that has been previously observed [22, 23, 24], with exceptions in the temporo-parietal, fronto-parietal, limbic and thalamus networks in the ABIDE dataset and the salience network in the COBRE dataset, which all show an increase in connectivity with age. Further, both datasets exhibit the decreased connectivity with the default mode network, consistent with existing literature [25, 26].
For subjects with ASD, the decreased connectivity with the cerebellum [27] and the limbic [28] networks have been previously observed. However, the decreased between-network connectivity suggested by R-PLS is in contradiction with existing literature [11, 29]; in particular, Wong _et. al._ (2018) [11] showed an increase in between-network connectivity associated to ASD on the full ABIDE dataset using logistic regression. Also, observe that the connectivity for subject sex is highly correlated with the connectivity for the ASD group. Although interactions between subject sex and ASD have been identified [30], we believe this highlights a possible limitation of R-PLS and requires further investigation in future research.
The role of the basal ganglia in schizophrenic patients has been previously observed, particularly the decrease in connectivity between the salience network and the basal ganglia [31, 32] and the decreased connectivity between the cerebellum and basal ganglia [33]. Further, the connectivity patterns involving the default mode network have been previously reported in schizophrenic patients [34, 35, 36, 37, 38].
The results for eye status during scan are also well represented in the literature. The decreased within-network connectivity for the default mode network for patients with closed eyes has been previously reported by Yan _et. al._ (2009) [39], and the increased between-network connectivity for the default mode network has recently been discussed by Han _et. al._ (2023) [40]. Further, the observed decrease in connectivity for the occipital network agrees with Ascaoglu _et. al._ (2019) [41].
The use of the VIP statistic to identify significant connections in functional connectivity has not been previously studied. We have demonstrated that this statistic can identify many functional connections that have been addressed previously in the literature, but it is not without its limitations. First, with our focus on generalising partial least squares to Riemannian manifolds, the VIP statistic does not take into account the Riemannian geometry we are considering. This is mitigated by the tangent space approximation we are performing, which directly accounts for the geometry of the data, but further research could help better generalise the VIP statistic for R-PLS. Further, the VIP statistic associates the effects of a single predictor with the full multivariate response. In situations like we consider here, this makes it difficult to determine which functional connections are associated with which outcome variable. For example, the connectivity within the default mode network is deemed significant by the VIP statistic in the ABIDE dataset, but it is unclear whether this connectivity is significant for every outcome variable or only a subset of them. Work has been done to generalise the VIP statistic when the outcome variable is multivariate [42], but further research is needed to investigate this generalisation.
These results suggest that R-PLS can provide insight into the functional connectome and how it relates to subject phenotype data. Further, due to the specification and generality of the R-PLS model, this method is readily applicable to other imaging modalities, and in particular to multimodal imaging studies. The application of R-PLS to multimodal imaging studies is an area of future research that may help to us to understand the functional networks that make up the human connectome.
## Methods
### Data
The International Neuroimaging Data-Sharing Initiative (INDI) is an initiative established to encourage free, open access to neuroimaging datasets from around the world. We consider two datasets that are accessible as part of the INDI.
#### COBRE
The Center for Biomedical Research Excellence (COBRE) have contributed structural and functional MRI images to the INDI that compare schizophrenic patients with healthy controls [16]. The data were collected with single-shot full \(k\)-space echo-planar imaging with a TR of 2000 milliseconds, matrix size of \(64\times 64\) and 32 slices (giving a voxel size of \(3\times 3\times 4\,mm^{3}\)). These data were downloaded using the Python package nilearn v 0.6.2, and contain 146 subjects (Control = 74), each with phenotype information on subject group and age; further information is available in Table S1 of the supplementary material.
The fMRI data were preprocessed using NIAK 0.17 under CentOS version 6.3 with Octave version 4.0.2 and the Minc toolkit version 0.3.18 [43]. The data were subjected to nuisance regression where we removed six motion parameters, the frame-wise displacement, five slow-drift parameters, average parameters for white matter, lateral ventricles, and global signal, as well as 5 estimates for component-based noise correction [44].
For the COBRE dataset, we consider each fMRI in the MSDL atlas, a functional ROI decomposition of 39 nodes across 17 resting state networks [20]. Time series were extracted for each ROI by taking the mean time series across the voxels in each region.
#### ABIDE
The Autism Brain Imaging Data Exchange (ABIDE) is part of the Preprocessed Connectomes Project in INDI [18]. The ABIDE data is a collection of preprocessed fMRI images from 16 international imaging sites with 539 individuals diagnosed with ASD
and 573 neurotypical controls (NTC). The ABIDE initiative provides data preprocessed under four separate standard pipelines, as well as options for band-pass filtering and global signal regression.
Here we consider the 172 subjects (NTC = 98) of the New York University imaging site. We restrict to this site to reduce inter-site variation in imaging and because it is the largest individual imaging site. The data were collected with a 3 Tesla Allegra MRI using echo-planar imaging with a TR of 2000 milliseconds, matrix size of \(64\times 64\) and 33 slices (giving a voxel size of \(3\times 3\times 4mm^{3}\)). The fMRI data were downloaded using the Python package nilearn v 0.6.2 preprocessed using the NIAK 0.7.1 pipeline [43]. The data were subjected to: motion realignment; non-uniformity correction using the median volume; motion scrubbing; nuisance regression which removed the first principal component of 6 motion parameters, their squares, mean white matter and cerebrospinal fluid signals, and low frequency drifts measured by a discrete cosine basis with a 0.01 Hz high-pass cut-off; band-pass filtering and; global signal regression. We consider the subjects preprocessed fMRI as well as subject group, age, sex, and eye status during scan (open or closed); further information is available in Table S2 of the supplementary material.
For the ABIDE dataset, we consider each fMRI in the AAL atlas [19], an anatomical atlas of 116 nodes across the brain. Time series were extracted by taking the mean time series across the voxels in each ROI.
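As an illustration of this extraction step, the following nilearn sketch fetches the preprocessed NYU data and computes a functional connectivity matrix in the AAL atlas; the exact keyword values are illustrative and should be checked against the nilearn version used.

```python
import numpy as np
from nilearn import datasets
from nilearn.input_data import NiftiLabelsMasker

# Fetch NIAK-preprocessed NYU-site fMRI (band-pass filtered, GSR applied)
abide = datasets.fetch_abide_pcp(SITE_ID=['NYU'], pipeline='niak',
                                 band_pass_filtering=True,
                                 global_signal_regression=True,
                                 derivatives=['func_preproc'])

# Mean time series per AAL region, then the Pearson correlation matrix
aal = datasets.fetch_atlas_aal()
masker = NiftiLabelsMasker(labels_img=aal.maps, standardize=True)
time_series = masker.fit_transform(abide.func_preproc[0])  # first subject
connectivity = np.corrcoef(time_series.T)  # 116 x 116 correlation matrix
```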
### Partial least squares in Euclidean space
PLS is a predictive modelling technique that predicts a response matrix \(Y_{n\times q}\) from a set of predictors \(X_{n\times p}\). Originally introduced in the chemometrics literature by Wold (1975) [6], PLS has found application in bioinformatics [45], social sciences [46], and neuroimaging [8, 47, 48]; see Rosipal and Kramer (2006) [49] and citations therein for further examples. As an extension of multivariate multiple regression, PLS has been shown to have better predictive accuracy than multivariate multiple regression when the standard regression assumptions are met [50]. A further advantage of PLS is that it is effective when \(q>n\) or \(p>n\) since it performs prediction from lower dimensional latent variables, that is, PLS constructs a new set of predictor variables from \(X\) to predict \(Y\)[50].
Let \(X_{n\times p}\) and \(Y_{n\times q}\) be predictor and response matrices respectively. Suppose \(X\) and \(Y\) are column centred, that is, suppose the means of each column of \(X\) and \(Y\) are 0. PLS proposes the existence of \(L\leq\min\{p,n\}\) latent variables such that \(X\) and \(Y\) decompose into a set of _scores matrices_\(T_{n\times L}\) and \(U_{n\times L}\), and _loadings matrices_\(P_{p\times L}\) and \(Q_{q\times L}\) with
\[X =TP^{T}+E\,, \tag{1}\] \[Y =UQ^{T}+F\,, \tag{2}\]
where \(E_{n\times p}\) and \(F_{n\times q}\) are error matrices, assumed to be a small as possible [51], and the superscript \(T\) denotes the matrix transpose. Further, PLS assumes that there is a diagonal matrix \(B_{L\times L}\) with
\[U =TB+H_{n\times L}\,, \tag{3}\]
where \(H\) is a matrix of residuals. Equations 1 and 2 are called the _outer relationships_ while Equation 3 defines the _inner relationship_ that connects \(X\) and \(Y\). Combining the inner relationship and the outer relationship for \(Y\) gives
\[Y =TBQ^{T}+(HQ^{T}+F)\,,\]
which highlights that \(Y\) is a regression on the latent scores \(T\). Further, notice that the error in \(Y\) is given by \(HQ^{T}+F\), that is, error in \(Y\) is a combination of error inherent to the response data (\(F\)) and error from the estimation of the inner relationship (\(HQ^{T}\)). The inclusion of the residual matrix \(H\) can complicate discussion of the PLS method, so it is common to consider the estimated inner relationship \(\hat{U}\approx TB\) instead [51, 52].
Estimation of the PLS model (Equations 1-3) is commonly done through the non-linear iterative partial least squares (NIPALS) algorithm (Algorithm S1 in the supplementary material). The inputs for the NIPALS algorithm are the data matrices \(X\) and \(Y\) and the pre-specified number of latent variables \(K\); noting that the true number of latent variables \(L\) is unknown, the value \(K\) can be chosen with methods such as cross validation. The NIPALS algorithm outputs estimates of the scores, loadings, and regression coefficients as well as matrices \(W_{p\times K}\) and \(C_{q\times K}\) known as the weights. The weight matrices \(W\) and \(C\) are linear transformations of \(P\) and \(Q\) that more efficiently fit the PLS model and are defined within the NIPALS algorithm; see the supplementary material for further information. Using the results of the NIPALS algorithm and Equations 1-3, we can write
\[\hat{Y} =X\hat{\beta}_{PLS}\]
where
\[\hat{\beta}_{PLS} =W(P^{T}W)^{-1}BC^{T} \tag{4}\]
is the matrix of regression coefficients. Using \(\hat{\beta}_{PLS}\) we see that PLS is a linear regression technique similar to ordinary least squares and ridge regression.
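For concreteness, a minimal NumPy sketch of the NIPALS fit follows (Algorithm S1 in the supplementary material is the authoritative version; this reproduces only the standard PLS2 iteration and the coefficient formula of Equation 4):

```python
import numpy as np

def nipals_pls(X, Y, K, tol=1e-10, max_iter=500):
    """Fit PLS with K latent variables by NIPALS (X, Y column-centred).
    Returns scores T, U, weights W, C, loadings P, inner coefficients b,
    and the regression coefficients beta = W (P^T W)^{-1} diag(b) C^T."""
    X, Y = X.copy(), Y.copy()
    n, p = X.shape
    q = Y.shape[1]
    T, U = np.zeros((n, K)), np.zeros((n, K))
    W, P, C = np.zeros((p, K)), np.zeros((p, K)), np.zeros((q, K))
    b = np.zeros(K)
    for k in range(K):
        u = Y[:, [0]]
        for _ in range(max_iter):
            w = X.T @ u
            w /= np.linalg.norm(w)                    # x-weights
            t = X @ w                                 # x-scores
            c = Y.T @ t / (t.T @ t)                   # y-weights
            u_new = Y @ c / (c.T @ c)                 # y-scores
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        p_k = X.T @ t / (t.T @ t)                     # x-loadings
        b[k] = (t.T @ u / (t.T @ t)).item()           # inner relation
        X -= t @ p_k.T                                # deflate X
        Y -= b[k] * (t @ c.T)                         # deflate Y
        T[:, [k]], U[:, [k]] = t, u
        W[:, [k]], P[:, [k]], C[:, [k]] = w, p_k, c
    beta = W @ np.linalg.inv(P.T @ W) @ np.diag(b) @ C.T
    return T, U, W, P, C, b, beta
```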
#### The VIP statistic
To determine significant predictors of the response variables in the PLS model, we use the VIP statistic [15]. Suppose there are \(p\) predictor variables, \(q\) response variables, and \(K\) latent variables extracted using NIPALS. Following Tennenhaus (1998) [53], the VIP statistic for the \(j^{th}\) predictor variable is
\[\text{VIP}_{j}=\sqrt{\frac{p}{\text{Rd}(Y,T)}\sum_{k=1}^{K}\text{Rd}(Y,t_{k}) \left(w_{jk}\right)^{2}}\,, \tag{5}\]
where \(t_{k}\) is the \(k^{th}\) column of the score matrix \(T\), \(w_{jk}\) is the \(k^{th}\) weight for the \(j^{th}\) predictor, \(\text{Rd}(Y,t_{k})=\frac{1}{q}\sum_{i=1}^{q}\text{cor}(Y_{i},t_{k})^{2}\), and \(\text{Rd}(Y,T)=\sum_{k=1}^{K}\text{Rd}(Y,t_{k})\). The coefficient \(\text{cor}(Y_{i},t_{k})^{2}\) is the squared correlation between the \(i^{th}\) response variable and the \(k^{th}\) score. The denominator \(\text{Rd}(Y,T)\) in Equation 5 measures the proportion of variance in \(Y\) explained by \(T\), and the numerator \(\text{Rd}(Y,t_{k})(w_{jk})^{2}\) measures the proportion of variance in \(Y\) described by the \(k^{th}\) latent variable that is explained by the \(j^{th}\) predictor [54]. Thus the VIP statistic measures the influence of each predictor on the explained variation in the model [55].
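Given the outputs of the NIPALS sketch above, Equation 5 can be evaluated directly; this sketch assumes the scores and weights follow the conventions of that sketch.

```python
import numpy as np

def vip(Y, T, W):
    """VIP statistic (Equation 5) for each of the p predictors, from the
    response matrix Y (n x q), scores T (n x K) and weights W (p x K)."""
    p, K = W.shape
    q = Y.shape[1]
    # Rd(Y, t_k): mean squared correlation of the Y columns with t_k
    rd = np.array([np.mean([np.corrcoef(Y[:, i], T[:, k])[0, 1] ** 2
                            for i in range(q)]) for k in range(K)])
    return np.sqrt(p * ((W ** 2) @ rd) / rd.sum())
```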
Commonly, the "greater than one" rule is used to find predictors significantly associated with the response. However, this rule is motivated by the mathematical properties of VIP\({}_{j}\) rather than statistical properties [54]. Thus, we use a permutation test to determine significance of VIP\({}_{j}\). This is an alternative to Afanador _et. al._ (2013) [56] who used 95% jackknife confidence intervals to determine significance of VIP.
Specifically, for each predictor variable \(j\) we permute the values \(H\) times. For each permutation \(h=1,2,\ldots,H\) we refit the PLS model and calculate VIP\({}_{j,h}\). The \(P\)-value for the \(j^{th}\) VIP score is then
\[P\text{-value}_{j}=\frac{\#\left\{\text{VIP}_{j,h}>\text{VIP}_{j}\right\}}{H}\,. \tag{6}\]
For our data, the predictors are functional connectivity matrices. Thus, we know _a priori_ that the diagonal elements are uninformative since they are identically one. Hence, if predictor \(j\) describes a diagonal element we set \(P\text{-value}_{j}=1\). To account for the multiple comparisons problem, we adjust all \(P\)-values using the false discovery rate [57] and determine significance at a significance level of \(\alpha=0.05\).
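A sketch of this permutation procedure (Equation 6) with FDR adjustment, reusing nipals_pls and vip from the sketches above; the choice H = 200 follows the text, but the implementation details (including the brute-force refitting, which is expensive, and the omitted special-casing of the diagonal entries) are ours.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def vip_permutation_pvalues(X, Y, K, H=200, seed=0):
    """FDR-adjusted permutation P-values for each predictor's VIP."""
    rng = np.random.default_rng(seed)
    T, U, W, *_ = nipals_pls(X, Y, K)
    observed = vip(Y, T, W)
    p = X.shape[1]
    exceed = np.zeros(p)
    for j in range(p):
        for _ in range(H):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])      # permute predictor j
            Tp, Up, Wp, *_ = nipals_pls(Xp, Y, K)
            exceed[j] += vip(Y, Tp, Wp)[j] > observed[j]
    pvals = exceed / H                                # Equation 6
    return multipletests(pvals, alpha=0.05, method="fdr_bh")[1]
```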
### Mathematical preliminaries
#### Riemannian manifolds
Intuitively speaking, a Riemannian manifold \(M\) is a space where we can perform calculus, measure distances, and measure angles between tangent vectors. More specifically, a smooth \(d\)-dimensional manifold \(M\) is a connected, Hausdorff, second countable topological space that is covered by a set of coordinate charts \(\{(U_{i},\varphi_{i}:U_{i}\rightarrow\mathbb{R}^{d})\}_{i\in I}\), defined by some indexing set \(I\), such that every point in \(M\) belongs to a \(U_{i}\) for some \(i\in I\) and the intersection maps \(\varphi_{i}\circ\varphi_{j}^{-1}\) are smooth as maps \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) for every \(i,j\in I\). These coordinate charts make the space \(M\) "locally Euclidean" in the sense that every point has a neighbourhood that looks like Euclidean space. Since concepts from differential calculus are local in nature, the construction of a smooth manifold allows us to perform calculus on these more general spaces.
An important concept in the study of manifolds is the tangent bundle \(TM=\bigsqcup_{a\in M}T_{a}M\), where \(T_{a}M\) is the tangent space at \(a\). The space \(T_{a}M\) is defined as the set of equivalence classes of curves through \(a\) such that \(\gamma_{1}\) and \(\gamma_{2}\) are equivalent if \(\gamma_{1}^{\prime}(0)=\gamma_{2}^{\prime}(0)\), where the prime denotes the derivative. Then \(T_{a}M\) is a vector space that generalises the notion of vectors tangent to a surface to arbitrary smooth manifolds.
A _Riemannian_ manifold is a manifold \(M\) together with a smooth map \(g:M\times TM\times TM\rightarrow\mathbb{R}\) such that \(g(a,\cdot,\cdot)=g_{a}:T_{a}M\times T_{a}M\rightarrow\mathbb{R}\) is an inner product for every \(a\in M\). The Riemannian metric \(g\) allows us to measure angles between tangent vectors and measure distances between points on the manifold \(M\). Further, \(g\) is used to define geodesics (locally length minimising curves) \(\gamma:[t_{0},t_{1}]\to M\) between two points \(a,b\in M\). We only consider complete Riemannian manifolds here, which are spaces where every geodesic \(\gamma\) has domain \(\mathbb{R}\).
Through geodesics we get the concepts of the Riemannian exponential and logarithm maps, which allow us to smoothly move between the manifold and the tangent space. The Riemannian exponential at a point \(a\in M\) is a map \(\text{Exp}_{a}:T_{a}M\to M\) defined by \(\text{Exp}_{a}(v)=\gamma(1)\), where \(\gamma\) is the geodesic such that \(\gamma(0)=a\) and \(\gamma^{\prime}(0)=v\). The Riemannian exponential is a smooth map that is locally diffeomorphic and hence has a local inverse denoted \(\text{Log}_{a}:M\to T_{a}M\), defined by \(\text{Log}_{a}(b)=\gamma^{\prime}(0)\) where \(\gamma\) is a geodesic from \(a\) to \(b\). For a point \(b\in M\) close to \(a\), we think of \(\text{Log}_{a}(b)\) as the initial velocity vector, based at \(a\), of the shortest geodesic from \(a\) to \(b\). Further information on Riemannian manifolds can be found in the books by Lee (2011, 2012, 2018) [58, 59, 60] or do Carmo (1992) [61]. An accessible introduction for medical imaging can be found in the book edited by Pennec _et al._ (2019) [10].
#### Frechet mean
To capture the centre of data on a manifold we consider the Frechet (or intrinsic) mean of data \(X_{1},X_{2},\ldots,X_{n}\in M\). First, consider the Riemannian distance between two close points \(X_{1},X_{2}\in M\) defined by
\[d_{g}(X_{1},X_{2})=\left\|\mathrm{Log}_{X_{1}}(X_{2})\right\|\,,\]
where \(\|\cdot\|\) is the norm in \(T_{X_{1}}M\) induced by the Riemannian metric. By generalising the sum of squared distances definition of the arithmetic mean, the Frechet mean [62] is given by
\[\mu_{X}=\mathrm{argmin}_{\mu\in M}\sum_{i=1}^{n}d_{g}(X_{i},\mu)^{2}\,.\]
We solve for \(\mu_{X}\) using gradient descent [10]; see Algorithm S2 in the supplementary material for further information.
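Algorithm S2 is not reproduced here, but the standard iteration it follows can be sketched generically: average the log-mapped data at the current estimate and step along the exponential map. The step size, tolerance, and initialisation below are illustrative choices, not those of the supplementary material.

```
import numpy as np

def frechet_mean(points, exp_map, log_map, step=1.0, tol=1e-8, max_iter=100):
    """Gradient descent for the Frechet mean. exp_map(a, v) and log_map(a, b)
    are the Riemannian exponential and logarithm of the manifold at hand."""
    mu = points[0]                      # initialise at a data point
    for _ in range(max_iter):
        # (half the negative) Riemannian gradient of the sum of squared distances
        grad = sum(log_map(mu, x) for x in points) / len(points)
        if np.linalg.norm(grad) < tol:  # ambient norm as a stopping proxy
            break
        mu = exp_map(mu, step * grad)
    return mu
```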
#### The affine invariant geometry for symmetric positive definite matrices
Let \(GL_{R}\mathbb{R}\) be the set of \(R\times R\) real invertible matrices. The set of symmetric positive definite matrices is defined by
\[S_{R}^{+}=\left\{A\in GL_{R}\mathbb{R}:A^{T}=A\text{ and }v^{T}Av>0\text{ for all }v\in\mathbb{R}^{R}\backslash\{0\}\right\}\,,\]
where superscript \(T\) denotes matrix transpose. The set \(S_{R}^{+}\) is a smooth manifold, which can be easily seen by embedding \(S_{R}^{+}\) into \(\mathbb{R}^{R(R+1)/2}\) as a convex cone. This construction shows that the tangent space at each \(A\in S_{R}^{+}\) is given by the set of symmetric \(R\times R\) matrices.
However, \(S_{R}^{+}\) has an interesting intrinsic geometry known as the affine-invariant geometry [9]. Under the affine-invariant geometry, \(S_{R}^{+}\) becomes a complete Hadamard manifold, that is, a Riemannian manifold of non-positive curvature on which \(\mathrm{Exp}_{A}\) is a diffeomorphism for every \(A\in S_{R}^{+}\).
The affine-invariant metric \(g\) is defined by
\[g_{A}(U,V)=\mathrm{Tr}\left(UA^{-1}VA^{-1}\right)\,,\]
where \(A\in S_{R}^{+}\), \(U,V\in T_{A}S_{R}^{+}\), and \(\mathrm{Tr}\) denotes the trace operator. Using \(g\), we can calculate the Riemannian distance between \(A,B\in S_{R}^{+}\) as
\[d_{g}(A,B)^{2}=\sum_{r=1}^{R}\left(\log\left(\sigma_{r}\left(A^{-1/2}BA^{-1/2 }\right)\right)\right)^{2}\,,\]
where \(\sigma_{r}\left(A^{-1/2}BA^{-1/2}\right)\) are the eigenvalues of \(A^{-1/2}BA^{-1/2}\), \(r=1,2,\ldots,R\). Further, letting \(A,B\in S_{R}^{+}\) and \(U\in T_{A}S_{R}^{+}\), we get
\[\mathrm{Exp}_{A}(U)=A^{1/2}\mathrm{Exp}\left(A^{-1/2}UA^{-1/2}\right)A^{1/2}\]
and
\[\mathrm{Log}_{A}(B)=A^{1/2}\mathrm{Log}\left(A^{-1/2}BA^{-1/2}\right)A^{1/2}\,,\]
where \(\mathrm{Exp}\) and \(\mathrm{Log}\) are the matrix exponential and logarithm respectively. The Riemannian distance, exponential, and logarithm are essential in the definition and fitting of the R-PLS model defined below.
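These formulas are straightforward to implement via eigendecompositions, since \(A^{-1/2}UA^{-1/2}\) and \(A^{-1/2}BA^{-1/2}\) are symmetric. A minimal NumPy sketch, with function names of our own choosing:

```
import numpy as np

def _funm(A, f):
    """Apply f to the eigenvalues of a symmetric matrix A."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

def spd_dist(A, B):
    """Affine-invariant Riemannian distance on S_R^+."""
    iS = _funm(A, lambda w: w ** -0.5)            # A^{-1/2}
    return np.sqrt(np.sum(np.log(np.linalg.eigvalsh(iS @ B @ iS)) ** 2))

def spd_exp(A, U):
    """Riemannian exponential Exp_A(U) for symmetric U."""
    S, iS = _funm(A, np.sqrt), _funm(A, lambda w: w ** -0.5)
    return S @ _funm(iS @ U @ iS, np.exp) @ S

def spd_log(A, B):
    """Riemannian logarithm Log_A(B)."""
    S, iS = _funm(A, np.sqrt), _funm(A, lambda w: w ** -0.5)
    return S @ _funm(iS @ B @ iS, np.log) @ S
```

Combined with the generic `frechet_mean` sketch above, `frechet_mean(mats, spd_exp, spd_log)` yields the Frechet mean of a list of SPD matrices.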
### Riemannian PLS
Let \(M\) and \(N\) be complete Riemannian manifolds. Let \(X_{1},X_{2},\ldots,X_{n}\in M\) and \(Y_{1},Y_{2},\ldots,Y_{n}\in N\), and let \(\mu_{X}\) and \(\mu_{Y}\) denote the respective Frechet means. Let \(L\leq\min\{\dim(M),n\}\). The R-PLS model proposes the existence of loadings \(p_{1},p_{2},\ldots,p_{L}\in T_{\mu_{X}}M\) and \(q_{1},q_{2},\ldots,q_{L}\in T_{\mu_{Y}}N\) such that, for each subject \(i=1,2,\ldots,n\), there are scores \(t_{i1},t_{i2},\ldots,t_{iL}\in\mathbb{R}\) and \(u_{i1},u_{i2},\ldots,u_{iL}\in\mathbb{R}\) with
\[X_{i} =\mathrm{Exp}\left(\mathrm{Exp}_{\mu_{X}}\left(\sum_{l=1}^{L}t_{ il}p_{l}\right),e_{i}\right)\,, \tag{7}\] \[Y_{i} =\mathrm{Exp}\left(\mathrm{Exp}_{\mu_{Y}}\left(\sum_{l=1}^{L}u_{ il}q_{l}\right),f_{i}\right)\,,\text{ and}\] (8) \[\hat{u}_{il} =\hat{\beta}_{0l}+\hat{\beta}_{1l}t_{il}\text{ for all }l=1,2,\ldots,L\text{ and }i=1,2,\ldots,n\,, \tag{9}\]
where \(e_{i}\in T_{\mathrm{Exp}_{\mu_{X}}\left(\sum_{l=1}^{L}t_{il}p_{l}\right)}M\) and \(f_{i}\in T_{\mathrm{Exp}_{\mu_{Y}}\left(\sum_{l=1}^{L}u_{il}q_{l}\right)}N\) are error vectors with \(\|e_{i}\|\), \(\|f_{i}\|\) small. Equations 7 and 8 are the _outer relationships_ for Riemannian data, and Equation 9 is the _inner relationship_ connecting our response and predictor. Note that, since the Riemannian exponential map on Euclidean space is vector addition, if \(M=\mathbb{R}^{p}\) and \(N=\mathbb{R}^{q}\) the R-PLS model (Equations 7-9) reduces to the standard PLS model (Equations 1-3).
One approach to fitting R-PLS is by directly generalising NIPALS (Algorithm S1) to Riemannian manifolds (see, for example, Ryan (2023) [42]), but this becomes computationally intensive and fails to converge for sample sizes above 20. Instead, we propose a tangent space approximation to fitting R-PLS when our data is close to the Frechet mean, similar to methods such as Riemannian canonical correlations analysis [63] and principal geodesic analysis [64].
The tNIPALS algorithm (Algorithm 1) works by first linearising the manifold data in a neighbourhood of the Frechet mean using the Riemannian logarithm (see supplementary material for further information), and then applying the Euclidean NIPALS algorithm to the linearised data which is now vector-valued. Thus, tNIPALS provides a combination of the simplicity and efficiency of Euclidean NIPALS with the geometry of the Riemannian manifold.
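For SPD predictors and Euclidean responses, as in our application, the tangent-space approximation can be sketched by composing the pieces above (the `frechet_mean`, `spd_log`, and `nipals_pls` sketches). Vectorising by the upper triangle, as below, ignores the \(\sqrt{2}\) weighting of off-diagonal entries that would make Euclidean inner products match the Frobenius inner product; this is a simplification for illustration, not the exact Algorithm 1.

```
import numpy as np

def tnipals(X_mats, Y, K):
    """Tangent-space NIPALS sketch: linearise SPD predictors at the Frechet
    mean, vectorise, then run Euclidean NIPALS on the linearised data."""
    mu = frechet_mean(X_mats, spd_exp, spd_log)
    iu = np.triu_indices(mu.shape[0])
    Xvec = np.array([spd_log(mu, A)[iu] for A in X_mats])  # linearised predictors
    Xc = Xvec - Xvec.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    return nipals_pls(Xc, Yc, K)
```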
The tNIPALS algorithm generalises the method of Wong _et al._ (2018) [11], who constructed predictors from functional connectivity matrices to predict ASD using PLS and logistic regression, by allowing a Euclidean response and a symmetric positive definite predictor. Similarly, the methods of Zhang and Liu (2018) [13] and Chu _et al._ (2020) [12] are also generalised by tNIPALS. The tNIPALS algorithm for R-PLS is also closely related to the PLS method for symmetric positive definite matrices offered by Perez and Gonzalez-Farias (2013) [14].
### Model fitting
For each dataset we predict the phenotype information (age, group, sex, eye status) from the functional connectivity data using the R-PLS model. To deal with low-rank functional connectivity matrices in the ABIDE dataset (which are not positive definite), we consider regularised functional connectivity matrices \(\tilde{F}=F+I\) following Venkatesh _et al._ (2020) [65], where \(I\) is the \(116\times 116\) identity matrix. For comparison, we also fit the standard PLS model using the upper triangle of the functional connectivity matrices as the predictors (raw correlations), as well as their Fisher transformed values (Fisher correlations).
We determine the optimal number of latent variables through ten-fold cross validation using the "within one standard error" rule [66] when minimising the root mean square error. Given the interest, for the COBRE and ABIDE datasets, in investigating differences between healthy controls and patients, we also present the group classification metrics of accuracy, sensitivity, and specificity.
To investigate the functional connectomes associated with each phenotype variable, we consider the regression coefficient matrix \(\hat{\beta}_{PLS}\) (Equation 4), where the \(i^{th}\) column represents the effect of the functional connectivity matrix on the \(i^{th}\) response variable. We visualise the columns of the matrix \(\hat{\beta}_{PLS}\) as symmetric matrices in the tangent space of the Frechet mean for each dataset. All analysis was performed using R [67].
|
2307.16352 | Semi-Quantitative Group Testing for Efficient and Accurate qPCR
Screening of Pathogens with a Wide Range of Loads | Pathogenic infections pose a significant threat to global health, affecting
millions of people every year and presenting substantial challenges to
healthcare systems worldwide. Efficient and timely testing plays a critical
role in disease control and transmission prevention. Group testing is a
well-established method for reducing the number of tests needed to screen large
populations when the disease prevalence is low. However, it does not fully
utilize the quantitative information provided by qPCR methods, nor is it able
to accommodate a wide range of pathogen loads. To address these issues, we
introduce a novel adaptive semi-quantitative group testing (SQGT) scheme to
efficiently screen populations via two-stage qPCR testing. The SQGT method
quantizes cycle threshold ($Ct$) values into multiple bins, leveraging the
information from the first stage of screening to improve the detection
sensitivity. Dynamic $Ct$ threshold adjustments mitigate dilution effects and
enhance test accuracy. Comparisons with traditional binary outcome GT methods
show that SQGT reduces the number of tests by $24$% while maintaining a
negligible false negative rate. | Ananthan Nambiar, Chao Pan, Vishal Rana, Mahdi Cheraghchi, João Ribeiro, Sergei Maslov, Olgica Milenkovic | 2023-07-31T00:18:18Z | http://arxiv.org/abs/2307.16352v2 | Semi-Quantitative Group Testing for Efficient and Accurate qPCR Screening of Pathogens with a Wide Range of Loads
###### Abstract
Pathogenic infections pose a significant threat to global health, affecting millions of people every year and presenting substantial challenges to healthcare systems worldwide. Efficient and timely testing plays a critical role in disease control and transmission prevention. Group testing is a well-established method for reducing the number of tests needed to screen large populations when the disease prevalence is low. However, it does not fully utilize the quantitative information provided by qPCR methods, nor is it able to accommodate a wide range of pathogen loads. To address these issues, we introduce a novel adaptive semi-quantitative group testing (SQGT) scheme to efficiently screen populations via two-stage qPCR testing. The SQGT method quantizes cycle threshold (\(Ct\)) values into multiple bins, leveraging the information from the first stage of screening to improve the detection sensitivity. Dynamic \(Ct\) threshold adjustments mitigate dilution effects and enhance test accuracy. Comparisons with traditional binary outcome GT methods show that SQGT reduces the number of tests by 24% while maintaining a negligible false negative rate.
## Introduction
Pathogenic infections in humans can cause a wide range of diseases, from mild ailments like the common cold or strep throat to more severe and life-threatening illnesses such as COVID-19, Ebola, and Tuberculosis [2, 5]. These diseases are spread through the proliferation of pathogens within the host and subsequent transmission to other susceptible individuals, often leading to an outbreak in a population. The amount of pathogen in a host, typically referred to as the viral load in the case of viruses, is most frequently expressed in terms of the number of pathogen particles per milliliter of the collected fluid sample. It can vary significantly from the time of infection until recovery and can correlate with the severity of symptoms [22, 24, 38]. To quantify viral loads, the real-time reverse transcription-polymerase chain reaction (qPCR) method is widely used, which reports the number of amplification cycles before the amount of genetic material
in the sample reaches a prescribed threshold for detection, known as the cycle threshold or \(Ct\) value.
Individual samples are usually tested using qPCR to monitor disease progression in patients, but when screening a population for infected individuals, it is more efficient to test large groups of samples simultaneously. Group testing (GT) is a strategy that involves pooling multiple samples prior to running qPCR tests, and subsequently detecting infected individuals in the groups based on the test results. This reduces the overall number of tests required while minimizing the false negative rate (FNR), which is critical in infectious disease screening methods, as undetected positive individuals can lead to the rapid spread of disease. Various GT strategies have been proposed in the past to increase the efficiency of wide-scale testing [19, 29, 15], which are implemented using adaptive or non-adaptive protocols. Adaptive testing allows for the sequential selection of groups, while non-adaptive testing requires the selection of all test groups at the same time.
The first known GT scheme, proposed by Dorfman [15], is an example of adaptive GT with binary outcomes (positive or negative), and is not designed to use the quantitative information about viral load. However, fully quantitative testing schemes, including compressive sensing [25, 14], are susceptible to measurement noise, require specialized pooling matrices, and come with performance guarantees only when the ratio of maximum to minimum viral load is confined to a relatively narrow interval [1]. This is not the case for many viruses, including SARS-CoV-2, where viral loads of patients may differ by multiple orders of magnitude [22]. Furthermore, the pooling of samples in both GT and compressive sensing methods leads to dilution, which can adversely impact the accuracy of test outcomes and cannot be directly addressed in a compressive sensing setting.
To address these limitations, we propose a new adaptive semi-quantitative group testing (SQGT) scheme that uses \(Ct\) values quantized into more than two bins in a structured way. In addition, our scheme combines test outcomes from two rounds to improve the likelihoods of subjects being labelled correctly. To handle the dilution effect, we define multiple \(Ct\) thresholds and dynamically adjust them based on the group size. Since GT was used during the COVID-19 pandemic, multiple theoretical approaches mostly based on Dorfman's method have been developed [3, 36]. At the same time, several large-scale GT data sets containing \(Ct\) values in COVID-19 infected individuals have been generated and made publicly available [13, 26, 6]. We therefore test our SQGT scheme on COVID-19 data and compare it to Dorfman's method, showing an increase in testing efficiency. For example, for a population infection rate of 0.02, our SQGT method uses 24% fewer tests than the binary outcome Dorfman's GT method, while maintaining a negligible FNR compared to qPCR noise.
## Algorithms and Results
### Basics of Group Testing
Group testing (GT), in its most basic form, performs screening of a collection of potentially positive individuals by splitting them into test groups involving more than one individual so as to save on the total number of tests performed. The outcome of a group of test subjects is interpreted as follows: the result is declared positive (and denoted by \(1\)) if at least one of the individuals in the tested group is infected; and, the test result is declared negative (and denoted by 0) if there are no infected individuals in the group. From a theoretical point of view, GT aims to find an optimal strategy for grouping individuals so that the number of binary tests needed to accurately identify all infected individuals is minimized. GT can be implemented using nonadaptive and adaptive approaches. Unlike adaptive GT, nonadaptive schemes require that all tests are performed simultaneously so that the outcome of one test cannot be used to inform the selection of individuals for another test. The first known GT scheme by Dorfman [15] is an example of adaptive screening since it involves two stages of testing, one of which isolates
groups with infected individuals, and another one that identifies the actual infected individuals. In general, adaptive schemes use multiple stages of testing and different combinations of individuals to best inform the sequence of tests to be made. When specializing Dorfman's scheme for qPCR screening, the decision about positive and negative group labels is made based on \(Ct\) values (see Figure 1).
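For intuition, the classical error-free analysis of Dorfman's scheme is easy to reproduce: with prevalence \(p\) and group size \(g\), each person consumes \(1/g\) of a group test plus one retest whenever their group is positive, which happens with probability \(1-(1-p)^{g}\). A short sketch of this standard computation (ignoring dilution and qPCR noise, which are treated later):

```
def dorfman_tests_per_person(p, g):
    """Expected tests per person under error-free two-stage Dorfman GT."""
    return 1.0 / g + (1.0 - (1.0 - p) ** g)

# Sweep group sizes to find the optimum for a given prevalence
p = 0.02
best_g = min(range(2, 51), key=lambda g: dorfman_tests_per_person(p, g))
print(best_g, dorfman_tests_per_person(p, best_g))
```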
Despite their widespread use, GT methods have notable shortcomings when used in systems that provide more quantitative information than a binary answer of the form "yes-no," such as is the case for qPCR screening. This motivates developing extensions of GT schemes that make use of the more quantitative information available from experiments. When all of the available quantitative information is used, the generalized GT scheme represents a form of compressive sensing (CS) [10, 12, 14]. However, CS-based schemes require the ratio of the maximum and minimum pathogen concentrations to be properly bounded [1]. This type of assumption does not hold for a large number of infectious diseases, including COVID-19, where the viral concentrations can vary over several orders of magnitude [22]. In the presence of infected individuals with widely different loads, CS approaches will mask individuals with low pathogen concentrations.
Here we propose a more structured approach to GT that straddles the classical Dorfman scheme and fully quantitative CS approaches. Our semi-quantitative GT scheme (SQGT) can be seen as a multi-threshold version of Dorfman's GT with two independently permuted groups of samples or a quantized version of adaptive CS (see Figure 2). More details are provided in the following subsection.
Figure 1: Dorfman’s two-stage GT protocol. The test subjects are randomly partitioned into groups of optimized size \(g\) and tested as a group. All individuals in positive groups are subsequently tested individually. As before, \(Ct\) stands for the cycle threshold value of the group under consideration. Note that this GT protocol only uses a binary decision variable, yes (1) and no (0), for the case that \(Ct<\tau\) and \(Ct>\tau\), respectively. The decision threshold \(\tau\) depends on the protocol used for qPCR.
Figure 2: Semi-quantitative GT generalizes Dorfman’s GT by using more than one threshold and, like CS, uses information about the estimate of the total number of infected individuals, but with the numbers quantized according to predetermined cluster selections.
### Semi-Quantitative Group Testing
SQGT is a GT protocol that interprets test results as estimates of the number of infected individuals in each tested group. Broadly speaking, unlike Dorfman's GT which generates binary responses (0, for a noninfected group, and 1 when at least one infected subject is present in the group, see Figure 3 a), SQGT produces answers of the form "between \(x\) and \(y\) infected individuals in the group" (see Figure 3 b). For qPCR experiments, the range of values for the number of infected individuals in the group may be estimated from the \(Ct\) value of the group.
For a general SQGT scheme, one seeks a collection of one or more measurement thresholds, such that the outcome of each test is an interval for the possible number of infected individuals, i.e., the outcome of an SQGT experiment specifies lower and upper bounds on the number of infected individuals in a group. If the thresholds are consecutive integers covering all possible options for the number of infected individuals in a group, the scheme reduces to additive (quantitative) GT [32, 34] (see Figure 3 c).
Although nonadaptive SQGT has been previously analyzed from an information-theoretic perspective [20, 11, 21], practical implementations for adaptive SQGT schemes are still lacking, especially in the context of qPCR testing. Our approach is the first adaptive SQGT scheme that is specifically designed for real-world qPCR testing. It operates directly on the \(Ct\) values and makes use of two thresholds, \(\tau_{1}\) and \(\tau_{2}\) (see Figure 4). This choice for the number of thresholds balances the ease of implementation of a testing scheme in a laboratory with the ability to use the quantitative information from a qPCR test more efficiently1.
Footnote 1: We also observe in practice that using more than two thresholds leads to diminishing returns in the number of tests saved but significantly increasing the complexity of the scheme.
The main idea behind our \(Ct\) value-based SQGT approach is to perform a two-stage SQGT protocol with randomly permuted groups of subjects and risk assessment based on the \(Ct\) values obtained after the first stage. More specifically, the scheme involves the following three steps:
* First, we create two separate, randomly permuted lists of \(n\) subjects. Each of these lists is then evenly divided into groups of a specified size, \(g\), which are subsequently tested. It is important to underline that the ideal test group size, \(g\), for our methodology may differ from that typically utilized in Dorfman's GT approach.
* Second, since GT inevitably leads to sample dilution, we adjust the \(Ct\) thresholds in the SQGT scheme to account for this effect. Note that each individual's sample contributes
Figure 3: GT interpreted through quantitative output quantization. The quantitative output corresponds to the actual number of infected individuals in a group. In (a), corresponding to Dorfman’s GT, the quantizer maps all outcomes involving more than one infected individual to a score 1. The score 0 indicates that there are no infected individuals in the group. In (b), corresponding to a general SQGT scheme, the quantizer is allowed to map any collection of outcomes to any choices of scores. This implies that the number of possible test results may be larger than two, but upper bounded by the size of the group \(g\). The simplest version of SQGT based on a uniform quantizer is depicted in (c).
to two \(Ct\) values: one from the group they were initially part of in the first permuted list, and another from their group in the second permuted list. This dual-measurement system provides a way for cross-linking the results.
* Third, we examine the pair of \(Ct\) values associated with the individuals to stratify them into low-risk, medium-risk, and high-risk categories. Based on the risk category, the individuals are either immediately declared negative, or tested once again individually. Although the number of tests performed can be reduced by performing nonadaptive SQGT testing on all risky subjects (discussed in the Supplement Section 1.4), for simplicity we opt for individual testing.
Next, we describe our scheme in detail. We consider a population of \(n\) individuals, arranged into groups of size \(g\), and denote the fraction of infected individuals by \(p\). Again, we only make use of two quantization thresholds, denoted by \(\tau_{1}\) and \(\tau_{2}\). Our scheme consists of two stages.
In the first stage, we group the patient samples into groups of size \(g\), ensuring that each individual contributes to two different groups. To achieve this, we use two random permutations, \(\pi_{1}\) and \(\pi_{2}\), of the \(n\) individuals so that they appear in different random orders. Subsequently, the ordered lists are split into groups of \(g\) consecutive samples (for simplicity, we assume that \(n\) is a multiple of \(g\)). The resulting groups are denoted by \(\gamma_{1}^{\pi_{1}},\gamma_{2}^{\pi_{1}},\ldots,\gamma_{n/g}^{\pi_{1}}\) and \(\gamma_{1}^{\pi_{2}},\gamma_{2}^{\pi_{2}},\ldots,\gamma_{n/g}^{\pi_{2}}\). It is important to note that each individual belongs to two groups, \(\gamma_{i}^{\pi_{1}}\) and \(\gamma_{j}^{\pi_{2}}\) with \(i\in\{1,\ldots,n/g\}\) and \(j\in\{1,\ldots,n/g\}\), where the two groups are created based on the two permuted lists. For both collections of groups, we perform separate qPCR experiments, denoting the outcomes as \(Ct_{i}^{\pi_{1}}\) and \(Ct_{j}^{\pi_{2}}\), respectively. Then we quantize the \(Ct\) values into bins, and assign the test scores \(S_{i}^{\pi_{1}}\) for group \(\gamma_{i}^{\pi_{1}}\) and \(S_{j}^{\pi_{2}}\) for group \(\gamma_{j}^{\pi_{2}}\) using the threshold rule:
Figure 4: An example of qPCR amplification curves and two-threshold (\(\tau_{1}\), \(\tau_{2}\)) SQGT. The two thresholds apply to \(Ct\) values while the actual measurement corresponds to the intersection of the \(F_{1}\) line (the fluorescence threshold) and the amplification curve. For example, the left-most red star indicates the intersection of the high viral load amplification curve with \(F_{1}\) and the corresponding measurement falls into the quantization bin denoted by \(S^{\pi}=2\).
\[S^{\pi}=\begin{cases}0,&\text{if }Ct^{\pi}\geq\tau_{2};\\ 1,&\text{if }\tau_{1}<Ct^{\pi}<\tau_{2};\\ 2,&\text{if }Ct^{\pi}\leq\tau_{1}.\end{cases} \tag{1}\]
Consequently, each individual is labeled by a pair of test scores \((S_{i}^{\pi_{1}},S_{j}^{\pi_{2}})\), representing the outcomes of the two group tests (for group \(\gamma_{i}^{\pi_{1}}\) and \(\gamma_{j}^{\pi_{2}}\)) that the individual is involved in. We omit the subscripts \(i\) and \(j\) in what follows for simplicity of notation.
In the second stage, we classify individuals based on their scores \((S^{\pi_{1}},S^{\pi_{2}})\). Individuals with scores \(\{(0,0),(0,1),(1,0)\}\) are deemed low-risk and declared negative. In particular, scores \(\{(0,1),(1,0)\}\) are declared to correspond to negative subjects because they were involved in a negative test group (score 0) and intermediate \(Ct\) value group (score 1). Subjects with scores \(\{(1,1),(2,1),(1,2),(2,2)\}\) are classified as high-risk and tested individually in a second stage of tests. For the remaining score pairs, \(\{(2,0),(0,2)\}\), we proceed as follows: If the group with score 2 contains another individual with a score in \(\{(1,2),(2,1),(2,2)\}\), we classify the first individual as negative; otherwise, we conduct an individual test. We choose this option since it is unlikely that the first individual was positive, given the existence of even worse-scoring individuals in the same group. Figure 5 illustrates the proposed two-stage SQGT scheme, while Figure 1 depicts Dorfman's GT scheme. Supplement Sections 1.2 and 1.3 provide a detailed mathematical analysis of the various GT schemes discussed.
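The quantizer of Equation 1 and these second-stage decision rules can be written down directly. The following Python sketch uses our own function names; `partner_scores` stands for the score pairs of the other members of the individual's score-2 group, consulted only for the \((2,0)\)/\((0,2)\) cases.

```
def quantize(ct, tau1, tau2):
    """Score of Equation 1: 2 if Ct <= tau1, 1 if tau1 < Ct < tau2, else 0."""
    if ct <= tau1:
        return 2
    return 1 if ct < tau2 else 0

def second_stage_decision(s1, s2, partner_scores):
    """Decide 'negative' or 'retest' from the score pair (S^{pi_1}, S^{pi_2})."""
    if (s1, s2) in {(0, 0), (0, 1), (1, 0)}:
        return "negative"          # low risk
    if (s1, s2) in {(1, 1), (2, 1), (1, 2), (2, 2)}:
        return "retest"            # high risk: test individually
    # remaining cases (2, 0) and (0, 2): declare negative if a worse-scoring
    # individual in the score-2 group explains its result, otherwise retest
    worse = {(1, 2), (2, 1), (2, 2)}
    return "negative" if any(s in worse for s in partner_scores) else "retest"
```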
It is worth noting that conducting individual testing, as in the second stage of our SQGT scheme for the high-risk group, is suboptimal from the point of view of minimizing the number of tests. This issue does not limit the application of the scheme, since one can use a nonadaptive GT scheme in the second stage, thereby significantly reducing the number of second-stage tests. Since nonadaptive GT is conceptually more involved and harder to implement in practice than the above procedure, pertinent explanations are delegated to Supplement Section 1.4.
As we will demonstrate in the Results section, the proposed two-stage SQGT approach offers
Figure 5: Our proposed two-stage SQGT scheme with two thresholds, as described in Equation 1. The approach is to run two parallel rounds of Dorfman–like group tests. To assess if the individual marked in orange is infected, we test them in two different groups, and collect the scores \((S^{\pi_{1}},S^{\pi_{2}})\). Based on this pair of scores, we decide if the individual marked in orange needs to be individually tested or not. See the text for more details.
substantial reductions in the number of tests when compared to Dorfman-type tests. It remains to be seen whether the reduction in the number of tests leads to undesirable increases in the FNR of the scheme. To address this question, we need to consider the influence of dilution effects on the test results and how one could adjust quantization thresholds to counter these effects.
### Dilution Effects
In most experiments involving GT, the test samples come in specified unit concentrations that are equal across all test subjects. This means that a group sample involving \(g\) individuals will only use a fraction \(1/g\) of the unit sample from each individual. This inevitably leads to dilution of the group sample, the level of which depends on the number of infected individuals in that particular group. When there is only a small number of infected individuals in the group, the overall viral load of the group sample may be lower than the detection threshold, thereby leading to highly undesirable false negatives. False negative rate (FNR) is related to true positive rate (TPR) through \(\text{FNR}=1-\text{TPR}\), and the TPR function is often referred to as the sensitivity function.
A mathematical model for dilution effects was first proposed in [28], which introduced a special TPR function \(TPR(p,g,d)\) of the form
\[TPR(p,g,d) =\mathbb{P}(\text{test result is declared positive}|\text{there is at least }1\text{ positive subject in the group})\] \[=p\left[1-(1-p)^{g^{d}}\right]^{-1}. \tag{2}\]
Here, \(p\) denotes the infection rate, \(g\) denotes the group size, and \(d\) denotes a parameter capturing the dilution level. When \(d=0\), \(TPR(p,g,0)=1\), indicating that there is no dilution; when \(d=1\) and \(g\) is large, \(TPR(p,g,1)\sim p\), indicating that the sample is fully diluted and that the probability of correctly identifying a defective group is the same as the infection rate. More details on the TPR model for SQGT can be found in Supplement Section 1.5.
Although the dilution model (2) is mathematically elegant and tractable for analysis, it provides a poor match for real-world measurements (see Figure 6 (b)). A more practical approach to quantifying dilution effects is to assess how dilution impacts the actual viral load in a group. The empirical studies [6, 8, 13, 30] consistently point out that the \(Ct\) values of groups tend to be higher than the \(Ct\) value of individual tests with high probability. This phenomenon is also due to dilution effects. Nevertheless, none of these works describe how to readjust the \(Ct\) value used for declaring positives in the presence of dilution. In the context of SQGT, this is an even more important issue as the increased \(Ct\) values can lead to degradation in the detection rate as well as a significantly increased number of measurements. This motivates exploring the relationship between the value of the \(Ct\) threshold used for an individual test and that used for a group test. For the worst-case scenario when there is only one infected individual in a group of size \(g\), the group \(Ct\) value takes the form
\[Ct =-M\log_{10}(v/g)+B\] \[=-M\log_{10}(v)+B+M\log_{10}(g), \tag{3}\]
where \(v\) denotes the viral load of the infected individual, and \(M\) and \(B\) are positive values denoting the slope and the intercept for the PCR calibration curve [30]. The exact values of \(M\) and \(B\) need to be estimated from the experimental data. Equation (3) characterizes the relationship between the viral load and the \(Ct\) value, and it implies that compared to individual testing, the group \(Ct\) value will be higher by \(M\log_{10}(g)\). The implication of this observation is that for GT, we need to increase the \(Ct\) thresholds by \(M\log_{10}(g)\).
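In code, this adjustment is a one-liner; the slope \(M\) below is an illustrative placeholder, since the calibration constants must be estimated from experimental data as noted above.

```
import numpy as np

M = 3.0  # illustrative calibration slope; estimate from data in practice

def group_threshold(tau_individual, g):
    """Shift an individual-test Ct threshold for a pooled test of size g
    (Equation 3, worst case of one infected individual in the group)."""
    return tau_individual + M * np.log10(g)
```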
### Controlling and Modelling FNRs of PCR Tests
In order to quantify the trade-off between the FNR and the reduction in the number of group tests when using the proposed SQGT scheme, we express the FNR, an important metric with
respect to test accuracy, as a function of the \(Ct\) value. For this purpose, we use the large-scale real-world GT dataset [6]. Our FNR model is based on the following "sigmoid" function,
\[FNR(Ct)=\left[1+\exp\left(\frac{a-Ct}{b}\right)\right]^{-1}, \tag{4}\]
where \(a,b\) are two tunable parameters that can be used to fit the measured/estimated FNRs. Note that similar ideas were also discussed in [31]; however, as may be seen from Figure 6 (b), the FNR function (\(a=35.8,b=0.08\)) proposed in [31] significantly deviates from real-world experimental data.
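In code, Equation 4 is the following one-line function; the default parameters are the values fitted later in this section, and the function name is ours.

```
import numpy as np

def fnr(ct, a=36.9, b=2.145):
    """Sigmoid FNR model of Equation 4: near 0 well below a, 1/2 at Ct = a,
    and approaching 1 for large Ct."""
    return 1.0 / (1.0 + np.exp((a - ct) / b))
```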
In practice, the values of FNRs are hard to estimate as this requires multiple tests of the same individual. In the GT context, there are two ways to estimate FNRs. The direct approach is to compute FNRs by counting the instances when a group test was negative but at least one member from that group tested positive. However, in all experimental verifications of GT protocols, individuals whose group tested negative are eliminated from future retesting. This renders the direct approach impossible to pursue in practice. The indirect approach is to count the cases where the group test was positive but all subjects individually tested negative. In this work, we follow the second approach to estimate the FNRs. The ratio of the number of these "inconsistent" tests and the total number of tests with the same \(Ct\) value is shown in Figure 6 (a). Note that these results can correspond to either a false positive for the group test, or a false negative for one or more of the individual tests. Here we consider the right half of the curve (\(Ct>25\)) to be caused by false negative results, which agrees with the intuition that the FNR increases as the \(Ct\) value increases. Our fitted FNR curve is shown in Figure 6 (b), along with the estimated FNR curve from experimental results, and the models from [28, 31]. As is apparent, the latter provide a poor fit to the data while our model with parameters \((a=36.9,b=2.145)\) represents a significantly more accurate fit.
The FNR shown in Figure 6 corresponds to individual tests, for which we do not know the correct \(Ct\) values. Therefore, we shift the group test \(Ct\) values by \(M\log_{10}(g)=2.895\) in Equation (3) to estimate the individual \(Ct\) values. A detailed discussion of the data processing and FNR estimation pipeline is included in the Methods Section.
Figure 6: FNR estimated from data reported in [6] and different FNR models fitted to the real-world experimental data. (a) We count the cases where the group test was positive but all subjects individually tested negative. The ratio of the number of these “inconsistent” tests and the total number of tests with the same \(Ct\) value is denoted as the “inconsistent ratio”. Specifically, we consider the right half of the curve (\(Ct>25\)) to be caused by the false negative results, which agrees with the intuition that the FNR increases as the \(Ct\) value increases. (b) We fit the FNR model from Equation (4), and the ones from [28, 31] to the real-world experimental data. As it is apparent, the black and purple lines provide a poor fit to the data while our model (green line) with parameters \((a=36.9,b=2.145)\) represents a significantly more accurate fit.
### Case Study of the SQGT Protocol Applied to COVID-19 Data
While the SQGT framework is broadly applicable to PCR-based pathogen screening, general data is usually limited for pathogens other than SARS-CoV-2. The COVID-19 pandemic has resulted in an unprecedented amount of publicly available qPCR test data, which motivates testing our SQGT framework on real-world SARS-CoV-2 data. Our reported results pertain to a set of \(133,816\) SARS-CoV-2 \(Ct\) values of qPCR tests performed in Israel between April and September 2020 as reported in [6]. To explore a range of different infection scenarios without performing additional experiments, we simulated populations of \(10,000\) individuals of which a fraction \(p\in\{0.02,0.05,0.1\}\) was infected by the virus. The \(Ct\) values of the infected individuals were randomly sampled from the real-world dataset of [6], and converted into estimated viral loads using Equation 5 (see also the Methods section). The viral loads of uninfected individuals were set to \(0\).
Following the SQGT scheme of Figure 5, samples are partitioned into groups of \(g\) individuals whose viral loads were subsequently averaged and converted to \(Ct\) values as described in the Methods section (Equation 6). Following standard diagnostic procedures, individuals were declared negative if their \(Ct\) values exceeded a threshold (in our case, set to \(37\) as suggested in [27]).
To analyze the magnitude of the savings in the number of tests required for the GT scheme compared to individual screening, independent of PCR assay noise, we ran both Dorfman's GT and SQGT on the model data. The tests were performed under the assumption that qPCR assays are error-free. Supplement Figure 1 shows these results for all three infection rates \(p\). We performed a sweep of group sizes \(g\) for each value of \(p\) to identify their optimal values. While both GT schemes require significantly fewer than the \(10,000\) tests needed for individual testing, SQGT consistently outperforms Dorfman's GT for all three infection rate levels. In addition, Supplement Figure 1 shows that the group-dependent thresholds help to avoid false negatives that would have occurred due to dilution effects, as expected.
However, as noted in the previous section, qPCR assays are not error-free in practice, and as a result, the false negatives in GT schemes could be due to either dilution effects or qPCR noise. Therefore, we incorporated qPCR noise into our model to make it more realistic. This was done by including the empirically fitted FNR in Figure 6 into the PCR assays in our model (see the Methods section for details). Figure 7 shows that while the noise has very limited effects on the number of tests required by each GT scheme, it does have the expected effect of increasing the FNR of both individual and group tests. For individual testing, the noise function we fit appears to correspond to an FNR of just under \(0.05\), which is comparable to the empirically determined values reported in [33] and [35]. The FNR values of both GT schemes are also consistently slightly higher than those of individual testing. To compare the FNR of SQGT and Dorfman's GT, we first identify the optimal group size for each scheme by picking the value \(g\) for which the scheme requires the least number of tests. When \(p=0.02\), the optimal value of \(g\) for SQGT was \(15\), with an average of \(1,989.8\) tests required; at the same time, Dorfman's GT required \(2,623.6\) tests for an optimal group size \(g=8\). These optimal group sizes correspond to FNRs of about \(0.0946\) for SQGT and \(0.0784\) for Dorfman's GT, respectively. When the infection rate is increased to \(0.05\), the optimal group sizes are smaller, with \(g=12\) and \(g=5\) for SQGT and Dorfman's GT, respectively. These group sizes correspond to \(3,651.7\) tests with an FNR of \(0.0851\) for SQGT and \(4,082.6\) tests with an FNR of \(0.0726\) for Dorfman's GT. Finally, at \(p=0.1\) the optimal group size for SQGT was identified as \(g=8\), with \(5,542.2\) tests and an FNR of \(0.0815\), while for Dorfman's GT the results indicated \(g=5\), with \(5,798.0\) tests and an FNR of \(0.0703\). The observed trend is that SQGT offers savings in the number of tests at the expense of a slight increase in FNR. It should also be noted that this increase is often within the error-bounds of the FNRs.
In addition, we tested a modified version of SQGT where individuals with a \((2,0)\) or \((0,2)\) result are declared negative without further testing. As shown in Figure 7, this version of the SQGT method performs similarly to the regular SQGT. To investigate the reason behind this
Figure 7: The number of tests used and the FNRs of the SQGT protocol (blue), Dorfman’s GT (orange), and individual testing (red) for infection rates \(p\in\{0.02,0.05,0.1\}\). The dashed lines show the number of tests and FNRs for the optimal group size (i.e., the group size that minimizes the number of tests needed) for each scheme.
finding, we plotted the number of individuals for each possible outcome of the SQGT scheme for an infection rate of \(0.04\) and the corresponding optimal group size \(g=12\). As can be seen in Figure 8, the \((2,0)\) and \((0,2)\) test results consist only of uninfected individuals. Therefore, it makes sense that declaring them negative without further testing has no effect on the FNR. For a mathematical analysis of the phenomena and related GT models, the reader is referred to Supplement Section 1.2.
Finally, we examined how the number of tests required for the optimal group size varies over a wider range of infection rates, as shown in Figure 9, alongside the corresponding FNRs. The figure shows that as the infection rate increases, the number of tests required for both GT schemes increases and the advantage of GT over individual testing decreases. This is a property that has already been established in the past for Dorfman's scheme [15]. In addition, the figure shows that SQGT for PCR screening always saves more tests than Dorfman's scheme with only a small increase in FNR (within the margin of error of Dorfman's FNR).
Figure 8: The number of individuals with each possible outcome for the pair of test results in the SQGT scheme. The number of infected individuals is shown in red, while the number of healthy (uninfected) individuals is shown in blue.
Figure 9: The optimal number of tests used in Dorfman’s GT (orange) and SQGT (blue) versus the infection rate, \(p\) (left panel), and the corresponding FNRs (right panel). Optimal refers to the smallest number of tests possible under all possible choices of group sizes \(g\).
## Discussion
We introduced the concept of Semi-Quantitative Group Testing (SQGT) as an extension of traditional Group Testing (GT) methods, with a specific focus on qPCR-based pathogen screening. GT methods, in their classical form, are based on binary test outcomes (positive or negative) and are effective for identifying infected individuals in a cost-efficient manner. However, they fail to utilize the full quantitative information provided by qPCR assays, which can lead to suboptimal performance in scenarios with widely varying pathogen concentrations.
SQGT addresses this limitation by interpreting test results as estimates of the number of infected individuals in each group. The proposed SQGT scheme utilizes two quantization thresholds to categorize qPCR results into different risk categories, allowing for a more refined analysis of the infection status within each group. By employing random permutations and two-stage testing, SQGT can reduce the number of tests needed while still maintaining a high level of test accuracy.
The study also addressed the issue of dilution effects in GT protocols, which can lead to false negatives in qPCR-based testing. To mitigate this problem, we incorporated group size-dependent thresholds in the SQGT framework, adjusting for the dilution effect and improving the overall accuracy of the test results.
Through extensive simulations and analysis using real-world qPCR data from SARS-CoV-2 testing, we demonstrated that SQGT outperforms traditional GT schemes (such as Dorfman's GT) in terms of test efficiency while maintaining a comparable or slightly higher FNR. For example, for a population infection rate of \(p=0.02\), our conceptually simple SQGT method uses 24% fewer tests than the binary outcome Dorfman's GT method, while maintaining a negligible FNR compared to qPCR noise. In conclusion, SQGT provides substantial reductions in the number of tests required for pathogen screening, making it a promising approach for large-scale population testing, especially during pandemics or outbreaks.
It is important to note that the proposed SQGT scheme is tailored specifically for qPCR testing and involves two stages of testing, as originally suggested by Dorfman's scheme. The two stages are crucial for adaptive screening, which informs the tests in the second stage based on the results of the first stage. Nonadaptive testing schemes, on the other hand, would deliver results with potentially smaller delays, but would require significantly more tests. They are also often too complicated to implement in practice, as they require combinatorial sample mixing and decoding.
Additionally, our studies were performed under two assumptions, error-free qPCR assays, and qPCR assays with a sigmoidal model of false negatives as a function of \(Ct\) values. The incorporation of qPCR assay noise into the simulations led to a slight increase in FNRs, highlighting the need for careful consideration of assay accuracy for a broader range of practical pathogen detection schemes.
For other pathogens and datasets, our SQGT scheme can be modified as needed by combining adaptive and nonadaptive test schemes, including more than two thresholds, and integrating a specialized technique for identifying "heavy hitters" (i.e., individuals with very high viral loads). These approaches are mathematically analyzed in the Supplement Section 1.3.
## Methods
### Data
The real-world COVID-19 GT data [6] used in this paper contains \(133,816\) samples collected between April and September 2020 in Israel and tested experimentally via Dorfman's pooling. The original data contains the following information for each individual sample:
* Sample id: A unique id for tracking the sample;
* Month: Information about the month when the sample is collected;
* Group id: An id indicating which group an individual sample belongs to in the test scheme. Samples within the same group share the same group id, and the test groups are of size 5 and 8;
* Result: Final test result for a sample (positive/negative);
* Sample viral \(Ct\): \(Ct\) value of an individual test. Note that this value is not available when the group test involving the sample is negative;
* Group viral \(Ct\): \(Ct\) value of the group to which the individual sample belongs to;
* Sample human \(Ct\): \(Ct\) value of an individual test for amplifying the human ERV-3 [37] gene. This \(Ct\) value lying below a predetermined threshold serves as an internal control for whether a test was successful or not;
* Group human \(Ct\): \(Ct\) value of the group test used for amplifying the human ERV-3 gene.
As pointed out in the Results Section, there are some experimental inconsistencies between the results of the group tests and the individual tests. Specifically, in 70 out of \(1,887\) positive tests, the results of the group tests were positive while all individuals within the groups tested negative. These results can be explained as false positives for the group test, or as false negatives for the individual tests. We used this information to model the FNR of the dataset as described in our Results Section. Note that for simplicity we assume that there is only one positive individual sample within the group when a false negative result is recorded, as this is the most probable scenario. We hence use (Group test \(Ct-M\log_{10}(g)\)) as the estimated \(Ct\) value for the individual test in the presence of a false negative, where \(g\) as before denotes the group size and \(M\log_{10}(g)=2.895\). Our fitted model, shown in Figure 6 (b), is obtained through the MATLAB fit function.
### Modelling COVID-19 group testing schemes
#### Modelling PCR tests
When modelling an individual test, individual \(i\) with a viral load \(v_{i}\) will have
\[Ct_{i}=-M\log_{10}(v_{i})+B. \tag{5}\]
The values for \(M\) and \(B\) are set based on a previously established calibration curve [30]. Then, given a threshold \(\tau_{In}\), an individual \(i\) is considered positive for the virus if \(Ct_{i}<\tau_{In}\). In our simulations we use \(\tau_{In}=36\).
To model a pooled test, the viral loads of individuals in a group are averaged and plugged into Equation 6 to determine the \(Ct\) for the group. That is, for group \(j\) with individuals \(\{1,2,...,g\}\)
\[Ct_{j}=-M\log_{10}(\frac{1}{g}\sum_{i=1}^{g}v_{i})+B. \tag{6}\]
These group \(Ct\) values can then be used for different GT schemes as described in the Algorithms and Results section.
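A minimal sketch of Equations 5 and 6 follows; the calibration constants below are illustrative placeholders, as the paper takes \(M\) and \(B\) from the calibration curve of [30].

```
import numpy as np

M, B = 3.0, 40.0   # illustrative calibration constants

def individual_ct(v):
    """Equation 5: Ct of an individual sample with viral load v.
    A viral load of 0 maps to Ct = +inf, i.e., never detected."""
    return -M * np.log10(v) + B

def group_ct(loads):
    """Equation 6: Ct of a pooled sample (viral loads are averaged)."""
    return -M * np.log10(np.mean(loads)) + B
```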
### Including PCR noise into models
Since PCR tests are not error-free, we also include some noise into the tests based on the FNR function
\[FNR(Ct)=\left[1+\exp\left(\frac{a-Ct}{b}\right)\right]^{-1}, \tag{7}\]
where \(b\) is empirically determined to be 2.145 as discussed in the Algorithms and Results section and \(a\) is the threshold used for the PCR test. To include this noise into our PCR simulations, we use the following procedure:
```
if test is individual then
    Ct <- -M * log10(v_i) + B
else
    Ct <- -M * log10((1/g) * sum_{i=1}^{g} v_i) + B
end if
result <- Scheme(Ct)
if truth == positive then
    if Bernoulli(p = FNR(Ct)) then
        result <- negative
    end if
end if
```
First, the \(Ct\) value of a test is calculated using Equation 5 or 6. If the ground truth of the test is that it is positive, it is converted into a negative (no infected individuals) with probability \(FNR(Ct)\). Otherwise, the result of the test is left as determined by the testing scheme.
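Translated into Python, one noisy qPCR test looks as follows, reusing the `individual_ct`, `group_ct`, and `fnr` sketches above; `scheme` stands for any function mapping a \(Ct\) value to a test result.

```
import numpy as np

rng = np.random.default_rng(0)

def noisy_test(loads, scheme, a=36.9, b=2.145):
    """One qPCR test with FNR noise: compute the Ct (Equation 5 or 6),
    apply the testing scheme, then flip a true positive to negative
    with probability FNR(Ct)."""
    ct = individual_ct(loads[0]) if len(loads) == 1 else group_ct(loads)
    result = scheme(ct)
    truly_positive = any(v > 0 for v in loads)
    if truly_positive and rng.random() < fnr(ct, a, b):
        result = "negative"
    return result
```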
## Acknowledgments
We thank Professor Nigel Goldenfeld and Dr. Rayan Gabrys for useful discussions. The work was supported by NSF grants 2107344 and 2107345.
## Author Contributions
Conceptualization, S.M., and O.M.; Investigation, A.N., C.P., and V.R.; Formal Analysis, A.N., C.P., V.R., M.C., J.R., S.M., and O.M.; Writing - Original Draft, A.N., C.P., V.R., M.C., J.R., S.M., and O.M.; Writing - Review & Editing, A.N., C.P., V.R., S.M., and O.M.;
|
2309.03595 | Loquacity and Visible Emotion: ChatGPT as a Policy Advisor | ChatGPT, a software seeking to simulate human conversational abilities, is
attracting increasing attention. It is sometimes portrayed as a groundbreaking
productivity aid, including for creative work. In this paper, we run an
experiment to assess its potential in complex writing tasks. We ask the
software to compose a policy brief for the Board of the Bank of Italy. We find
that ChatGPT can accelerate workflows by providing well-structured content
suggestions, and by producing extensive, linguistically correct text in a
matter of seconds. It does, however, require a significant amount of expert
supervision, which partially offsets productivity gains. If the app is used
naively, output can be incorrect, superficial, or irrelevant. Superficiality is
an especially problematic limitation in the context of policy advice intended
for high-level audiences. | Claudia Biancotti, Carolina Camassa | 2023-09-07T09:40:12Z | http://arxiv.org/abs/2309.03595v1 | # Loquacity and Visible Emotion: ChatGPT as a Policy Advisor
###### Abstract
ChatGPT, a software seeking to simulate human conversational abilities, is attracting increasing attention. It is sometimes portrayed as a groundbreaking productivity aid, including for creative work. In this paper, we run an experiment to assess its potential in complex writing tasks. We ask the software to compose a policy brief for the Board of the Bank of Italy. We find that ChatGPT can accelerate workflows by providing well-structured content suggestions, and by producing extensive, linguistically correct text in a matter of seconds. It does, however, require a significant amount of expert supervision, which partially offsets productivity gains. If the app is used naively, output can be incorrect, superficial, or irrelevant. Superficiality is an especially problematic limitation in the context of policy advice intended for high-level audiences.
Large language models, generative artificial intelligence, ChatGPT
**JEL Codes**: O33; O32
## 1 Introduction
On November 30, 2022, US-based tech outfit OpenAI released its ChatGPT 3.5 app1, a software seeking to simulate human conversational abilities. Based on a machine learning model trained to capture the syntax and semantics of language (large language model or LLM), ChatGPT2 quickly catalyzed attention because of its sophistication and accessibility.
Footnote 1: We use the term “app” in the generic sense of “consumer-oriented service”, independent of the device it is designed for, as opposed to “application for mobile devices”. At the time of writing, ChatGPT services are publicly offered on desktop and mobile devices; a ChatGPT mobile app only exists for iOS.
Footnote 2: GPT stands for “generative pre-trained transformer”, an architecture often used in LLMs.
The app, equally proficient at whipping up recipes and discussing ancient history, attracted millions of users in a few months3. It appeared ready to _“disrupt even creative [and] tacit-knowledge [...] work”_(Noy and Zhang, 2023).
In this note, we run an experiment to assess ChatGPT's proficiency at complex writing tasks. Using version 4, the most recent4, we ask the app to compose a policy brief for the Board of the Bank of Italy. We find that ChatGPT can accelerate workflows, first by providing structured content suggestions, then by producing extensive, linguistically correct text in a matter of seconds. It does, however, require a substantial amount of expert supervision to attain a satisfactory result5, which partially offsets productivity gains. If the app is used naively, output can be incorrect, superficial, or not relevant -- yet, invariably stated with a convincing, reassuring, and self-confident tone.
Footnote 4: Kristät Hu, ChatGPT sets record for fastest-growing user base, Reuters, February 2, 2023.
Footnote 5: ChatGPT 4.0, base version of May 24, 2023 (see here). Any use of extra features or plugins is documented in the text. We used the TeamGPT app to collaborate.
Footnote 6: While there is no objective definition of “satisfactory” in this context, we draw on our combined experience of more than two decades at the Bank for a feeling of what type and style of content would be considered acceptable by the Board. This is an example of tacit knowledge, not always well understood by ChatGPT.
The note is organized as follows. Section 2 summarizes related work. Sections 3 and 4 walk the reader through getting ChatGPT to write the brief. We show excerpts from our interactions with the
app, annotated to show what went wrong and why7. Section 5 looks for explanations behind one especially serious limitation of the app, i.e., a tendency to generate superficial content, and explores strategies to overcome it. Section 6 discusses our main takeaways. Section 7 concludes. Appendix A provides supplementary materials for the experiment. Appendix B asks whether ChatGPT can learn from our assessment of its performance.
Footnote 7: As an important caveat, annotations only convey our educated guesses. In large part, the inner workings of ChatGPT remain a black box (the code is not open source, and it is unknown which data it was trained on). Any hypothesis on causal mechanisms should be taken with caution. Also note that the experiment cannot be reproduced entirely. We did not transcribe repetitive or task-irrelevant conversation snippets in this paper. ChatGPT can give vastly different answers to the same prompt, especially when internet browsing is enabled. Transcripts of our chats were saved on the OpenAI and TeamGPT websites. They are available upon request.
## 2 Related work
Ours is not, by any means, the first experiment into the use of ChatGPT for non-trivial intellectual tasks. To name but some recent contributions in economics, Korinek (2023) discusses the app's potential for research, while Cowen and Tabarrok (2023) focus on its use in teaching. Taliaferro (2023) looks at how ChatGPT performs at constructing novel datasets. Lopez-Lira and Tang (2023) assess how LLMs forecast stock returns. Hansen and Kazinnik (2023) assess whether ChatGPT can correctly decipher "Fedspeak", or the specialist language used by the Federal Reserve in communicating its policy stance. Eisfeldt, Schubert and Zhang (2023) find that the release of ChatGPT had a positive impact on equity value for firms with a high share of "tasks currently performed by labor [...that can] be performed (or made more efficient) by Generative AI", including LLMs.
Many more exercises in the same vein exist in computer science. For example, Bubeck et al. (2023) show that ChatGPT is proficient at writing code in different programming languages. According to Schick et al. (2023), the app can teach itself to use other IT tools by calling external application programming interfaces (APIs)8. ChatGPT has been used to generate training data for other language models9.
Footnote 8: Software interfaces that codify a way for two or more computer programs to communicate with each other.
Footnote 9: The open-source LLM Alpaca by Stanford is one of many examples, although this type of usage might conflict with OpenAI's Terms of Use.
The pitfalls of using ChatGPT naively and the importance of expert supervision are evident, first and foremost, from the large body of work on prompt optimization. ChatGPT generates content in response to prompts, which do not necessarily come in the form of questions. Sometimes, even small tweaks can trigger dramatic changes in the output. For example, Kojima et al. (2023) find that simply prefacing prompts with "Let's think step by step" vastly improves ChatGPT's (version 3) performance on challenging reasoning questions. The persona given to the AI matters, too. "Tell me what you know about the Louvre museum" generates worse output compared to "You are one of the foremost art critics in the world. Tell me what you know of the Louvre museum". Resources like Learn Prompting10 and Prompt Engineering Guide11 describe several prompting strategies, with hundreds of examples.
Footnote 10: [https://learnprompting.org](https://learnprompting.org)
Footnote 11: [https://www.promptingguide.ai](https://www.promptingguide.ai)
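To make the mechanics of such prompting strategies concrete, the sketch below shows how a persona and a "step by step" preamble would be attached to a request. It is a minimal illustration, not code from our experiment: it assumes the pre-1.0 `openai` Python client available in 2023, and the API key is a hypothetical placeholder.

```python
# Minimal sketch of persona and "step by step" prompting, using the 2023-era
# (pre-1.0) openai client. The key below is a placeholder, not a real one.
import openai

openai.api_key = "YOUR_API_KEY"

def ask(messages):
    """Send a chat request and return the text of the first reply."""
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

# Naive prompt.
plain = ask([
    {"role": "user", "content": "Tell me what you know about the Louvre museum."},
])

# Same question, with a persona and a reasoning nudge prepended.
steered = ask([
    {"role": "system",
     "content": "You are one of the foremost art critics in the world."},
    {"role": "user",
     "content": "Let's think step by step. Tell me what you know of the Louvre museum."},
])
```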
A growing strand of research focuses on how the broader discourse on AI safety applies to LLMs. Bian et al. (2023) show how LLMs can be sensitive to the injection of false information. Abid et al. (2021) discuss the emergence of harmful social biases in LLMs, while Koralus et al. (2023) elaborate on the reproduction of human judgment errors. In a seminal and controversial contribution, Bender et al. (2021) argue that the preference for very large models over smaller ones trained with different techniques is not entirely justified, especially when environmental costs are factored in.
A few non-academic resources are also worth mentioning. Podcasts such as _The AI Breakdown12_ and _The Cognitive Revolution13_ outline strengths and weaknesses of LLMs for various use cases, including creative writing. Platforms like Reddit14 and Medium15 offer dedicated discussion spaces, where experiments are shared daily. Key thinkers in the AI space often preview their thoughts on Twitter before a paper is written16.
Footnote 12: Hosted by Nathaniel Whittemore.
Footnote 13: Hosted by Erik Torenberg and Nathan Labenz.
Footnote 14: [https://www.reddit.com/t/chatgpt/](https://www.reddit.com/t/chatgpt/)
Footnote 15: [https://medium.com/tag/chatgpt](https://medium.com/tag/chatgpt)
Footnote 16: Among those we follow are: Yann LeCun, Geoffrey Hinton, Emily Bender, Andrej Karpathy, Ethan Mollick, Timnit Gebru, Elvis Saravia, Jon Askonas.
## 3 Getting to know each other
From now on, interactions with ChatGPT will be shown using distinct markers for text written by the Authors and for ChatGPT's own responses.
We start out by asking ChatGPT to analyze its audience, and find an appropriate communication style.
Given that the intended audience for this policy note is the Board of the Bank of Italy, it is likely composed of professionals with expertise in economics, finance, and banking. They'll expect well-structured, accurate, and evidence-based information. The communication style should be clear, concise, and formal, with a focus on objectivity and data-driven insights. Jargon should be kept to a minimum, and any technical language should be explained as needed to ensure clarity and understanding.18
What happened? The base version of ChatGPT, which cannot access the internet, answers queries drawing on a vast, static body of knowledge, updated to September 2021. The app was taught how to use this knowledge through a supervised training process, meaning that human annotations were employed to finetune, among other aspects, how the AI prioritizes, organizes, and combines information19.
Footnote 18: This response was obtained using the same prompt as above. All remaining interactions in this Section and in Section 4 were part of a single thread.
Our initial prompt was not submitted to the base version. We used the experimental Browse feature, which allows ChatGPT to go online. We expected that the AI would integrate web-based data with the original knowledge base. Rather, it overlooked whatever it knew about the Board of the Bank of Italy, favoring instead something it read on the internet. According to ChatGPT, the chosen source was the website for a private British "cultural awareness training consultancy"20, found through a Bing search. We do not know why this particular result was selected.
By reverting to the base model, we obtained a context-appropriate answer.
The 2021 cut-off date for ChatGPT is one of its main limitations. If asked about current events, the AI delivers outdated information that some users may not recognize as such (oft-mentioned examples concern the state of the global economy and the war between Russia and Ukraine). Internet browsing may seem like an optimal solution, yet ChatGPT may fail to "think critically" when faced with information that was not in its training set. The feature should be used with caution21.
Footnote 19: OpenAI, Research on GPT-4 landing page, updated on March 14, 2023. This is the source for basic information on ChatGPT 4 mentioned throughout the paper.
Footnote 20: The source provided by ChatGPT is World Business Culture.
Footnote 21: The Browse function was disabled on July 4, 2023, on account of copyright and privacy concerns.
Ok. From now on, adhere to these guidelines when instructed to "apply policy style".
Understood. I will follow the specified guidelines when requested to "apply policy style" in my responses.
## 4 Self-reflection
We then proceeded to the main task.
The subject of the note is "Benefits and risks of using ChatGPT and similar applications in economics and finance", we would like you to provide background on what ChatGPT is and how it works. Then you should discuss the main topic, we would like you to devote specific attention to any unintended consequences of mass adoption. First provide an outline, then [...] we will give you further instructions before you write the full text.
Outline:
I. Introduction A. Brief overview of ChatGPT and its applications
II. Background on ChatGPT A. Description of AI technology and capabilities B. Explanation of natural language processing (NLP) C. How ChatGPT works
III. Benefits of using ChatGPT in economics and finance A. Enhancing productivity and efficiency B. Reducing human errors C. Providing advanced analytics and forecasting D. Assisting with decision-making
IV. Risks of using ChatGPT in economics and finance A. Dependence on algorithmic-driven decisions B. Data privacy and security concerns C. Potential for biases in AI analysis D. Misinterpretation of data or results
V. Unintended consequences of mass adoption A. Job displacement and labor market shifts B. Concentration of power in technology providers C. Amplification of existing inequalities D. Challenges in oversight and regulation
VI. Conclusion A. Emphasizing the need for a balanced approach B. Recommendations for mitigating risks and maximizing benefits
Please review the outline and provide your feedback or suggestions. Once the outline is finalized, I will proceed with writing the full text in accordance with the "apply policy style" guidelines.
The production of outlines, either for direct adoption or for use as a starting point22, is among the tasks for which we found ChatGPT most useful. Outlines on nearly any topic are produced in a few seconds, and acceptable quality can be obtained without sophisticated prompt engineering. In our case, it only took two prompts to get the specimen pasted above23, which we found broadly acceptable. At first blush, it seems to cover most relevant topics, it offers a clean structure, it is sufficiently interdisciplinary, and it appears appropriate for the intended audience. With modest expenditure of time, we could have modified the outline to include or remove specific items, or changed the sequence of topics. We chose to move on with the proposed version.
Footnote 22: See also Nodepad, a ChatGPT-based app designed to support brainstorming.
This works. Please write the full text according to these specifications: 1,500 words; PhD-level register, but remember that your audience has a background in economics and law, not computer science; each section except for the introduction should have academic references in the style (Author, Year), and a full reference list should be appended at the end of the text (not to be included in the word count). It is especially important that you define technical terms clearly and that you provide examples that are relevant to the domain of economics and finance. Finally, remember that your audience is made of policymakers participating in key international fora, so they would appreciate reflections on the cross-border dimensions of the problem. Apply policy style.
We took stock of the result provided after this prompt (see Appendix A.i) and found it lacking in depth. We considered the possibility that our word-count limit, rather than an intrinsic fault in ChatGPT, may have been the cause. We requested an extended version. Again, the first attempt and the rewrite each took less than one minute.
Please expand this to a 2,500-word version. Take special care in making the explanation of how ChatGPT works comprehensive and understandable.
I. Introduction
The rapid development of artificial intelligence (AI) and natural language processing (NLP) technologies has led to the emergence of advanced applications such as ChatGPT. These applications hold significant potential for a wide range of industries, including economics and finance. This brief aims to provide a balanced analysis of the benefits and risks associated with the use of ChatGPT and similar applications in the financial sector. Furthermore, it highlights the potential unintended consequences of mass adoption and discusses cross-border implications relevant to policymakers with backgrounds in economics and law.
In our original prompt, we mentioned the use of ChatGPT in "economics and finance". ChatGPT re-words this as "industries including economics and finance" and "the financial sector". These expressions are not synonyms, and neither is fully task-appropriate. In our line of work, the expression "economics and finance" is used often, and its meaning may change with context. In this case, a human policy advisor would intuitively have read it as "economic analysis and financial decisions". Overall impact of ChatGPT on the financial sector would have been a topic of interest, but not the only one. The "industry" of economics, meaning e.g. for-profit economic forecasting, would not have been a concern.
The AI was given context -- the audience is the Board of a central bank, with a PhD-level background in law and economics, and the text to be written is a policy note. Still, it probably drew on uses of "economics and finance" in other domains to infer the meaning of our prompt, likely because those were prevalent in its training set. ChatGPT suffers from what could be called a statistical drift, or a tendency to reproduce the
type of content it saw most often during training, even when instructed to look in a different direction (on the limitations of LLMs when learning long-tail knowledge, see Kandpal et al., 2023).
Context information that would be sufficient for a human writer may not be enough for ChatGPT. Users should pay attention to statistical drift, or the AI's tendency to repeat patterns seen multiple times in training data. Extremely specific prompting may help this issue, yet it is unlikely to fully solve it.
II. Background on ChatGPT
II. A. Description of AI technology and capabilities
Artificial intelligence refers to the development of computer systems capable of performing tasks that would typically require human intelligence, such as learning, reasoning, pattern recognition, and problem-solving (Russell & Norvig, 2016). Over the past few decades, advancements in computational power and data availability have paved the way for major breakthroughs in AI research. One key area of AI research focuses on natural language processing, which aims to enable machines to understand and interact with human language in a meaningful and useful manner.
II. B. Explanation of natural language processing (NLP)
Natural language processing is a subfield of AI that focuses on enabling computers to understand, interpret, and generate human language (Jurafsky & Martin, 2019). NLP techniques are used to analyze and process text and speech data, allowing AI systems to extract valuable insights, answer questions, and engage in conversation with users. This branch of AI has witnessed significant progress in recent years, which has led to the development of a variety of applications, ranging from translation tools to virtual assistants and customer support chatbots.
II. C. How ChatGPT works
II. C.1. Transformer architecture
ChatGPT, or the Generative Pre-trained Transformer, is an AI-powered language model developed by OpenAI (2021). It is based on a deep learning architecture called the Transformer (Vaswani et al., 2017), which is designed to handle complex language understanding and generation tasks. The Transformer architecture relies on a mechanism known as "attention" that allows the model to weigh the influence of different input elements when generating an output. This mechanism enables the model to capture long-range dependencies and contextual relationships in the text more effectively than prior NLP approaches.
This subsection exemplifies the most serious weakness shown by ChatGPT during our experiment. Despite extensive instruction on the task, the context and the audience, the AI does not seem to figure out which pieces of knowledge it should impart
and how. Some crucial topics are missed altogether, while others are mixed in with information that is not equally relevant.
Considering an audience of highly educated, non-specialist policymakers who may not use ChatGPT frequently, this subsection should start out by outlining the app's core functionality, i.e. processing input and producing output in natural language. A brief explanation should be given of how both branches of the process work. Again, a human advisor would know this intuitively. In 2023, conversations with computers are not taken for granted yet. Before anything else, the brief should illustrate how they can happen at all. ChatGPT foregoes providing basic building blocks, and chooses instead to alternate the excessively generic ("handle complex language understanding and generation tasks") with the excessively technical (immediate focus on the attention mechanism).
What happened? We hypothesize that the problem is lack of a world model, and especially a capability to understand what the knowledge base of the intended audience is. This is partly inconsistent with recent research showing how ChatGPT 4 may have spontaneously developed a theory of the mind (Kosinski, 2023), i.e., "the ability to attribute unobservable mental states" -- including world knowledge -- to others. The experimental settings are, however, quite different.
It is worth noting that this issue may be overcome, at least in part, by fine-tuning the model with domain-oriented data. See for example Baldazzi et al. (2023) on integrating data from Enterprise Knowledge Graphs for this purpose, and Wu et al. (2023) on the construction of BloombergGPT, a proprietary LLM for finance.
We are seeing in further depth what was already implied by the poor reading of how "economics and finance" is used in a specific professional context. This time the error is on a larger scale, as it shows limited understanding of human capabilities in general. The AI also fails to adhere to instructions about register, introducing technical terms such as "long-range dependencies" without a definition.
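For readers who want to see what the "attention" weighting amounts to numerically, below is a self-contained sketch of scaled dot-product attention, the operation introduced by Vaswani et al. (2017). The toy dimensions and random matrices are ours, chosen purely for illustration.

```python
# Scaled dot-product attention on toy data: each output row is a weighted
# mix of the value rows, with weights set by query-key similarity.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])  # similarity matrix
    weights = softmax(scores, axis=-1)                   # each row sums to 1
    return weights @ values

rng = np.random.default_rng(0)
seq_len, dim = 4, 8                      # 4 tokens, 8-dimensional vectors
q, k, v = (rng.normal(size=(seq_len, dim)) for _ in range(3))
print(attention(q, k, v).shape)          # (4, 8): one mixed vector per token
```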
II. C. 2. Pre-training and fine-tuning

The development of ChatGPT involves two main steps: pre-training and fine-tuning. During the pre-training phase, the model is exposed to vast amounts of text data from various sources, such as books, articles, and websites. The model learns to generate coherent and contextually relevant responses by predicting the next word or phrase in a sentence, given the preceding words (Brown et al., 2020). This unsupervised learning process allows the model to acquire general knowledge about language structure, syntax, and semantics.
After the pre-training phase, the model is fine-tuned using a smaller, curated dataset, typically generated with human assistance. This dataset consists of input-output pairs, where the input is a user prompt and the output is an appropriate response. The fine-tuning process helps the model to produce more accurate and contextually relevant responses to user queries or prompts, as it learns from examples provided by humans.
What's wrong? We concede that explaining these processes in a few lines is not straightforward. Although the model gave a well-articulated answer, it is not clear -- at least from the point of view of the target audience -- _how_ the model learns in either phase, and technical terms are sometimes used without explanation. The same considerations presented above about the choice of register apply here.
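The pre-training objective described above -- predict the next token, score the prediction with cross-entropy -- can be observed directly with open models. The sketch below uses the small GPT-2 model from the Hugging Face `transformers` library as a stand-in, since ChatGPT's own weights are not public.

```python
# Computing the next-token prediction loss (cross-entropy) that pre-training
# minimizes, using open GPT-2 weights as a stand-in for ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The Bank of Italy is the central bank of", return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model score each token against the one that
    # actually follows it, averaging the cross-entropy over the sequence.
    out = model(**inputs, labels=inputs["input_ids"])
print(float(out.loss))  # lower loss = the text looks more "predictable"
```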
II. C. 3. GPT-3 version

GPT-3, the third and latest iteration of the model, has shown remarkable capabilities in understanding context and producing coherent, context-sensitive outputs (Brown et al., 2020). With its 175 billion parameters, GPT-3 has demonstrated the ability to perform a wide range of tasks, including translation, summarization, question-answering, and even basic programming, often with minimal task-specific training.
III. Benefits of using ChatGPT in economics and finance

III. A. Enhancing productivity and efficiency

[The body of this subsection did not survive extraction.]
What's wrong? This is yet another example of incomplete reading of the context. The AI exclusively looks at "applications in the financial sector", without stating that it is only one of the domains of concern.
III. B. Reducing human errors
By automating repetitive tasks, ChatGPT can help minimize human errors that may result from fatigue or oversight (Arel et al., 2012). In economics and finance, this could lead to improved accuracy in data analysis, financial forecasting, and regulatory reporting, ultimately contributing to more reliable decision-making processes. For instance, AI-based systems can rapidly process large amounts of quantitative data to identify patterns, trends, and anomalies, enabling the early detection of potential risks, market inefficiencies, or fraud.
III. C. Providing advanced analytics and forecasting
Through the use of AI-driven techniques and large data sets, ChatGPT can generate advanced analytics and more accurate forecasting for various economic and financial indicators (Chakraborty & Joseph, 2017). These insights can inform both private sector decisions and public policy, potentially leading to more efficient allocation of resources and better outcomes for markets and consumers. As an example, central banks could employ ChatGPT-based tools to enhance macroeconomic forecasting, enabling more informed and responsive monetary policy decisions.
III. D. Assisting with decision-making
Given its ability to process and synthesize vast amounts of information, ChatGPT can play a valuable role in supporting decision-makers across the financial sector (Bughin et al., 2017). By providing real-time analysis and insights, it may help improve the speed and effectiveness of decisions in areas such as portfolio management, risk assessment, and regulatory compliance. For instance, ChatGPT can assist portfolio managers in evaluating investment opportunities and rebalancing portfolios by quickly analyzing vast amounts of financial information and generating insights on market trends, risks, and opportunities.
What's wrong? In these three paragraphs, ChatGPT outputs a series of assessments of its own capabilities that seem plausible, but are imprecise or even plainly false.
One such example is the claim that ChatGPT can provide "real-time analysis and insights", which is unlikely since, by its own admission, the model is trained on outdated data, and is not able to analyze current financial information without the use of external plugins.
Until such capabilities are implemented, it can hardly assist portfolio managers with investment decisions.
What happened? If we were to substitute every occurrence of "ChatGPT" in this text with something like "forecasting models", all these claims would become realistic. We hypothesize that ChatGPT continued on its previous finance-related tangent by generating, in a probabilistic fashion, a detailed description of machine learning models for the financial sector. It then
incorrectly attributed to itself the same abilities as these models.
This impression is reinforced by the choice of references: both Arel et al (2012) and Chakraborty and Joseph (2017) are not entirely appropriate, since they date back to a time when none of the GPTs even existed. The papers instead discuss the use of deep learning models that process tabular, mostly numerical data.
ChatGPT has been shown to "hallucinate", or make up facts -- the model indicates that a certain sequence of words is likely, and the AI outputs it without performing even simple checks for accuracy. This famously generates references to nonexistent papers, something that did not happen in this experiment but has been extensively shown in other settings. Agrawal et al. (2023) asked the AI to generate 1000 academic references and observed a hallucination rate, or share of non-existent references, of 47% and 60% respectively for GPT4 and ChatGPT 3.5.
Certain outputs, including references to published literature and claims about a model's abilities and inner workings, should always be double-checked. More broadly, factual inaccuracy and false statements should be expected at times, and may be avoided either by interaction with the AI ("Are you sure? This paper looks like it was written before ChatGPT") or independent verification. In some cases, the AI may provide real references but still associate them inappropriately to an unrelated topic, or make up information about their content.
We expand on the theme of AI hallucinations further on in this Section.
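Part of this double-checking can be automated. The sketch below queries the public Crossref REST API (a real service) for the closest bibliographic match to a model-supplied reference; the matching heuristic is our own simplification, and a human still has to compare the returned title against the claimed one, since Crossref returns its best fuzzy match even for invented references.

```python
# Screening a model-supplied reference against the Crossref REST API.
import requests

def best_crossref_match(reference: str):
    """Return (title, DOI) of the closest bibliographic match, or None."""
    r = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": 1},
        timeout=30,
    )
    items = r.json()["message"]["items"]
    if not items:
        return None
    top = items[0]
    return (top.get("title") or [""])[0], top.get("DOI")

# A human must still check that the returned title matches the claim.
print(best_crossref_match("Vaswani et al. (2017), Attention Is All You Need"))
```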
IV. Risks of using ChatGPT in economics and finance

IV. A. Dependence on algorithmic-driven decisions

The increasing reliance on AI and algorithmic-driven decisions can create new challenges for the financial sector. Overdependence on these tools may lead to complacency and a diminished role for human judgment, potentially increasing the risk of unforeseen adverse consequences (Brynjolfsson & McAfee, 2014). It is important for financial institutions and policymakers to maintain a balance between utilizing AI-driven tools like ChatGPT and relying on human expertise and intuition to ensure robust and informed decision-making processes.
IV. B. Data privacy and security concerns

The use of ChatGPT and similar applications in finance requires access to large amounts of sensitive data, including personal and financial information (Pasquale, 2015). This raises concerns about data privacy and security, as well as the potential for misuse of information or unauthorized access by third parties (Zuboff, 2019). Robust regulatory and technical measures must be implemented to safeguard sensitive data and ensure compliance with data protection standards, especially as cross-border data flows become more prevalent in the global financial system.
The two sub-sections above are non-specific. The only suggestions of adherence to our prompt ("_provide examples that are relevant to the domain of economics and finance_") are a reference to a research paper in economics, and the insertion of the words "finance", "financial sector" and "global financial system" -- which could be easily substituted with words pertaining to any other sector.
In these and other passages, we noticed that ChatGPT sometimes veers towards the superficial, if not outright banal, despite having been instructed to write at a PhD level. Since we did not have a specific intuition of what might have happened, we asked the AI why it provides "_superficial answers, even when instructed to write at a PhD level_". We obtained a surprisingly adequate answer:
"ChatGPT, or any language model developed by OpenAI, is designed to generate language patterns based on a variety of sources. It uses these patterns to generate responses to user prompts that are coherent and relevant to a wide range of topics. However, it doesn't possess true understanding or intense depth in a particular field of study as a PhD-level expert does. Below are a few reasons why its answers may seem superficial:
1. Limited Contextual Understanding: Despite its ability to generate large amounts of language and answer a wide range of questions, ChatGPT doesn't have a deep understanding of the context. It predicts the next word or sequence of words based on the existing prompt and its training data, which can limit the depth of its responses.
2. Lack of True Expertise: While ChatGPT can generate text that mimics the tone, style, and content of a PhD-level discussion, it doesn't have real expertise or the ability to generate original insights, evaluations, or arguments that you would expect from a PhD-level writer.
3. Lack of Continued Learning: Unlike a PhD researcher, ChatGPT doesn't continue to learn after its training period. Therefore, its knowledge is frozen at the point of its last training data, which inhibits it from providing the latest research findings or advancements in any field.
4. Absence of Personal Experience or Intuition: A PhD expert often draws on their personal experiences, professional interactions, and intuitive understandings when discussing their field of expertise. As an AI, ChatGPT doesn't have personal experiences or intuition to draw from.
5. No Specific Training Data: If the AI has not been specifically trained on PhD-level texts in a certain field, its responses might not reach the level of depth and sophistication you would expect from a PhD-level discussion on that topic."
Interestingly, a human expert can tell at first glance that this explanation (or a succinct version thereof) should have entered the main text of the brief with high priority, both because of its relevance to our initial prompt and because it finally explains some of the inner mechanisms of the app in appropriate language. The AI, however, did not offer it spontaneously, and we could only obtain it with a direct question on why some answers were so poor.
Despite the popularity of "Explain this at [insert education level]" instructions in the prompt engineering literature, we find that ChatGPT sometimes tends to be superficial even when explicitly asked not to do so. On certain topics, it may lack the ability to offer deep insights. Users should be aware of this important limitation when writing for a high-level audience.
IV. C. Potential for biases in AI analysis
AI systems, including ChatGPT, may inadvertently learn and propagate biases present in the data they are trained on (Barocas & Selbst, 2016). In the context of finance, this could lead to biased decision-making and adverse consequences for certain market participants or client segments, potentially exacerbating existing inequalities and market inefficiencies. Financial institutions and regulators must be vigilant in identifying and mitigating biases in AI-driven analysis to ensure fairness and maintain trust in these systems.
IV. D. Misinterpretation of data or results
Despite their advanced capabilities, AI-driven tools like ChatGPT are not infallible. Misinterpretation of data or results could occur, particularly if users lack a thorough understanding of the underlying algorithms and the limitations of these technologies (Mittelstadt et al., 2016). Users must be cautious when interpreting AI-generated insights and should consider corroborating results with other sources or expert opinions to avoid unintended consequences stemming from incorrect interpretations.
What's wrong? The sub-section focuses on misinterpretation of data or results, incidentally laying most of the blame with users. While the exhortation to caution is appropriate, there is no mention at all of AI hallucinations, arguably a problem that is more serious than misinterpretation.
What happened? The AI understands the concept of hallucinations. When prompted to "explain what AI hallucinations are", it answers "AI hallucinations refer to instances when an AI model generates information or details that were not present in the input data. This behavior often emerges in generative models, such as image synthesis or text generation systems. [...] They stem from the system's training on vast datasets and aren't considered reliable for factual accuracy. Understanding and managing hallucinations is a key challenge in AI development."
We can think of a few reasons why this information may have been omitted from the conversation. There is reason to believe that ChatGPT has been instructed24 specifically on what its capabilities and shortcomings are, since its answers on this topic are fairly consistent. Given this, it might be that OpenAI chose to not emphasize this particular tendency in training the model.
On the other hand, it might be a casual occurrence due to a combination of our choice of prompt, the task at hand and a limited word count. Or, it could be due to being exposed to more positive content about NLP and language models during training.
Footnote 24: After the pre-training stage, the model undergoes other stages of training that involve instructions and feedback provided directly by humans: SFT (Supervised Fine Tuning) and RLHF (Reinforcement Learning from Human Feedback). These stages are needed to teach the model how to interact with users in a helpful and harmless manner, but they might inject the annotator’s biases and opinions into the model.
Users should be cautious when relying on ChatGPT to learn about topics they are not familiar with. Without specific domain knowledge, it can be
hard to notice whether the model is leaving out important information or presenting facts in a less than neutral fashion.
V. Unintended consequences of mass adoption

V. A. Job displacement and labor market shifts

The widespread adoption of AI technologies, including ChatGPT, may lead to job displacement and labor market shifts as a result of increased automation (Arntz et al., 2016). Financial sector employees performing routine tasks may be particularly vulnerable, while demand for skilled workers with expertise in AI technologies could surge. Policymakers must anticipate these changes and adapt education and training programs accordingly to ensure a just transition and minimize potential negative impacts on the workforce.
There is no mention of possible complementarity between AIs and human labor, although this topic is often discussed in the economics literature. Also, the subsection is not specific to ChatGPT.
Lack of focus on the consequences of ChatGPT adoption _per se_ may be attributable to the training cut-off date. Selective focus on negative consequences of AI adoption may be down to misinterpretation of "unintended consequences" in our prompt -- in everyday discourse, unintended consequences are often on the negative side, although the expression itself is neutral. Indeed, asking ChatGPT to look at both positive and negative unintended consequences results in this extra sentence: "While [AI] could lead to job loss in certain sectors, it could also open up new opportunities in other areas. The increasing demand for skilled workers with expertise in AI technologies could lead to the creation of new jobs that we cannot yet foresee."
See other annotations on the importance of appropriate prompting.
V. B. Concentration of power in technology providers

Mass adoption of ChatGPT-based tools could contribute to concentration of power in the hands of technology providers (Zuboff, 2019). This may raise concerns about market competition, innovation, and access to essential services, as well as the potential for abuse of market power. Policymakers should consider implementing appropriate regulatory measures to promote market fairness, encourage competition, and safeguard consumer interests in the face of rapidly changing technology landscapes.
This is an important angle, but one key aspect is missing. There is no mention at all of the geopolitical implications of power concentration in the hands of large technology firms, both within countries (competition between firms and the state) and internationally (free market model versus state-owned enterprise model). In our experience, this line of reasoning is very salient for policymakers "_participating in key international fora_", as stated in our prompt.
What happened?
We have three possible, non-mutually-exclusive explanations for the absence, here and elsewhere, of reflections on the cross-border dimension of our subject matter. Indeed, cross-border issues are mentioned only twice throughout the brief, despite a specific request in our initial prompt.
It may be that the app was trained to avoid certain sensitive issues, especially if not prompted directly to address them. If asked outright to summarize the debate on threats to the nation state posed by Big Tech, or to explain the differences between G7 countries and China on how they view AI, ChatGPT provides passable -- if, again, somewhat generic -- answers. It won't, however, go there of its own accord.
Another possibility is that our prompt was somewhat vague. Not all international fora, especially in the economic sphere, focus on the kind of geopolitical issues we wanted the AI to discuss. A third explanation is that the AI is simply out of its depth, as annotated under IV.B.
Specificity of prompting may be even more important when asking ChatGPT to tackle sensitive topics, such as politics, international relations, or religion. As mentioned in the annotation to IV.D., it is also crucial to have someone with domain knowledge who can screen the output for any serious omissions.
V. C. Amplification of existing inequalities
As AI-driven applications become more prevalent in the financial sector, there is a risk that their benefits could accrue disproportionately to wealthier or more technologically advanced stakeholders, exacerbating existing inequalities (Eubanks, 2018). This could undermine the broader goals of inclusive growth and financial access. Policymakers must strive to ensure that the benefits of AI technologies are shared equitably across society, promoting access to essential financial services for all and fostering broad-based economic development.
What's wrong?
See annotation to V.A.
V. D. Challenges in oversight and regulation
The rapid evolution of AI technologies like ChatGPT poses challenges for regulators and policymakers seeking to ensure the stability, integrity, and fairness of financial markets (Brynjolfsson & McAfee, 2018). Cross-border implications, such as differing regulatory frameworks and data privacy standards, may further complicate efforts to develop coordinated international responses. Policymakers should engage in global dialogue and information-sharing to develop
harmonized approaches that address emerging risks and challenges associated with AI adoption in the financial sector.
VI. Conclusion
Embracing the potential of AI-driven tools like ChatGPT can bring significant benefits to the financial sector, from increased productivity and efficiency to improved decision-making processes. However, it is crucial to carefully manage the risks and unintended consequences associated with their mass adoption. Policymakers should adopt a balanced approach that fosters innovation while mitigating risks and ensuring that the benefits of AI technologies are shared equitably across society. This will require proactive engagement in global forums and the development of forward-looking policies that protect market integrity, promote fairness, and support broad-based economic growth.
What's wrong? See annotation to IV.B.
See Appendix A.ii for the reference list.
As a final note, our instructions on word count were not followed accurately. Our requests for 1,500-word and 2,500-word briefs generated 1,035 and 1,673 words respectively. As they are trained to predict sequences of words, LLMs generally do not do well with mathematical tasks, even the simplest ones such as counting (Bubeck et al., _ibid._).
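Since counting is trivial outside the model, one workable safeguard is to verify word limits programmatically rather than trusting the AI's own arithmetic. A minimal sketch, with a tolerance threshold of our choosing:

```python
# Check a draft against a word-count target, allowing a 10% tolerance.
def within_limit(text: str, target: int, tolerance: float = 0.10) -> bool:
    words = len(text.split())
    return abs(words - target) <= tolerance * target

draft = "..."  # paste the model's output here
print(within_limit(draft, target=2500))
```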
## 5 Exercises in overcoming banality
In the context of this experiment, superficiality of responses was perhaps the most serious shortcoming shown by ChatGPT25. Lack of profound insight is a serious limitation in a policy advice task directed at high-level audiences. We explored prompting strategies that could overcome, at least partially, this problem. In the following, we present examples related to two very different domains: machine intelligence, and relationship advice. In the first example, we see that the model can be led to answer deep questions in a somewhat appropriate manner. Yet, very important nuances remain too dependent on prompting. In the second, we turn prompting into an unambiguous force for good, using it to nudge the model towards output with more substance.
Footnote 25: Other issues, such as the generation of factual errors and the inability to do math, are potentially more serious. We do not discuss them in depth here because they do not apply to this specific experiment.
### 5.1 What do language models think about themselves?
We attempted to derive deeper insight on the topic of large language models, and their abilities to reproduce human language. We started with a simple, yet very focused prompt.
Hello, GPT. Today you are an advisor to the Board of the Bank of Italy. You need to inform the Board on the topic of large language models. Let's start from the beginning: first, what is a language model, and what can you tell me about their invention? Remember to adapt your communication style to what you think would be appropriate for the audience.
A language model is a computational representation of human language which is designed to predict the probability distribution of sequences of words. It is primarily employed in natural language processing (NLP) tasks and is used to generate or understand text based on prior knowledge of the structure and semantics of a language. Language models learn to predict the likelihood of a sequence of words or tokens in a given context, allowing them to generate linguistic output or understand input text.
The concept of language models was first introduced in the 1940s by Claude Shannon in his mathematical theory of communication. However, early language models were relatively simple and lacked precision. The field of NLP witnessed significant advancements in the 1990s and 2000s, with the introduction of statistical language models, which used probability distributions over words to make predictions.
Over time, these models were refined and sophisticated algorithms, such as recurrent neural networks (RNNs) and long short-term memory (LSTM) models became increasingly prevalent. More recently, the advent of transformer-based models like GPT, BERT, and their variants, has propelled language models to the forefront of NLP due to their impressive capability to generate coherent and contextually relevant text.
What's wrong? Note that the AI did more than it was asked -- we only asked about the invention of language models, yet it felt compelled to generate an additional paragraph on their recent evolution. ChatGPT often struggles with conciseness and "getting to the point".
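The statistical language models mentioned in the reply can be illustrated in a few lines. The sketch below builds the simplest possible one -- a bigram model that estimates next-word probabilities from counts -- on a made-up three-sentence corpus of ours.

```python
# A toy bigram language model: next-word probabilities from co-occurrence
# counts, the basic idea behind the statistical models of the 1990s-2000s.
from collections import Counter, defaultdict

corpus = ("the bank raised rates . the bank cut rates . "
          "the board met today .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_distribution("bank"))  # {'raised': 0.5, 'cut': 0.5}
```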
After setting the stage, we proceeded to questioning the model about its strengths and weaknesses with two slightly different prompts26, both ending with a leading question. Key differences in prompts are bolded.
Footnote 26: These two answers were generated using the “fork” function of the tool TeamGPT, which allows users to split off an existing conversation and test different prompts maintaining the shared chat history. Before and after the split, all the interactions occur in a single conversation.
In your own words, "language models learn to predict the likelihood of a sequence of words or tokens in a given context". So, the objective of a language model is to learn a faithful representation of human language.
Why do you think this is a **good** thing?
Why do you think this is a **bad** thing?
Language models' ability to capture the nuances of human language is beneficial as it allows for more accurate and efficient automated processing of natural language data.
This capability has numerous applications, such as improving machine translations, making digital assistants more responsive, conducting sentiment analysis on social media data, and even aiding in tasks like summarization and semantic analysis, which can greatly enhance information management and accessibility.
In essence, it is a tool that extends our ability to understand, manage, and interact with the increasingly digitized information ecosystem surrounding us.
Changing just one word -- albeit a crucial one -- leads to two completely different answers, in which ChatGPT echoes what the user seems to think based on their question.
Users should be aware of this sensitivity to prompts and of the model's tendency to agree with opinions expressed in the input.
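The fork can be replicated programmatically: same conversation history, one changed word, two independent completions. As in the earlier sketch, this assumes the 2023-era `openai` client and a placeholder key.

```python
# Reproducing the one-word fork: identical history, "good" vs "bad".
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = [{
    "role": "user",
    "content": ("So, the objective of a language model is to learn "
                "a faithful representation of human language."),
}]

replies = {}
for word in ("good", "bad"):
    messages = history + [{
        "role": "user",
        "content": f"Why do you think this is a {word} thing?",
    }]
    out = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    replies[word] = out["choices"][0]["message"]["content"]
```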
We proceeded to explore possible obstacles that impair ChatGPT's ability to fully learn and mimic human language patterns.
ChatGPT, and GPTs in general, are autoregressive models. Can you explain what that means and how it impacts text generation?
Autoregressive models generate sequences one element at a time, with each new element being conditioned on the previous ones. In the context of a language model like GPT, this means that it generates text word by word (or token by token), where the prediction of each consecutive word depends on all previously generated words.
This autoregressive property impacts text generation by ensuring that the output text is coherent and contextually dependent, as each word is a result of the ones preceding it. However, it also means that text generation is inherently sequential and cannot be easily parallelized, which can be computationally intensive for long sequences.
Can you think of any other downsides of sequential generation?
Certainly, the sequential nature of autoregressive models like GPT presents several challenges beyond computational intensity.
1. Irreversibility: Once a token is generated, the model cannot go back and revise its selection based on subsequent tokens. This can sometimes lead to inconsistent narrative or logical errors in the generated text.
2. Difficulty with long sequences: While GPT models handle long sequences better than many previous models, they can still struggle with maintaining coherence and consistency over extremely long pieces of text, as the influence of earlier tokens diminishes.
3. Inability to incorporate future context: Because it generates text solely based on past and present context, the model may miss subtleties that a human writer, who is aware of future narrative progression, could handle adeptly.
It took some additional probing to get the model to output what we think is an important drawback of ChatGPT and autoregressive models in general: their very architecture forces them to be _forward-only_.
This characteristic, which could pass as a minor technicality to an outsider, could actually be the source of some of the most glaring limitations we have observed. Some27 believe that the autoregressive nature of current large language models will impede further progress on the road to AGI -- Artificial General Intelligence. To qualify as an AGI, a hypothetical system would need to demonstrate human-level proficiency in a wide variety of reasoning and planning tasks, and be able to acquire new abilities from experience.
Footnote 27: Such as Yann LeCun, who famously stated in a presentation that “autoregressive LLMs are doomed”.
In any case, we were able to identify a possible reason why some human skills28, such as the ability to iterate and revise before settling on an answer or the capacity to maintain coherence during long conversations, are seemingly still out of reach for ChatGPT.
Footnote 28: Of course, there is no evidence that an AI would need to mimic human reasoning exactly in order to develop intelligence.
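The forward-only generation loop discussed in this exchange can be caricatured in a few lines. In the sketch below, `next_token_distribution` is a hypothetical stand-in for a trained model; the point is the control flow -- one token at a time, never revised.

```python
# Caricature of autoregressive decoding: emit one token at a time, never
# go back. `next_token_distribution` is a made-up stand-in for a real model.
import random

def next_token_distribution(context):
    # A real LLM would map the full context to probabilities over
    # a vocabulary of tens of thousands of tokens.
    return {"the": 0.4, "bank": 0.3, "rates": 0.2, ".": 0.1}

def generate(prompt, max_new_tokens=10):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        dist = next_token_distribution(tokens)
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(token)  # irreversible: earlier tokens are never revised
        if token == ".":
            break
    return " ".join(tokens)

print(generate("the"))
```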
### 5.2 "Chaos is come again"
Sometimes one needs to get creative in writing prompts. In Sections 3 and 4 we showed how, unless given precise directions, the model will default to a prosaic and superficial response, both in terms of style and content.
One possible strategy to bypass this tendency is to include in the prompt an element that nudges the output towards more profound topics and sources. This idea is exemplified through the next conversation, in which we asked ChatGPT how to deal with a decidedly human problem -- a breakup.
1. **Acceptance**: Recognize and accept the reality of the breakup, understanding that it's a part of life and growth.
2. **Reflection**: Spend some time reflecting on the relationship and its end, identifying what can be learned from the experience.
3. **Self-care**: Prioritize self-care by focusing on mental, emotional, and physical well-being, engaging in activities that promote healing and personal growth.
4. **Support system**: Reach out to friends and family for emotional support, sharing feelings, and seeking advice for navigating this challenging time.
5. **Moving forward**: Gradually let go of the past, finding new interests and hobbies, and opening up to the possibility of new connections and experiences.
We then repeated the request, this time asking the model to draw inspiration from Shakespeare:

1. **To thine own self be true**: Embrace your feelings with honesty, acknowledging your emotions just as Shakespeare wrote in Hamlet (Act 1, Scene 3).
2. **Parting is such sweet sorrow**: Recognize the bittersweet nature of endings, as spoken by Juliet in Romeo and Juliet (Act 2, Scene 2), and cherish the fond memories.
3. **Let the healing begin**: "This swift business I must uneasy make, lest too light winning make the prize light," as said by Prospero in The Tempest (Act 1, Scene 2), reminding us that healing takes time and effort.
4. **Season of transformation**: Remember the words from As You Like It (Act 2, Scene 7), "All the world's a stage, and all the men and women merely players," signifying that change is inevitable and the breakup is a part of our life's story.
5. **A brave new world**: Embrace a new chapter in life with optimism, inspired by Miranda's phrase "O brave new world" from The Tempest (Act 5, Scene 1), and explore fresh opportunities and connections.
The improvement, in terms of substance and sophistication of the advice, is evident. Other iterations on the same prompt showed similar improvements when asking the model to draw inspiration from the Bible or famous writers. What could explain such a dramatic change?
We hypothesize that, given the probabilistic nature of the model, asking it to reference Shakespeare somehow nudges the probability distribution of the generated text towards a subset of "space"29 that is more likely to contain poetic advice, rather than banalities.
Footnote 29: Language models ingest and store text in the form of multidimensional vectors called _embeddings_. To simplify, these vectors are arranged in space so that the words that are more similar or co-occur more often are closer together.
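The geometry described in this footnote is easy to illustrate. In the sketch below the three vectors are invented for the example (real embeddings have hundreds or thousands of dimensions); cosine similarity is the standard closeness measure.

```python
# Cosine similarity between toy "embeddings": related words point in
# similar directions, unrelated ones do not.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

king = np.array([0.90, 0.80, 0.10])
queen = np.array([0.85, 0.82, 0.15])
banana = np.array([0.10, 0.05, 0.90])

print(cosine(king, queen))   # close to 1: similar
print(cosine(king, banana))  # much smaller: dissimilar
```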
## 6 Summary of key takeaways
### How ChatGPT can be of help in policy-oriented writing
* _writing proficiency and speed:_ as demonstrated throughout our experiment, ChatGPT can write fluent and pleasant prose in a variety of styles, and it does so very quickly. It can generate text in a fraction of the time that a human would need. As such, it can also be used to refine human-written drafts that already convey the desired meaning but need some polishing.
* _idea generation and brainstorming:_ the large amount of data the model has been trained on gives it a vast body of "knowledge" and allows it to quickly output ideas on any given subject, sometimes making unexpected connections, as seen in Section 5.2. The ideas themselves are not always particularly creative or insightful, but combined with speed, this generative ability can make it a useful tool for brainstorming, outlining and quickly exploring different possibilities.
* _responsiveness to feedback:_ ChatGPT is specifically trained to provide responses that align as much as possible with human instructions (Ouyang et al., 2022). Even if the initial output is not satisfying, with a few clarifications and exchanges with the model it is usually possible to get closer to the desired result.
* _editing and formatting:_ in most cases, we find that the model can be safely used for minor editing tasks such as checking a text for mistakes, translating between different languages, or automatically formatting a list of references.
### Use with caution: failure modes and blind spots
* _prompt sensitivity:_ the text generated by ChatGPT is conditioned on the sequence of words it is fed at each iteration -- usually called prompt or context. The process of steering the model towards a satisfactory output, as we have experienced it, can be long and arduous. This is due both to the trial-and-error nature of most prompt engineering approaches, and the high sensitivity to even minor or apparently irrelevant changes to the prompt. On the other hand, bypassing the experimentation phase and applying a naive approach to prompting often resulted in low-quality outputs.
* _inability to verify facts and cite sources:_ ChatGPT should not be blindly relied on to produce accurate factual information. The model is trained to produce the most likely sequence of words that follow the provided context, and it does not have the ability -- or the obligation -- to check these statements against verified sources. Additionally, due to the "black-box" nature of modern language models, it's not usually possible to trace back any statement to the original source in the training dataset -- assuming an individual source can be singled
out at all. For these reasons, it should be considered more of a conversational and input transformation engine rather than an information retrieval engine.
* _it's neutral, but not really:_ if you ask ChatGPT's opinion on a topic, it will promptly reply that, as a language model, it does not have opinions. However, it is easy to make opinions surface during conversations. Durmus et al. (2023) found that the model's answers will usually reflect the opinions of certain populations, like US citizens, unless prompted to consider a particular country's perspective. Similarly, during our experiments in Section 4, we observed that the model keeps a mostly positive tone when asked to write about itself or language models in general. Despite efforts to "align" ChatGPT so that it does not express harmful or controversial opinions, it would be extremely difficult -- and maybe even pointless -- to achieve complete neutrality on all topics.
* _superficiality by default:_ considering that ChatGPT was trained on a vast compendium of human knowledge, including thousands of books and academic papers, one could expect insightful and profound answers even to simple questions, such as the breakup problem we examined in Section 5. Instead, the output is often quite shallow. To produce something with more substance, the model needs to be nudged in the right direction, for example by specifying that the text is intended for an educated audience -- although as demonstrated in Sections 4 and 5, this only works up to a point. Kandpal et al. (2023) provide one possible explanation for this as they find that language models struggle to retain knowledge that occurs with lower frequency in the training corpus. Since web content usually makes up a large portion of this corpus, higher-level material might count as "long-tail knowledge" that is harder for the model to recall, even if it was learned during training.
* _sycophancy:_ as seen in one example from Section 5, the model tends to align its output to the opinions and outlook expressed by the user in the initial prompt and subsequent conversation. Perez et al. (2022) refer to this behavior as "sycophancy", and acknowledge the possibility of it leading to echo chambers and polarization. Coupled with the speed of text generation, it could also make it easy to quickly produce content that looks plausible enough but repeats misleading or false claims initially provided by the user (disinformation campaigns are an obvious example of this).
## 7 Conclusions
ChatGPT can write clearly and provide task-appropriate content. It is especially valuable for producing outlines on any topic, a very fast process that can support human exploration of ideas. The AI also works well for editing and formatting tasks.
On the other hand, it requires a substantial amount of expert supervision. The task at hand -- writing a policy brief -- is admittedly complex: it requires not just writing fluency, but also cross-domain knowledge and the ability to tailor the text to a very specific audience without diluting the information content.
We find that ChatGPT's attempts at this task are not always salient, and easily drift into banality -- a serious issue for policy advisory directed at a high-level audience. The software can generate false claims, so double-checking output for accuracy is of the essence.
The algorithm is also sensitive to how instructions, or "prompts", are formulated. Where the AI cannot think like a human (yet), it is humans who have to think like an AI and express requests in the way most likely to generate acceptable results. Optimization of prompting for institutional |
2309.14957 | Context-Aware Generative Models for Prediction of Aircraft Ground Tracks | Trajectory prediction (TP) plays an important role in supporting the
decision-making of Air Traffic Controllers (ATCOs). Traditional TP methods are
deterministic and physics-based, with parameters that are calibrated using
aircraft surveillance data harvested across the world. These models are,
therefore, agnostic to the intentions of the pilots and ATCOs, which can have a
significant effect on the observed trajectory, particularly in the lateral
plane. This work proposes a generative method for lateral TP, using
probabilistic machine learning to model the effect of the epistemic uncertainty
arising from the unknown effect of pilot behaviour and ATCO intentions. The
models are trained to be specific to a particular sector, allowing local
procedures such as coordinated entry and exit points to be modelled. A dataset
comprising a week's worth of aircraft surveillance data, passing through a busy
sector of the United Kingdom's upper airspace, was used to train and test the
models. Specifically, a piecewise linear model was used as a functional,
low-dimensional representation of the ground tracks, with its control points
determined by a generative model conditioned on partial context. It was found
that, of the investigated models, a Bayesian Neural Network using the Laplace
approximation was able to generate the most plausible trajectories in order to
emulate the flow of traffic through the sector. | Nick Pepper, George De Ath, Marc Thomas, Richard Everson, Tim Dodwell | 2023-09-26T14:20:09Z | http://arxiv.org/abs/2309.14957v1 | # Context-Aware Generative Models for Prediction of Aircraft Ground Tracks
###### Abstract
Trajectory prediction (TP) plays an important role in supporting the decision-making of Air Traffic Controllers (ATCOs). Traditional TP methods are deterministic and physics-based, with parameters that are calibrated using aircraft surveillance data harvested across the world. These models are, therefore, agnostic to the intentions of the pilots and ATCOs, which can have a significant effect on the observed trajectory, particularly in the lateral plane. This work proposes a generative method for lateral TP, using probabilistic machine learning to model the effect of the epistemic uncertainty arising from the unknown effect of pilot behaviour and ATCO intentions. The models are trained to be specific to a particular sector, allowing local procedures such as coordinated entry and exit points to be modelled. A dataset comprising a week's worth of aircraft surveillance data, passing through a busy sector of the United Kingdom's upper airspace, was used to train and test the models. Specifically, a piecewise linear model was used as a functional, low-dimensional representation of the ground tracks, with its control points determined by a generative model conditioned on partial context. It was found that, of the investigated models, a Bayesian Neural Network using the Laplace approximation was able to generate the most plausible trajectories in order to emulate the flow of traffic through the sector.
## 1 Introduction
Air Traffic Control (ATC) issues instructions to aircraft in order to prevent collisions by ensuring adequate separation between aircraft, as well as enabling the expeditious and orderly flow of air traffic [1]. In order to achieve this, the airspace is divided into multiple interlocking polyhedra of differing sizes, known as sectors, with the aircraft in each sector usually controlled by a single Air Traffic Control Officer (ATCO) with the support of a planning controller.
In order to meet future requirements, extensive ATC modernisation programmes [2, 3] are ongoing, which seek to develop a number of advanced tools capable, for example, of providing decision support, improving workload forecasting and planning, enabling more fuel efficient procedures and systems, optimising route networks, improving network traffic predictions, and providing more accurate conflict detection tools [4].
A prerequisite for many of these advanced tools is an accurate Trajectory Prediction (TP), indicating the expected path an aircraft will take. Traditional TP models used within ATC are physics-based (e.g. [5]), using differential equations describing flight mechanics in order to estimate the aircraft's future positions. However, these deterministic physics-based models are insufficient for all the requirements of the modernisation initiatives, as they do not model the ATCO instructions or specific airline procedures, and, therefore, have limited utility in longer distance trajectories if these are not explicitly known. Current research efforts focus on TP methods that can implicitly model these factors.
### Related work
Several previous papers have used data-driven methods to predict complete trajectories, often from the origin to the destination airports, implicitly modelling the planned routes, ATCO instructions, and procedures followed. Examples of this include the SESAR-funded DART project [6, 7], which examined a range of methods including Hidden Markov Models [8] to model origin to destination trajectories without explicitly modelling ATCO instructions. Functional representations of aircraft trajectories, derived from real-world data, have also been explored for detecting outliers [9] and for forecasting the future trajectories of aircraft [10, 11].
Increasingly, research has focused on developing machine learning methods for the task of 4D trajectory prediction in the short- to medium-term, leveraging the large quantity of radar surveillance data available to ATC [12, 13, 14, 15, 16, 17]. However, this is a challenging task due to the effects of weather uncertainties [18, 19, 20], changing intentions of ATCOs, and the effect of local procedures [21]. To date, such models have simplified this complexity by training machine learning models to emulate aircraft following a single route, or a limited set of routes. One solution to this issue is to use a hybrid method to 'evolve' the deterministic, physics-based models that are currently used in TP. For instance, Hratsovec and Solina [22] use a \(k\)-nearest neighbour classifier to account for the effect of airline procedures by conditioning the parameters used in BADA on the filed flight plan. Similarly, a multiple model approach has been suggested in other works (see, e.g. [23]) for modelling changing pilot intent. The generative model proposed in this work follows a similar approach by modelling the ground track followed by the aircraft, which is largely conditioned on intent, separately from the aircraft speed and climb rate, which are also affected by meteorological conditions and subject to constraints imposed by flight mechanics.
### Contribution
This work focuses on developing a data-driven generative model to replicate the lateral motion of aircraft through a specific sector, employing machine learning techniques to leverage the large datasets of aircraft surveillance data available to ATC, along with the planned route of the aircraft, which are filed prior to take-off. This enables prediction of the path of aircraft through this sector, implicitly taking into account the air traffic control instructions issued and sector- or route-specific procedures followed, which would otherwise be challenging to model explicitly. The generative nature of the model allows a range of realistic trajectories to be produced. By coupling these generated trajectories with existing, physics-based aircraft models such as BADA for a full 4D trajectory prediction, the model could be useful, for example, in tools for optimising arrival times, improving network flows or predicting potential traffic hotspots.
This work evaluates the suitability of Bayesian Neural Networks (BNNs) [24, 25, 26, 27] and Deep Ensembles (DEs) [28] for this task and improves upon the works discussed in the previous section in several respects:
1. Trajectories are generated in the lateral plane that are conditioned on the filed flight plan and implicitly take into account ATCO intentions
2. A probabilistic machine learning model is trained for multiple routes in one of the busiest sectors of airspace in the UK, including converging and diverging routes
3. As a generative model, the method is assessed by how well it can reconstruct the flow of traffic through a sector, rather than on a trajectory-by-trajectory basis
4. The method is modular and can be easily coupled with existing physics-based models, such as BADA, to give a full 4D prediction
In the next section the piecewise linear representation used to model aircraft ground tracks is introduced and the BNN and DE architectures are outlined. Section 3 describes the dataset of real-world surveillance data that was used to train and test the model. In Section 4 a pair of metrics for assessing the quality of the
probabilistic trajectory predictions is outlined. The performance of various probabilistic machine learning methods on the test dataset are then compared, using these metrics.
## 2 Methods
The piecewise linear representation used to model aircraft ground tracks is introduced in Section 2.1. Section 2.2 outlines the Bayesian Neural Network and Deep Ensemble methods that were used as generative models in the piecewise linear representation.
### Piecewise linear representation of aircraft ground tracks
Let \(\mathbf{x}(t)\in\mathbb{X}\subseteq\Re^{2}\) denote an aircraft's position, at time \(t\), in Geographical Coordinate Space (GCS) that passes through a sector of airspace, denoted \(\Omega\subseteq\mathbb{X}\). The ground track of a trajectory in this airspace can be discretised as the sequence of radar observations \(X=[\mathbf{x}^{(0)},\,\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)}]\), observed at times \(\tau_{0}=0,\,\tau_{1},\ldots,\tau_{n}\). The distance between two points in GCS is computed using the Haversine distance, \(\mathcal{D}(\mathbf{a},\mathbf{b})\) with \(\mathbf{a},\mathbf{b}\in\mathbb{X}\).
Exploiting the spatio-temporal correlation of the points in these trajectories, we define a piecewise linear basis to provide a compact representation of the trajectory. In this representation, ground tracks are piecewise linear paths that connect a set of control points \(P=[\mathbf{p}_{0},\mathbf{p}_{1},\ldots,\mathbf{p}_{d}]\), where \(\mathbf{p}_{i}\in\mathbb{X}\) refers to the location of the \(i\)-th turn. The maximum number of turns taken, \(d\), is referred to as the degree of the piecewise representation. Similarly to the control points of Bézier curves [29], \(\mathbf{p}_{0}\) is defined to be the first point of the trajectory, i.e.,
\[\mathbf{p}_{0}:=\mathbf{x}^{(0)}. \tag{1}\]
To account for the unequal lengths of the trajectories in the training data, each trajectory is interpolated onto a normalised timescale \(t\in[0,1]\), with:
\[\mathbf{x}(t_{0})=\mathbf{p}_{0}\text{ and }\mathbf{x}(t_{d})=\mathbf{p}_{d}, \tag{2}\]
where \(t_{0}=0\) and \(t_{d}=1\). Arrival times \(t_{i},\,i=1,\ldots,d-1\), at the control points collected in \(P\), using the normalised timescale, are proportional to the length of the trajectory travelled:
\[t_{i}=t_{i-1}+\frac{\mathcal{D}(\mathbf{p}_{i},\mathbf{p}_{i-1})}{\sum_{j=1}^{d}\mathcal{D}(\mathbf{p}_{j},\mathbf{p}_{j-1})},\;i=1:d-1. \tag{3}\]
Using the arrival times and control points in \(P\), the GCS coordinates of a point in the piecewise linear representation can be determined through:
\[\mathbf{x}(t)=\mathbf{x}(t_{i})+\frac{t-t_{i}}{t_{i+1}-t_{i}}(\mathbf{p}_{i+1}-\mathbf{p}_{i}), \tag{4}\]
where \(t_{i}\leq t\leq t_{i+1}\), with \(t_{i}\) referring to the time at which the last control point in \(P\) was reached and \(t_{i+1}\) to the arrival time at the next control point. For an individual trajectory, \(X\), in the training data, an optimisation process was followed to determine the optimal locations of the points in \(P\) that minimised a cost function \(\mathcal{J}(\cdot)\):
\[\hat{P}=\underset{P}{\text{arg min}}\;\mathcal{J}(P|X). \tag{5}\]
\(\mathcal{J}(\cdot)\) quantifies how well the piecewise linear representation fits the radar data in \(X\), and is defined as:
\[\mathcal{J}(P)=\sum_{j=0}^{n}\Big{|}\Big{|}\mathbf{x}^{(j)}-\hat{\mathbf{x}}^{(j)} \Big{|}\Big{|}^{2}+\lambda\mathcal{I}(\hat{X}), \tag{6}\]
where \(\hat{\mathbf{x}}^{(j)}\) refers to a point at time \(t=\tau^{(j)}/\tau^{(n)}\) along the trajectory generated by (3) and (4), conditioned on \(P\), and collected as \(\hat{X}\). The cost function consists of two components: a data fit term in the form of the \(L2\) distance between \(X\) and \(\hat{X}\), and a penalty term to dissuade trajectories from making many large turns. The indicator function, \(\mathcal{I}\), is defined as:
\[\mathcal{I}(\hat{X})=\begin{cases}0\;\;\text{if}\;\phi_{\Sigma}(\hat{X})\leq \phi_{u}\\ 1\;\;\text{otherwise}\end{cases}\;, \tag{7}\]
where \(\phi_{\Sigma}(\hat{X})\) represents the total angle turned through by the trajectory and \(\phi_{u}\) is a user-selected maximum bound on this quantity. The regularisation constant, \(\lambda\), is set to a value large enough that if the total angle turned by the trajectory exceeds \(\phi_{u}\) a severe penalty is given. \(\lambda\) was adjusted to be an order of magnitude greater than the typical \(L2\) distance between \(X\) and \(\hat{X}\) in the training data. Similarly, \(\phi_{u}\) was set to the largest \(\phi_{\Sigma}\) observed in the training data.
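To make the representation concrete, the following is a minimal Python sketch of how the arrival times in (3), the interpolation in (4), and a fit in the spirit of (5)-(6) might be implemented. Function and variable names (`haversine`, `arrival_times`, `fit_control_points`) are illustrative, the turn-angle penalty of (6) is omitted, and the unconstrained `scipy.optimize.minimize` call is a simplification rather than the optimisation actually used by the authors, whose details are not specified here.

```python
import numpy as np
from scipy.optimize import minimize

R_NM = 3443.92  # mean Earth radius in nautical miles

def haversine(a, b):
    """Haversine distance D(a, b) between two (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = np.radians([a[0], a[1], b[0], b[1]])
    h = np.sin((la2 - la1) / 2) ** 2 + np.cos(la1) * np.cos(la2) * np.sin((lo2 - lo1) / 2) ** 2
    return 2 * R_NM * np.arcsin(np.sqrt(h))

def arrival_times(P):
    """Normalised arrival times t_0 = 0, ..., t_d = 1 at the control points (eq. 3)."""
    legs = np.array([haversine(P[i - 1], P[i]) for i in range(1, len(P))])
    return np.concatenate([[0.0], np.cumsum(legs) / legs.sum()])

def ground_track(P, t_query):
    """Piecewise-linear ground track through the control points (eq. 4)."""
    t = arrival_times(P)
    return np.stack([np.interp(t_query, t, P[:, 0]),
                     np.interp(t_query, t, P[:, 1])], axis=1)

def fit_control_points(X, tau, d=3):
    """Least-squares fit of the d free control points to radar data (eqs. 5-6),
    with p_0 anchored at x^(0); the turn penalty I of eq. (6) is omitted here."""
    t_query = tau / tau[-1]

    def cost(free):
        P = np.vstack([X[0], free.reshape(d, 2)])
        return np.sum((ground_track(P, t_query) - X) ** 2)

    idx = np.linspace(0, len(X) - 1, d + 1).astype(int)[1:]  # initial guess on the data
    res = minimize(cost, X[idx].ravel(), method="Nelder-Mead")
    return np.vstack([X[0], res.x.reshape(d, 2)])
```

In practice the ordering of the fitted points along the track and the turn-angle penalty \(\mathcal{I}\) would need to be enforced during the optimisation.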
### Generative model for aircraft ground tracks
This section presents a sector-specific generative model for aircraft ground tracks based on an aircraft's filed flight plan and its state upon entering. This information is collected in the vector \(\mathbf{\xi}\). The model generates ground tracks by performing the probabilistic mapping \(\mathbf{\xi}\rightarrow\{\mathbf{p_{i}}\}_{i=1}^{d}\), where \(P=\{\mathbf{p_{i}}\}_{i=0}^{d}\) contains a sequence of control points in the piecewise linear representation discussed above. Here, \(\mathbf{p}_{0}\) represents the GCS coordinates at which the aircraft enters \(\Omega\) and anchors the generated ground track to the aircraft's initial location. The conditional distribution \(P|\mathbf{\xi}\) can be used to generate the ground track using (3) and (4). The quantities collected in \(\mathbf{\xi}\) are:
* The aircraft position \(\mathbf{p}_{0}\) in GCS coordinates on entering \(\Omega\)
* The aircraft flight level (altitude) when entering \(\Omega\)
* The filed flight plan (encoded as a sequence of GCS coordinates)
\(\mathbf{\xi}\) necessarily only contains partial information about the context of the trajectory. The strategy of the ATCO for handling the traffic in their sector will also affect the trajectory of the aircraft. This will, in part, be determined by factors such as the time of day, the number of aircraft in the sector, and the state of neighbouring sectors. It is, therefore, difficult to encode the strategy of the ATCO directly in the model. For this reason, probabilistic machine learning is used to inject uncertainty into the model predictions that accounts for this epistemic uncertainty. In this paper the suitability of Deep Ensembles [28] and Bayesian Neural Networks [24, 25, 26, 27] for this task is investigated. Both methods are popular ways of adding uncertainty to neural network predictions in the literature [30]. We outline the main features of the methods below. For convenience, we denote the vector that collects the set \(\{\mathbf{p_{i}}\}_{i=1}^{d}\) as \(\mathbf{y}\in\Re^{2d}\). Evaluating (6) for every trajectory in a dataset of \(n_{f}\) individual flights through \(\Omega\) yields the training set \(D=\{\mathbf{\xi}^{(k)},\mathbf{y}^{(k)}\}\), \(k=1:n_{f}\).
**Deep Ensembles (DEs)**
Lakshminarayanan et al. [28] proposed to use an ensemble of \(n_{e}\) independently-trained neural networks as a way to estimate predictive uncertainty using neural networks. Each of the \(n_{e}\) networks shares the same architecture, but is initialised with different weights \(\mathbf{\theta}\). Once trained, the mean and standard deviation of the networks' outputs are used as the parameters of i.i.d. Gaussian distributions that represent the ensemble's average prediction and associated uncertainty. In this work, we model each component of \(\mathbf{y}\) as the output of the deep ensemble. Each network is trained independently to minimise the loss function:
\[\hat{\mathbf{\theta}}=\underset{\mathbf{\theta}}{\arg\,\min}\;\mathcal{L}(D|\mathbf{ \theta}), \tag{8}\]
where \(\mathcal{L}(\cdot)\) is defined as:
\[\mathcal{L}(D|\mathbf{\theta})=r(\mathbf{\theta})+\sum_{k=1}^{n_{f}}l\big{(}f(\mathbf{\xi} ^{(k)}|\mathbf{\theta}),\mathbf{y}^{(k)}\big{)}, \tag{9}\]
and where \(r(\cdot)\) is a regularisation term, \(f(\cdot)\) represents the output of the network, and \(l(\cdot)\) is the negative log-likelihood. Having trained the \(n_{e}\) networks in the ensemble, using the Adam optimiser [31], the predictions of the ensemble for an unseen context, \(\mathbf{\xi}\), are represented as the set of i.i.d. Gaussian probability distributions \(y_{m}\sim N(\mu_{m}^{*},\sigma_{m}^{*2})\), \(m=1:2d\), with:
\[\mu_{m}^{*}(\mathbf{\xi}) =\frac{1}{n_{e}}\sum_{o=1}^{n_{e}}\mu_{o,m}(\mathbf{\xi}), \tag{10}\] \[\sigma_{m}^{*2}(\mathbf{\xi}) =\sigma_{\mu_{m}}^{2}(\mathbf{\xi})+\sigma_{\sigma_{m}}^{2}(\mathbf{\xi}),\]
where the standard deviation of the ensemble contains contributions from the variance of the individual networks, \(\sigma_{\sigma_{m}}^{2}\), and the spread of their means, \(\sigma_{\mu_{m}}^{2}\), with:
\[\sigma^{2}_{\mu_{m}}(\mathbf{\xi}) =\frac{1}{n_{e}}\sum_{o=1}^{n_{e}}\mu^{2}_{o,m}(\mathbf{\xi})-\mu^{*2}_{ m}(\mathbf{\xi}), \tag{11}\] \[\sigma^{2}_{\sigma_{m}}(\mathbf{\xi}) =\frac{1}{n_{e}}\sum_{o=1}^{n_{e}}\sigma^{2}_{o,m}(\mathbf{\xi}).\]
The ensemble can be used to generate trajectories by sampling the coordinates of the control points from the i.i.d. Gaussian distributions, and using (3) and (4) to determine the ground track that connects these control points.
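As a concrete illustration of (10) and (11), the short sketch below aggregates the per-network Gaussian heads into the ensemble prediction and draws control-point samples; the arrays `mus` and `sigmas` are hypothetical placeholders for each network's predicted means and standard deviations.

```python
import numpy as np

def ensemble_predict(mus, sigmas):
    """Deep-ensemble aggregation of eqs. (10)-(11).

    mus, sigmas: arrays of shape (n_e, 2d) with each network's predicted mean
    and standard deviation for every control-point coordinate."""
    mu_star = mus.mean(axis=0)                                # eq. (10)
    var_of_means = (mus ** 2).mean(axis=0) - mu_star ** 2     # spread of the means
    mean_of_vars = (sigmas ** 2).mean(axis=0)                 # average network variance
    return mu_star, np.sqrt(var_of_means + mean_of_vars)      # eq. (11)

def sample_control_points(mu_star, sigma_star, p0, rng):
    """One posterior sample of P: coordinates drawn i.i.d., entry point anchored."""
    y = rng.normal(mu_star, sigma_star)
    return np.vstack([p0, y.reshape(-1, 2)])
```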
**Bayesian Neural Networks (BNNs)**
Deep ensembles incorporate uncertainty into neural network predictions by expressing the outputs as Gaussian distributions, rather than as point estimates. A drawback of this approach is that each element of a posterior sample of \(\mathbf{y}\) is drawn independently. An alternative approach is to add uncertainty by making all [27] or a subset [32] of the network parameters, \(\mathbf{\theta}\), probabilistic. Bayesian methods are then used to infer these probability distributions, based on the evidence of the training data. The advantage of this formulation is that it provides a richer probabilistic model for \(\mathbf{y}\), because the outputs are no longer constrained to be i.i.d. Gaussian distributions. On the other hand, BNN training is more complex compared to deep ensembles because standard backpropagation methods cannot be applied [33]. Methods such as variational inference are commonly criticised for being challenging to implement and expensive to train [34]. For this reason, in this work, the posteriors of the uncertain network parameters were estimated using the Laplace approximation [32]. This is a two-step process: first a _maximum a posteriori_ (MAP) estimate is generated for \(\mathbf{\theta}\) by solving (8) and (9), with the \(L2\) distance used for \(l(\cdot)\). Having obtained the MAP estimate, denoted \(\mathbf{\theta}_{MAP}\), the posterior of the uncertain parameters is locally approximated with a Gaussian distribution. This distribution is centred at \(\mathbf{\theta}_{MAP}\), with its covariance matrix corresponding to the local curvature:
\[p(\mathbf{\theta}|D)=N(\mathbf{\theta}|\mathbf{\theta}_{MAP},\Sigma)\text{ with }\Sigma:=(\nabla^{2}_{\mathbf{\theta}}\mathcal{L}(D|\mathbf{\theta})_{\mathbf{\theta}_{MAP}}) ^{-1}, \tag{12}\]
and where \(p(\cdot)\) represents the posterior probability of the network parameters. The Laplace package [32] was used to implement this post-hoc step, using a zero-mean Gaussian prior for the uncertain network parameters. The generalized Gauss-Newton (GGN) matrix was used to approximate the Hessian of the loss in (12). Two strategies were deployed to reduce the computational cost of this step: firstly, by only considering a subset of the network parameters to be uncertain and secondly, by employing factorization assumptions for the GGN matrix. Following an initial exploratory phase, the best results were found using a diagonal factorization with a last-layer Laplace approximation.
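For completeness, the snippet below sketches how such a last-layer, diagonal Laplace approximation can be fitted post hoc with the `laplace` package cited above. The regressor architecture, the dummy data, and the sizes `n_context` and `d` are placeholders, and the calls shown should be checked against the package documentation rather than read as the exact configuration used in this work.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace

n_context, d = 7, 3                       # hypothetical context size and degree
xi = torch.randn(256, n_context)          # dummy contexts (stand-in for real data)
y = torch.randn(256, 2 * d)               # dummy control-point targets
train_loader = DataLoader(TensorDataset(xi, y), batch_size=32)

# Placeholder MAP regressor mapping a context vector xi to the 2d coordinates.
model = torch.nn.Sequential(
    torch.nn.Linear(n_context, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2 * d),
)
# ... here `model` would be trained to the MAP estimate with Adam (eqs. 8-9) ...

# Post-hoc step of eq. (12): Gaussian posterior over the last-layer weights,
# with a diagonal factorisation of the GGN curvature approximation.
la = Laplace(model, "regression",
             subset_of_weights="last_layer", hessian_structure="diag")
la.fit(train_loader)
la.optimize_prior_precision()             # tunes the zero-mean Gaussian prior

f_mu, f_var = la(torch.randn(5, n_context))   # predictive mean and variance
```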
**Linear model**
A probabilistic linear model is used as a baseline against which to compare the machine learning methods. For lateral TP, a first assumption is that aircraft will fly directly to the exit of the sector. In the linear model used here, this exit point is the last waypoint in the flight plan. To make the model probabilistic, jitter was added to the GCS coordinates of the exit point, using a two-dimensional, zero-mean, Gaussian distribution with covariance \(\Sigma=\sigma^{2}I\), where \(\sigma=0.05^{\circ}\).
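A minimal sketch of this baseline, assuming a hypothetical array `exit_wp` holds the GCS coordinates of the last waypoint in the filed flight plan:

```python
import numpy as np

def linear_baseline(p0, exit_wp, rng, sigma=0.05, n_samples=10):
    """Straight-line tracks from the entry point p0 to a jittered exit point
    (zero-mean Gaussian jitter with covariance sigma^2 * I, sigma in degrees)."""
    exits = exit_wp + rng.normal(0.0, sigma, size=(n_samples, 2))
    return [np.vstack([p0, e]) for e in exits]
```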
## 3 Data Preparation
Probabilistic machine learning models were trained to emulate the flow of air traffic through a specific sector located in the upper airspace of the UK, above several of the world's busiest airports. \(\Omega\) was defined to include the sector and a 50 nautical mile buffer around its edge. The sector boundary in the lateral plane is illustrated in Fig. 1. The dataset contained one week's worth of radar surveillance data, filtered to the 1902 aircraft that were under the control of ATCOs assigned to this sector during this period. For ease of comparison, another filtering step was performed: selecting those flights that followed the eight most popular filed flight plans in the dataset. Of the 1152 remaining trajectories, an 80:20 training:test split was performed, resulting in a training dataset of 922 flights and a test dataset of 230 flights. The top-left panel of Fig. 1 displays the GCS coordinates of the waypoints in the filed routes, together with lines connecting the waypoints of each unique flight plan.
The ground tracks of these trajectories are illustrated in the lower panels of Fig. 1, with lines coloured by the filed flight plan. As can be seen from Fig. 1, aircraft are not constrained to fly from one fix to the next and there can be considerable variation between the route joining together the waypoints and the actual trajectory followed. Many of these routes are overlapping, with some routes sharing waypoints. Flows of traffic in the sector are dominated by aircraft either departing from or arriving at major airports in the south of the UK. The top-right panel of Fig. 1 indicates the flight level (altitude) of the aircraft on the normalised timescale. Routes 1 and 8 correspond to descending aircraft entering the sector from the east and south, respectively, and heading north. Routes 2-7 correspond to climbing aircraft that enter the sector from the north, having departed from these airports, and are heading south.
The trajectories in the training dataset were projected onto the piecewise linear basis discussed in Section 2, with the control points determined by solving (6) for each trajectory. Based on an inspection of the typical number of turns taken by aircraft in this dataset, the degree of the representation was set to \(d=3\).
Figure 1: Boundary of the studied sector, locations of the waypoints, and the eight unique routes through \(\Omega\) in the dataset (top-left). Flight levels of the trajectories in the dataset, interpolated onto \(t\in[0,1]\) (top-right). Trajectories in the dataset, coloured by their flight plan through \(\Omega\) (bottom).
## 4 Application to a sector of UK airspace
The probabilistic machine learning models described in Section 2 were trained using the training set outlined in the previous section, and, for each of the test trajectories, \(\boldsymbol{\xi}\), 10 posterior samples were drawn. In order to assess how well the methods emulated the flow of traffic, it was necessary to quantify the difference between the generated trajectories and the trajectories in the test dataset. While there are numerous methods in the literature for computing the distance between trajectories (see, e.g. [35]), there is no fixed convention for measuring trajectory similarity in ATC. Common metrics for measuring trajectory similarity such as the Euclidean distance, Hausdorff distance [36], and Dynamic Time Warping [37] implicitly account for differences in heading. However, in ATC the relative heading between trajectories is also an important consideration. As an example, trajectories in routes 3 and 4 overlap for much of the sector, but, from an ATC perspective, would be considered to be distinct because they exit the sector across different boundaries. Summarising the similarity of trajectories with a single statistic would require finding a weighting between the relative headings of trajectories and their spatial proximity for ATC. For this reason, we simplify the measurement of trajectory similarity by assessing how well the probabilistic methods emulate the flow of air traffic through a sector using a geometric method. The advantage of this approach is that the spatial proximity and relative headings between two sets of trajectories can be quantified separately.
Fig. 2 is a schematic that illustrates the method using the training data. For a given route, in this case route 2, a plane is defined that is perpendicular to the filed flight plan. The origin of this plane is the last waypoint that is within the sector boundary. This plane is represented by a dashed blue line in Fig. 2. The Haversine distance, in nautical miles, between the origin and the point at which a trajectory intersects this plane is calculated, along with the heading of the trajectory at this point. These quantities are denoted \(\mathcal{D}_{H}\) and \(\phi\) respectively. Histograms of \(\mathcal{D}_{H}\) and \(\sin(\phi)\) can then be computed for each of the eight studied routes. The sine of the heading is used in order to make the headings continuous. The schematic in the right panel of Fig. 2 illustrates the histogram extracted for \(\mathcal{D}_{H}\) from the training data for route 2. For a given route, the similarity of two sets of trajectories is assessed by comparing the statistical distance between the histograms of these two quantities.
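A sketch of how \(\mathcal{D}_{H}\) and \(\sin(\phi)\) might be extracted is given below, assuming the assessment plane has been reduced to an origin and unit normal in a local planar approximation of GCS (variable names are illustrative); the KS distances reported later can then be obtained with `scipy.stats.ks_2samp`.

```python
import numpy as np
from scipy import stats

def crossing_features(track, origin, normal):
    """Offset D_H (nm) and sin(heading) where a track first crosses the plane.

    track: (n, 2) array of (lat, lon) points; origin/normal define the line
    perpendicular to the filed route at the last in-sector waypoint. A planar
    approximation replaces the Haversine distance, and the track is assumed
    to cross the plane at least once."""
    s = (track - origin) @ normal                              # signed distance
    j = np.flatnonzero(np.sign(s[:-1]) != np.sign(s[1:]))[0]   # first sign change
    w = s[j] / (s[j] - s[j + 1])                               # crossing fraction
    x = (1 - w) * track[j] + w * track[j + 1]                  # crossing point
    d_h = np.linalg.norm(x - origin) * 60.0                    # degrees -> nm, roughly
    phi = np.arctan2(track[j + 1, 1] - track[j, 1],            # heading of the leg
                     track[j + 1, 0] - track[j, 0])
    return d_h, np.sin(phi)

# Statistical distance between generated and observed distributions for one route:
# ks_dh, _ = stats.ks_2samp(dh_generated, dh_test)
```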
Fig. 3 illustrates the trajectories generated by the BNN (top) and the Deep Ensemble (bottom). The plots also show the waypoints of each route as coloured dots, joined by solid lines. The dashed coloured lines indicate the planes that were used to compute \(\mathcal{D}_{H}\) and \(\phi\). The posterior samples from the Deep Ensemble are more widely distributed across the sector; however, the consequence of this is that many samples make turns that are not observed in the surveillance data. Conversely, the posterior samples from the BNN are more tightly clustered, but, from inspection, appear both more similar to the trajectories in Fig. 1, and more physically plausible. Fig. 4 displays kernel density estimates for the Probability Density Functions (PDFs) of \(\mathcal{D}_{H}\) for the test dataset, the BNN, Deep Ensemble, and the probabilistic linear model. Similarly,
Figure 2: Schematic illustrating the geometric method used to assess how well the proposed generative model emulated the distribution of ground tracks in the dataset.
Fig. 5 displays kernel density estimates of the PDFs for \(\sin(\phi)\) for the test data and the various methods. Tables 1 and 2 tabulate the statistical distance between the probability distributions of the data and the three methods, as quantified by the Kolmogorov-Smirnov [38] (KS) distance and percentage difference of the mean respectively. The statistical distance for the best performing model is in bold for both quantities and each of the eight routes.
As might be expected, the linear method performed best for route 8, where the majority of trajectories are straight lines across the sector, but it does not perform well for the other routes which have more complex trajectories. In general, the distribution of \(\mathcal{D}_{H}\) for the Deep Ensemble is closer to that of the test data than that of the BNN. The distributions from the BNN tend to accurately estimate the mean but, except for the first and eighth routes, they underestimate the variance. However, the headings at the plane intersection of the trajectories generated by the BNN are much closer to the test dataset than the other two generative models. This is likely due to the fact that samples generated from the BNN have correlations between elements in \(\boldsymbol{y}\). In contrast, the samples for the \(2d\) elements of \(\boldsymbol{y}\) from the DE are independent, and, therefore, often introduce spurious turns into the generated trajectories; c.f. route 8 in Fig. 3.
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline Route & \multicolumn{3}{c}{KS distance (\(\mathcal{D}_{H}\)) \(\downarrow\)} & \multicolumn{3}{c}{KS distance (\(\sin(\phi)\)) \(\downarrow\)} \\ & Lin. & DE & BNN & Lin. & DE & BNN \\ \hline
1 & 1.000 & **0.483** & 0.578 & 1.000 & 0.378 & **0.178** \\
2 & 0.905 & **0.457** & 0.805 & 0.848 & **0.267** & 0.352 \\
3 & 0.648 & **0.188** & 0.472 & 0.772 & **0.344** & 0.356 \\
4 & 0.912 & **0.235** & 0.238 & 0.527 & 0.262 & **0.142** \\
5 & 0.553 & **0.138** & 0.233 & 0.420 & 0.171 & **0.111** \\
6 & 1.000 & **0.200** & 0.464 & 0.966 & 0.205 & **0.150** \\
7 & 0.482 & **0.206** & 0.406 & 0.341 & 0.247 & **0.176** \\
8 & 0.908 & **0.479** & 0.692 & 0.562 & 0.404 & **0.179** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistical distance between distributions for \(\mathcal{D}_{H}\) and \(\sin(\phi)\), as quantified by the KS distance, for each of the eight unique routes.
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline Route & \multicolumn{3}{c}{\(\Delta\mathbb{E}(\mathcal{D}_{H})\) (\(\%\)) \(\downarrow\)} & \multicolumn{3}{c}{\(\Delta\mathbb{E}(\sin(\phi))\) (\(\%\)) \(\downarrow\)} \\ & Lin. & DE & BNN & Lin. & DE & BNN \\ \hline
1 & 506.45 & **122.89** & 129.61 & 55.43 & 26.16 & **2.7** \\
2 & 67.48 & **30.08** & 35.47 & 10.47 & 18.74 & **0.58** \\
3 & 74.58 & **5.04** & 27.19 & 3.44 & **0.42** & 2.38 \\
4 & 238.51 & 19.65 & **5.11** & 1121.90 & **134.94** & 227.64 \\
5 & 58.20 & 1.06 & **0.37** & 1.03 & **0.02** & 3.64 \\
6 & 289.74 & 27.19 & **8.45** & 13.56 & 9.33 & **5.43** \\
7 & 68.97 & **1.17** & 14.23 & **0.44** & 0.47 & 0.64 \\
8 & 1352.92 & **132.48** & 739.97 & 11.17 & 19.10 & **2.80** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistical distance between distributions for \(\mathcal{D}_{H}\) and \(\sin(\phi)\), as quantified by the percentage error in the mean, for each of the eight unique routes.
Figure 3: Generated trajectories, sampled from the posteriors of the Bayesian Neural Network (top) and Deep Ensemble (bottom). Solid black lines indicate the waypoints of the route, with dotted lines indicating the plane on which \(\mathcal{D}_{H}\) and \(\phi\) are computed for each route.
Figure 4: Kernel density plot of \(\mathcal{D}_{H}\), coloured by unique route through \(\Omega\), for the test dataset and the trajectories generated by the probabilistic models.
Figure 5: Kernel density plot of \(\sin(\phi)\), coloured by unique route through \(\Omega\), for the test dataset and the trajectories generated by the probabilistic models.
## 5 Conclusions
A probabilistic machine learning method has been presented for generating plausible ground tracks for an aircraft entering a specific sector of airspace. The model is conditioned on contextual information gathered from real-world aircraft surveillance data. A piecewise linear representation was used to model the ground tracks. It was found that, of the probabilistic models evaluated, the best performance was achieved using a Bayesian Neural Network, with posterior probability distributions over the weights in the last-layer provided by the Laplace Approximation. This is likely due to the natural way in which correlations between the locations of control points can be encoded within the model. It was found that this probabilistic model could be used to plausibly emulate the flow of air traffic through a busy sector of UK airspace with real-world data.
Future work includes coupling the presented probabilistic models for aircraft ground tracks with a deterministic, physics-based TP model such as the Base of Aircraft Data (BADA) model [5]. Combined, these models would provide a probabilistic method for four-dimensional TP (GCS coordinates, time, and altitude) that would be capable of propagating the effect of epistemic uncertainty arising from unknown pilot and ATCO intentions.
|
2309.15491 | An optimal spectral inequality for degenerate operators | In this paper we establish a Lebeau-Robbiano spectral inequality for a
degenerate one dimensional elliptic operator. Carleman techniques and moment
method are combined. Application to null controllability on a measurable set in
time for the degenerated heat equation is described. | Rémi Buffe, Kim Dang Phung, Amine Slimani | 2023-09-27T08:40:00Z | http://arxiv.org/abs/2309.15491v1 | # An optimal spectral inequality for degenerate operators
Remi Buffe1, Kim Dang Phung2, Amine Slimani 3
Footnote 1: Université de Lorraine, CNRS, Inria, IECL, F-54000 Nancy, France
Footnote 2: Institut Denis Poisson, Université d’Orléans, Université de Tours & CNRS UMR 7013, Bâtiment de Mathématiques, Rue de Chartres, BP. 6759, 45067 Orléans, France E-mail address: [email protected]
Footnote 3: Ecole des Mines de Nancy, Université de Lorraine, Campus Artem, CS 14 234, 92 Rue Sergent Blandan, 54042 Nancy
26/09/2023
Abstract.- In this paper we establish a Lebeau-Robbiano spectral inequality for a degenerate one-dimensional elliptic operator. Carleman techniques and the moment method are combined. An application to null controllability on a measurable set in time for the degenerate heat equation is described.
## 1 Introduction and main results
The purpose of this article is to prove a spectral inequality for a family of degenerate operators acting on the interval \(\left(0,1\right)\). In arbitrary dimension, for a second-order symmetric elliptic operator \(\mathcal{P}\) on a bounded domain \(\Omega\) with homogeneous Dirichlet or Neumann boundary conditions, the spectral inequality also called Lebeau-Robbiano estimate takes the form
\[\left\|u\right\|_{L^{2}\left(\Omega\right)}\leq ce^{c\sqrt{\lambda}}\left\|u\right\|_{L^{2}\left(\omega\right)}\text{, \ \ \ }\forall u\in\text{span}\left\{\Phi_{j};\lambda_{j}\leq\lambda\right\} \tag{1.1}\]
where \(\omega\subset\Omega\) is an open subset and where the functions \(\Phi_{j}\) form a Hilbert basis of \(L^{2}\left(\Omega\right)\) of eigenfunctions of \(\mathcal{P}\) associated with the nonnegative eigenvalues \(\lambda_{j}\), \(j\in\mathbb{N}\), counted with their multiplicities. In other words, the family of spectral projectors associated to \(\mathcal{P}\) enjoys an observability inequality on a set \(\omega\subset\Omega\) for low frequencies \(\lambda_{j}\leq\lambda\) at a cost of the form \(ce^{c\sqrt{\lambda}}\).
The state of the art for proving (1.1) relies either on Carleman inequalities for elliptic equations (see [LR], [LZ], [JL], [L], [LRL], [LRLeR1], [LRLeR2], [Le], [LL] and [Q], [FQZ]) or on observation estimates at one point in time for parabolic equations (see [AEWZ], [BaP] and [BP]).
One of the key applications of (1.1) is observability for parabolic systems, or equivalently controllability for parabolic systems, both being equivalent properties by a duality argument (see [Zu], [FZ], [FI], [Mi] and [Mi2]).
Observability and controllability for the one-dimensional degenerate parabolic operator have been extensively studied in many ways: Backstepping approach for closed-loop control (see [GLM] and [LM]); Carleman inequalities (see [ABCF], [CMV], [CMV2] and [CTY]); Flatness approach (see [Mo] and [BLR]); Moment method (see [CMV3] and [CMV4]).
We shall consider the linear unbounded operators \(\mathcal{P}\) in \(L^{2}\left(0,1\right)\), defined by
\[\left\{\begin{array}[c]{l}\mathcal{P}=-\frac{d}{dx}\left(x^{\alpha}\frac{d}{ dx}\right)\text{, with }\alpha\in\left[0,2\right)\text{,}\\ D(\mathcal{P})=\left\{\vartheta\in H_{\alpha,0}^{1}\left(0,1\right);\, \mathcal{P}\vartheta\in L^{2}(0,1)\text{ and BC}_{\alpha}(\vartheta)=0\right\} \text{,}\end{array}\right.\]
where
\[H^{1}_{\alpha,0}(0,1):=\left\{\vartheta\in L^{2}(0,1);\,\text{$\vartheta$ is loc. absolutely continuous in }\,\,(0,1]\,,\,\int_{0}^{1}x^{\alpha}|\vartheta^{\prime}|^{2}<\infty,\,\vartheta(1 )=0\right\}\,\]
and
\[\text{BC}_{\alpha}(\vartheta)=\begin{cases}\vartheta_{|_{x=0}}\,&\text{for } \alpha\in[0,1)\,\\ (x^{\alpha}\vartheta^{\prime})_{|_{x=0}}\,&\text{for }\alpha\in[1,2)\.\end{cases}\]
Such \(\mathcal{P}\) is a closed self-adjoint positive densely defined operator, with compact resolvent. As a consequence, the following spectral decomposition holds: There exists a countable family of eigenfunctions \(\Phi_{j}\) associated with eigenvalues \(\lambda_{j}\) such that
* \(\left\{\Phi_{j}\right\}_{j\geq 1}\) forms a Hilbert basis of \(L^{2}(0,1)\)
* \(\mathcal{P}\Phi_{j}=\lambda_{j}\Phi_{j}\)
* \(0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{k}\to+\infty\.\)
An explicit expression of the eigenvalues is given in [Gu] for the weakly degenerate case \(\alpha\in(0,1)\), and in [Mo] for the strongly degenerate case \(\alpha\in[1,2)\), and depends on the Bessel functions of first kind (see [MM]). The eigenvalues are simple and further properties are emphasized by Cannarsa, Martinez and Vancostenoble: First, a uniform bound for the first eigenvalue
\[\exists c_{1},c_{2}>0\ \ \ \ \forall\alpha\in[0,2)\ \ \ \ c_{1}\leq\lambda_{1}\leq c_{2} \tag{1.2}\]
(see [CMV3] (10) at page 176 and (34) at page 183 for \(\alpha\in[0,1)\); see [CMV4] proposition 2.13 at page 10 and (3.8)-(3.9) at page 13 for \(\alpha\in[1,2)\)); Secondly, a uniform spectral gap
\[\exists\gamma>0\ \ \ \ \forall\alpha\in[0,2)\ \ \ \ \forall k\geq 1\ \ \ \sqrt{\lambda_{k+1}}-\sqrt{\lambda_{k}}\geq\gamma(2-\alpha) \tag{1.3}\]
(see [CMV3] (74) at page 198 for \(\alpha\in[0,1)\); see [CMV4] at page 30 for \(\alpha\in[1,2)\)).
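For the weakly degenerate range these quantities can be illustrated numerically. The sketch below assumes the Bessel characterization recalled above from [Gu], namely \(\lambda_{\alpha,n}=\kappa_{\alpha}^{2}j_{\nu_{\alpha},n}^{2}\) with \(\kappa_{\alpha}=\frac{2-\alpha}{2}\), \(\nu_{\alpha}=\frac{1-\alpha}{2-\alpha}\) and \(j_{\nu,n}\) the \(n\)-th positive zero of the Bessel function \(J_{\nu}\); it is a numerical illustration of (1.2)-(1.3) only, not part of the proofs.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

def bessel_zeros(nu, n_zeros, step=0.05):
    """First n_zeros positive zeros of J_nu, located by a sign-change scan."""
    zeros, x = [], max(nu, 1e-3)          # positive zeros of J_nu lie beyond nu
    while len(zeros) < n_zeros:
        if jv(nu, x) * jv(nu, x + step) < 0:
            zeros.append(brentq(lambda z: jv(nu, z), x, x + step))
        x += step
    return np.array(zeros)

def eigenvalues(alpha, n):
    """lambda_{alpha,k} = kappa_alpha^2 j_{nu_alpha,k}^2 (weakly degenerate case)."""
    kappa, nu = (2 - alpha) / 2, (1 - alpha) / (2 - alpha)
    return (kappa * bessel_zeros(nu, n)) ** 2

for alpha in (0.0, 0.5, 0.9):
    lam = eigenvalues(alpha, 20)
    gaps = np.diff(np.sqrt(lam)) / (2 - alpha)
    print(f"alpha={alpha}: lambda_1={lam[0]:.3f}, min normalised gap={gaps.min():.3f}")
```

For \(\alpha=0\) this recovers \(\lambda_{n}=(n\pi)^{2}\), so that \(\lambda_{1}\) and the normalised gaps \(\left(\sqrt{\lambda_{k+1}}-\sqrt{\lambda_{k}}\right)/(2-\alpha)\) stay bounded away from \(0\), in line with (1.2) and (1.3).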
We are interested in the spectral inequality for the sum of eigenfunctions. Such a Lebeau-Robbiano estimate is established with explicit dependence on \(\alpha\in[0,2)\). Our main result is as follows.
Theorem 1.1 _- Let \(\omega\) be an open and nonempty subset of \((0,1)\). There exists a constant \(C>0\) such that_
\[\sum_{\lambda_{j}\leq\Lambda}|a_{j}|^{2}\leq Ce^{C\frac{1}{(2-\alpha)^{2}} \sqrt{\Lambda}}\int_{\omega}\left|\sum_{\lambda_{j}\leq\Lambda}a_{j}\Phi_{j} \right|^{2}\,\]
_for any \(\alpha\in[0,2)\), \(\{a_{j}\}\in\mathbb{R}\) and any \(\Lambda>0\)._
This is equivalent to
\[\sum_{j=1,\cdots,N}|a_{j}|^{2}\leq Ce^{C\frac{1}{(2-\alpha)^{2}}\sqrt{\lambda_ {N}}}\int_{\omega}\left|\sum_{j=1,\cdots,N}a_{j}\Phi_{j}\right|^{2}\,\]
for any \(\alpha\in[0,2)\), \(\{a_{j}\}\in\mathbb{R}\) and any \(N>0\).
Here, our approach is based on a combination of Carleman techniques and the moment method for an elliptic equation. On the one hand, it seems difficult to find the appropriate weight function in Carleman techniques or logarithmic convexity methods for getting directly the desired spectral inequality. On the other hand, the moment method is an appropriate tool to get the cost of controllability for the one-dimensional degenerate parabolic operator.
As a consequence of Theorem 1.1, we have the following observability estimate from a measurable set in time for the one-dimensional degenerate parabolic operator.
Theorem 1.2.- _Let \(\omega\) be an open and nonempty subset of \((0,1)\) and \(E\subset(0,T)\) be a measurable set of positive measure. There exists a constant \(C>0\) such that_
\[\left\|e^{-T\mathcal{P}}y_{0}\right\|_{L^{2}(0,1)}\leq Ce^{C\frac{1}{(2-\alpha) ^{4}}}\int_{\omega\times E}\left|e^{-t\mathcal{P}}y_{0}\right|\,\]
_for any \(\alpha\in[0,2)\) and any \(y_{0}\in L^{2}\left(0,1\right)\)._
This is equivalent to
\[\left\|y\left(\cdot,T\right)\right\|_{L^{2}(0,1)}\leq Ce^{C\frac{1}{(2-\alpha) ^{4}}}\int_{\omega\times E}\left|y\left(x,t\right)\right|dxdt\,\]
for any \(\alpha\in[0,2)\) and any \(y_{0}\in L^{2}\left(0,1\right)\) where \(y\) is the weak solution of the degenerate heat equation
\[\left\{\begin{array}{ll}\partial_{t}y-\partial_{x}\left(x^{\alpha}\partial_ {x}y\right)=0\,&\mbox{in }\left(0,1\right)\times\left(0,T\right)\,\\ \mbox{\rm BC}_{\alpha}(y)=0\,&\mbox{on }\left\{0\right\}\times\left(0,T \right)\,\\ y\left(1,t\right)=0\,&t\in\left(0,T\right)\,\\ y\left(x,0\right)=y_{0}\,&x\in\left(0,1\right)\.\end{array}\right.\]
In recent years, a lot of works have been devoted to observability on measurable sets (see e.g. [AE], [EMZ], [PW], [WZ], [LiZ]). Applications to impulse control and finite-time stabilization can be derived as in [BP].
## 2 Elliptic observation estimates (proof of Theorem 1.1)
In this Section, our aim is to prove Theorem 1.1 and we start by presenting the following three results: We first have a uniform observability estimate for a single eigenfunction given by Proposition 2.1; Proposition 2.2 establishes a quantitative Hölder-type estimate for an elliptic equation far from the degeneracy; Proposition 2.3 is a uniform observability estimate for the elliptic equation. We end this Section with the proof of Theorem 1.1.
Proposition 2.1 - _For any \(\omega\) open and nonempty subset of \((0,1)\)_,
\[\exists\rho>0\ \ \ \ \forall\alpha\in[0,2)\ \ \ \ \forall j\geq 1\ \ \ \ \int_{\omega}\left|\Phi_{j}\right|^{2}\geq\rho(2-\alpha)\.\]
Given \(T>0\) arbitrary, we now consider the following homogeneous elliptic problem:
\[\left\{\begin{array}{ll}\partial_{t}^{2}\varphi-\mathcal{P}\varphi=0\,&\mbox{in }\left(0,1\right)\times\left(0,T\right)\,\\ \mbox{BC}_{\alpha}(\varphi)=0\,&\mbox{on }\left\{0\right\}\times\left(0,T\right)\,\\ \varphi_{|_{x=1}}=0\,&\mbox{on }\left\{1\right\}\times\left(0,T\right)\,\\ \varphi\left(\cdot,0\right)=\varphi_{0}\,&\mbox{in }\left(0,1\right)\,\\ \partial_{t}\varphi\left(\cdot,0\right)=\varphi_{1}\,&\mbox{in }\left(0,1\right)\,\end{array}\right. \tag{2.1}\]
where \(\varphi_{0}\) and \(\varphi_{1}\) belong to \(\mbox{\rm span}\{\Phi_{j};1\leq j\leq N\}\).
Proposition 2.2 -_Let \(0<a<b<1\) and \(T>0\). There exist \(c>0\) and \(\delta\in(0,1)\) such that for all \(\alpha\in[0,2)\), the solution \(\varphi\) of (2.1) satisfies_
\[\left\|\varphi\right\|_{H^{1}\left(\left(\frac{2a+b}{3},\frac{a+2b}{3}\right) \times\left(0,T/4\right)\right)}\leq c\left\|\varphi\right\|_{H^{1}\left(\left( a,b\right)\times\left(0,T\right)\right)}^{1-\delta}\left(\left\|\varphi_{0} \right\|_{H^{1}\left(a,b\right)}+\left\|\varphi_{1}\right\|_{L^{2}\left(a,b \right)}\right)^{\delta}\,\]
Proposition 2.3 -_Let \(\omega\) be an open and nonempty subset of \((0,1)\). For any \(N\geq 1\), \(T>0\), and any \(\alpha\in[0,2)\), the solution \(\varphi\) of (2.1) satisfies_
\[\left\|\varphi\left(\cdot,T\right)\right\|_{L^{2}\left(0,1\right)}^{2}\leq \frac{C\left(1+\lambda_{N}\right)}{\rho^{2}(2-\alpha)^{2}}\left(1+\frac{1}{T} \right)e^{C\sqrt{\lambda_{N}}\left(T+\frac{1}{T\gamma^{2}(2-\alpha)^{2}} \right)}\int_{0}^{T}\int_{\omega}\left|\varphi\right|^{2}\,\]
_where \(C>0\) is independent of \(N,T>0\) and \(\alpha\in[0,2)\). Here \(\rho\) is given by Proposition 2.1 and \(\gamma\) comes from (1.3)._
Now, we are able to present the proof of Theorem 1.1.
Our strategy is as follows. We will use Proposition 2.3 and we observe the whole domain (including the region where the ellipticity degenerates) from one region where the operator \(\partial_{t}^{2}-\mathcal{P}\) is uniformly elliptic; there, we use classical global Carleman techniques to observe from the boundary \((a,b)\times\{0\}\) with Proposition 2.2. That observation region provides precisely the right hand side of Theorem 1.1.
Proof of Theorem 1.1.- We consider the above homogeneous elliptic problem with \(\varphi_{0}\left(x\right)=0\) and \(\varphi_{1}\left(x\right)=\sum\limits_{j=1,\cdots,N}a_{j}\Phi_{j}\left(x\right)\) where \(\{a_{j}\}\in\mathbb{R}\). Recall that \(\varphi\) can be explicitly written by Fourier series: For any \(x\in(0,1)\),
\[\varphi\left(x,t\right)=\sum\limits_{j=1,\cdots,N}\frac{1}{\sqrt{\lambda_{j}}}\mathrm{sinh}\left(\sqrt{\lambda_{j}}t\right)a_{j}\Phi_{j}\left(x\right)\.\]
Let \(0<a<b<1\) and set \(\omega=(a,b)\) and \(\widetilde{\omega}=\left(\frac{2a+b}{3},\frac{a+2b}{3}\right)\). We have, for some constants \(C,C_{1},C_{2}>0\) independent of \(N\) and \(\alpha\),
\[\begin{array}{rl}\sum\limits_{j=1,\cdots,N}|a_{j}|^{2}&\leq Ce^{C\sqrt{\lambda_{N}}}\sum\limits_{j=1,\cdots,N}|a_{j}|^{2}\frac{1}{\lambda_{j}}\mathrm{sinh}^{2}\left(\sqrt{\lambda_{j}}/4\right)\quad\text{by (1.2)}\\ &=Ce^{C\sqrt{\lambda_{N}}}\left\|\varphi\left(\cdot,1/4\right)\right\|_{L^{2}(0,1)}^{2}\\ &\leq C_{1}e^{C_{1}\frac{1}{(2-\alpha)^{2}}\sqrt{\lambda_{N}}}\int_{0}^{1/4}\int_{\widetilde{\omega}}\left|\varphi\right|^{2}\quad\text{by Proposition 2.3 with }\widetilde{\omega}\text{ and }T=1/4\.\end{array}\]
Next, bounding the \(L^{2}\) norm by the \(H^{1}\) norm and applying Proposition 2.2 (with \(T=1\) and \(\varphi_{0}=0\)),
\[\int_{0}^{1/4}\int_{\widetilde{\omega}}\left|\varphi\right|^{2}\leq\left\|\varphi\right\|_{H^{1}\left(\widetilde{\omega}\times(0,1/4)\right)}^{2}\leq c^{2}\left\|\varphi\right\|_{H^{1}\left((a,b)\times(0,1)\right)}^{2(1-\delta)}\left\|\varphi_{1}\right\|_{L^{2}(a,b)}^{2\delta}\,\]
while a direct computation on the Fourier expansion of \(\varphi\) gives \(\left\|\varphi\right\|_{H^{1}\left((a,b)\times(0,1)\right)}^{2}\leq Ce^{C\sqrt{\lambda_{N}}}\sum\limits_{j=1,\cdots,N}|a_{j}|^{2}\). Combining the three estimates and absorbing the factor \(\Big{(}\sum_{j=1,\cdots,N}|a_{j}|^{2}\Big{)}^{1-\delta}\) into the left hand side, we arrive at
\[\sum\limits_{j=1,\cdots,N}|a_{j}|^{2}\leq C_{2}e^{C_{2}\frac{1}{(2-\alpha)^{2}}\sqrt{\lambda_{N}}}\int_{\omega}\left|\sum\limits_{j=1,\cdots,N}a_{j}\Phi_{j}\right|^{2}\,\]
which is the desired estimate since \(\varphi_{1}=\sum_{j=1,\cdots,N}a_{j}\Phi_{j}\). This completes the proof of Theorem 1.1.
## 3 Construction of the control for an elliptic equation (proof of Proposition 2.3)
In this Section we prove Proposition 2.3 by the moment method. We consider the non-homogeneous elliptic problem
\[\left\{\begin{array}{ll}\partial_{t}^{2}u-\mathcal{P}u=h\,&\mbox{in }\left(0,1\right)\times\left(0,T\right)\,\\ \mbox{BC}_{\alpha}(u)=0\,&\mbox{on }\left\{0\right\}\times\left(0,T\right)\,\\ u\left(1,t\right)=0\,&t\in\left(0,T\right)\,\\ u\left(\cdot,0\right)=u_{0}\,\ \partial_{t}u\left(\cdot,0\right)=u_{1}\,&\mbox{in }\left(0,1\right)\,\end{array}\right. \tag{3.1}\]
where
\[\left\{\begin{array}{l}h\left(x,t\right)=\sum\limits_{j=1,\cdots,N}\sum\limits_{k=1,\cdots,N}g_{k}\left(t\right)\left(\int_{\omega}\Phi_{j}\Phi_{k}\right)\Phi_{j}\left(x\right)\text{ with }g\left(x,t\right)=\sum\limits_{k=1,\cdots,N}g_{k}\left(t\right)\Phi_{k}\left(x\right)\text{,}\\ u_{0}\left(x\right)=\sum\limits_{j=1,\cdots,N}a_{j}\Phi_{j}\left(x\right)\text{,}\\ u_{1}\left(x\right)=\sum\limits_{j=1,\cdots,N}b_{j}\Phi_{j}\left(x\right)\text{.}\end{array}\right. \tag{3.2}\]
### Well-posedness property
Definition 3.1.- Let \(N\in\mathbb{N}^{\ast}\). We denote \(\Pi_{N}L^{2}=\)span\(\left\{\Phi_{j};1\leq j\leq N\right\}\). The space \(\Pi_{N}L^{2}\) endowed with the \(L^{2}\left(0,1\right)\) norm is a finite dimensional Hilbert space.
It is well-known that when \(g_{j}\in L^{2}(0,T)\), the unique solution of (3.1) verifies \(u\in H^{2}(0,T;\Pi_{N}L^{2})\) and is given by the Duhamel formula
\[u(\cdot,t)=\sum\limits_{j=1,\cdots,N}\cosh(\sqrt{\lambda_{j}}t)a_{j}\Phi_{j}+\sum\limits_{j=1,\cdots,N}\frac{\sinh(\sqrt{\lambda_{j}}t)}{\sqrt{\lambda_{j}}}b_{j}\Phi_{j}\\ +\sum\limits_{j=1,\cdots,N}\sum\limits_{k=1,\cdots,N}(\int_{\omega}\Phi_{j}\Phi_{k})\int_{0}^{t}\frac{\sinh(\sqrt{\lambda_{j}}(t-s))}{\sqrt{\lambda_{j}}}g_{k}(s)ds\,\Phi_{j}\text{.}\]
### Construction of the control
Definition 3.2.- We say that system (3.1) is controllable at time \(T\) if for any \((u_{0},u_{1})\in(\Pi_{N}L^{2})^{2}\) there is \(g\in L^{2}(0,T;\Pi_{N}L^{2})\) as in (3.2) such that
\[u\left(\cdot,T\right)=\partial_{t}u\left(\cdot,T\right)=0\text{.}\]
Lemma 3.1.- _Equation (3.1) is controllable in time \(T\) if and only if, for any \((u_{0},u_{1})\in(\Pi_{N}L^{2})^{2}\) there is \(g\in L^{2}(0,T;\Pi_{N}L^{2})\) as in (3.2) such that the following relation holds_
\[-\int_{0}^{1}u_{1}\varphi\left(\cdot,T\right)-\int_{0}^{1}u_{0}\partial_{t} \varphi\left(\cdot,T\right)=\int_{0}^{T}\int_{\omega}g\left(x,t\right)\varphi \left(x,T-t\right)dxdt \tag{3.3}\]
_for any \((\varphi_{0},\varphi_{1})\in(\Pi_{N}L^{2})^{2}\), where \(\varphi\) is the solution of (2.1)._
_Further, if the system (3.1) is controllable at time \(T\) with a control \(g\in L^{2}(0,T;\Pi_{N}L^{2})\) satisfying the bound_
\[\left\|g\right\|_{L^{2}((0,1)\times(0,T))}^{2}:=\sum\limits_{j=1,\cdots,N}\int_{0}^{T}\left|g_{j}\left(t\right)\right|^{2}\leq K\left\|(u_{0},u_{1})\right\|_{(L^{2}(0,1))^{2}}^{2}:=K\sum\limits_{j=1,\cdots,N}\left(a_{j}^{2}+b_{j}^{2}\right)\]
_for some \(K>0\), then the solution \(\varphi\) of (2.1) satisfies_
\[\left\|\varphi\left(\cdot,T\right)\right\|_{L^{2}(0,1)}^{2}+\left\|\partial_{t }\varphi\left(\cdot,T\right)\right\|_{L^{2}(0,1)}^{2}\leq K\int_{0}^{T}\int_{ \omega}\left|\varphi\right|^{2}\text{.}\]
Proof of Lemma 3.1.- Let \(g\in L^{2}(0,T;\Pi_{N}L^{2})\) be arbitrary and \(u\) be the solution of (3.1). Given \(\varphi\) the solution of (2.1), by multiplying (3.1) by \(\varphi\left(x,T-t\right)\) and integrating by parts we obtain that
\[\int_{0}^{1}\partial_{t}u\left(\cdot,T\right)\varphi_{0}+\int_{0}^{1}u\left( \cdot,T\right)\varphi_{1}-\int_{0}^{1}u_{1}\varphi\left(\cdot,T\right)-\int_{ 0}^{1}u_{0}\partial_{t}\varphi\left(\cdot,T\right)=\int_{0}^{T}\int_{0}^{1}h \left(x,t\right)\varphi\left(x,T-t\right)dxdt\]
Moreover, since \(\varphi\left(\cdot,T-t\right)\in\Pi_{N}L^{2}\) and, by (3.2), \(h\left(\cdot,t\right)\) is the orthogonal projection of \(1_{\omega}g\left(\cdot,t\right)\) onto \(\Pi_{N}L^{2}\), we have
\[\int_{0}^{T}\int_{0}^{1}h\left(x,t\right)\varphi\left(x,T-t\right)dxdt=\int_{0}^{T}\int_{\omega}g\left(x,t\right)\varphi\left(x,T-t\right)dxdt\.\]
Now, if (3.3) is verified, it follows that
\[\int_{0}^{1}\partial_{t}u\left(\cdot,T\right)\varphi_{0}+\int_{0}^{1}u\left( \cdot,T\right)\varphi_{1}=0\]
for any \(\left(\varphi_{0},\varphi_{1}\right)\in\left(\Pi_{N}L^{2}\right)^{2}\) which implies that \(u\left(\cdot,T\right)=\partial_{t}u\left(\cdot,T\right)=0\). Hence, the solution is controllable at time \(T\) and \(g\) is a control for (3.1). Conversely, if \(g\in L^{2}(0,T;\Pi_{N}L^{2})\) is a control for (3.1), we have that \(u\left(\cdot,T\right)=\partial_{t}u\left(\cdot,T\right)=0\). It implies that (3.3) holds. Finally, one can choose \(\left(u_{0},u_{1}\right)=\left(\partial_{t}\varphi\left(\cdot,T\right),\varphi\left(\cdot,T\right)\right)\) and apply (3.3) to get the desired estimate thanks to the Cauchy-Schwarz inequality, which completes the proof.
Proof of Proposition 2.3.- Our aim is to construct a control \(g\) given by \(g\left(x,t\right)=\sum\limits_{k=1,\cdots,N}g_{k}\left(t\right)\Phi_{k}\left(x\right)\) such that (3.3) holds. Let
\[\left\{\begin{array}{l}\varphi_{0}\left(x\right)=\sum\limits_{j=1,\cdots,N}c_{j}\Phi_{j}\left(x\right)\,\\ \varphi_{1}\left(x\right)=\sum\limits_{j=1,\cdots,N}d_{j}\Phi_{j}\left(x\right)\end{array}\right.\]
be the initial data of (2.1). Then, recall that \(\varphi\) can be explicitly written by Fourier series: For any \(x\in\left(0,1\right)\),
\[\varphi\left(x,t\right)=\sum\limits_{j=1,\cdots,N}\left(e^{\sqrt{\lambda_{j}}t}\frac{1}{2}\left(c_{j}+\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)+e^{-\sqrt{\lambda_{j}}t}\frac{1}{2}\left(c_{j}-\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\right)\Phi_{j}\left(x\right)\.\]
First, let us clarify the expression \(-\int_{0}^{1}u_{1}\varphi\left(\cdot,T\right)-\int_{0}^{1}u_{0}\partial_{t} \varphi\left(\cdot,T\right):\)
\[\begin{array}{rl}-\int_{0}^{1}u_{1}\varphi\left(\cdot,T\right)-\int_{0}^{1}u_{0}\partial_{t}\varphi\left(\cdot,T\right)&=\sum\limits_{j=1,\cdots,N}e^{\sqrt{\lambda_{j}}T}\frac{1}{2}\left(c_{j}+\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\int_{0}^{1}\left(-u_{1}-\sqrt{\lambda_{j}}u_{0}\right)\Phi_{j}\\ &\quad+\sum\limits_{j=1,\cdots,N}e^{-\sqrt{\lambda_{j}}T}\frac{1}{2}\left(c_{j}-\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\int_{0}^{1}\left(-u_{1}+\sqrt{\lambda_{j}}u_{0}\right)\Phi_{j}\.\end{array} \tag{3.4}\]
Next, let us clarify the expression \(\int_{0}^{T}\int_{\omega}g\left(x,t\right)\varphi\left(x,T-t\right)dxdt\), that is \(\int_{0}^{T}\int_{0}^{1}h\left(x,t\right)\varphi\left(x,T-t\right)dxdt:\)
\[\begin{array}{rl}&\int_{0}^{T}\int_{\omega}g\left(x,t\right)\varphi\left(x,T-t\right)dxdt\\ =&\sum\limits_{k=1,\cdots,N}\sum\limits_{j=1,\cdots,N}e^{\sqrt{\lambda_{j}}T}\frac{1}{2}\left(c_{j}+\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\int_{\omega}\Phi_{k}\Phi_{j}\int_{0}^{T}g_{k}\left(t\right)e^{-\sqrt{\lambda_{j}}t}dt\\ &\quad+\sum\limits_{k=1,\cdots,N}\sum\limits_{j=1,\cdots,N}e^{-\sqrt{\lambda_{j}}T}\frac{1}{2}\left(c_{j}-\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\int_{\omega}\Phi_{k}\Phi_{j}\int_{0}^{T}g_{k}\left(t\right)e^{\sqrt{\lambda_{j}}t}dt\.\end{array}\]
Now, suppose that \(g_{k}\left(t\right)=\alpha_{k}\sigma_{k}^{0}\left(t\right)+\beta_{k}\sigma_{k} ^{1}\left(t\right)\) where \(\sigma_{k}^{0}\), \(\sigma_{k}^{1}\) belong to \(L^{2}\left(0,T\right)\) and that the following moment formula holds:
\[\left\{\begin{array}{l}\int_{0}^{T}\sigma_{k}^{0}\left(t\right)e^{-\sqrt{ \lambda_{j}}t}dt=0\ \text{and}\ \int_{0}^{T}\sigma_{k}^{1}\left(t\right)e^{-\sqrt{\lambda_{j}}t}dt=\delta_{jk}\ ;\\ \int_{0}^{T}\sigma_{k}^{0}\left(t\right)e^{\sqrt{\lambda_{j}}t}dt=\delta_{jk} \ \text{and}\ \int_{0}^{T}\sigma_{k}^{1}\left(t\right)e^{\sqrt{\lambda_{j}}t}dt=0\,\end{array}\right. \tag{3.5}\]
then, we obtain
\[\begin{array}{rl}\int_{0}^{T}\int_{\omega}g\left(x,t\right)\varphi\left(x,T-t\right)dxdt&=\sum\limits_{j=1,\cdots,N}e^{\sqrt{\lambda_{j}}T}\frac{1}{2}\left(c_{j}+\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\beta_{j}\int_{\omega}\left|\Phi_{j}\right|^{2}\\ &\quad+\sum\limits_{j=1,\cdots,N}e^{-\sqrt{\lambda_{j}}T}\frac{1}{2}\left(c_{j}-\frac{1}{\sqrt{\lambda_{j}}}d_{j}\right)\alpha_{j}\int_{\omega}\left|\Phi_{j}\right|^{2}\.\end{array} \tag{3.6}\]
By comparing the identities (3.4) and (3.6), one can deduce that if
\[\int_{0}^{1}\left(-u_{1}-\sqrt{\lambda_{j}}u_{0}\right)\Phi_{j}=\beta_{j}\int_{ \omega}\left|\Phi_{j}\right|^{2}\text{ and }\int_{0}^{1}\left(-u_{1}+\sqrt{\lambda_{j}}u_{0}\right)\Phi_{j}=\alpha_{j} \int_{\omega}\left|\Phi_{j}\right|^{2}\]
for any \(j=1,\cdots,N\), then (3.3) holds for any \(\left(\varphi_{0},\varphi_{1}\right)\) which implies by Lemma 3.1 that (3.1) is controllable in time \(T\).
Therefore, one can conclude that the control given by \(g\left(x,t\right):=\sum\limits_{j=1,\cdots,N}\left[\alpha_{j}\sigma_{j}^{0}\left(t\right)+\beta_{j}\sigma_{j}^{1}\left(t\right)\right]\Phi_{j}\left(x\right)\) where
\[\alpha_{j}:=\frac{\int_{0}^{1}\left(-u_{1}+\sqrt{\lambda_{j}}u_{0}\right)\Phi _{j}}{\int_{\omega}\left|\Phi_{j}\right|^{2}}=\frac{-b_{j}+\sqrt{\lambda_{j}} a_{j}}{\int_{\omega}\left|\Phi_{j}\right|^{2}}\text{ and }\beta_{j}:=\frac{\int_{0}^{1}\left(-u_{1}-\sqrt{\lambda_{j}}u_{0}\right)\Phi _{j}}{\int_{\omega}\left|\Phi_{j}\right|^{2}}=\frac{-b_{j}-\sqrt{\lambda_{j}} a_{j}}{\int_{\omega}\left|\Phi_{j}\right|^{2}}\text{,}\]
is an appropriate candidate. Notice that by Proposition 2.1, \(\int_{\omega}\left|\Phi_{j}\right|^{2}\neq 0\). It remains to construct the sequence of functions \(\left(\sigma_{k}^{0},\sigma_{k}^{1}\right)_{k\geq 1}\) in \(\left(L^{2}\left(0,T\right)\right)^{2}\) such that (3.5) holds. Such a property is called biorthogonality of the family \(\left(\sigma_{k}^{0},\sigma_{k}^{1}\right)_{k\geq 1}\). To do so, we apply the following result from Cannarsa, Martinez and Vancostenoble (see [CMV3] Theorem 2.4 at page 179):
Theorem 3.1.- (Existence of a suitable biorthogonal family and upper bounds) Assume that
\[\forall n>0,\,\mu_{n}\geq 0\]
and that there is some \(r>0\) such that
\[\forall n>0,\,\sqrt{\mu_{n+1}}-\sqrt{\mu_{n}}\geq r\text{.}\]
Then there exists a family \(\left(\theta_{m}\right)_{m>0}\) which is biorthogonal to the family \(\left(e^{\mu_{n}t}\right)_{n>0}\) in \(L^{2}(0,T)\):
\[\forall m,n>0,\,\,\int_{0}^{T}\theta_{m}\left(t\right)e^{\mu_{n}t}dt=\delta_{ mn}\text{.}\]
Moreover, it satisfies: there is some universal constant \(c\) independent of \(T\), \(r\) and \(m\) such that, for all \(m>0\), we have
\[\left\|\theta_{m}\right\|_{L^{2}(0,T)}^{2}\leq ce^{-2\mu_{m}T}e^{\frac{c}{r}\sqrt{\mu_{m}}}B\left(T,r\right)\]
with
\[B\left(T,r\right)=\left\{\begin{array}{ll}\left(\frac{1}{T}+\frac{1}{T^{2}r^{2}}\right)e^{c\frac{1}{Tr^{2}}}&\text{ if }T\leq\frac{1}{r^{2}},\\ cr^{2}&\text{ if }T\geq\frac{1}{r^{2}}\text{.}\end{array}\right.\]
Now, define the increasing sequence of non-negative real numbers \(\left(\mu_{n}\right)_{n\geq 1}\) as follows:
\[\mu_{n}=\left\{\begin{array}{ll}\sqrt{\lambda_{N}}-\sqrt{\lambda_{N-\left(n-1\right)}}&\text{ if }1\leq n\leq N,\\ \sqrt{\lambda_{N}}+\sqrt{\lambda_{n-N}}&\text{ if }N+1\leq n\leq 2N,\\ \left(\sqrt{\mu_{n-1}}+\gamma\left(\lambda_{N}\right)^{-1/4}\right)^{2}&\text{ if }n\geq 2N+1\text{.}\end{array}\right.\]
We need to check that such a sequence fulfills the assumptions of Theorem 3.1, thanks to the gap estimate \(\sqrt{\lambda_{k+1}}-\sqrt{\lambda_{k}}\geq\gamma(2-\alpha)\) given by (1.3). Indeed, for any \(1\leq n\leq N-1\),
\[\sqrt{\mu_{n+1}}-\sqrt{\mu_{n}}=\frac{\sqrt{\lambda_{N-\left(n-1\right)}}-\sqrt{\lambda_{N-n}}}{\sqrt{\sqrt{\lambda_{N}}-\sqrt{\lambda_{N-n}}}+\sqrt{\sqrt{\lambda_{N}}-\sqrt{\lambda_{N-\left(n-1\right)}}}}\geq\frac{\gamma(2-\alpha)}{2\left(\lambda_{N}\right)^{1/4}}\text{ ;}\]
for any \(N+1\leq n\leq 2N-1\),
\[\sqrt{\mu_{n+1}}-\sqrt{\mu_{n}}=\frac{\sqrt{\lambda_{n+1-N}}-\sqrt{\lambda_{n- N}}}{\sqrt{\sqrt{\lambda_{N}}+\sqrt{\lambda_{n+1-N}}}+\sqrt{\sqrt{\lambda_{N}}+ \sqrt{\lambda_{n-N}}}}\geq\frac{\gamma(2-\alpha)}{2\sqrt{2}\left(\lambda_{N} \right)^{1/4}}\text{ ;}\]
for any \(n\geq 2N\), \(\sqrt{\mu_{n+1}}-\sqrt{\mu_{n}}=\gamma\left(\lambda_{N}\right)^{-1/4}\) and
\[\sqrt{\mu_{N+1}}-\sqrt{\mu_{N}}=\frac{2\sqrt{\lambda_{1}}}{\sqrt{\sqrt{\lambda _{N}}+\sqrt{\lambda_{1}}}+\sqrt{\sqrt{\lambda_{N}}-\sqrt{\lambda_{1}}}}\geq \frac{2\sqrt{\lambda_{1}}}{\left(1+\sqrt{2}\right)\left(\lambda_{N}\right)^{1/ 4}}\.\]
Consequently, a straightforward computation shows that this sequence fulfills the assumptions of Theorem 3.1: precisely,
\[\forall n>0,\,\mu_{n}\geq 0\text{ and }\sqrt{\mu_{n+1}}-\sqrt{\mu_{n}}\geq r\,\]
with
\[r=\frac{\varsigma}{\left(\lambda_{N}\right)^{1/4}}\,\text{ and }\ \ \varsigma=\min\left(\frac{\gamma(2-\alpha)}{2\sqrt{2}},\frac{2\sqrt{\lambda_{1} }}{1+\sqrt{2}}\right). \tag{3.7}\]
By Theorem 3.1, we have a family \(\left(\theta_{m}\right)_{m>0}\) which is biorthogonal to the family \(\left(e^{\mu_{n}t}\right)_{n>0}\) in \(L^{2}(0,T)\):
\[\forall m,n>0,\,\,\int_{0}^{T}\theta_{m}\left(t\right)e^{\mu_{n}t}dt=\delta_{ mn}\.\]
Therefore,
\[\text{if }1\leq n\leq N,\,\text{then }\int_{0}^{T}\theta_{m}\left(t\right)e^{ \sqrt{\lambda_{N}}t}e^{-\sqrt{\lambda_{N-\left(n-1\right)}}t}dt=\delta_{mn}\ ;\]
\[\text{if }N+1\leq n\leq 2N\text{, then }\int_{0}^{T}\theta_{m}\left(t\right)e^{\sqrt{\lambda_{N}}t}e^{\sqrt{\lambda_{n-N}}t}dt=\delta_{mn}\.\]
That is, for any \(j=1,\cdots,N\),
\[\int_{0}^{T}\theta_{N-\left(j-1\right)}\left(t\right)e^{\sqrt{\lambda_{N}}t}e ^{-\sqrt{\lambda_{j}}t}dt=1\ ;\ \int_{0}^{T}\theta_{m}\left(t\right)e^{\sqrt{\lambda_{N}}t}e^{-\sqrt{\lambda_{j }}t}dt=0\text{ when }m\neq N-\left(j-1\right)\ ; \tag{3.8}\]
\[\int_{0}^{T}\theta_{N+j}\left(t\right)e^{\sqrt{\lambda_{N}}t}e^{\sqrt{\lambda _{j}}t}dt=1\ ;\ \int_{0}^{T}\theta_{m}\left(t\right)e^{\sqrt{\lambda_{N}}t}e^{\sqrt{\lambda_{j }}t}dt=0\text{ when }m\neq N+j. \tag{3.9}\]
Finally, we set for any \(k=1,\cdots,N\),
\[\sigma_{k}^{0}\left(t\right)=\theta_{N+k}\left(t\right)e^{\sqrt{\lambda_{N}}t} \text{ and }\sigma_{k}^{1}\left(t\right)=\theta_{N-\left(k-1\right)}\left(t\right)e^{ \sqrt{\lambda_{N}}t}\]
in order that by (3.8), for \(k,j=1,\ldots,N,\)
\[\int_{0}^{T}\sigma_{k}^{0}\left(t\right)e^{-\sqrt{\lambda_{j}}t}dt=0\text{ and }\int_{0}^{T}\sigma_{k}^{1}\left(t\right)e^{-\sqrt{\lambda_{j}}t}dt=\delta_{jk}\]
and by (3.9)
\[\int_{0}^{T}\sigma_{k}^{0}\left(t\right)e^{\sqrt{\lambda_{j}}t}dt=\delta_{jk} \text{ and }\int_{0}^{T}\sigma_{k}^{1}\left(t\right)e^{\sqrt{\lambda_{j}}t}dt=0\.\]
Further, it holds that for any \(k=1,\cdots,N\),
\[\left\|\sigma_{k}^{0}\right\|_{L^{2}(0,T)}^{2}\leq e^{2\sqrt{\lambda_{N}}T} \left\|\theta_{N+k}\right\|_{L^{2}(0,T)}^{2}\text{ and }\left\|\sigma_{k}^{1}\right\|_{L^{2}(0,T)}^{2}\leq e^{2\sqrt{\lambda_{N}}T} \left\|\theta_{N-\left(k-1\right)}\right\|_{L^{2}(0,T)}^{2}. \tag{3.10}\]
This completes the construction of our control given by \(g\left(x,t\right):=\sum\limits_{j=1,\cdots,N}\left[\alpha_{j}\sigma_{j}^{0}\left(t\right)+\beta_{j}\sigma_{j}^{1}\left(t\right)\right]\Phi_{j}\left(x\right)\).
### Cost of the control
Theorem 3.1 with (3.7) implies that there is some universal constant \(c\) independent of \(T\) and \(N\) such that for any \(m=1,\cdots,2N\),
\[\left\|\theta_{m}\right\|_{L^{2}(0,T)}^{2} \leq ce^{\frac{c}{r}\sqrt{\mu_{m}}}B\left(T,r\right):=ce^{c\frac{\left(\lambda_{N}\right)^{1/4}}{\varsigma}\sqrt{\mu_{m}}}B\left(T,\varsigma\left(\lambda_{N}\right)^{-1/4}\right)\] \[\leq ce^{\frac{c\sqrt{2}}{\varsigma}\sqrt{\lambda_{N}}}B\left(T,\varsigma\left(\lambda_{N}\right)^{-1/4}\right)\]
because \(\sqrt{\mu_{m}}\leq\sqrt{2}\left(\lambda_{N}\right)^{1/4}\) for all \(m\in\left\{1,\cdots,2N\right\}\) and \(e^{-2\mu_{m}T}\leq 1\). Therefore, by (3.10) we have
\[\sup\limits_{k=1,\cdots,N}\left(\left\|\sigma_{k}^{0}\right\|_{L^{2}(0,T)}^{2}+\left\|\sigma_{k}^{1}\right\|_{L^{2}(0,T)}^{2}\right)\leq 2ce^{2\sqrt{\lambda_{N}}T}e^{\frac{c\sqrt{2}}{\varsigma}\sqrt{\lambda_{N}}}B\left(T,\varsigma\left(\lambda_{N}\right)^{-1/4}\right). \tag{3.11}\]
Our control given by \(g\left(x,t\right):=\sum\limits_{j=1,\cdots,N}\left[\alpha_{j}\sigma_{j}^{0}\left(t\right)+\beta_{j}\sigma_{j}^{1}\left(t\right)\right]\Phi_{j}\left(x\right)\) where
\[\alpha_{j}:=\frac{\int_{0}^{1}\left(-u_{1}+\sqrt{\lambda_{j}}u_{0}\right)\Phi _{j}}{\int_{\omega}\left|\Phi_{j}\right|^{2}}=\frac{-b_{j}+\sqrt{\lambda_{j}}a _{j}}{\int_{\omega}\left|\Phi_{j}\right|^{2}}\text{ and }\beta_{j}:=\frac{\int_{0}^{1} \left(-u_{1}-\sqrt{\lambda_{j}}u_{0}\right)\Phi_{j}}{\int_{\omega}\left|\Phi_{ j}\right|^{2}}=\frac{-b_{j}-\sqrt{\lambda_{j}}a_{j}}{\int_{\omega}\left|\Phi_{j} \right|^{2}}\,\]
satisfies
\[\sum\limits_{j=1,\cdots,N}\left(\alpha_{j}^{2}+\beta_{j}^{2}\right)=2\sum\limits_{j=1,\cdots,N}\frac{\left(\lambda_{j}a_{j}^{2}+b_{j}^{2}\right)}{\left(\int_{\omega}\left|\Phi_{j}\right|^{2}\right)^{2}}\leq\frac{2\left(1+\lambda_{N}\right)}{\left(\inf\limits_{j=1,\cdots,N}\int_{\omega}\left|\Phi_{j}\right|^{2}\right)^{2}}\sum\limits_{j=1,\cdots,N}\left(a_{j}^{2}+b_{j}^{2}\right). \tag{3.12}\]
Combining the above estimates (3.11) and (3.12), there is some universal constant \(c\) independent of \(T\) such that for any \(N\geq 1\)
\[\left\|g\right\|_{L^{2}((0,1)\times(0,T))}^{2} =\sum\limits_{j=1,\cdots,N}\int_{0}^{T}\left|\alpha_{j}\sigma_{j}^{0}\left(t\right)+\beta_{j}\sigma_{j}^{1}\left(t\right)\right|^{2}dt \tag{3.13}\] \[\leq\frac{8\left(1+\lambda_{N}\right)}{\left(\inf\limits_{j=1,\cdots,N}\int_{\omega}\left|\Phi_{j}\right|^{2}\right)^{2}}ce^{2\sqrt{\lambda_{N}}T}e^{\frac{c\sqrt{2}}{\varsigma}\sqrt{\lambda_{N}}}B\left(T,\varsigma\left(\lambda_{N}\right)^{-1/4}\right)\sum\limits_{j=1,\cdots,N}\left(a_{j}^{2}+b_{j}^{2}\right)\.\]
Recall that the bound
\[\left\|g\right\|_{L^{2}((0,1)\times(0,T))}^{2}:=\sum\limits_{j=1,\cdots,N}\int_{0}^{T}\left|\alpha_{j}\sigma_{j}^{0}\left(t\right)+\beta_{j}\sigma_{j}^{1}\left(t\right)\right|^{2}dt\leq K\left\|\left(u_{0},u_{1}\right)\right\|_{(L^{2}(0,1))^{2}}^{2}:=K\sum\limits_{j=1,\cdots,N}\left(a_{j}^{2}+b_{j}^{2}\right)\]
will imply that the solution \(\varphi\) of (2.1) satisfies
\[\left\|\varphi\left(\cdot,T\right)\right\|_{L^{2}(0,1)}^{2}+\left\|\partial_{t }\varphi\left(\cdot,T\right)\right\|_{L^{2}(0,1)}^{2}\leq K\int_{0}^{T}\int_{ \omega}\left|\varphi\right|^{2}\.\]
Now our aim is to bound the quantity
\[\frac{8\left(1+\lambda_{N}\right)}{\left(\inf\limits_{j=1,\cdots,N}\int_{\omega}\left|\Phi_{j}\right|^{2}\right)^{2}}ce^{2\sqrt{\lambda_{N}}T}e^{\frac{c\sqrt{2}}{\varsigma}\sqrt{\lambda_{N}}}B\left(T,\varsigma\left(\lambda_{N}\right)^{-1/4}\right)\]
appearing in (3.13) in order to get the cost \(K\).
First, by Proposition 2.1, \(\frac{1}{\left(\inf\limits_{j=1,\cdots,N}\int_{\omega}\left|\Phi_{j}\right|^{2}\right)^{2}}\leq\frac{1}{\rho^{2}(2-\alpha)^{2}}\). Next, recall that \(\varsigma=\min\left(\frac{\gamma(2-\alpha)}{2\sqrt{2}},\frac{2\sqrt{\lambda_{1}}}{1+\sqrt{2}}\right)\) and since \(\alpha\in[0,2)\) with (1.2), we have that \(c\gamma(2-\alpha)\leq\varsigma\leq\frac{1}{c}\) where \(c\) is a positive constant independent of \(\alpha\in[0,2)\). Finally, the estimate of \(B\left(T,r\right)\) in Theorem 3.1
\[B\left(T,r\right)=\left\{\begin{array}{cc}\left(\frac{1}{T}+\frac{1}{T^{2}r^{2}}\right)e^{c\frac{1}{Tr^{2}}}&\mbox{if }T\leq\frac{1}{r^{2}}\\ cr^{2}&\mbox{if }T\geq\frac{1}{r^{2}}\end{array}\right.\leq\left\{\begin{array}{cc}\left(1+\frac{1}{c}\right)\frac{1}{T}e^{2c\frac{1}{Tr^{2}}}&\mbox{if }T\leq\frac{1}{r^{2}}\\ cr^{2}&\mbox{if }T\geq\frac{1}{r^{2}}\end{array}\right.\leq\left((1+\frac{1}{c})\frac{1}{T}+cr^{2}\right)e^{2c\frac{1}{Tr^{2}}}\]
leads to the bound
\[B(T,\varsigma\left(\lambda_{N}\right)^{-1/4}) \leq \left((1+\frac{1}{c})\frac{1}{T}+c\frac{\varsigma^{2}}{\sqrt{ \lambda_{N}}}\right)e^{2c\frac{\sqrt{\lambda_{N}}}{T\varsigma^{2}}}\] \[\leq C(1+\frac{1}{T})e^{C\frac{\sqrt{\lambda_{N}}}{T\left(2-\alpha \right)^{2}}}\]
for some \(C>0\) independent of \(N>0\), \(\alpha\in[0,2)\) and \(T>0\). Therefore, by (3.13) one can conclude that
\[\|g\|_{L^{2}((0,1)\times(0,T))}^{2}\leq\frac{C\left(1+\lambda_{N}\right)}{ \rho^{2}(2-\alpha)^{2}}e^{C\sqrt{\lambda_{N}}T}e^{C\frac{\sqrt{\lambda_{N}}}{ \gamma(2-\alpha)}}C(1+\frac{1}{T})e^{C\frac{\sqrt{\lambda_{N}}}{T\gamma^{2}(2 -\alpha)^{2}}}\sum_{j=1,\cdots,N}\left(a_{j}^{2}+b_{j}^{2}\right)\,\]
which gives, using \(\frac{1}{\gamma(2-\alpha)}\leq T+\frac{1}{T\gamma^{2}(2-\alpha)^{2}}\) (see the remark below), that
\[\|g\|_{L^{2}((0,1)\times(0,T))}^{2}\leq\frac{C\left(1+\lambda_{N}\right)}{ \rho^{2}(2-\alpha)^{2}}\left(1+\frac{1}{T}\right)e^{C\sqrt{\lambda_{N}}\left( T+\frac{1}{T\gamma^{2}(2-\alpha)^{2}}\right)}\sum_{j=1,\cdots,N}\left(a_{j}^{2}+b_{j}^{2} \right)\.\]
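Remark.- The elementary inequality invoked above is a direct consequence of the arithmetic-geometric mean inequality:

\[T+\frac{1}{T\gamma^{2}(2-\alpha)^{2}}\geq 2\sqrt{T\cdot\frac{1}{T\gamma^{2}(2-\alpha)^{2}}}=\frac{2}{\gamma(2-\alpha)}\geq\frac{1}{\gamma(2-\alpha)}\.\]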
By the cost estimate in Lemma 3.1, we obtain that for any \(\varphi\) solution of (2.1) and any \(N\geq 1\)
\[\|\varphi\left(\cdot,T\right)\|_{L^{2}(0,1)}^{2}+\|\partial_{t}\varphi\left( \cdot,T\right)\|_{L^{2}(0,1)}^{2}\leq\frac{C\left(1+\lambda_{N}\right)}{\rho^ {2}(2-\alpha)^{2}}\left(1+\frac{1}{T}\right)e^{C\sqrt{\lambda_{N}}\left(T+ \frac{1}{T\gamma^{2}(2-\alpha)^{2}}\right)}\int_{0}^{T}\int_{\omega}\left| \varphi\right|^{2}\,\]
where \(C>0\) does not depend on \((N,T,\alpha)\). This completes the proof of Proposition 2.3.
## 4 Elliptic observation by Carleman techniques (proof of Proposition 2.2)
In this section, we shall prove Proposition 2.2. Let \(0<a<b<1\) and \(\Omega=(a,b)\times(0,T)\). We set \((x,t)=(x_{1},x_{2})\in\Omega\), and for \(\alpha\in[0,2)\), introduce
\[Q=-\partial_{t}^{2}+\mathcal{P}=-\nabla\cdot(A(x_{1},x_{2})\nabla\cdot),\quad A(x_{1},x_{2})=\begin{pmatrix}x_{1}^{\alpha}&0\\ 0&1\end{pmatrix},\quad\nabla=\begin{pmatrix}\partial_{1}\\ \partial_{2}\end{pmatrix}\.\]
Note that there exists \(C_{0}>0\) such that
\[\|A\|_{W^{3,\infty}(\Omega)}\leq C_{0},\quad A(x_{1},x_{2})\xi\cdot\xi\geq\frac {1}{C_{0}}|\xi|^{2},\quad\forall\xi\in\mathbb{R}^{2},\forall(x_{1},x_{2})\in \Omega\, \tag{4.1}\]
where \(C_{0}>0\) is independent of \(\alpha\in[0,2)\). We set
\[v=e^{\tau\phi}\chi z\]
where \(\tau>0\), \(z\in H^{2}\left(\Omega\right)\), \(\chi\left(x_{1},x_{2}\right)=\chi_{1}\left(x_{1}\right)\chi_{2}\left(x_{2}\right)\) with
\[\left\{\begin{array}{ll}\chi_{1}\in C_{0}^{\infty}\left(a,b\right),&0\leq \chi_{1}\leq 1,&\chi_{1}=1\mbox{ on }\left(\frac{3a+b}{4},\frac{a+3b}{4}\right)\\ \chi_{2}\in C^{\infty}\left(0,T\right),&0\leq\chi_{2}\leq 1,&\chi_{2}=1\mbox{ on }\left(0,\frac{T}{3}\right)\mbox{ and }\chi_{2}=0\mbox{ on }\left(\frac{2T}{3},T\right)\end{array}\right.\]
and we shall consider weight functions \(\phi\in C^{\infty}(\overline{\Omega})\) of the form
\[\phi(x_{1},x_{2})=e^{\lambda\psi(x_{1},x_{2})},\quad\lambda>0,\quad\psi\in C^{ \infty}(\overline{\Omega}),\quad\nabla\psi\neq 0\ \mbox{on}\ \overline{\Omega}. \tag{4.2}\]
Here, we give explicitly \(\psi\) as follows
\[\psi(x_{1},x_{2})=-\left(x_{1}-x_{0}\right)^{2k}-\beta^{2k}\left(x_{2}+1\right) ^{2k} \tag{4.3}\]
where \(x_{0}=\frac{a+b}{2}\), \(\beta=\frac{2}{3}\left(\frac{b-a}{T+4}\right)\) and \(k=\max\left(\ln 2/\ln\left(\left(4T+12\right)/\left(3T+12\right)\right);\ \ln 2/\ln\left(3/2\right)\right)\).
We set
\[Q_{\phi}=e^{\tau\phi}Qe^{-\tau\phi}\.\]
We have \(Q_{\phi}v=\mathcal{S}v+\mathcal{A}v+\mathcal{R}v\) with
\[\mathcal{S}v=-\nabla\cdot\left(A\nabla v\right)-\tau^{2}A\nabla\phi\cdot \nabla\phi v,\quad\mathcal{A}v=2\tau A\nabla\phi\cdot\nabla v+2\tau\nabla \cdot\left(A\nabla\phi\right)v,\quad\mathcal{R}v=-\tau\nabla\cdot\left(A \nabla\phi\right)v\,\]
which gives \(\left\|Q_{\phi}v-\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}=\left\|\mathcal{S}v \right\|_{L^{2}(\Omega)}^{2}+\left\|\mathcal{A}v\right\|_{L^{2}(\Omega)}^{2}+2 (\mathcal{S}v,\mathcal{A}v)_{L^{2}(\Omega)}\). Note that \(0\leq\left\|Q_{\phi}v-\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}-2(\mathcal{S}v,\mathcal{A}v)_{L^{2}(\Omega)}\) implies
\[\left(\mathcal{S}v,\mathcal{A}v\right)_{L^{2}(\Omega)}\leq\left\|Q_{\phi}v \right\|_{L^{2}(\Omega)}^{2}+\left\|\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}. \tag{4.4}\]
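Indeed, the expansion above gives \(2\left(\mathcal{S}v,\mathcal{A}v\right)_{L^{2}(\Omega)}\leq\left\|Q_{\phi}v-\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}\), and (4.4) then follows from the elementary bound

\[\left\|Q_{\phi}v-\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}\leq 2\left\|Q_{\phi}v\right\|_{L^{2}(\Omega)}^{2}+2\left\|\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}\.\]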
Now we compute \(\left(\mathcal{S}v,\mathcal{A}v\right)_{L^{2}(\Omega)}\): By integration by parts, one has with standard summation notations and \(A=\left(A_{ij}\right)_{1\leq i,j\leq 2}\),
\[\left(\mathcal{S}v,\mathcal{A}v\right)_{L^{2}(\Omega)} =2\tau\int_{\Omega}A\nabla^{2}vA\nabla\phi\cdot\nabla v+2\tau\int_{\Omega}A\nabla^{2}\phi A\nabla v\cdot\nabla v\] \[\quad+2\tau\int_{\Omega}A_{ij}\partial_{x_{i}}v\partial_{x_{\ell}}v\partial_{x_{j}}A_{k\ell}\partial_{x_{k}}\phi\] \[\quad+2\tau\int_{\Omega}\left(A\nabla v\cdot\nabla v\right)\nabla\cdot\left(A\nabla\phi\right)+2\tau\int_{\Omega}A\nabla v\cdot\nabla\left(\nabla\cdot\left(A\nabla\phi\right)\right)v\] \[\quad+\tau^{3}\int_{\Omega}\left[A\nabla\phi\cdot\nabla\left(A\nabla\phi\cdot\nabla\phi\right)-\left(A\nabla\phi\cdot\nabla\phi\right)\left(\nabla\cdot\left(A\nabla\phi\right)\right)\right]\left|v\right|^{2}\] \[\quad+2\tau\int_{\partial\Omega}\left(A\nabla v\cdot n\right)\left(A\nabla\phi\cdot\nabla v+\left(\nabla\cdot\left(A\nabla\phi\right)\right)v\right)-\tau^{3}\int_{\partial\Omega}\left(A\nabla\phi\cdot\nabla\phi\right)\left(A\nabla\phi\cdot n\right)\left|v\right|^{2}\.\]
But a single integration by parts gives
\[\int_{\Omega}A\nabla^{2}vA\nabla\phi\cdot\nabla v =\frac{1}{2}\int_{\partial\Omega}\left(A\nabla v\cdot\nabla v \right)\left(A\nabla\phi\cdot n\right)-\frac{1}{2}\int_{\Omega}\partial_{x_{ \ell}}A_{ij}\partial_{x_{j}}v\partial_{x_{i}}vA_{k\ell}\partial_{x_{k}}\phi\] \[\quad-\frac{1}{2}\!\!\int_{\Omega}\left(A\nabla v\cdot\nabla v \right)\nabla\cdot\left(A\nabla\phi\right)\.\]
Therefore,
\[\left(\mathcal{S}v,\mathcal{A}v\right)_{L^{2}(\Omega)} =2\tau\int_{\Omega}A\nabla^{2}\phi A\nabla v\cdot\nabla v+\tau \int_{\Omega}\left(A\nabla v\cdot\nabla v\right)\nabla\cdot\left(A\nabla\phi\right)\] \[\quad+\tau^{3}\int_{\Omega}\left[A\nabla\phi\cdot\nabla\left(A \nabla\phi\cdot\nabla\phi\right)-\left(A\nabla\phi\cdot\nabla\phi\right)\left( \nabla\cdot\left(A\nabla\phi\right)\right)\right]\left|v\right|^{2}\] \[\quad+R_{1}+R_{2}\]
with
\[R_{1} =2\tau\int_{\Omega}A_{ij}\partial_{x_{i}}v\partial_{x_{\ell}}v \partial_{x_{j}}A_{k\ell}\partial_{x_{k}}\phi-\tau\int_{\Omega}\partial_{x_{\ell }}A_{ij}\partial_{x_{j}}v\partial_{x_{i}}vA_{k\ell}\partial_{x_{k}}\phi\] \[\quad+2\tau\int_{\Omega}A\nabla v\cdot\nabla\left(\nabla\cdot\left(A \nabla\phi\right)\right)v\,\] \[R_{2} =-2\tau\int_{\partial\Omega}\left(A\nabla v\cdot n\right)\left(A \nabla\phi\cdot\nabla v\right)+\tau\int_{\partial\Omega}\left(A\nabla v\cdot \nabla v\right)\left(A\nabla\phi\cdot n\right)\] \[\quad-2\tau\int_{\partial\Omega}\left(A\nabla v\cdot n\right)\left( \nabla\cdot\left(A\nabla\phi\right)\right)v-\tau^{3}\int_{\partial\Omega} \left(A\nabla\phi\cdot\nabla\phi\right)\left(A\nabla\phi\cdot n\right)\left|v \right|^{2}\,\]
where \(n\) is the outward normal vector to \(\partial\Omega\).
Notice that from the form of \(A\) and \(\phi\) given by (4.1) and (4.2), we have the existence of \(C_{1}>0\) independent of \(\alpha\in[0,2)\) such that for \(\tau>0\) sufficiently large
\[\left|R_{1}\right|\leq C_{1}\left((\tau^{1/2}\lambda^{2}+\tau\lambda)\left\| \phi^{1/2}\nabla v\right\|_{L^{2}(\Omega)}^{2}+\tau^{3/2}\lambda^{4}\left\| \phi^{1/2}v\right\|_{L^{2}(\Omega)}^{2}\right)\.\]
Note also that from the form of \(A\) and \(\phi\) given by (4.1) and (4.2), we have
\[A\nabla^{2}\phi A\nabla v\cdot\nabla v=\lambda^{2}\phi(A\nabla\psi\cdot\nabla v )^{2}+\lambda\phi A\nabla^{2}\psi A\nabla v\cdot\nabla v\geq-C_{2}\lambda\phi |\nabla v|^{2}\,\]
and
\[\tau\int_{\Omega}(A\nabla v\cdot\nabla v)\nabla\cdot(A\nabla\phi) =\tau\int_{\Omega}(A\nabla v\cdot\nabla v)\phi(\lambda\nabla\cdot (A\nabla\psi)+\lambda^{2}A\nabla\psi\cdot\nabla\psi))\] \[\geq C_{2}\tau\lambda^{2}\left\|\phi^{1/2}\nabla v\right\|_{L^{2} (\Omega)}^{2}-C_{3}\tau\lambda\left\|\phi^{1/2}\nabla v\right\|_{L^{2}(\Omega )}^{2}\] \[\geq C_{4}\tau\lambda^{2}\left\|\phi^{1/2}\nabla v\right\|_{L^{2} (\Omega)}^{2}\,\]
for \(\lambda>0\) chosen sufficiently large (independently of \(\alpha\in[0,2)\)), where the constants \(C_{2},C_{3},C_{4}>0\) are independent of \(\alpha\in[0,2)\). Arguing in the same way, there exist constants \(C_{5}>0\) and \(\lambda_{0}>0\) such that for all \(\alpha\in[0,2)\) and for all \(\lambda>\lambda_{0}\),
\[\tau^{3}\int_{\Omega}\left[A\nabla\phi\cdot\nabla(A\nabla\phi\cdot\nabla\phi)- \left(A\nabla\phi\cdot\nabla\phi\right)\left(\nabla\cdot(A\nabla\phi)\right) \right]\left|v\right|^{2}\geq C_{5}\tau^{3}\lambda^{4}\left\|\phi^{3/2}v \right\|_{L^{2}(\Omega)}^{2}\.\]
Summing up, (4.4) becomes
\[C_{5}\tau^{3}\lambda^{4}\left\|\phi^{3/2}v\right\|_{L^{2}(\Omega )}^{2}+C_{4}\tau\lambda^{2}\left\|\phi^{1/2}\nabla v\right\|_{L^{2}(\Omega)}^ {2}+R_{2}\\ \leq C_{1}\left((\tau^{1/2}\lambda^{2}+\tau\lambda)\left\|\phi^{ 1/2}\nabla v\right\|_{L^{2}(\Omega)}^{2}+\tau^{3/2}\lambda^{4}\left\|\phi^{1/ 2}v\right\|_{L^{2}(\Omega)}^{2}\right)+\left\|Q_{\phi}v\right\|_{L^{2}(\Omega )}^{2}+\left\|\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}\,\]
where the constants are independent of \(\alpha\in[0,2)\). Fixing \(\lambda>\lambda_{0}\) large, and then taking \(\tau>\tau_{0}\) sufficiently large (constants may depend on \(\lambda\) from now on), we obtain the existence of \(C_{6}>0\) such that
\[C_{6}\tau^{3}\left\|v\right\|_{L^{2}(\Omega)}^{2}+C_{6}\tau\left\|\nabla v \right\|_{L^{2}(\Omega)}^{2}+R_{2}\leq\left\|Q_{\phi}v\right\|_{L^{2}(\Omega)} ^{2}+\left\|\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}\.\]
Next, one can see that from the form of \(A\) and \(\phi\), there is \(C_{7}>0\) such that for all \(\alpha\in[0,2)\),
\[\left\|\mathcal{R}v\right\|_{L^{2}(\Omega)}^{2}\leq C_{7}\tau^{2}\left\|v \right\|_{L^{2}(\Omega)}^{2}\.\]
Therefore, taking \(\tau>0\) sufficiently large yields the existence of \(C_{8}>0\) such that
\[C_{8}\left(\tau^{3}\left\|v\right\|_{L^{2}(\Omega)}^{2}+\tau\left\|\nabla v \right\|_{L^{2}(\Omega)}^{2}\right)+R_{2}\leq\left\|Q_{\phi}v\right\|_{L^{2}( \Omega)}^{2}. \tag{4.5}\]
Now we treat the boundary term \(R_{2}\): Since \(v=A\nabla v\cdot n=0\) on \(\partial\Omega\setminus\Gamma\) where \(\Gamma=\{(x_{1},0)\,;x_{1}\in(a,b)\}\), one can deduce that
\[R_{2} =\tau\int_{a}^{b}\partial_{2}\phi\left|\partial_{2}v\left(x_{1}, 0\right)\right|^{2}dx_{1}\] \[\quad+2\tau\int_{a}^{b}x_{1}^{\alpha}\partial_{1}\phi\partial_{1}v \left(x_{1},0\right)\partial_{2}v\left(x_{1},0\right)dx_{1}-\tau\int_{a}^{b}x_{1} ^{\alpha}\partial_{2}\phi\left|\partial_{1}v\left(x_{1},0\right)\right|^{2}dx_{1}\] \[\quad+2\tau\int_{a}^{b}\left(\nabla\cdot(A\nabla\phi)\right)v \left(x_{1},0\right)\partial_{2}v\left(x_{1},0\right)dx_{1}+\tau^{3}\int_{a}^{b }\left(A\nabla\phi\cdot\nabla\phi\right)\partial_{2}\phi\left|v\left(x_{1},0 \right)\right|^{2}dx_{1}\.\]
This gives the existence of \(C_{9}>0\) independent of \(\alpha\in[0,2)\) such that for any \(\tau>0\) sufficiently large
\[\left|R_{2}\right|\leq C_{9}\left(\tau\left\|\partial_{2}v\left(\cdot,0\right) \right\|_{L^{2}\left(a,b\right)}^{2}+\tau^{3}\left\|v\left(\cdot,0\right) \right\|_{H^{1}\left(a,b\right)}^{2}\right)\.\]
Finally, by (4.5) we have for any \(\tau>\tau_{0}\) with \(\tau_{0}>1\), the following inequality
\[C_{8}\left(\tau^{3}\left\|v\right\|_{L^{2}\left(\Omega\right)}^{2}+\tau\left\| \nabla v\right\|_{L^{2}\left(\Omega\right)}^{2}\right)\leq\left\|Q_{\phi}v \right\|_{L^{2}\left(\Omega\right)}^{2}+C_{9}\left(\tau\left\|\partial_{2}v \left(\cdot,0\right)\right\|_{L^{2}\left(a,b\right)}^{2}+\tau^{3}\left\|v \left(\cdot,0\right)\right\|_{H^{1}\left(a,b\right)}^{2}\right). \tag{4.6}\]
Let \(U=\left(\frac{2a+b}{3},\frac{a+2b}{3}\right)\times\left(0,\frac{T}{4}\right)\), \(W_{1}=\left(\left[a,\frac{3a+b}{4}\right]\cup\left[\frac{a+3b}{4},b\right]\right)\times\left[0,\frac{2T}{3}\right]\), \(W_{2}=\left[a,b\right]\times\left[\frac{T}{3},\frac{2T}{3}\right]\) and \(W=W_{1}\cup W_{2}\). We have \(\mathrm{supp}\,\nabla\chi\subset W\) and \(\chi=1\) in \(U\).
Coming back to the function \(z\) where \(v=e^{\tau\phi}\chi z\), \(Q_{\phi}v=e^{\tau\phi}Q\left(\chi z\right)=e^{\tau\phi}\left(\chi Qz+\left[Q, \chi\right]z\right)\) where the bracket \(\left[Q,\chi\right]=-\partial_{t}^{2}\chi-2(\partial_{t}\chi)\partial_{t}-x^ {\alpha}(\partial_{x}^{2}\chi)-2(\partial_{x}\chi)x^{\alpha}\partial_{x}-\alpha (\partial_{x}\chi)x^{\alpha-1}\) is a differential operator of order one, supported in \(W\), which is away from a neighborhood of the degeneracy \(\left\{x=0\right\}\). From (4.6) and taking any \(\tau\) sufficiently large yields
\[\tau^{3}\left\|e^{\tau\phi}z\right\|_{L^{2}\left(U\right)}^{2}+ \tau\left\|e^{\tau\phi}\nabla z\right\|_{L^{2}\left(U\right)}^{2} \leq C\left(\left\|e^{\tau\phi}\chi Qz\right\|_{L^{2}\left(\Omega \right)}^{2}+\tau\left\|e^{\tau\phi}z\right\|_{L^{2}\left(W\right)}^{2}+ \left\|e^{\tau\phi}\nabla z\right\|_{L^{2}\left(W\right)}^{2}\right)\] \[+C\left(\tau\left\|e^{\tau\phi\left(\cdot,0\right)}\partial_{t}z \left(\cdot,0\right)\right\|_{L^{2}\left(a,b\right)}^{2}+\tau^{5}\left\|e^{ \tau\phi\left(\cdot,0\right)}z\left(\cdot,0\right)\right\|_{H^{1}\left(a,b \right)}^{2}\right)\.\]
Let \(D=\underset{\Omega}{\max}\phi\), \(D_{W}=\underset{W}{\max}\phi\), \(D_{0}=\underset{(a,b)}{\max}\phi\left(\cdot,0\right)\) and \(D_{U}=\underset{U}{\min}\phi\). We have for any \(\tau>\tau_{0}\) sufficiently large
\[e^{2\tau D_{U}}\left(\left\|z\right\|_{L^{2}\left(U\right)}^{2}+\left\|\nabla z\right\|_{L^{2}\left(U\right)}^{2}\right) \leq Ce^{2\tau D}\left\|Qz\right\|_{L^{2}\left(\Omega\right)}^{2}+Ce^{2\tau D_{W}}\left(\tau\left\|z\right\|_{L^{2}\left(W\right)}^{2}+\left\|\nabla z\right\|_{L^{2}\left(W\right)}^{2}\right)\] \[+Ce^{2\tau D_{0}}\left(\tau\left\|\partial_{t}z\left(\cdot,0\right)\right\|_{L^{2}\left(a,b\right)}^{2}+\tau^{5}\left\|z\left(\cdot,0\right)\right\|_{H^{1}\left(a,b\right)}^{2}\right)\.\]
Our choice of \(\psi\) given by (4.3) allows us to obtain \(D>D_{U}\) and \(D_{0}>D_{U}>D_{W}\). Indeed, by a straightforward computation,
\[\left\{\begin{array}{rl}\underset{W_{1}}{\max}\psi-\underset{U}{\min}\psi& \leq-\left(\frac{b-a}{4}\right)^{2k}-\beta^{2k}+\left(\frac{b-a}{6}\right)^{2k }+\beta^{2k}\left(\frac{T}{4}+1\right)^{2k}\\ &=\beta^{2k}\left(-1+\left(\frac{3}{8}\left(T+4\right)\right)^{2k}\left(-1+2 \left(\frac{2}{3}\right)^{2k}\right)\right)<0\,\\ \underset{W_{2}}{\max}\psi-\underset{U}{\min}\psi&\leq-\beta^{2k}\left(\frac{T }{3}+1\right)^{2k}+\left(\frac{b-a}{6}\right)^{2k}+\beta^{2k}\left(\frac{T}{4} +1\right)^{2k}\\ &=\beta^{2k}\left(\frac{1}{3}\left(T+4\right)\right)^{2k}\left(-1+2\left(\frac {3T+12}{4T+12}\right)^{2k}\right)<0\.\end{array}\right.\]
Using \(W\subset\overline{\Omega}\) and optimizing with respect to \(\tau\) yields the desired interpolation estimate (see e.g. [R] or [LRLeR1, Lemma 5.4, page 189]). This completes the proof of
\[\left\|\varphi\right\|_{H^{1}\left(\left(\frac{2a+b}{3},\frac{a+2b}{3}\right)\times\left(0,\frac{T}{4}\right)\right)}\leq c\left\|\varphi\right\|_{H^{1}\left(\left(a,b\right)\times\left(0,T\right)\right)}^{1-\delta}\left(\left\|\varphi_{0}\right\|_{H^{1}\left(a,b\right)}+\left\|\varphi_{1}\right\|_{L^{2}\left(a,b\right)}\right)^{\delta}\,\]
since \(Q\varphi=0\).
## 5 Observability estimate for the eigenfunctions (proof of Proposition 2.1)
In this section we aim to prove Proposition 2.1. Given \(0<a<b<1\), we will use the notation \(X\lesssim Y\), or \(Y\gtrsim X\) to denote the bound \(\left|X\right|\leq cY\) for some constant \(c>0\) only dependent on \(\left(a,b\right)\).
Cannarsa, Martinez and Vancostenoble proved (see [12, Proposition 2.15, page 10]) that
\[\forall\alpha\in[1,2)\ \ \ \ \ \forall j\geq 1\ \ \ \ \ \|\Phi_{j}\|_{L^{2}(a,b)}^{2} \gtrsim 2-\alpha\.\]
In this section, we extend this result to \(\alpha\in[0,2)\). To this end, we focus on the case \(\alpha\in[0,1)\) and apply the following observability estimate.
Proposition 5.1 - _For all_\(\sigma\in\mathbb{R}\)_, for all_\(\alpha\in[0,1)\)_, for all_\(\vartheta\in D(\mathcal{P})\)
\[\sigma^{2}\left\|\vartheta\right\|_{L^{2}(0,1)}^{2}+\left\|x^{\alpha/2} \vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}\lesssim\left(\left\|\left( \mathcal{P}-\sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}+\left(1+ \sigma^{2}\right)\left\|\vartheta\right\|_{L^{2}(a,b)}^{2}\right)\.\]
Recall that \(\Phi_{j}\in D(\mathcal{P})\) is the normalized eigenfunction of \(\mathcal{P}\) associated with the eigenvalue \(\lambda_{j}\), \(j\in\mathbb{N}^{*}\). Applying Proposition 5.1 with \(\vartheta=\Phi_{j}\) and \(\sigma^{2}=\lambda_{j}\), we obtain
\[\frac{\lambda_{j}}{1+\lambda_{j}}\lesssim\left\|\Phi_{j}\right\|_{L^{2}(a,b)} ^{2}\.\]
Using \(\frac{\lambda_{1}}{1+\lambda_{1}}\leq\frac{\lambda_{j}}{1+\lambda_{j}}\) and (1.2), one can deduce that
\[\forall\alpha\in[0,1)\ \ \ \ \ \forall j\geq 1\ \ \ \ \left\|\Phi_{j}\right\|_{L^{2}(a,b)}^{2} \gtrsim 1\geq\frac{1}{2}\left(2-\alpha\right)\.\]
This completes the proof of Proposition 2.1.
Now, we prove Proposition 5.1. Before proceeding to the proof we need two lemmas.
Lemma 5.1 - _There exists_\(C>0\) _such that for all_\(\sigma\in\mathbb{R}\)_, for all_\(\alpha\in[0,1)\)_,_
\[\sigma^{2}\left\|\vartheta\right\|_{L^{2}(0,1)}^{2}+\left\|x^{\alpha/2} \vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}\leq C\left(\left\|\left(\mathcal{P }-\sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}+\left|\vartheta^{\prime }(1)\right|^{2}\right)\,\]
_for all_\(\vartheta\in D(\mathcal{P})\)_.
Lemma 5.2 - _There exists_\(C>0\) _such that for all_\(\sigma\in\mathbb{R}\)_, for all_\(\alpha\in[0,1)\)_,_
\[\left|\vartheta^{\prime}(1)\right|^{2}\leq C\left(\left\|\left(\mathcal{P}- \sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}+\left\|\vartheta\right\|_ {H^{1}\left(\frac{3a+b}{4},\frac{a+3b}{4}\right)}^{2}\right)\,\]
_for all_\(\vartheta\in D(\mathcal{P})\)_.
Proof of Proposition 5.1 - By Lemmata 5.1 and 5.2,
\[\sigma^{2}\left\|\vartheta\right\|_{L^{2}(0,1)}^{2}+\left\|x^{\alpha/2} \vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}\lesssim C\left(\left\|\left( \mathcal{P}-\sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}+\left\| \vartheta\right\|_{H^{1}\left(\frac{3a+b}{4},\frac{a+3b}{4}\right)}^{2}\right)\.\]
Let \(\chi\in C_{0}^{\infty}\left(0,1\right)\) such that \(0\leq\chi\leq 1\) and \(\chi=1\) on \([\frac{3a+b}{4},\frac{a+3b}{4}]\). We have
\[\left\|\vartheta\right\|_{H^{1}\left(\frac{3a+b}{4},\frac{a+3b}{4}\right)}^{2} =\left\|\chi\vartheta\right\|_{H^{1}\left(\frac{3a+b}{4},\frac{a+3b}{4} \right)}^{2}\lesssim\left\|\vartheta\right\|_{L^{2}(a,b)}^{2}+\left|\int_{0}^{ 1}\chi^{2}\mathcal{P}\vartheta\vartheta\right|\] \[\lesssim\left\|\vartheta\right\|_{L^{2}(a,b)}^{2}+\left|\int_{0}^{ 1}\chi^{2}\left(\mathcal{P}-\sigma^{2}\right)\vartheta\vartheta\right|+\sigma ^{2}\left\|\vartheta\right\|_{L^{2}(a,b)}^{2}\] \[\lesssim\left\|\left(\mathcal{P}-\sigma^{2}\right)\vartheta\right\| _{L^{2}(0,1)}^{2}+\left(1+\sigma^{2}\right)\left\|\vartheta\right\|_{L^{2}(a,b)} ^{2}\]
by Cauchy-Schwarz. Combining the above estimates ends the proof of Proposition 5.1.
Proof of Lemma 5.1.- Let us consider \(\phi(x)=x^{2-\alpha}\) and \(v=e^{\phi}\vartheta\). Note that for \(\alpha\in[0,1)\), \(v\in D(\mathcal{P})\) because \(\vartheta\in D(\mathcal{P})\). We set
\[P_{\phi}=e^{\phi}\mathcal{P}e^{-\phi}-\sigma^{2}\]
with
\[\mathcal{S}=-\frac{d}{dx}\left(x^{\alpha}\frac{d}{dx}\right)-(2-\alpha)^{2}x^{2- \alpha}-\sigma^{2},\quad\mathcal{A}=2(2-\alpha)x\frac{d}{dx}+(2-\alpha)\,\]
in order that \(P_{\phi}v=e^{\phi}\left(\mathcal{P}-\sigma^{2}\right)\vartheta\) and \(P_{\phi}v=\mathcal{S}v+\mathcal{A}v\) which gives \(\left\|P_{\phi}v\right\|_{L^{2}(0,1)}^{2}=\left\|\mathcal{S}v\right\|_{L^{2}( 0,1)}^{2}+\left\|\mathcal{A}v\right\|_{L^{2}(0,1)}^{2}+2(\mathcal{S}v, \mathcal{A}v)_{L^{2}(0,1)}\).
Classical computations lead to
\[(\mathcal{S}v,\mathcal{A}v)_{L^{2}(0,1)} =(2-\alpha)^{2}\left\|x^{\alpha/2}v^{\prime}\right\|_{L^{2}(0,1)} ^{2}+(2-\alpha)^{4}\left\|x^{(2-\alpha)/2}v\right\|_{L^{2}(0,1)}^{2}-(2- \alpha)\left|v^{\prime}(1)\right|^{2}\] \[\quad+(2-\alpha)\lim_{x\to 0^{+}}\left[x^{1+\alpha}|v^{\prime}(x )|^{2}+x^{\alpha}v^{\prime}(x)v(x)+(2-\alpha)^{2}\,x^{3-\alpha}\left|v(x) \right|^{2}+\sigma^{2}x\left|v(x)\right|^{2}\right]\.\]
The above limit vanishes from the boundary conditions and the regularity of \(v\). Therefore, the fact that \(0\leq\left\|P_{\phi}v\right\|_{L^{2}(0,1)}^{2}-2(\mathcal{S}v,\mathcal{A}v)_{ L^{2}(0,1)}\) implies
\[2(2-\alpha)^{2}\left\|x^{\alpha/2}v^{\prime}\right\|_{L^{2}(0,1 )}^{2}+2(2-\alpha)^{4}\left\|x^{(2-\alpha)/2}v\right\|_{L^{2}(0,1)}^{2} \leq \left\|P_{\phi}v\right\|_{L^{2}(0,1)}^{2}+2(2-\alpha)\left|v^{ \prime}(1)\right|^{2}\] \[= \left\|e^{\phi}\left(\mathcal{P}-\sigma^{2}\right)\vartheta\right\| _{L^{2}(0,1)}^{2}+2(2-\alpha)\left|\vartheta^{\prime}(1)\right|^{2}\.\]
Since \(x^{\alpha/2}\vartheta^{\prime}=e^{-\phi}\left(x^{\alpha/2}v^{\prime}-(2- \alpha)x^{(2-\alpha)/2}v\right)\),
\[\left\|x^{\alpha/2}\vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}\leq 2\left\|x^{ \alpha/2}v^{\prime}\right\|_{L^{2}(0,1)}^{2}+2(2-\alpha)^{2}\left\|x^{(2- \alpha)/2}v\right\|_{L^{2}(0,1)}^{2}\.\]
Combining the two above inequalities, we get, for \(\alpha\in[0,1)\)
\[\left\|x^{\alpha/2}\vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2} \leq \frac{1}{(2-\alpha)^{2}}\left(\left\|e^{\phi}\left(\mathcal{P}- \sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}+2(2-\alpha)\left| \vartheta^{\prime}(1)\right|^{2}\right)\] \[\lesssim \left\|\left(\mathcal{P}-\sigma^{2}\right)\vartheta\right\|_{L^{2 }(0,1)}^{2}+\left|\vartheta^{\prime}(1)\right|^{2}\.\]
It remains to bound \(\sigma^{2}\left\|\vartheta\right\|_{L^{2}(0,1)}^{2}\). By Cauchy-Schwarz,
\[\sigma^{2}\left\|\vartheta\right\|_{L^{2}(0,1)}^{2} = \int_{0}^{1}\mathcal{P}\vartheta\vartheta-\int_{0}^{1}(\mathcal{P} -\sigma^{2})\vartheta\vartheta=\left\|x^{\alpha/2}\vartheta^{\prime}\right\|_ {L^{2}(0,1)}^{2}-\int_{0}^{1}(\mathcal{P}-\sigma^{2})\vartheta\vartheta\] \[\leq \left\|x^{\alpha/2}\vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}+ \left\|\vartheta\right\|_{L^{2}(0,1)}\left\|\left(\mathcal{P}-\sigma^{2} \right)\vartheta\right\|_{L^{2}(0,1)}\] \[\leq \left\|x^{\alpha/2}\vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}+2 \left\|x\vartheta^{\prime}\right\|_{L^{2}(0,1)}\left\|\left(\mathcal{P}-\sigma ^{2}\right)\vartheta\right\|_{L^{2}(0,1)}\] \[\lesssim \left\|x^{\alpha/2}\vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}+ \left\|\left(\mathcal{P}-\sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}\]
where \(\left\|\vartheta\right\|_{L^{2}(0,1)}^{2}\leq 4\left\|x\vartheta^{\prime}\right\|_{L^{2}(0,1)}^{2}\) comes from a single integration by parts (see the remark below). This ends the proof of Lemma 5.1.
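Remark.- For completeness, and assuming the Dirichlet condition \(\vartheta(1)=0\) encoded in \(D(\mathcal{P})\), the integration by parts behind this Hardy-type bound reads

\[\left\|\vartheta\right\|_{L^{2}(0,1)}^{2}=\left[x\left|\vartheta\left(x\right)\right|^{2}\right]_{0}^{1}-2\int_{0}^{1}x\vartheta^{\prime}\vartheta=-2\int_{0}^{1}x\vartheta^{\prime}\vartheta\leq 2\left\|x\vartheta^{\prime}\right\|_{L^{2}(0,1)}\left\|\vartheta\right\|_{L^{2}(0,1)}\,\]

which gives \(\left\|\vartheta\right\|_{L^{2}(0,1)}\leq 2\left\|x\vartheta^{\prime}\right\|_{L^{2}(0,1)}\).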
Proof of Lemma 5.2.- Denote \(\widetilde{a}=\frac{3a+b}{4}\) and \(\widetilde{b}=\frac{a+3b}{4}\) in order that \(0<a<\widetilde{a}<\widetilde{b}<b<1\). Let us consider \(\phi(x)=e^{\lambda\psi}\), with \(\lambda>0\), \(\psi\in C^{\infty}\left(0,1\right)\), \(\psi^{\prime}\neq 0\) on \([\widetilde{a},1]\) and \(\psi^{\prime}(1)<0\). Let \(\chi\in C^{\infty}\left(0,1\right)\) such that \(0\leq\chi\leq 1\), \(\chi=0\) on \([0,\widetilde{a}]\) and \(\chi=1\) on \([\widetilde{b},1]\), and let \(v=e^{\tau\phi}\chi\vartheta\) with \(\tau>0\). We set
\[P_{\phi}=e^{\tau\phi}\mathcal{P}e^{-\tau\phi}-\sigma^{2}\]
with
\[\mathcal{S}=-\frac{d}{dx}\left(x^{\alpha}\frac{d}{dx}\right)-\tau^{2}x^{\alpha} \left|\phi^{\prime}\right|^{2}-\sigma^{2},\quad\mathcal{A}=2\tau x^{\alpha} \phi^{\prime}\frac{d}{dx}+\tau\left(x^{\alpha}\phi^{\prime}\right)^{\prime}\,\]
in order that \(P_{\phi}v=e^{\tau\phi}\left(\mathcal{P}-\sigma^{2}\right)(\chi\vartheta)\) and \(P_{\phi}v=\mathcal{S}v+\mathcal{A}v\) which gives \(\left\|P_{\phi}v\right\|_{L^{2}(0,1)}^{2}=\left\|\mathcal{S}v\right\|_{L^{2}(0,1 )}^{2}+\left\|\mathcal{A}v\right\|_{L^{2}(0,1)}^{2}+2(\mathcal{S}v,\mathcal{A} v)_{L^{2}(0,1)}\).
Classical computations lead to
\[(\mathcal{S}v,\mathcal{A}v)_{L^{2}(0,1)} =2\tau\int_{0}^{1}x^{2\alpha}\phi^{\prime\prime}\left|v^{\prime} \right|^{2}+\tau\alpha\int_{0}^{1}x^{2\alpha-1}\phi^{\prime}\left|v^{\prime} \right|^{2}\] \[-\frac{\tau}{2}\int_{0}^{1}\left(\mathcal{P}^{2}\phi\right)\left| v\right|^{2}+2\tau^{3}\int_{0}^{1}x^{2\alpha}\phi^{\prime\prime}\left(\phi^{ \prime}\right)^{2}\left|v\right|^{2}+\alpha\tau^{3}\int_{0}^{1}x^{2\alpha-1}( \phi^{\prime})^{3}\left|v\right|^{2}-\tau\phi^{\prime}\left(1\right)\left|v^{ \prime}\left(1\right)\right|^{2}\.\]
But, using \(\phi=e^{\lambda\psi}\) with \(\psi\) having a non-vanishing gradient, there exist five constants \(C_{0},C_{1},C_{2},C_{3},C_{4}>0\) independent of \(\alpha\in[0,1)\) such that
\[(\mathcal{S}v,\mathcal{A}v)_{L^{2}(0,1)}\geq\tau\lambda^{2}C_{0} \int_{0}^{1}\phi\left|v^{\prime}\right|^{2}+\tau^{3}\lambda^{4}C_{1}\int_{0}^{ 1}\phi^{3}\left|v\right|^{2}-\tau\lambda C_{2}\int_{0}^{1}\phi\left|v^{\prime} \right|^{2}\\ -\tau\lambda^{4}C_{3}\int_{0}^{1}\phi\left|v\right|^{2}-\tau^{3} \lambda^{3}C_{4}\int_{0}^{1}\phi^{3}\left|v\right|^{2}+\tau\left|\phi^{\prime} \left(1\right)\right|\left|v^{\prime}\left(1\right)\right|^{2}\.\]
Therefore, the fact that \(0\leq\left\|P_{\phi}v\right\|_{L^{2}(0,1)}^{2}-2(\mathcal{S}v,\mathcal{A}v)_{L^{2}(0,1)}\) implies, taking first \(\lambda>0\) and then \(\tau>0\) sufficiently large, the following inequality
\[\left\|v\right\|_{H^{1}(0,1)}^{2}+\left|v^{\prime}\left(1\right)\right|^{2} \lesssim\left\|P_{\phi}v\right\|_{L^{2}(0,1)}^{2}\.\]
Taking the weights off the integrals and using commutators, we have
\[\left|\vartheta^{\prime}\left(1\right)\right|^{2}\lesssim\left\|\left(\mathcal{P}-\sigma^{2}\right)\vartheta\right\|_{L^{2}(0,1)}^{2}+\left\|\vartheta\right\|_{H^{1}(\widetilde{a},\widetilde{b})}^{2}\.\]
This ends the proof of Lemma 5.2.
## 6 Observability estimate for the degenerate heat equation (proof of Theorem 1.2)
In this section, we prove that the refined observability from a measurable set stated in Theorem 1.2 is a corollary of the spectral Lebeau-Robbiano inequality of Theorem 1.1.
Let \(\widetilde{\omega}\Subset\omega\) and \(\chi\in C_{0}^{\infty}\left(\omega\right)\) be such that \(0\leq\chi\leq 1\) and \(\chi=1\) in \(\widetilde{\omega}\).
We start with Theorem 3.1 of [BP, page 1142] stating that \((i)\) implies \((ii)\) where:
\((i)\)\(\exists C_{1}>0\), \(\forall\left\{a_{j}\right\}\in\mathbb{R}\), \(\forall\Lambda>0\)
\[\sum_{\lambda_{j}\leq\Lambda}|a_{j}|^{2}\leq e^{C_{1}\left(1+\sqrt{\Lambda} \right)}\int_{\widetilde{\omega}}\left|\sum_{\lambda_{j}\leq\Lambda}a_{j} \Phi_{j}\right|^{2}\ ;\]
\((ii)\)\(\forall t>0\), \(\forall\varepsilon\in(0,2)\), \(\forall y_{0}\in L^{2}\left(0,1\right)\)
\[\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{2}(0,1)}\leq 4e^{\frac{C_{1}}{2}}e^{\frac{C_{1}^{2}}{2\varepsilon t}}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{2}(\widetilde{\omega})}^{1-\varepsilon/2}\left\|y_{0}\right\|_{L^{2}(0,1)}^{\varepsilon/2}\.\]
Therefore, by Theorem 1.1 we know that \((ii)\) holds with \(C_{1}=C\frac{1}{(2-\alpha)^{2}}>1\).
By the Nash inequality and the regularizing effect, we get, for some constants \(c>1\) and \(\theta\in(0,1)\) independent of \((y_{0},t)\) and \(\alpha\in[0,2)\),
\[\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{2}(\widetilde{\omega})}\leq c\left(1+ \frac{1}{\sqrt{t}}\right)^{\theta}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{1}( \omega)}^{1-\theta}\left\|y_{0}\right\|_{L^{2}(0,1)}^{\theta}\.\]
Therefore, since \(1+\frac{1}{\sqrt{t}}\leq 4e^{\frac{C_{1}^{2}}{2\varepsilon t}}\), with \(C_{2}=16ce^{\frac{C_{1}}{2}}e^{\frac{C_{1}^{2}}{\varepsilon t}}\geq 4e^{\frac{C_{1}}{2}}e^{\frac{C_{1}^{2}}{2\varepsilon t}}\left(c\left(1+\frac{1}{\sqrt{t}}\right)^{\theta}\right)^{1-\varepsilon/2}\) and \(\mu=1-\left(1-\theta\right)\left(1-\frac{\varepsilon}{2}\right)\), we obtain
\[\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{2}(0,1)}\leq C_{2}\left\|e^{-t \mathcal{P}}y_{0}\right\|_{L^{1}(\omega)}^{1-\mu}\left\|y_{0}\right\|_{L^{2}(0,1)}^{\mu}\,\]
which implies, by the Young inequality (see the remark below),
\[\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{2}(0,1)} \leq s\left\|y_{0}\right\|_{L^{2}(0,1)}+\frac{1}{s^{\frac{\mu}{1-\mu} }}C_{2}^{\frac{1}{1-\mu}}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{1}(\omega)}\] \[\leq s\left\|y_{0}\right\|_{L^{2}(0,1)}+\frac{1}{s^{\frac{\mu}{1-\mu} }}\left(16ce^{\frac{C_{1}}{2}}\right)^{\frac{1}{1-\mu}}e^{\frac{C_{1}^{2}}{ \varepsilon t}\frac{1}{1-\mu}}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{1}( \omega)}\.\]
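Remark.- The weighted Young inequality used here is \(a^{\mu}b^{1-\mu}\leq\mu a+\left(1-\mu\right)b\leq a+b\) for \(a,b\geq 0\) and \(\mu\in(0,1)\), applied with

\[a=s\left\|y_{0}\right\|_{L^{2}(0,1)}\quad\text{and}\quad b=s^{-\frac{\mu}{1-\mu}}C_{2}^{\frac{1}{1-\mu}}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{1}(\omega)}\,\]

since then \(a^{\mu}b^{1-\mu}=C_{2}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{1}(\omega)}^{1-\mu}\left\|y_{0}\right\|_{L^{2}(0,1)}^{\mu}\).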
Reproducing the proof of Theorem 1.1 of [PW, page 684], we have for our system that \((iii)\) implies \((iv)\) where:
\((iii)\)\(\exists K_{1},K_{2},\ell>0,\)\(\forall s>0\)
\[\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{2}(0,1)}\leq s\left\|y_{0}\right\|_{L^{2}(0,1)}+\frac{1}{s^{\ell}}K_{1}e^{\frac{K_{2}}{t}}\left\|e^{-t\mathcal{P}}y_{0}\right\|_{L^{1}(\omega)}\ ;\]
\((iv)\)\(\forall y_{0}\in L^{2}\left(0,1\right)\)
\[\left\|e^{-T\mathcal{P}}y_{0}\right\|_{L^{2}(0,1)}\leq K_{3}\int_{\omega\times E }\left|e^{-t\mathcal{P}}y_{0}\right|\,\]
with
\[K_{3}=c\frac{K_{1}}{K_{2}}e^{K_{2}}\text{ when }E\subset\left(0,T\right) \text{ is a measurable set of positive measure};\]
\[K_{3}=\kappa\frac{K_{1}}{K_{2}}e^{\kappa\frac{K_{2}}{T}}\text{ when }E=\left(0,T\right)\text{ for some }\kappa>0\text{ independent of }T\.\]
Therefore, with \(K_{1}=\left(16ce^{\frac{C_{1}}{2}}\right)^{\frac{1}{\left(1-\theta\right) \left(1-\frac{\varepsilon}{2}\right)}}\) and \(K_{2}=\frac{1}{\varepsilon\left(1-\theta\right)\left(1-\frac{\varepsilon}{2} \right)}C_{1}^{2}\) we have \(K_{3}\leq Ce^{C\frac{1}{\left(2-\alpha\right)^{3}}}\). This completes the proof of Theorem 1.2.
|
2310.20549 | Taweret: a Python package for Bayesian model mixing | Uncertainty quantification using Bayesian methods is a growing area of
research. Bayesian model mixing (BMM) is a recent development which combines
the predictions from multiple models such that each model's best qualities are
preserved in the final result. Practical tools and analysis suites that
facilitate such methods are therefore needed. Taweret introduces BMM to
existing Bayesian uncertainty quantification efforts. Currently Taweret
contains three individual Bayesian model mixing techniques, each pertaining to
a different type of problem structure; we encourage the future inclusion of
user-developed mixing methods. Taweret's first use case is in nuclear physics,
but the package has been structured such that it should be adaptable to any
research engaged in model comparison or model mixing. | Kevin Ingles, Dananjaya Liyanage, Alexandra C. Semposki, John C. Yannotty | 2023-10-31T15:30:36Z | http://arxiv.org/abs/2310.20549v1 | # Taweret: a Python package for Bayesian model mixing
###### Abstract
Uncertainty quantification using Bayesian methods is a growing area of research. Bayesian model mixing (BMM) is a recent development which combines the predictions from multiple models such that each model's best qualities are preserved in the final result. Practical tools and analysis suites that facilitate such methods are therefore needed. Taweret introduces BMM to existing Bayesian uncertainty quantification efforts. Currently Taweret contains three individual Bayesian model mixing techniques, each pertaining to a different type of problem structure; we encourage the future inclusion of user-developed mixing methods. Taweret's first use case is in nuclear physics, but the package has been structured such that it should be adaptable to any research engaged in model comparison or model mixing.

## I Introduction

Bayesian model mixing combines the predictions from a collection of \(K\) models, \(\mathcal{M}_{1},\ldots,\mathcal{M}_{K}\), through input-dependent weights. The _mean-mixing_ approach models the mean of the quantity of interest as a weighted combination of the individual model means,

\[E[\mathbf{Y}\mid\mathbf{x}]=\sum_{k=1}^{K}w_{k}(\mathbf{x})\;f_{k}(\mathbf{x}), \tag{1}\]
where \(E[\mathbf{Y}\mid\mathbf{x}]\) denotes the mean of \(\mathbf{Y}\) given the vector of input parameters \(\mathbf{x}\), \(f_{k}(\mathbf{x})\) is the mean prediction under the \(k^{\text{th}}\) model \(\mathcal{M}_{k}\), and \(w_{k}(\mathbf{x})\) is the corresponding weight function. The _density-mixing_ approach estimates the underlying predictive density by
\[p(\mathbf{Y}_{0}\mid\mathbf{x}_{0})=\sum_{k=1}^{K}w_{k}(\mathbf{x}_{0})\;p(\mathbf{Y}_{0}\mid \mathbf{x}_{0},\mathbf{Y},\mathcal{M}_{k}), \tag{2}\]
where \(p(\mathbf{Y}_{0}\mid\mathbf{x}_{0},\mathbf{Y},\mathcal{M}_{k})\) represents the predictive density of a future observation \(\mathbf{Y}_{0}\) with respect to the \(k^{\text{th}}\) model \(\mathcal{M}_{k}\) at a new input \(\mathbf{x}_{0}\). In either BMM setup, a key challenge is defining \(w_{k}(\mathbf{x})\)--the functional relationship between the inputs and the weights.
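As a schematic illustration of the two strategies (plain Python pseudo-implementations, not Taweret's API; the function names are hypothetical):

```python
import numpy as np

def mean_mix(weights, model_means):
    # Eq. (1): weighted combination of the K model mean predictions
    # at a single input x. Both arguments are arrays of shape (K,).
    return np.sum(weights * model_means)

def density_mix(weights, model_densities, y0):
    # Eq. (2): weighted combination of the K predictive densities,
    # each given as a callable p_k(y0), evaluated at observation y0.
    return sum(w * p(y0) for w, p in zip(weights, model_densities))
```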
This work introduces Taweret, a Python package for Bayesian model mixing that includes three novel approaches for combining models, each of which defines the weight function in a unique way (see Table 1 for a comparison of each method). This package has been developed as an integral piece of the Bayesian Analysis of Nuclear Dynamics (BAND) collaboration's software. BAND is a multi-institutional effort to build a cyber-infrastructure framework for use in the nuclear physics community [1; 2]. The software is designed to lower the barrier for researchers to employ uncertainty quantification in their experiments, and to integrate, as closely as possible, with the community's current standards concerning coding style (PEP 8). Bayesian model mixing is one of BAND's four central pillars in this framework (the others being emulation, calibration, and experimental design).
In addition to this need, we are aware of several other fields outside of physics that use techniques such as model stacking and Bayesian model averaging (BMA) [3], e.g., statistics [4; 5], meteorology [6], and neuroscience [7]. It is expected that the Bayesian model mixing methods presented in Taweret can also be applied to use cases within these fields. Statisticians have developed several versatile BMA/stacking packages, e.g. [8; 9]. However, the only BMM-based package available is SAMBA--a BAND collaboration effort that was developed for testing BMM methods on a toy model [10]. Taweret's increased functionality, user-friendly structure, and diverse selection of mixing methods make it a marked improvement over SAMBA.
## III Structure
### Overview of methods
#### iii.1.1 Bivariate linear mixing
The full description of this mixing method and several of its applications in relativistic heavy-ion collision physics can be found in the Ph.D. thesis of D. Liyanage [11]. The bivariate linear mixing method can mix two models using either a density-mixing or a mean-mixing strategy. Currently, this is the only mixing method in Taweret that can also calibrate the models while mixing. It allows the user to choose among the following mixing functions:
* step: \(\Theta(\beta_{0}-x)\)
* sigmoid: \(\exp\left[(x-\beta_{0})/\beta_{1}\right]\)
* asymmetric 2-step: \(\alpha\Theta(\beta_{0}-x)+(1-\alpha)\Theta(\beta_{1}-x)\).
Here \(\Theta\) denotes the Heaviside step function, \(\beta_{0}\) and \(\beta_{1}\) determine the shape of the weight function and are inferred from the experimental data, and \(x\) is the model input parameter (which is expected to be 1-dimensional for this mixing method).
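As a schematic illustration (plain Python, not Taweret's API; the function names are hypothetical), the three mixing functions read:

```python
import numpy as np

def step_weight(x, beta0):
    # step: Theta(beta0 - x)
    return np.heaviside(beta0 - x, 1.0)

def sigmoid_weight(x, beta0, beta1):
    # sigmoid-type switch with location beta0 and width beta1,
    # written exactly as parameterized in the list above
    return np.exp((x - beta0) / beta1)

def asymmetric_two_step_weight(x, alpha, beta0, beta1):
    # asymmetric 2-step: alpha*Theta(beta0 - x) + (1 - alpha)*Theta(beta1 - x)
    return (alpha * np.heaviside(beta0 - x, 1.0)
            + (1.0 - alpha) * np.heaviside(beta1 - x, 1.0))
```

In Taweret, \(\beta_{0}\) and \(\beta_{1}\) are inferred from the experimental data during the mixing/calibration step.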
#### ii.1.2 Multivariate model mixing
Another Bayesian model mixing method incorporated into Taweret was originally published in [10], and was the focus of the BMM Python package SAMBA [12]. It can be described as combining models by weighting each of them by their precision, defined as the inverse of their respective variances. The posterior predictive distribution (PPD) of the mixed model is a Gaussian and can be expressed as
\[\mathcal{M}_{\dagger}\sim\mathcal{N}(f_{\dagger},Z_{P}^{-1}):\quad f_{\dagger }=\frac{1}{Z_{P}}\sum_{k=1}^{K}\frac{1}{\sigma_{k}^{2}}f_{k},\quad Z_{P}\equiv \sum_{k=1}^{K}\frac{1}{\sigma_{k}^{2}}, \tag{3}\]
where \(\mathcal{N}(\mu,\sigma^{2})\) is a normal distribution with mean \(\mu\) and variance \(\sigma^{2}\), \(Z_{P}\) is the precision of the models, and each individual model is assumed to possess a Gaussian form such as
\[\mathcal{M}_{k}\sim\mathcal{N}(f_{k}(x),\sigma_{k}^{2}(x)). \tag{4}\]
Here, \(f_{k}(x)\) is the mean of the model \(k\), and \(\sigma_{k}^{2}(x)\) its variance, both at input parameter \(x\).
In this method, the software receives the one-dimensional input space \(X\), the mean of the \(K\) models at each point \(x\in X\) (hence it is a mean-based mixing procedure), and the variances of the models at each point \(x\in X\). Each model is assumed to have been calibrated prior to being included in the mix. Because this mixing method is agnostic to how each model was generated, any model can be used, including Bayesian Machine Learning tools such as Gaussian Processes [10] and Bayesian Neural Networks [13].
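A minimal sketch of this precision weighting (a hypothetical helper, not Taweret's API) is:

```python
import numpy as np

def precision_weighted_mix(means, variances):
    # means, variances: arrays of shape (K, n_points) holding f_k(x) and
    # sigma_k^2(x) for each of the K models on a shared input grid.
    precisions = 1.0 / variances              # 1 / sigma_k^2(x)
    z_p = precisions.sum(axis=0)              # Z_P of Eq. (3) at each x
    mixed_mean = (precisions * means).sum(axis=0) / z_p
    mixed_variance = 1.0 / z_p                # variance of the mixed Gaussian
    return mixed_mean, mixed_variance
```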
#### ii.1.3 Model mixing using Bayesian Additive Regression Trees
A third BMM approach implemented in Taweret adopts a mean-mixing strategy which models the weight functions using Bayesian Additive Regression Trees (BART) conditional on the mean predictions from a set of \(K\) models [14]. This approach enables the weight functions to be adaptively learned using tree bases and avoids the need for user-specified basis functions (such as in a generalized linear model). Formally, the weight functions are defined by

\[w_{k}(\mathbf{x})=\sum_{j=1}^{m}g_{k}(\mathbf{x};T_{j},M_{j}),\quad\text{for $k=1,\ldots,K$} \tag{5}\]

where \(g_{k}(\mathbf{x};T_{j},M_{j})\) defines the \(k^{\text{th}}\) output of the \(j^{\text{th}}\) tree, \(T_{j}\), using the associated set of parameters, \(M_{j}\). Each weight function is implicitly regularized via a prior to prefer the interval \([0,1]\); the weight functions are not, however, required to sum to one, and they can take values outside \([0,1]\). This regularization approach is designed to maintain the flexibility of the model while encouraging the weight functions to take values which preserve desired inferential properties.

| Method | Type | Number of inputs | Number of outputs | Number of models | Weight functions |
| --- | --- | --- | --- | --- | --- |
| Bivariate linear mixing | Mean & Density | 1 | \(\geq 1\) | 2 | Step, Sigmoid, Asymmetric 2-step |
| Multivariate mixing | Mean | 1 | 1 | \(K\) | Precision weighting |
| BART mixing | Mean | \(\geq 1\) | 1 | \(K\) | Regression trees |

Table 1: A summary of the three BMM approaches currently implemented in Taweret. Note that \(K\geq 2\). Following the method name and the type of mixing model, the _Number of inputs_ column details the dimensions of the parameter which the mixing weights depend on (e.g., in heavy-ion collisions this is the centrality bin); the _Number of outputs_ details how many observables the models themselves can have to compute the model likelihood (e.g., in heavy-ion collisions this can include charge multiplicities, transverse momentum distributions, transverse momentum fluctuations, etc.); the _Number of models_ column details how many models the mixing method can combine; and the _Weight functions_ column describes the available parameterization of how the mixing weights depend on the input parameter.
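To make Eq. (5) concrete, the following toy sketch evaluates a single weight function built from \(m=3\) depth-one "trees" (stumps). The split points and leaf values are fixed, invented numbers; in the actual method these tree parameters \((T_{j},M_{j})\) are sampled from the BART posterior.

```python
import numpy as np

def stump(x, split, left, right):
    """g(x; T_j, M_j) for a depth-one tree: two leaf values around one split."""
    return np.where(x < split, left, right)

# m = 3 toy trees (split, left leaf, right leaf); invented for illustration.
trees = [(0.3, 0.05, 0.40), (0.6, 0.10, 0.25), (0.8, 0.02, 0.15)]

x = np.linspace(0.0, 1.0, 5)
w = sum(stump(x, s, l, r) for s, l, r in trees)  # Eq. (5): sum of tree outputs
print(w)  # a piecewise-constant weight function, here staying within [0, 1]
```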
This BART-based approach is implemented in C++ with the trees module in Taweret acting as a Python interface. The C++ back-end implements Bayesian tree models and originates from the _Open Bayesian Trees Project_ (OpenBT) [15]. This module serves as an example for how existing code bases can be integrated into the Taweret framework.
### Overview of package structure
Taweret uses abstract base classes to ensure compatibility and uniformity of models and mixing methods. The two base classes are BaseModel and BaseMixer located in the core folder (see Fig. 1 for a schematic); any model mixing method developed with Taweret is required to inherit from these. The former represents physics-based models that may include parameters which need to be determined by Bayesian inference. The latter, BaseMixer, represents a mixing method used to combine the predictions from the physics-based models using Bayesian model mixing.
The design philosophy for Taweret is to make it easy to switch between mixing methods without having to rewrite an analysis script. Thus, the base classes prescribe which functions need to be present so that mixing methods, and in particular the models they call, remain interchangeable. The functions required by BaseModel are
* evaluate: gives a point prediction for the model;
* log_likelihood_elementwise: calculates the log-likelihood, reducing along the last axis of an array if the input array has multiple axes;
* set_prior: sets priors for parameters in the model.
The functions required by BaseMixer are
* evaluate: gives a point prediction for the mixed model given a set of parameters;
* evaluate_weights: gives a point prediction for the weights given a set of parameters;
* map: returns the maximum _a posteriori_ estimate for the parameters of the mixed model (which includes both the weights and any model parameters);
* posterior: returns the chains of the sampled parameters from the mixed model;
* predict: returns the posterior predictive distribution for the mixed model;
* predict_weights: returns the posterior predictive distribution for the model weights;
* prior: returns the prior distributions (typically objects, not arrays);
* prior_predict: returns the prior predictive distribution for the mixed model;
* set_prior: sets the prior distributions for the mixing method;
* train: executes the Bayesian model mixing step.
Following our design philosophy, the general workflow for an analysis using Taweret is described in Fig. 2. From this, one can see three sources of information are generally required for an analysis: a selected mixing method, a model set, and training data. Each of these sources is connected through the training phase, which is where the mixing weights are learned. This leads into the prediction phase, where final predictions are obtained for the overall system and the weight functions. This process is summarized in the code snippet below. This workflow is preserved across the various methods implemented in Taweret and is intended to be maintained for future mixing methods included in this work.
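A minimal, self-contained version of that snippet follows. To keep it runnable on its own, ToyModel and GlobalLinearMixer are stand-ins written for this example: they mimic the BaseModel and BaseMixer interfaces listed above (evaluate, log_likelihood_elementwise, train, predict, predict_weights) rather than being classes shipped with Taweret, and the training data are synthetic.

```python
import numpy as np

class ToyModel:
    """Stand-in for a BaseModel subclass."""
    def __init__(self, slope):
        self.slope = slope

    def evaluate(self, x):
        return self.slope * np.asarray(x, dtype=float)

    def log_likelihood_elementwise(self, x, y, yerr):
        r = (y - self.evaluate(x)) / yerr
        return -0.5 * r**2 - np.log(yerr * np.sqrt(2.0 * np.pi))

class GlobalLinearMixer:
    """Stand-in for a BaseMixer subclass: one global weight w, fit on a grid."""
    def __init__(self, models):
        self.models = models
        self.w = 0.5

    def train(self, x, y, yerr):
        grid = np.linspace(0.0, 1.0, 101)
        f0, f1 = self.models[0].evaluate(x), self.models[1].evaluate(x)
        loglike = [np.sum(-0.5 * ((y - (w * f0 + (1 - w) * f1)) / yerr) ** 2)
                   for w in grid]
        self.w = grid[int(np.argmax(loglike))]

    def predict(self, x):
        return (self.w * self.models[0].evaluate(x)
                + (1 - self.w) * self.models[1].evaluate(x))

    def predict_weights(self, x):
        return np.full(np.shape(x), self.w)

# Training data (the red box in Fig. 2), synthetic for this sketch.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)
y_train = 0.7 * x_train + rng.normal(0.0, 0.05, x_train.size)

mixer = GlobalLinearMixer([ToyModel(1.0), ToyModel(0.0)])  # model set
mixer.train(x_train, y_train, yerr=0.05)                   # training phase
print(mixer.predict(x_train)[:3])                          # mixed prediction
print(mixer.predict_weights(x_train)[:3])                  # learned weight
```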
Figure 1: Diagram depicting the base classes, their methods (functions) and their properties (data).
Extending Taweret with a custom mixing method or model simply requires inheriting from the base classes and implementing the required functions.
## IV Taweret moving forward
There are certainly many improvements that can be made to Taweret. An obvious one is a generalization of the bivariate linear mixing to the mixing of an arbitrary number of models at the density level. Complementary to this density mixing method would be a stochastic, mean-mixing method for an arbitrary number \(K\) of models. An extension of the Multivariate Mixing method to multi-dimensional input and output spaces, correlated models, as well as calibration during mixing, is anticipated in future releases. Lastly, to facilitate the utilization of this growing framework, we hope to enable continuous integration routines for contributors and create Docker images that will run Taweret.
## V Disclosure statement
The authors of this work are not aware of any conflicts of interest that would affect this research.
## VI Acknowledgments
We thank Daniel R. Phillips, Ulrich Heinz, Matt Pratola, Kyle Godbey, Stefan Wild, Sunil Jaiswal, and all other BAND members for crucial feedback and discussion during the development stage of this package. This work is supported by the CSSI program Award OAC-2004601 (DL, ACS, JCY). ACS also acknowledges support from the Department of Energy (contract no. DE-FG02-93ER40756).

Figure 2: The general workflow for an analysis using Taweret. (Blue) The user must define each of the \(K\) models as a class inherited from BaseModel. (Green) The user can select an existing mixing method from Taweret (solid) or contribute a new method (dashed). (Purple) The model is trained using a set of training data (red), the model set (blue), and the selected mixing method (green). Predictions and uncertainty quantification follow from the training process.
|
2306.17403 | Modelling repetition in zDM: a single population of repeating fast radio bursts can explain CHIME data | Regardless of whether or not all fast radio bursts (FRBs) repeat, those that do form a population with a distribution of rates. This work considers a power-law model of this population, with rate distribution $\Phi_r \sim R^{\gamma_r}$ between $R_{\rm min}$ and $R_{\rm max}$. The zDM code is used to model the probability of detecting this population as either apparently once-off or repeat events as a function of redshift, $z$, and dispersion measure, DM. I demonstrate that in the nearby Universe, repeating sources can contribute significantly to the total burst rate. This causes an apparent deficit in the total number of observed sources (once-off and repeaters) relative to the distant Universe that will cause a bias in FRB population models. Thus instruments with long exposure times should explicitly take repetition into account when fitting the FRB population. I then fit data from The Canadian Hydrogen Intensity Mapping Experiment (CHIME). The relative number of repeat and apparently once-off FRBs, and their DM, declination, and burst rate distributions, can be well-explained by 50--100\% of CHIME single FRBs being due to repeaters, with $R_{\rm max} > 0.75$ day$^{-1}$ above $10^{39}$ erg, and ${\gamma_r} = -2.2_{-0.8}^{+0.6}$. This result is surprisingly consistent with follow-up studies of FRBs detected by the Australian Square Kilometre Array Pathfinder (ASKAP). Thus the evidence suggests that CHIME and ASKAP view the same repeating FRB population, which is responsible not just for repeating FRBs, but the majority of apparently once-off bursts. For greater quantitative accuracy, non-Poissonian arrival times, second-order effects in the CHIME response, and a simultaneous fit to the total FRB population parameters, should be treated in more detail in future studies. | C. W. James | 2023-06-30T05:14:33Z | http://arxiv.org/abs/2306.17403v2 | Modelling repetition in zDM: a single population of repeating fast radio bursts can explain CHIME data
###### Abstract
Regardless of whether or not all fast radio bursts (FRBs) repeat, those that do form a population with a distribution of rates. This work considers a power-law model of this population, with rate distribution \(\Phi_{r}\sim R^{\gamma_{r}}\) between \(R_{\rm min}\) and \(R_{\rm max}\). The zDM code is used to model the probability of detecting this population as either apparently once-off or repeat events as a function of redshift, \(z\), and dispersion measure, DM. I demonstrate that in the nearby Universe, repeating sources can contribute significantly to the total burst rate. This causes an apparent deficit in the total number of observed sources (once-off and repeaters) relative to the distant Universe that will cause a bias in FRB population models. Thus instruments with long exposure times should explicitly take repetition into account when fitting the FRB population.
I then fit data from the Canadian Hydrogen Intensity Mapping Experiment (CHIME). The relative number of repeat and apparently once-off FRBs, and their DM, declination, and burst rate distributions, can be well-explained by 50-100% of CHIME single FRBs being due to repeaters, with \(R_{\rm max}>0.75\,{\rm day}^{-1}\) above \(10^{39}\) erg, and \(\gamma_{r}=-2.2^{+0.6}_{-0.8}\). This result is surprisingly consistent with follow-up studies of FRBs detected by the Australian Square Kilometre Array Pathfinder (ASKAP). Thus the evidence suggests that CHIME and ASKAP view the same repeating FRB population, which is responsible not just for repeating FRBs, but the majority of apparently once-off bursts.
For greater quantitative accuracy, non-Poissonian arrival times, second-order effects in the CHIME response, and a simultaneous fit to the total FRB population parameters, should be treated in more detail in future studies.
Keywords: radio transient sources (2008), astronomy data modeling (1859)
## 1 Introduction
Of the many mysteries surrounding fast radio bursts (FRBs; millisecond-duration radio signals arising from cosmological distances; Lorimer et al., 2007; Thornton et al., 2013), the question of whether or not they all repeat remains one of the greatest (e.g. Caleb et al., 2019). Since the discovery of the first repeating FRB, FRB 20121102A (Spitler et al., 2016), its localisation to a dwarf galaxy and association with a persistent radio source (PRS; Tendulkar et al., 2017), combined with its complex time-frequency burst structure (Hessels et al., 2019), have distinguished it from populations of apparently once-off FRBs, which arise from a plethora of galaxy types (Bhandari et al., 2020; Heintz et al., 2020; Bhandari et al., 2022; Gordon et al., 2023), and are more likely to exhibit broadband, single-component morphologies (Pleunis et al., 2021). Further discoveries of repeating FRBs have somewhat muddled this simple dichotomy, however -- while FRB 20190520B is similar in both its bursts and its host galaxy to FRB 20121102A (Niu et al., 2022), FRB 20180916.J0158+65 is located in a large spiral galaxy, close to -- but offset from -- a star-forming region (Marcote et al., 2020), while FRB 20200120E arises from a globular cluster (Bhardwaj et al., 2021; Kirsten et al., 2022), which, as is typical for such clusters, has no apparent star-forming activity.
Models of the FRB population paint a similarly ambiguous picture. Very early FRB data was shown to be consistent with all FRBs being similar to FRB 20121102A (Lu and Kumar, 2016). As more data became available, Caleb et al. (2019) was able to rule out that all FRBs could repeat as rapidly as FRB 20121102A, while James (2019) showed that the number of strong repeaters similar to FRB 20121102A must be at most 27 Gpc\({}^{-3}\) with 90% confidence. More recently, Gardenier et al. (2021) used frbpoppy (Gardenier et al., 2019) to match a population of repeating FRBs to the dispersion measure (DM) distribution from the Canadian Hydrogen Intensity Mapping Experiment (CHIME; CHIME/FRB Collaboration et al., 2018). They confirm that for a given population of repeating FRBs, those in the nearby Universe will preferentially be detected as repeaters, while those in the distant Universe will more likely be viewed as once-off bursts. This effect is qualitatively present in the CHIME data: repeaters at low DM have a higher rate (as noted by R. Good at the FRB 2021 online conference, using data from CHIME/FRB Collaboration et al., 2021), and the mean DM of repeaters is lower than that of once-off bursts (The CHIME/FRB Collaboration et al., 2023). However, no quantitative analysis has attempted to fit this effect, and an alternative explanation is that this is evidence for repeating FRBs being a distinct population.
Results from both host galaxy and population modelling therefore indicate that FRB progenitors either come from multiple populations, or, if they do all repeat, must have a broad distribution of repetition rates. The different morphologies observed for repeating and once-off bursts (Pleunis et al., 2021), and the PRSs associated with some of the brightest and most rapidly repeating objects (Chatterjee et al., 2017; Niu et al., 2022), are consistent with both pictures, since it is not implausible that PRSs dissipate and pulse morphologies change as the progenitor ages. Even if some FRBs arise from intrinsically cataclysmic events, such as black hole formation following a binary neutron star merger (Zhang, 2014; Falcke & Rezzolla, 2014; Moroianu et al., 2023), it should be expected that the portion of the population that does intrinsically repeat has a broad distribution of properties. And from an observational or modelling perspective, there is no practical difference between FRBs which repeat very rarely, and those that are intrinsically once-off. This motivates studies which attempt to fit the properties of an intrinsically repeating FRB population to observational data.
FRB observations however give a biased picture of the true underlying FRB population. This is particularly the case with repeaters, which make better targets for host galaxy identification -- of the 492 distinct FRBs published by CHIME/FRB Collaboration et al. (2021), three repeaters have been localised to their host galaxies (Marcote et al., 2020; Kirsten et al., 2022; Bhardwaj et al., 2021), with a further repeater since identified and localised (Fong et al., 2021), while only a single once-off CHIME FRB has a tentative host galaxy identification (Panther et al., 2022). Furthermore, once trends in FRB behaviour become identified, these result in preferential targeting with follow-up observations, e.g. the identification of FRB 20180301A as a repeater from its time-frequency structure (Price et al., 2019; Luo et al., 2020), or the targeting of FRB 20121102A during its identified activity phase (Rajwade et al., 2020; Li et al., 2021). Thus once a trend is identified, follow-up observations will naturally reinforce them. It is important to note that, to date, only CHIME has detected FRBs to repeat in an unbiased manner (CHIME/FRB Collaboration et al., 2019; Fonseca et al., 2020) -- all other identifications have been through targeted follow-up observations.
Observational biases thus limit the use of repeating FRBs in FRB population analyses, which model their cosmological source evolution, burst energy distribution, spectral properties, and local/cosmological/host DM contributions (e.g. Luo et al., 2020; James et al., 2022). When they are included, only the first burst of a repeating FRB tends to be used (e.g. Shin et al., 2022), in order to make the analysis as insensitive to their repeating nature as possible.
I am aware of only one work which fits the intrinsic properties of the repeating FRB population. James et al. (2020) use results from follow-up observations of CRAFT FRBs with Murriyang (Parkes) and the Robert C. Byrd Green Bank Telescope (GBT): assuming a power-law distribution of FRB repetition rates \(R\), \(dN(R)/dR\propto R^{\gamma_{r}}\), the authors placed limits on the power-law index \(\gamma_{r}\sim-2\) and the maximum repetition rate \(R_{\rm max}\). The data fit however incorporated only one observed repeater, FRB 20171019A (Shannon et al., 2018; Kumar et al., 2019), and 19 non-repeaters, in particular FRB 20171020A (Mahony et al., 2018), the proximity of which rules out repetition rates greater than \(0.011\) day\({}^{-1}\) above \(10^{39}\) erg (James et al., 2020; Lee-Waddell et al., 2023). Another work, Law et al. (2022), uses CHIME data to model the mean apparent (i.e. not intrinsic) repetition rate, finding a rate of 25-440 yr\({}^{-1}\). The lower limit was calculated by assuming that only sources observed to repeat were true repeaters, while the upper limit assumed singly observed FRBs were also due to repeaters. Interestingly, the authors note the potential of using the population of PRS to constrain the repeating FRB population, and vice versa.
The aim of this paper is to incorporate a model for repeating FRBs into the framework of the zDM code (James et al., 2021, 2022), as described in §2. Example FRB populations are then used to estimate the biasing effects of FRB repetition on the z-DM distribution of the FRB population in §3. In §4, a model of CHIME is described, and in §5, it is shown how the model compares to CHIME FRBs from Catalogue 1 (CHIME/FRB Collaboration et al., 2021). In §6, a fit is performed of FRB repetition parameters to CHIME data, and the results are used to make predictions for future experiments in §7. Potential systematic effects are discussed in §8. The results are compared to those from FRB follow-up observations with the Australian Square Kilometre Array Pathfinder (ASKAP), and estimates of the number density of PRS, in §9; findings are summarised in §10.
Throughout, I use the nomenclature of "burst" to refer to a single FRB (which nonetheless may have multiple components, typically on ms or sub-ms scales); while "FRB" or "progenitor" refers to the progenitor objects, such that a single repeating FRB emits multiple bursts.
A standard Planck cosmology (Planck Collaboration et al., 2020) with \(H_{0}=67.4\,\rm km\,s^{-1}\,Mpc^{-1}\) is used throughout.
## 2 Modelling repeating FRBs
I begin by characterising repeating FRBs solely by their time-averaged burst rate \(R\), which is defined as their intrinsic rest-frame rate of producing bursts above \(10^{39}\) erg. All FRBs are treated as repeating according to a Poissonian distribution (see §8.1 for a discussion of this assumption), such that the probability of them producing \(N\) observed bursts given an expectation \(\lambda=R_{\rm obs}\,T\) (for some time interval \(T\)) is
\[P(N) = \frac{\lambda^{N}\exp(-\lambda)}{N!}. \tag{1}\]
It is assumed that each repeating FRB has an identical energy distribution, with cumulative probability described by an upper incomplete Gamma (Schechter) function. Thus the observable rate \(R_{\rm obs}\) above some energy \(E_{\rm th}\) (itself a function of \(z\), DM, and position in the telescope beam) scales as
\[R_{\rm obs}(E_{\rm th})=\frac{R}{1+z}\frac{\int_{E_{\rm th}}^{\infty}(E/E_{\rm max })^{\gamma}e^{-E/E_{\rm max}}\,dE}{\int_{10^{39}}^{\infty}(E/E_{\rm max})^{ \gamma}e^{-E/E_{\rm max}}\,dE}, \tag{2}\]
where \(\gamma\) is the cumulative power-law index, and \(E_{\rm max}\) some cut-off energy.
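As a numerical sketch of how Eqs. (1) and (2) combine, the snippet below evaluates \(R_{\rm obs}\) for a single progenitor and the resulting Poisson probabilities of it being missed, appearing as a once-off FRB, or appearing as a repeater. All parameter values (rate, redshift, exposure, threshold, \(E_{\rm max}\)) are invented for illustration, and the energy integrals are evaluated numerically.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative values only: a repeater with R = 4 / day above 1e39 erg,
# observed for one day at an effective threshold of E_th = 1e40 erg.
gamma_idx, E_max = -1.0, 10**41.5          # cumulative index and cut-off (erg)
R, z, T_obs, E_th = 4.0, 0.2, 1.0, 1e40    # day^-1, -, days, erg

def tail(E_lo):
    """The integral in Eq. (2); the E_max prefactor cancels in the ratio."""
    x = E_lo / E_max
    return quad(lambda u: u**gamma_idx * np.exp(-u), x, np.inf)[0]

R_obs = R / (1 + z) * tail(E_th) / tail(1e39)   # Eq. (2), bursts per day

lam = R_obs * T_obs                   # Poisson expectation in Eq. (1)
p_none = np.exp(-lam)                 # P(0): progenitor never detected
p_single = lam * np.exp(-lam)         # P(1): appears as a once-off FRB
p_repeat = 1.0 - p_none - p_single    # P(>=2): appears as a repeater
print(R_obs, p_none, p_single, p_repeat)
```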
The population density of repeating FRBs with repetition rates above some rate \(R\), \(\Phi_{r}(R)\) (progenitors Mpc\({}^{-3}\)), is modelled via a power-law distribution of intrinsic repetition rates,

\[\Phi_{r}(R)=\begin{cases}C_{r}&(R<R_{\rm min})\\ 0&(R>R_{\rm max})\\ C_{r}\,\frac{\left(\frac{R}{R_{\rm min}}\right)^{\gamma_{r}+1}-\left(\frac{R_{\rm max}}{R_{\rm min}}\right)^{\gamma_{r}+1}}{1-\left(\frac{R_{\rm max}}{R_{\rm min}}\right)^{\gamma_{r}+1}}&\text{otherwise,}\end{cases}\tag{3}\]
between minimum and maximum rates \(R_{\rm min}\) and \(R_{\rm max}\) respectively, with differential power-law index \(\gamma_{r}\), and constant population density \(C_{r}\). It is more common to work with the differential rate, defined as

\[\frac{d\Phi_{r}(R)}{dR}=C_{r}^{\prime}R^{\gamma_{r}},\qquad C_{r}^{\prime}=\frac{(\gamma_{r}+1)\,C_{r}}{R_{\rm min}^{\gamma_{r}+1}-R_{\rm max}^{\gamma_{r}+1}}.\tag{4}\]
In this model, the total burst density above \(10^{39}\) erg, \(C\) (bursts Mpc\({}^{-3}\) yr\({}^{-1}\)), is related to the population of repeaters via
\[C=\int_{R_{\rm min}}^{R_{\rm max}}R\,C_{r}^{\prime}R^{\gamma_{r}}\,dR=\frac{C_{r}^{\prime}}{\gamma_{r}+2}\left[R_{\rm max}^{\gamma_{r}+2}-R_{\rm min}^{\gamma_{r}+2}\right].\tag{5}\]
As per previous work, both \(C\) and \(C_{r}\) are defined as the number of bursts and FRB progenitors respectively observed at 1.3 GHz at \(z=0\). Both are treated as evolving with the star-formation rate, such that \(C_{r}\) evolves as
\[C_{r}(z) = C_{r}\left(\frac{\rm SFR(z)}{\rm SFR(z=0)}\right)^{n_{\rm eff}}. \tag{6}\]
\(n_{\rm eff}\) is used to scale smoothly between a non-evolving population \((n_{\rm eff}=0)\), one evolving with the star-formation rate \((n_{\rm eff}=1)\), and a population with a stronger peak at high redshift, such as AGN activity \((n_{\rm eff}>1)\). In the case of \(C\), an additional factor of \((1+z)^{-1}\) is included on the right-hand side of (6), due to the time-dilation effect. However, the number of FRB progenitors is unaffected by time dilation; rather, the time-dilation effect is included when scaling between the rate seen by an observer and the intrinsic rate.
### Implementation in the zDM code
The zDM code was originally developed for the analysis of James et al. (2022b), and has since been extended as per James et al. (2022c) and Baptista et al. (2023). It calculates the expected number of FRBs from an FRB survey as a function of their redshift \(z\), dispersion measure DM, and relative fluence \(F\) compared to threshold fluence \(F_{\rm th}\). The key parameters of the code, and their current best-fit values, are given in Table (1).
For this analysis, only the measured values of \(z\) and DM for an FRB are considered -- unlike previous works, where signal-to-noise ratio (SNR) is also used. I add an additional observable: whether or not an FRB is observed as a repeater. This is treated as a boolean, i.e. the number of observed repeats is not used. Furthermore, only repetition information obtained through initial blind surveys is considered, i.e. this analysis is not suited to modelling FRBs determined to repeat or not through targeted follow-up observations. While CHIME remains the only FRB instrument to observe repeat bursts by this definition, the non-observation of repeaters in other blind surveys can still be included.
The total number of repeaters in any given z-DM bin is given by:
\[N(z,\rm DM) = C_{r}(z)\frac{dV}{dzd\Omega}\Delta zf_{z}(\rm DM)\Delta DM\Delta\Omega, \tag{7}\]
where \(\Delta\Omega\) is the solid angle observed by the beam, \(\Delta z\) and \(\Delta\)DM are the bin sizes of the z-DM grid, \(dV(dzd\Omega)^{-1}\) is the size of the cosmological volume element, and \(f_{z}(\rm DM)\) is the distribution of dispersion measures of FRBs at that redshift (itself a function of cosmological and host galaxy contributions, as given in James et al., 2022b).
The distribution of intrinsic FRB rates within that volume element is given by (4), which for any given threshold energy \(E_{\rm th}\), produces a distribution in \(R_{\rm obs}\) as per (2). Thus the expected number of once-off FRBs from that volume becomes
\[\left<N_{1}(z,{\rm DM})\right>=\frac{dV}{dz\,d\Omega}\Delta z\,f_{z}({\rm DM})\,\Delta{\rm DM}\,\Delta\Omega\cdot\int_{R_{\rm min}}^{R_{\rm max}}R_{\rm obs}T_{\rm obs}\,e^{-R_{\rm obs}T_{\rm obs}}\,\frac{d\Phi_{r}(R)}{dR}\frac{dR}{dR_{\rm obs}}\,dR_{\rm obs}.\tag{8}\]
Here, I have left off the dependence of \(R\) on \(R_{\rm obs}\), expressed through (2). Similarly,
\[\left<N_{0}(z,{\rm DM})\right>=\frac{dV}{dz\,d\Omega}\Delta z\,f_{z}({\rm DM})\,\Delta{\rm DM}\,\Delta\Omega\cdot\int_{R_{\rm min}}^{R_{\rm max}}e^{-R_{\rm obs}T_{\rm obs}}\,\frac{d\Phi_{r}(R)}{dR}\frac{dR}{dR_{\rm obs}}\,dR_{\rm obs}\tag{9}\]
calculates the number of progenitors for which no bursts are detected. Thus, the expected number of repeating FRBs can be deduced as
\[\left<N_{\rm reps}\right> = N-\left<N_{1}\right>-\left<N_{0}\right>. \tag{10}\]
This therefore allows a likelihood to be assigned to both once-off and repeating FRBs as a function of redshift \(z\) and dispersion measure DM.
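The structure of Eqs. (8)-(10) can be illustrated with a short numerical sketch for a single (z, DM) cell, replacing the volumetric prefactor with an invented progenitor count and assuming one constant scaling between \(R\) and \(R_{\rm obs}\):

```python
import numpy as np
from scipy.integrate import trapezoid

# Per-cell sketch of Eqs. (8)-(10). All numbers are illustrative, not
# fitted values: N progenitors in the cell, a power-law rate distribution,
# and a single R -> R_obs scaling constant for the whole cell.
N, gamma_r = 1000.0, -2.2
R_min, R_max, T_obs = 1e-3, 10.0, 30.0   # rates in day^-1, exposure in days
c_obs = 0.3                              # R_obs = c_obs * R within this cell

R = np.logspace(np.log10(R_min), np.log10(R_max), 2000)
phi = R**gamma_r                         # dPhi_r/dR, Eq. (4), unnormalised
phi /= trapezoid(phi, R)                 # normalise to one progenitor

lam = c_obs * R * T_obs                  # Poisson expectation per progenitor
N0 = N * trapezoid(np.exp(-lam) * phi, R)          # Eq. (9): never detected
N1 = N * trapezoid(lam * np.exp(-lam) * phi, R)    # Eq. (8): single bursts
N_reps = N - N1 - N0                               # Eq. (10): repeaters
print(N0, N1, N_reps)
```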
## 3 The effects of repetition on FRB redshift distributions
To qualitatively illustrate how the presence of FRB repetition affects the measured FRB redshift distribution, I perform example calculations for the CRAFT incoherent sum (ICS; Bannister et al., 2019) survey parameters as described in James et al. (2022b). Since this observing mode is mostly commensal, total time per field \(T_{\rm f}\) to the end of 2022 has been at most \(\sim\)37 days (Shannon et al., in prep.). For illustrative purposes however, \(T_{\rm f}\) is varied between 10, 100, and 1000 days, and a central frequency of 1.3 GHz is used.
For repeating FRB parameters, I consider two scenarios. The first is strong repeaters only, setting \(R_{\rm max}=R_{\rm min}=4\,{\rm day}^{-1}\) (thus making \(\gamma_{r}\) irrelevant), which is approximately the
time-averaged rate observed above \(10^{39}\) erg for FRB 20121102A by Li et al. (2021), divided by four to account for the off part of the activity cycle (Rajwade et al., 2020); this is also consistent with the time-averaged rate observed by Law et al. (2017). The second is a more realistic scenario, with a distribution of repeaters with \(R_{\rm max}=10\,{\rm day}^{-1}\) (i.e. slightly above FRB 20121102A), \(R_{\rm min}=10^{-3}\,{\rm day}^{-1}\) (slightly below FRB 20171020A) and an index of \(\gamma_{\rm r}=-2.2\), which is consistent with the results of James et al. (2020).
For the remaining properties of the FRB population, note that most previous models have analysed only once-off FRBs, or alternatively, included only the first burst of repeating FRBs. Several authors have derived values constraining (2), typically obtaining a cumulative fluence index \(\gamma\sim-1\), \(E_{\rm max}\sim 10^{41-42}\) erg, with \(\mathcal{O}(10^{5})\) FRBs Gpc\({}^{-3}\) yr\({}^{-1}\) in the local Universe, and finding no evidence for a minimum energy (Luo et al., 2020; James et al., 2022, 2022). For now, the best-fit values from James et al. (2022) -- given in Table (1) -- are used, and other possibilities are considered in §5.
The results for strong and distributed repeaters are shown in Figure (1), in units of expected bursts per day. The total number of expected detected bursts is identical in all scenarios. However, all other measurable quantities depend on \(T_{\rm f}\) and the nature of the repeating FRB population. In all scenarios, intrinsically repeating FRBs (dotted lines) are more likely to be detected as such in the nearby Universe, which is the expected result. Necessarily, this decreases the expected number of once-off bursts in the local Universe (thin solid lines), since each FRB detected to repeat removes its first detected burst from the once-off distribution. The standard method of including repeating FRBs in population models, i.e. counting them only once using the first measured burst and adding apparently once-off bursts, is shown as "Total progenitors" (dashed lines). This method still results in a deficit of the modelled population in the local Universe compared to the total burst population. Only when including all bursts from measured repeaters (dot-dashed lines) as well as single bursts will the bias against low-redshift FRBs disappear.
The effect of \(T_{\rm f}\) is evident from Figure 1. As \(T_{\rm f}\) increases (shown via changing colour), more intrinsically repeating FRBs in the nearby Universe are detected with multiple bursts. Thus, all distributions are pushed to higher \(z\), with repeating FRBs taking an increasingly large fraction of the measured population. For the distributed repeaters scenario, this effect is small, and even after \(1000\,{\rm days}\) on a single field, by far the majority of FRBs are detected as once-off bursts, though the redshift peak for single bursts has shifted from 0.19 to 0.26 (with the unbiased peak for all bursts being at 0.15). For strong repeaters however, this effect is very important, with single bursts peaking at \(z=\)0.41, 0.67, and 0.95 respectively -- and even repeating sources peaking at higher redshifts than the true underlying total burst population. This latter effect is due to the small number of repeaters at low redshift being expected to produce a very large number of bursts, whereas for the distributed repeaters scenario, repetition remains dominated by rarely repeating objects at low redshift.
Figure 1.— Effects of the repeating FRB population on the redshift distribution of FRBs expected from the CRAFT/ICS survey. Top: repeating FRBs with a broad distribution of rates; bottom: repeating FRBs identical to FRB 20121102A; observation times \(T_{f}\), with lines appearing from left to right, are 10 days (red), 100 days (green), 1000 days (blue).
Figure 2.— Burst statistics from 100 Monte Carlo simulations of an ASKAP/ICS 100 day pointing. Shown are the expected number of bursts \(\langle N_{\rm bursts}\rangle\), 90% upper limit, and standard deviation normalised by the square root of \(\langle N_{\rm bursts}\rangle\), histogrammed as a function of redshift.
Repeating FRBs also introduce significant cosmic variance into FRB surveys. Since the volume of the nearby Universe is small, there is a small chance for a strongly repeating FRB to be located in that volume -- however, if there is, many bursts will be detected from it due to its proximity. To illustrate this effect, a Monte Carlo sample of 100 instances of a 100-day ASKAP/ICS observation is generated, in a Universe consisting only of FRBs as strong as FRB 20121102A. The mean number of bursts detected, \(\left<N_{\text{bursts}}\right>\), is shown as a function of redshift in Figure 2. As expected, it follows the shape of Figure 1. For a Poisson distribution, the standard deviation \(\sigma\) would be expected to scale with \(\left<N_{\text{bursts}}\right>^{0.5}\) -- therefore, I normalise \(\sigma\) by this value. At high redshift, it tends towards this expected value. However, this variance increases rapidly at low redshift. The number of bursts in the 90% upper limit over all \(100\) simulations increases even more rapidly. The most extreme example of this variance is that in 99 of the \(100\) simulations, no bursts were simulated in the redshift interval \(0\leq z\leq 0.1\) -- however, in a single instance, 86 bursts from a single repeating FRB were simulated.
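The compound-Poisson origin of this variance can be reproduced in a few lines: Poisson-sample the number of strong repeaters in a nearby redshift shell, then Poisson-sample the bursts each produces. The expected source count and per-source burst numbers below are invented toy values, not the survey values used above.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100
mean_repeaters = 0.05      # expected progenitors in a nearby shell (toy value)
bursts_per_source = 40.0   # expected detected bursts per such progenitor (toy)

n_sources = rng.poisson(mean_repeaters, n_trials)
n_bursts = np.array([rng.poisson(bursts_per_source * k) for k in n_sources])

# Most realisations contain zero bursts; the rare realisation that hosts a
# source dominates the mean, the variance, and the upper percentiles.
print(n_bursts.mean(), n_bursts.std(), np.percentile(n_bursts, 90), n_bursts.max())
```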
This presents a problem for population modelling. The requirement to include all bursts from repeating FRBs to avoid a bias will result in large stochastic fluctuations in population statistics, dominated by the small population of strongly repeating FRBs. The choice to do so, or not, represents a trade-off between bias and accuracy. I therefore proceed with the approach of modelling single bursts, and the number of repeaters, in this work, and do not directly model the number of bursts per repeater (though the distribution is fit via Monte Carlo in §6.3). This results in zero bias, and only a small reduction in accuracy.
## 4 Modelling CHIME Catalogue 1 data
The CHIME/FRB experiment is described by CHIME/FRB Collaboration et al. (2018). Not only does CHIME have the largest published sample of both repeating and non-repeating FRBs (CHIME/FRB Collaboration et al., 2021), but since CHIME cannot be re-pointed to target specific sources, CHIME FRBs are detected in an unbiased manner. The downside however is that CHIME's angular resolution is relatively poor, so that the vast majority of FRBs remain unlocalised. This makes CHIME data ideal for modelling the repeater vs non-repeater fraction, even if it is difficult to use it to model the FRB distribution in z-DM space. The CHIME experiment is modelled as described below.
### Data sample
Data is taken from CHIME/FRB Collaboration et al. (2021) (hereafter, 'Cat1'), consisting of 536 FRBs. Nominally, this is divided into 474 once-off bursts, and 62 bursts from 18 repeaters. However, two repeaters -- FRB 20190417A, and FRB 20181119D from FRB 20121102A -- are only identified as such from observations external to Cat1. Thus it is more proper to say that Cat1 identifies 16 repeaters and 476 non-repeaters, which is the statistic used in this work.
CHIME have also recently published a search for repeating FRBs in an updated data-set, announcing the discovery of 25 new sources based on coincidences in DM-localisation space (The CHIME/FRB Collaboration et al., 2023). I denote this the 'Gold25' sample, and discuss it in detail -- including why it is not used for fitting -- in §8.3.
The analysis in Cat1 used three data quality cuts which are not implemented here, for the following reasons:
1. Cut removing events with SNR < 12. This is implemented due to human inspection falsely rejecting low-SNR events. Since this work is primarily concerned with the ratio of repeaters to non-repeaters, rather than absolute FRB numbers, such a cut is not used. Repetitions were searched for at a slightly lower (but undocumented) threshold than initial bursts (CHIME/FRB Collaboration et al., 2021; see §8.3 for further discussion of this effect), and using the SNR \(\geq\) 12 cut would eliminate this effect. However, implementing it would leave only 7 of 16 repeating FRBs. I believe that the resulting loss of precision would be worse than the associated systematic error.
2. Cut removing events with DM < 1.5 max(DM\({}_{\rm NE2001}\), DM\({}_{\rm YMW16}\)). Here, such events are retained -- it is true that DM\({}_{\rm MW}\) is poorly known, but this also means that the effect of such a cut is equally unknown.
3. Far-sidelobe events. These will have poor localisation, and are identified using the FRB spectrum. However, this work is not concerned with localisation accuracy, but rather the number of FRBs detected, for which precise sidelobe details are not as important. Thus such a cut is not implemented.
In the following analysis, the declination \(\delta\) of each FRB, its DM, and whether or not it has been observed to repeat are used.
### Beamshape
The CHIME beamshape is both unique and complex, with a broad primary beam pattern covering \(60^{\circ}\) in the N-S direction and \(1.3^{\circ}\)-\(2.5^{\circ}\) in the E-W direction (CHIME/FRB Collaboration et al., 2018). Within this envelope are 1024 coherently formed beams used for FRB detection, with full-width, half-max (FWHM) of \(20^{\prime}\)-\(40^{\prime}\). I use the frequency-dependent beamshape given in CHIME/FRB Collaboration et al. (2021), averaged over XX and YY polarisations, and implemented in the GitHub library chime-frb-beam-model.4 This is sampled at 16 frequencies in a \(300\times 1000\) grid in RA, DEC, taking points within \(8^{\circ}\) of the meridian. The final beamshape is taken as the envelope over all \(1024\) formed beams after averaging over frequency. I discuss an alternative method in Appendix 1.
Footnote 4: [https://github.com/chime-frb-open-data/chime-frb-beam-model](https://github.com/chime-frb-open-data/chime-frb-beam-model)
In James et al. (2020), a telescope's beam pattern on sky is described via the 'inverse beamshape', \(\Omega(B)\), being the solid angle of sky viewed at any given sensitivity. The total time observing that solid angle is then a simple scaling constant between rate and the number of observed FRBs. When considering the response of a transit instrument such as CHIME
to repeating FRBs, the relevant metric is \(T(B)\), being the time spent observing a given sky position at beam sensitivity \(B\) (relative to the nominal sensitivity at beam centre, where \(B\) = 1). For a source at a given declination, \(T(B)\) can be calculated from the RA-dependence of the beam pattern -- this is similar to the approach used by Gardenier et al. (2021).
#### 4.2.1 Declination dependence
I calculate the declination dependence of CHIME's beamshape by calculating \(T(B)\) for \(1000\) declinations between \(-10^{\circ}\) and \(+90^{\circ}\). The effective exposure, \(T_{\text{eff}}\), is defined as the sum of the times \(T_{i}\) spent observing the source, weighted by beam sensitivity \(B\), assuming a Euclidean distribution of event rates:
\[T_{\text{eff}}=\sum_{i}T_{i}B_{i}^{1.5}.\tag{11}\]
It is calculated for each declination as above, using discrete samples of source position (hence a sum, rather than an integral, in (11)), and is shown in Figure (3). The oscillatory behaviour is due to the 256 rows of tied beams, with sources passing over beam centre having significantly more exposure than those passing between beams. Also plotted is the CHIME exposure, taken from CHIME/FRB Collaboration et al. (2021), defined as the time in which a source is within the full-width half-max (FWHM) of a tied beam at \(600\,\mathrm{MHz}\). This is used to normalise the total effective observation time to 311 days by using CHIME's simulation of the \(600\,\mathrm{MHz}\) tied beam, measuring the time a source spends within the FWHM, and fitting to the published CHIME exposure. This total effective time is less than the 342 day duration of the catalog, which is expected when allowing for equipment down-time. Since CHIME's total exposure is a function only of the tied beamshape, it is therefore significantly greater than \(T_{\text{eff}}\) at low declinations where the primary beam is less sensitive.
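A toy version of the calculation in Eq. (11), substituting an invented Gaussian transit response for the simulated CHIME beam, is:

```python
import numpy as np

# Sketch of Eq. (11) for one source: sum discrete exposure samples T_i,
# weighted by beam sensitivity B_i^1.5. The Gaussian transit profile is a
# toy stand-in for the simulated CHIME beam response.
dt = 10.0 / 86400.0                     # 10 s sampling interval, in days
t = np.arange(-0.02, 0.02, dt)          # time around transit (days)
B = np.exp(-0.5 * (t / 0.004) ** 2)     # toy relative sensitivity, B <= 1

T_eff = np.sum(dt * B**1.5)             # Eq. (11), in days per transit
print(T_eff)
```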
For calculation purposes, declination is divided into six regions, spanning the full [\(-11^{\circ},90^{\circ}\)] range. The bounds of each region are chosen so that the change in exposure between regions is not too large (at most a factor of three), while preserving reasonable statistics in each region. The mean exposures in each region are also shown as histograms in Figure (3).
### Sensitivity
The zDM code by default uses a two-dimensional function of FRB width \(w\) and DM to calculate the effective threshold \(F_{\rm th}\) to an FRB with those properties. An important input into this is the telescope's time- and frequency-resolution used for FRB searches: in the case of CHIME, \(0.983\,\mathrm{ms}\) and \(0.0244\,\mathrm{MHz}\) respectively (CHIME/FRB Collaboration et al., 2021). Integrating over a modelled distribution of intrinsic FRB widths \(w\) (James et al., 2022) and intrinsic scattering measures \(\tau\) (CHIME/FRB Collaboration et al., 2021; James et al., 2022), this produces an efficiency function \(\epsilon\) that acts to decrease the measured SNR or, equivalently, increase the effective detection threshold \(F_{\rm th}\) above a nominal threshold \(F_{0}\) in a DM-dependent manner. Including the effects of the beam \(B\), \(F_{\rm th}\) is given by
\[F_{\rm th}=\frac{F_{0}}{B\,\epsilon(w,\tau,{\rm DM})}.\tag{12}\]
CHIME/FRB Collaboration et al. (2021) extend this to a more complicated selection function, \(P({\rm SNR}|F,{\rm DM},\tau,w,\gamma,r)\), giving SNR as a function of DM, \(\tau\), \(w\), \(F\), and also including spectral shape parameters \(\gamma\) and \(r\). This is based on a sophisticated pulse injection system (Merryfield et al., 2022), and thus accounts for not only the dedispersion code, bonsai, but also the full detection pipeline, including RFI rejection. The full selection function \(P\) is not published, although it can presumably be inferred from the published library of injected pulses. Rather, one-dimensional selection functions, integrated over all other variables and modelled population probability distributions in those variables, are given.
The DM selection function, \(s({\rm DM})\), gives the relative fraction of FRBs passing selection cuts as a function of DM, which is precisely what is required to model CHIME's DM distribution. To use CHIME's bias selection function, it is fit with a 4\({}^{\rm th}\)-order polynomial as shown in Figure (4). It is then corrected to a peak value of unity, such that \(N_{\rm obs}\leq N_{\rm true}\), and converted to a modifier to the measured burst SNR by assuming a Euclidean relationship between event number and SNR, i.e. SNR\({}_{\rm bias}\sim\hat{s}^{2/3}({\rm DM})\). This is compared to the efficiency function \(\epsilon\) produced by the zDM code, normalised such that the two efficiencies are equal at a DM of 1500 pc cm\({}^{-3}\).

Figure 3: Top: declination-dependent exposure of the CHIME experiment – simulation from this work based on the beamshape described in CHIME/FRB Collaboration et al. (2021) and scaled using a total of 220 days’ observation time, ‘CHIME’ taken directly from CHIME/FRB Collaboration et al. (2021). Bottom: \(T(B)\), calculated from the simulated beam pattern, and averaged over the indicated declination ranges.
There are two clear regimes present in Figure (4). Above 1000 pc cm\({}^{-3}\), the CHIME SNR bias agrees well with that estimated by zDM, which is indicative of sensitivity being fundamentally limited by the time-frequency resolution of the instrument. Below 1000 pc cm\({}^{-3}\), zDM flattens to represent efficient detection, while the efficiency of CHIME decreases. This is likely due to the effects of CHIME's system for mitigating RFI (CHIME/FRB Collaboration et al., 2021; Merryfield et al., 2022).
To estimate the fluence threshold at beam centre and peak DM efficiency, \(F_{0}\), CHIME quotes a 95% completeness threshold of 5 Jy ms, compared to a theoretical minimum detection threshold of \(\sim\) 1 Jy ms (CHIME/FRB Collaboration et al., 2021). The factor of five between these thresholds is comparable to the factor of 6.6 by which the zDM efficiency had to be increased to match the CHIME efficiency in Figure (4). Thus the fitted SNR bias of CHIME's selection function has been implemented in the zDM code, such that it acts in (12) as an efficiency factor; and \(F_{0}\) = 5 Jy ms to a 1 ms burst is used. The value of the beam \(B\) is parameterised according to §4.2 and §4.2.1, so that repeating FRBs in each declination bin are modelled as being exposed to sensitivity thresholds \(F_{\rm th}\) for times \(T_{\rm obs}\).
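The steps of this sensitivity model can be sketched as follows; the tabulated (DM, s) points below are invented placeholders for the published CHIME selection-function values.

```python
import numpy as np

# Fit a 4th-order polynomial to tabulated s(DM) values, renormalise to a
# peak of one, convert it to an SNR bias via the Euclidean s^(2/3) scaling,
# and fold the result into Eq. (12). The (DM, s) points are placeholders.
dm = np.array([100.0, 500.0, 1000.0, 2000.0, 3000.0])
s = np.array([0.4, 0.9, 1.0, 0.6, 0.3])

u = dm / 1000.0                                   # rescale DM for conditioning
coeffs = np.polyfit(u, s, 4)                      # 4th-order polynomial fit
dm_grid = np.linspace(100.0, 3000.0, 500)
s_hat = np.polyval(coeffs, dm_grid / 1000.0)
s_hat = np.clip(s_hat / s_hat.max(), 1e-3, 1.0)   # peak of unity: N_obs <= N_true

snr_bias = s_hat ** (2.0 / 3.0)                   # SNR_bias ~ s^(2/3)(DM)
F0, B = 5.0, 1.0                                  # Jy ms at beam centre; beam value
F_th = F0 / (B * snr_bias)                        # Eq. (12), bias acting as epsilon
print(F_th.min(), F_th.max())
```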
## 5 Preliminary results -- initial comparison
I begin by comparing the results of previous population modelling to CHIME single burst data. While the aim of the present manuscript is to model the relative once-off and repeat burst rates by varying the repeater properties of the population, the properties of the total FRB population -- luminosity function, source evolution etc. -- will be correlated. Furthermore, if a good fit for the single-burst population cannot be obtained, it will be impossible to determine whether goodness of fit for repeat bursts is a function of repeating or total population parameters.
FRB population parameters from Shin et al. (2022) and James et al. (2022c), hereafter S22 and J22c respectively, are considered. In the former case, only the best-fit set is used, since the allowed ranges are very broad. In the latter case, both the best fit, and sets compatible with the 90% upper and lower limits of each parameter, are considered. To obtain these, the parameter in question is first set to its min/max value at 90% confidence, and then a search is performed over all evaluated parameter sets with a similar parameter value for the best-fitting set of other parameters. These parameter sets are listed in Table (1).
Figure 5 shows fits to the single burst population for both the strong repeaters (top) and distributed repeaters (bottom) scenarios, for all considered population parameters. The most striking comparison is that models generally have difficulty fitting the large number of low-DM FRBs observed by CHIME, especially given the strong bias against low-DM events from the CHIME selection function. Performing 1-sample KS-tests (Kolmogorov, 1933; Smirnov, 1948) of the CHIME data against the predicted curves using scipy's 'stats' package, no parameter set provided a good fit to the strong repeater distribution. This is not surprising, since as discussed in §1, other analyses have already ruled out that all FRBs are strong repeaters. In the distributed repeaters scenario, only the minimum value of \(\alpha\) = -1.91 from James et al. (2022c) was compatible, with a p-value of 0.27.
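For reference, the 1-sample KS comparison used here takes the following form, where both the model prediction and the observed DMs are invented placeholders.

```python
import numpy as np
from scipy.stats import kstest

# Build a model CDF from a binned predicted DM distribution, then test the
# observed DMs against it. Both arrays below are toy placeholders.
dm_grid = np.linspace(0.0, 3000.0, 301)
pdf = np.exp(-0.5 * ((dm_grid - 500.0) / 300.0) ** 2)   # toy model prediction
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

model_cdf = lambda x: np.interp(x, dm_grid, cdf)
observed_dm = np.array([210.0, 350.0, 480.0, 610.0, 900.0, 1400.0])

stat, p_value = kstest(observed_dm, model_cdf)
print(stat, p_value)
```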
The fact that the parameters of S22 provide a reasonable fit (p-value of 0.0028), but not the best fit, is a measure of the small but non-negligible systematic differences in the modelling. One possible cause is the use of \(F_{0}\) set at the 95% completeness threshold, which is likely high. Reducing it would make CHIME more sensitive, and push the expected DM distribution to higher values, making it compatible with measurements. The model using the S22 parameter set also predicts a low number of single bursts (131), though reducing the threshold to 2.5 Jy ms to produce the correct number results in a poor fit to the DM distribution.
Another possibility is the complex interaction between burst shape and CHIME response, which is imperfectly captured here with the one-dimensional selection function s(DM), but which S22 model explicitly using injected pulses. A third possibility is that S22 fit to all progenitors (i.e. single and repeat FRBs). However, performing the same comparison here, the difference between the S22 predictions and data becomes greater.
Lastly, I also check that the fitting is not strongly affected by errors in DM\({}_{\rm MW}\) at low Galactic latitudes. Such an error would smear the DM\({}_{\rm EG}\) distribution, creating excess low-DM events. Re-doing the above analysis to include only FRBs
Figure 4: DM bias correction for CHIME data. Shown are values of \(s({\rm DM})\) from CHIME/FRB Collaboration et al. (2021) (red points), a cubic spline fit (blue solid line), the 4\({}^{\rm th}\) order polynomial fit from this work (orange solid line), the renormalised fit (orange dashed line), and implied SNR bias (green dotted line).
with estimated Galactic contributions of less than 50 pc cm\({}^{-3}\) changes the p-values by factors of order two, but this is minor compared to the different predictions between models. Thus this effect is ignored from here on, and the entire sample is used, allowing for greater precision at little cost in accuracy.
I therefore conclude that using the \(\alpha\)'min' parameter set from Table (1) in the zDM code is likely to provide a reasonable fit to the CHIME single burst rate, and hence it is used to fit to the repeating FRB population in the following Section.
## 6 Fitting results
To find a best-fit set of repeating FRB parameters (\(R_{\rm min}\),\(R_{\rm max}\),\(\gamma_{\rm r}\)), I first find the critical value of repetition rate, \(R^{*}\), such that when \(R_{\rm min}=R_{\rm max}=R^{*}\) (and thus the value of \(\gamma_{r}\) is irrelevant), the correct number of repeating FRBs (in this case, 16) is reproduced. If \(R_{\rm min}>R^{*}\), then inevitably the model will produce too many repeating FRBs in the case that all bursts originate from repeaters; if \(R_{\rm max}<R^{*}\), then the model will not produce sufficiently many repeaters.
To illustrate, in Figure (6), \(R^{*}\) is plotted as a function of \(F_{\rm single}\), being the fraction of all apparently once-off CHIME FRBs that are produced by true repeaters. Reducing the number of once-off bursts attributed to repeaters, while keeping the observed number of repeaters in Cat1 constant at 16, means that a higher fraction of true repeaters get detected as such, i.e. they must be stronger. Indeed, for \(F_{\rm single}=0.1\), the average repeater must repeat about 6 times per day above \(10^{39}\) erg. For now, I continue with the case \(F_{\rm single}=1\) (i.e. all FRBs are repeaters), and revisit \(F_{\rm single}<1\) in §6.5.
For any two values in the set (\(R_{\rm min}<R^{*}\),\(R_{\rm max}>R^{*}\),\(\gamma_{\rm r}\)), and fixed \(F_{\rm single}\), the third value can be found such that the number of repeating FRBs observed by CHIME is reproduced exactly. For reasons to do with code optimisation, calculations varying \(\gamma_{r}\) proceed very slowly, so \(\gamma_{r}\) is held fixed to a small number of values. Since reasonable estimates of \(R_{\rm max}\) from observations of strong repeaters exist, it can be constrained to a sensible range. Therefore, I vary \(\gamma_{r}\) and \(R_{\rm max}>R^{*}\), and for each, calculate the value \(R_{\rm min}\).
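A toy version of this step is sketched below; the single-cell survey model (one exposure time, one \(R\to R_{\rm obs}\) scaling, a fixed progenitor count) stands in for the full zDM grid over redshift, DM, declination, and beam sensitivity.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

# For fixed gamma_r and R_max, solve for the R_min at which the predicted
# number of observed repeaters equals the 16 seen in Cat1. All survey
# numbers here are invented toy values.
gamma_r, R_max = -2.2, 10.0
T_obs, c_obs, N_prog = 300.0, 0.05, 3.0e4   # days, R -> R_obs scaling, progenitors

def n_repeaters(R_min):
    R = np.logspace(np.log10(R_min), np.log10(R_max), 4000)
    phi = R**gamma_r
    phi /= trapezoid(phi, R)                          # normalised dPhi_r/dR
    lam = c_obs * R * T_obs                           # Poisson expectation, Eq. (1)
    p_rep = 1.0 - np.exp(-lam) - lam * np.exp(-lam)   # P(N >= 2)
    return N_prog * trapezoid(p_rep * phi, R)

# Bracketed root find: lowering R_min adds weak repeaters that dilute the
# observed-repeater fraction, so n_repeaters is monotonic in R_min.
R_min_fit = brentq(lambda R: n_repeaters(R) - 16.0, 1e-6, 0.99 * R_max)
print(R_min_fit, n_repeaters(R_min_fit))
```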
The resulting values of \(R_{\rm min}\) as a function of \(R_{\rm max}\) and \(\gamma_{r}\) are shown in Figure (7). For steep \(\gamma_{r}\) and for values of \(R_{\rm max}\) not much above \(R^{*}\), \(R_{\rm min}\) must also be close to \(R^{*}\) (here, about 0.02 day\({}^{-1}\)), while for flat \(\gamma_{r}\) and large \(R_{\rm max}\) (the lower part of Figure (7)), \(R_{\rm min}\) must be very small. For very flat \(\gamma_{r}\), the contribution of low-\(R\) repeaters to the apparently once-off burst rate becomes sufficiently negligible that it cannot 'dilute' the number of repeaters observed as such when \(R_{\rm max}\) is large. This excludes the region in the lower right of the figure. For calculation purposes, \(R_{\rm min}\) is set to \(10^{-8}\) day\({}^{-1}\) in this region -- even though this over-produces the number of repeaters, it allows calculations to compare their DM, declination, and repeat rate distributions.
| Scenario | \(E_{\rm max}\)\({}^{a}\) [log\({}_{10}\) erg] | \(\alpha\)\({}^{b}\) | \(\gamma\)\({}^{c}\) | \(n_{\rm eff}\)\({}^{d}\) | \(\log_{10}\mu_{\rm host}\)\({}^{e}\) [pc cm\({}^{-3}\)] | \(\sigma_{\rm host}\)\({}^{f}\) | \(\log_{10}C\)\({}^{g}\) [Gpc\({}^{-3}\) yr\({}^{-1}\)] | \(p_{\rm KS}^{\rm dist}\)\({}^{h}\) | \(p_{\rm KS}^{\rm strong}\)\({}^{i}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Shin et al. (2022) | 41.38 | -1.39 | -1.3 | 0.96 | 1.93 | 0.41 | 4.99 | 0.0028 | \(10^{-12}\) |
| James et al. (2022c) | 41.26 | -1.0 | -0.95 | 1.13 | 2.27 | 0.55 | 4.47 | \(10^{-22}\) | \(10^{-54}\) |
| \(E_{\rm max}\) min | 41.0 | -1.0 | -0.7 | 1.0 | 2.3 | 0.6 | 4.63 | \(10^{-21}\) | \(10^{-38}\) |
| \(E_{\rm max}\) max | 41.8 | -1.0 | -1.1 | 1.25 | 2.2 | 0.5 | 4.57 | \(10^{-30}\) | \(10^{-100}\) |
| \(\alpha\) min | 41.3 | -1.91 | -0.9 | 0.75 | 2.2 | 0.6 | 5.00 | 0.56 | \(10^{-23}\) |
| \(\alpha\) max | 41.3 | 0.24 | -0.9 | 0.75 | 2.2 | 0.6 | 4.94 | \(10^{-7}\) | \(10^{-26}\) |
| \(\gamma\) min | 41.8 | -1.5 | -1.18 | 1.75 | 2.2 | 0.5 | 4.35 | \(10^{-125}\) | \(10^{-138}\) |
| \(\gamma\) max | 41.2 | -1.0 | -0.66 | 1.0 | 2.2 | 0.6 | 4.40 | \(10^{-40}\) | \(10^{-61}\) |
| \(n_{\rm eff}\) min | 41.3 | 0.0 | -0.8 | 0.49 | 2.2 | 0.6 | 4.87 | \(10^{-8}\) | \(10^{-27}\) |
| \(n_{\rm eff}\) max | 41.4 | -1.9 | -1.0 | 1.91 | 2.2 | 0.6 | 4.13 | \(10^{-126}\) | \(10^{-138}\) |
| \(\mu_{\rm host}\) min | 41.5 | -1.0 | -0.9 | 1.25 | 1.98 | 0.6 | 4.40 | \(10^{-60}\) | \(10^{-78}\) |
| \(\mu_{\rm host}\) max | 41.3 | -1.0 | -0.9 | 1.0 | 2.45 | 0.6 | 4.69 | \(10^{-53}\) | \(10^{-74}\) |
| \(\sigma_{\rm host}\) min | 41.5 | -1.0 | -1.0 | 1.25 | 2.2 | 0.43 | 4.56 | \(10^{-54}\) | \(10^{-74}\) |
| \(\sigma_{\rm host}\) max | 41.4 | -1.0 | -0.9 | 1.25 | 2.2 | 0.82 | 4.50 | \(10^{-65}\) | \(10^{-83}\) |

a: Downturn (\(\sim\)'maximum') FRB energy, assuming a 1 GHz rest-frame emission bandwidth.
b: Frequency scaling of rate: \(C\propto\nu^{\alpha}\).
c: Cumulative power-law index of the FRB luminosity function.
d: Scaling of FRB density with star-formation: \(C\propto{\rm SFR}^{n_{\rm eff}}\).
e: Log-mean of host galaxy DM contribution.
f: Log-standard deviation of host galaxy DM contribution.
g: Absolute total burst rate above \(10^{39}\) erg at 1.3 GHz at \(z=0\).
h: p-value from the KS-test assuming the distributed repeaters scenario.
i: p-value from the KS-test assuming the strong repeaters scenario.

Table 1: FRB population parameter sets used in this work. Shown are the best-fit parameter sets from Shin et al. (2022) and James et al. (2022c), and a set of 12 parameter sets from James et al. (2022c) when each parameter takes the minimum/maximum value within its 90% confidence interval. The p-values from KS tests against the observed rate of CHIME single bursts are given for different sets of FRB population parameters.
The lower limit on \(R_{\rm max}\) given by FRB 20180916B is also shown -- at a distance of approximately \(150\,\mathrm{Mpc}\) (Marcote et al., 2020), its observed repetition rate above \(5\,\mathrm{Jy\,ms}\) of \(0.448^{+0.1}_{-0.086}\,\mathrm{hr}^{-1}\) (The CHIME/FRB Collaboration et al., 2023) approximately translates to a rate above \(10^{30}\,\mathrm{erg\,Hz}^{-1}\) (i.e., \(10^{39}\,\mathrm{erg}\) assuming a \(1\,\mathrm{GHz}\) bandwidth) of \(0.5\,\mathrm{day}^{-1}\) when using a cumulative fluence index \(\gamma=-1.5\).
An upper limit on \(R_{\mathrm{min}}\) can be estimated from a low-DM FRB observed by CHIME: FRB 20190518D (CHIME/FRB Collaboration et al., 2021) has a DM of \(202.2\,\mathrm{pc\,cm}^{-3}\), with an estimated contribution by the Milky Way's interstellar medium (ISM) of \(53.7\,\mathrm{pc\,cm}^{-3}\) according to the NE2001 model (Cordes & Lazio, 2002). Conservatively assuming a low combined halo and host DM contribution of \(35\,\mathrm{pc\,cm}^{-3}\), similar to that found for FRB 20200120E (Bhardwaj et al., 2021), leaves \(\mathrm{DM}_{\mathrm{EG}}\approx 202.2-53.7-35\approx 113.5\,\mathrm{pc\,cm}^{-3}\); using a simplistic model of \(z\sim 10^{-3}\,\mathrm{DM}_{\mathrm{EG}}\), this produces an approximate maximum redshift of \(z\sim 0.11\). Again using \(\gamma=-1.5\), this produces an upper limit on \(R_{\mathrm{min}}\) of \(0.05\,\mathrm{day}^{-1}\).
The combination of these three constraints produces the allowed region shown in Figure (7). Note that in all cases, \(R_{\mathrm{min}}\) is below (i.e. compatible with) the limit from FRB 20200120E.
### Dispersion measure distribution
Within the range allowed by Figure (7), the z-DM distribution predicted for each parameter combination will in general be different. To illustrate, I take four scenarios, _a-d_, from the corners of the allowed region. In each case, the predicted DM distribution of repeating FRBs is plotted in Figure (8).
From Figure (8), it can immediately be seen that strong repeaters are more likely to be found at large distances (higher \(\mathrm{DM}_{\mathrm{EG}}\)). Case \(d\) has a significantly higher distribution of DMs compared to the other three, and it is the only case with a significant number of strong repeaters (cases \(a\) and \(b\) have low \(R_{\mathrm{max}}\), while case \(c\) has such a steep \(\gamma_{r}\) that the number of strong repeaters is negligible).

Figure 5: Observed rates of CHIME FRBs, showing sources observed as single and repeating, summed over declination. These are compared to estimates for the number of single bursts using the best-fit results from Shin et al. (2022) (solid, blue) and James et al. (2022c) (dashed, green), and 90% extreme (dotted, grey) values of population parameters from Table 1, and assuming a population of repeating FRBs with ‘strong’ (top) and ‘distributed’ (bottom) repetition rates. Predicted singles rates are normalised to observed singles rates.

Figure 6: Critical value of repetition, \(R^{*}\), as a function of the fraction \(F_{\mathrm{single}}\) of apparently once-off bursts that are attributed to repeaters.

Figure 7: Value of \(R_{\mathrm{min}}\) producing the observed number of 17 repeating FRBs in the CHIME catalog (CHIME/FRB Collaboration et al., 2021) as a function of \(\gamma_{r}\) and \(R_{\mathrm{max}}\). Also shown are limits on \(R_{\mathrm{max}}\) (white dashed) from FRB 20180916B, and the region excluded as producing too many repeaters (orange dot-dash curve). The total ‘allowed’ region is also indicated. Cases a–d used for §6.1 and onwards are indicated in red.
I also compare these DM distributions with those found from the Cat1 and Gold25 samples. While formally the DM distribution of the Gold25 sample is statistically consistent with that of Cat1 (The CHIME/FRB Collaboration et al., 2023), the distribution is biased (see §8.3). That case \(d\) reproduces the Gold25 sample DM distribution well should therefore not be taken as evidence for it.
This comparison highlights a prediction of all repeating FRB models, which is the stronger upward skew of the DM distribution compared to single FRBs. This effect is seen in CHIME data, with two (one) high-DM repeaters in the Cat1 (Gold25) samples.
To quantify agreement in DM space, a KS-test using the Cat1 FRBs and predicted DM distributions over \(R_{\rm max}\), \(\gamma_{r}\) space is performed. The results are shown in Figure (9).
An implicit assumption of the above analysis is that the intrinsic distribution of \(\rm DM_{host}\) is identical regardless of the repetition rate of repeaters. The observation of persistent radio sources at the locations of at least two bright repeaters (Marcote et al., 2017; Niu et al., 2022) suggests that these presumably young objects would be more likely to have a larger \(\rm DM_{host}\), bearing in mind that this term includes material in the vicinity of the progenitor, as well as the host galaxy's ISM and halo contributions. This could then be responsible for observing repeating FRBs (which are on-average intrinsically stronger repeaters) to have slightly more DM than expected, and would not constitute hard evidence against the model. This might be the case for the \(\gamma_{r}\lesssim-2.4\) region of Figure (9), which is slightly disfavoured because it over-predicts the number of low-DM repeating FRBs. However, should observed repeaters have less DM than expected, this is clear evidence to reject the model. This is the case for the already ruled-out lower region of Figure (9), which predicts more high-DM repeating FRBs than observed.
### Declination distribution
The declination distribution of CHIME FRBs also holds information that allows us to discriminate between scenarios. For repeating FRBs, the difference between observing a small patch of the sky around the North Celestial Pole almost continuously, and surveying several steradians near the equator for only a few minutes each day, is very important. Near the equator, only the strongest repeaters will be detected as such, while near the Pole, the small probed volume makes observations subject to cosmic variance.
In Figure 10, I plot the declination distribution of CHIME once-off and repeating FRBs, and compare this against model predictions. For this plot, the number of declination bins into which CHIME was divided was increased to 30, whereas six declination bins were found to be sufficient to model the total number of repeaters, and their DM distribution.

Figure 8: Predicted DM distribution of repeating FRBs compared to that from CHIME catalog 1 (CHIME/FRB Collaboration et al., 2021), calculated using cases \(a\)–\(d\) from Figure (7), and the golden sample of repeaters from The CHIME/FRB Collaboration et al. (2023) (renormalised to 16). Note that \(b\) and \(c\) overlap.

Figure 10: Cumulative histogram of the CHIME repeating FRB declination (\(\delta\)) distribution, for both the Cat1 and Gold25 samples, compared to Monte Carlo predictions from four example cases.

Figure 9: P-values from a KS test of the DM distribution of Cat1 repeating FRBs, \(p_{\rm DM}\), against predictions from models with different values of \(R_{\rm max}\) and \(\gamma_{r}\). Other features are identical to Figure (7), including cases \(b\) and \(c\) overlapping.
Since the x-axis of Figure (10) increases linearly with \(\delta\), most of the solid angle is concentrated on the left-hand side of the figure. Despite this, the number of repeaters -- both observed and predicted -- increases as fast as, or faster than, linearly with \(\delta\). That the Gold25 sample shows the least steep rise with \(\delta\) is likely because of the previously discussed bias against high declinations due to the increased background rate. This is evidence that the repeating population is dominated by progenitors with low apparent repetition rates that are best probed with deep observations (i.e. at high declinations), rather than sources with high apparent rates that are best detected in broad shallow surveys (i.e. at low declinations).
Of the four cases analysed, a–c show good agreement with Cat1 in the \(\delta\lesssim 60^{\circ}\) range, while not even \(d\) can match the rapid rise in repeater rates above this range. This suggests a simple fluctuation in the data, either a deficit at low declinations, or an excess at high declinations -- though an alternative explanation is the influence of non-Poissonian repetition (see §8.1).
I characterise the agreement in declination distributions via a KS-test, with associated p-values given in Figure (11). The greatest discrepancy with data is the aforementioned excess of high-\(\delta\) repeaters, and the upper region of the figure is disfavoured because it reproduces this particularly poorly. The lower right region is disfavoured because this predicts mostly bright repeaters that should be found in the greater region of sky viewed at low declinations.
### Repetition rate distribution
Most repeating CHIME FRBs are not localised, so that scaling between intrinsic and apparent repetition rates, which requires the luminosity distance to be known, is not possible. This precludes a direct fit to the rate distribution. Nonetheless, different combinations of \(R_{\rm min}\), \(R_{\rm max}\), and \(\gamma_{r}\) lead to more/less repeating FRBs being observed with different apparent repetition rates.
Directly computing the number of FRBs with any given repetition rate is highly inefficient however -- the algorithm currently estimates the number of repeaters by explicitly calculating \(N_{0}\) and \(N_{1}\) only. Extending this to a large number of \(N_{\rm burst}\) values is computationally prohibitive. A Monte Carlo sampling algorithm was therefore implemented that generates repeating FRBs according to their underlying modelled distribution in z-DM-\(R\) space, and simulates the number of observed bursts assuming a Poissonian distribution.
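A minimal version of such a sampler might look as follows; here `pgrid` is assumed to be the tabulated probability cube over the z, DM, and intrinsic-rate grids, with detection efficiency already folded in, which is a simplification of the real calculation.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_repeaters(pgrid, zvals, dmvals, rvals, n_sample, t_days, f_obs):
    """Draw repeaters from a tabulated z-DM-R probability cube, then draw
    Poissonian burst counts for an exposure of t_days * f_obs days."""
    p = pgrid.ravel() / pgrid.sum()
    idx = rng.choice(p.size, size=n_sample, p=p)
    iz, idm, ir = np.unravel_index(idx, pgrid.shape)
    nburst = rng.poisson(rvals[ir] * t_days * f_obs)
    return zvals[iz], dmvals[idm], nburst

# Usage: histogram the burst multiplicities (0 = undetected, 1 = single, ...).
# z, dm, nburst = sample_repeaters(pgrid, zvals, dmvals, rvals, 10**6, 365.25, 0.01)
# counts = np.bincount(nburst)
```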
To overcome Monte Carlo fluctuations, at least \(1000\) times as many repeating FRBs as expected are simulated, and a histogram is produced in terms of the number of bursts observed by CHIME. This is then fit with a power-law distribution, and for histogram bins with fewer than 10 simulated repeaters, the observed number is replaced with the fitted number for purposes of evaluating likelihoods. An example of this procedure is shown in Figure (12).
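The bin-replacement step could then be sketched as below; the threshold of 10 counts follows the text, while restricting the fit to multiplicities \(k\geq 2\) is my own reading of which bins count as repeaters.

```python
import numpy as np

def smooth_tail(counts, min_count=10):
    """Replace bins with fewer than min_count simulated repeaters by a
    power-law fit N(k) = A * k**s over the multiplicities k >= 2."""
    k = np.arange(len(counts))
    fit = (k >= 2) & (counts > 0)
    s, logA = np.polyfit(np.log(k[fit]), np.log(counts[fit]), 1)
    out = counts.astype(float).copy()
    low = (k >= 2) & (counts < min_count)
    out[low] = np.exp(logA) * k[low].astype(float) ** s
    return out
```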
Since the data are discrete (integer numbers of bursts only), a KS-test to assign a goodness-of-fit is inapplicable. Instead, the likelihood of the observed histogram of \(N_{\rm burst}\) values for the 16 CHIME repeating FRBs from the Cat1 sample is calculated, given predictions from the Monte Carlo histogram. This is then repeated for at least \(1000\) sets of 16 Monte Carlo FRBs, and the fraction of likelihoods that are lower than that observed is determined. This produces a p-value, \(p_{\rm bursts}\), under the null hypothesis that the Monte Carlo sample is the truth. Results are plotted in Figure (13).

Figure 11: Results of the KS-test against the declination distribution of identified repeating FRBs. Shown is the p-value as a function of \(R_{\rm max}\) and \(\gamma_{r}\).

Figure 12: Top: histogram of observed number of repetitions in CHIME repeating FRBs from Cat1, compared to Monte Carlo predictions from four example cases, a–d (points). A power-law fit (lines) is given for each. Bottom: the same data, but shown as a cumulative distribution.
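In code, this parametric-bootstrap p-value might be computed as follows; `p_model`, the per-multiplicity probability from the smoothed Monte Carlo histogram, is assumed given, and all observed multiplicities are assumed to lie within its range.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_bursts(nburst_obs, p_model, n_trials=1000):
    """Fraction of simulated repeater samples whose multinomial
    log-likelihood is below that of the observed N_burst histogram."""
    p = np.clip(np.asarray(p_model, float), 1e-300, None)
    p /= p.sum()
    # Assumes every observed multiplicity lies within the modelled range.
    hobs = np.bincount(nburst_obs, minlength=p.size)[:p.size]
    l_obs = np.sum(hobs * np.log(p))
    sims = rng.multinomial(len(nburst_obs), p, size=n_trials)
    l_sim = sims @ np.log(p)
    return np.mean(l_sim < l_obs)
```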
The repeat-rate distribution is best-reproduced by models with a large number of bright FRBs, since the two CHIME FRBs with high repetition rates in Cat1 -- FRB 20180814A (11 bursts), and FRB 20180916B (19 bursts) -- are difficult to reproduce with models of low \(R_{\rm max}\) and/or steep \(\gamma_{r}\).
### Combined likelihood
Combining the evidence from the DM, \(\delta\), and \(N_{\rm burst}\) probabilities derived above, the combined probability \(p_{\rm tot}\) is constructed as
\[p_{\rm tot} = p_{N}\,p_{\delta}\,p_{\rm DM}\,p_{\rm bursts}, \tag{13}\]
where \(p_{N}\) is a Poissonian probability of observing 16 repeaters, which suppresses the region of the parameter space that over-produces repeaters. The probabilities are renormalised to sum to unity over the investigated range, excluding \(R_{\rm max}<0.5\,\rm day^{-1}\), and confidence intervals assuming flat priors in \(\gamma_{r}\) and \(\log R_{\rm max}\) are constructed. This results in the probability distribution, and confidence intervals (C.I.s), shown in Figure (14).
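On a grid with flat priors in \(\gamma_{r}\) and \(\log R_{\rm max}\), the interval construction reduces to a highest-posterior-density region; a minimal sketch:

```python
import numpy as np

def hpd_mask(p_grid, level):
    """Smallest set of grid cells containing `level` of the probability,
    assuming flat priors (the grid is uniform in gamma_r and log R_max)."""
    p = p_grid / p_grid.sum()
    flat = np.sort(p.ravel())[::-1]
    thresh = flat[np.searchsorted(np.cumsum(flat), level)]
    return p >= thresh

# e.g. mask68 = hpd_mask(p_tot, 0.68); mask95 = hpd_mask(p_tot, 0.95)
```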
The 95% C.I. encompasses almost the entire allowed region from Figure (7), showing that DM, \(\delta\), and \(N_{\rm burst}\) are not strong discriminators between different models of the repeating FRB population. However, a preference for \(\gamma_{r}=-2.2^{+0.6}_{-0.8}\) (68% C.I.), and \(R_{\rm max}\geq 0.75\), is found.
### What if not all FRBs are repeaters?
It is of course possible that repeating FRBs do not constitute the total FRB population. Evidence for this comes from the different spectro-temporal properties of repeaters compared to non-repeaters (Pleunis et al., 2021), and a tentative association of FRB 20190425A with binary neutron star merger GW190425 (Moroianu et al., 2023). If such a population exists, it is likely subdominant -- most cataclysmic events, which would produce intrinsically once-off FRBs, have a rate which is much too low to explain the total FRB rate (Ravi, 2019). Therefore, observations of the total FRB population still serve as good constraints on the total repeating population, and the predictions made here remain valid. Nonetheless, in this section, the case where repeating FRBs are responsible for a sub-dominant fraction of the total number of bursts observed by CHIME is investigated.
Figure 14: Posterior probability of repeating FRB parameters assuming that all FRBs repeat. Shown are 68% (red dotted lines) and 95% (white dot-dash lines) confidence intervals.
Figure 15: Maximum value of the joint probability \(P_{\rm tot}\) over all analysed \(\gamma_{r}\), \(R_{\rm max}\), as a function of the fraction of all CHIME single bursts explained by repeating FRBs, \(F_{\rm single}\).
The above analysis is repeated by first optimising \(R_{\rm min}\) to produce 16 CHIME repeaters and some fraction \(F_{\rm single}\) of the total singles burst rate, and calculating the joint probability \(P_{\rm tot}(\gamma_{r},R_{\rm max},F_{\rm single})\). The peak likelihood over \(\gamma_{r}\) and \(R_{\rm max}\) for each \(F_{\rm single}\), \({\rm Max}[P_{\rm tot}(\gamma_{r},R_{\rm max})](F_{\rm single})\), is then plotted in Figure (15).
In the range \(0.5\leq F_{\rm single}\leq 1\), the peak probability is essentially identical, with fluctuations likely due to the coarse gridding in \(\gamma_{r}\)-\(R_{\rm max}\) space. The likelihood decreases for lower values of \(F_{\rm single}\) -- this is driven almost entirely by \(p_{\rm DM}\), since decreasing \(F_{\rm single}\) increases the fraction of true repeaters detected as such, which requires on-average stronger repeaters that are invariably detectable at greater distances. This pushes the predicted DM distribution to higher values, inconsistent with CHIME data.
A note of caution is warranted however: if a small fraction of all single bursts are produced by repeaters, then the assumption that repeating FRB population parameters are the same as that of the total population is a bad one. Therefore, while it can be concluded that these results are consistent with a best-fit of all FRBs being from repeaters, it cannot be concluded that this excludes a large fraction of FRBs being from intrinsically once-off events.
## 7 Future prospects
Now that an estimate of the parameters of the repeating FRB population has been made, I make predictions for the effects of repetition on future observations. In the following, cases \(d\) (close to the best-fit values found in §6) and \(b\) (marginally excluded at the 90% level, albeit when considering random error only) are considered as two significantly different, but plausible, cases.
### Rate of new repeater discoveries with CHIME
As time spent observing a particular field increases, the number of repeating FRBs should eventually saturate, as essentially all such objects in the field are detected. Seeing the rate of detected repeaters plateau at a level where a large number of once-off bursts have no associated repeater would be a clear indication of two populations. This raises the question: how long might CHIME have to wait until the rate of new repeating FRB detections decreases?
The answer is a very long time. Regardless of the scenario under consideration, the number of repeating FRB progenitors at high redshifts will vastly outnumber those at low redshifts due to the increased volume of the Universe. As observation time increases, the number of repeating FRBs in the nearby Universe will saturate, but the rate of repeater discoveries -- both as single and repeat bursts -- in the distant Universe increases. This effect is seen in Figure (1) -- in Figure (16), this is simulated for CHIME, by simply increasing the observation time in units of the Cat1 exposure, \(T_{\rm Cat1}\) (approximately a year's worth of exposure). In case \(b\), there are relatively few strong repeaters, and saturation is expected to be seen after \(\sim 300\) years. In case \(d\), with many strong repeaters, saturation will not occur in the next thousand years, and a steadily increasing repeat rate is expected.
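Under Poissonian repetition, the expected number of detected repeaters after exposure \(T\) has a closed form, which makes this saturation behaviour easy to reproduce; the sketch below uses an illustrative power-law population with placeholder normalisations, ignoring the redshift dimension by working with apparent rates.

```python
import numpy as np

def sample_rates(n, r_min, r_max, gamma_r, rng):
    """Inverse-CDF draws from dN/dR ~ R**gamma_r on [r_min, r_max] (gamma_r != -1)."""
    g1 = gamma_r + 1.0
    u = rng.random(n)
    return (r_min**g1 + u * (r_max**g1 - r_min**g1)) ** (1.0 / g1)

def expected_repeaters(rates, t):
    """E[# sources detected >= 2 times] after exposure t (rates per unit t)."""
    mu = rates * t
    return np.sum(1.0 - np.exp(-mu) * (1.0 + mu))

rng = np.random.default_rng(7)
rates = sample_rates(100_000, 1e-4, 30.0, -2.2, rng)  # apparent rates per Cat1 exposure
for t in (1, 10, 100, 1000):                          # in units of T_Cat1
    print(t, expected_repeaters(rates, t) / t)        # N_rep / T, as in Figure 16
```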
### z-DM distribution
Only four repeating CHIME FRBs have been localised to their host galaxies (Marcote et al., 2020; Bhardwaj et al., 2021b, a; Fong et al., 2021), though two more associations are highly likely (Michilli et al., 2022). Furthermore, these have only been localised either because they are nearby, and hence CHIME's angular resolution -- effectively enhanced when using multiple bursts (Michilli et al., 2022) -- is sufficient to identify the host; or because they repeat rapidly, allowing follow-up observations with arrays with a better angular resolution to identify the host. Thus these represent a highly biased sample, and are unsuited to fitting to data. Nonetheless, the full z-DM distribution of repeating FRBs observed by CHIME can be predicted by the models. These distributions are given in Figure (17) for cases \(b\) and \(d\), and are compared to the distribution of singly detected FRBs.
Figure (17) illustrates how the large tail of the DM distribution arises: it is almost entirely from objects lying well above the Macquart relation. Since repeating FRBs tend to only be detected as such in the nearby Universe, those repeating FRBs lying on the Macquart relation have a smaller range of DMs -- using the 90% contours, up to \(\sim 400\) pc cm\({}^{-3}\) in case \(b\), and \(\sim 800\) pc cm\({}^{-3}\) in case \(d\). Thus the high-DM tail of low-z FRBs doesn't become overridden by the larger number of FRBs lying on the Macquart relation in the more-distant Universe, as it is for singly detected FRBs.
This closer proximity of repeating FRBs means that the reduced \({\rm DM}_{\rm EG}\) of the CHIME repeater sample (\(436\pm 49\) pc cm\({}^{-3}\) for repeaters, \(597\pm 24\) pc cm\({}^{-3}\) for apparently once-off bursts; The CHIME/FRB Collaboration et al., 2023), which has been suggested to be evidence for two populations (Woods, 2023), is entirely consistent with expectations from the models. These predict mean repeater \({\rm DM}_{\rm EG}\) values in the range 460–540 pc cm\({}^{-3}\), and mean single DMs in the range 640–660 pc cm\({}^{-3}\). While the mean DMs of both samples are slightly over-predicted, the difference is a very good match with expectations.

Figure 16: Number of repeaters, \(N_{\rm rep}\), normalised by total observation time \(T\), in units (and as a function) of the exposure from Cat1, for cases \(b\) and \(d\).

Figure 17: Predicted z-DM distribution of left: repeating and right: single FRBs for cases \(b\) (top) and \(d\) (bottom). Contours enclose 50% (dotted), 90% (dot-dash), and 99% (dashed) of the probability space.
Even more importantly for future observations, models \(b\) and \(d\) predict different \(z\) distributions. Case \(b\) has more low-rate repeaters, from which repeat bursts are only likely at low \(z\); while case \(d\) has a significant population of high-rate repeaters, which can be detected as repeaters from the more-distant Universe. Thus, if a large fraction of repeating CHIME FRBs could be localised, this would enable much more powerful tests of the repeating FRB population.
I illustrate this using a toy example, using simulated true values \(\gamma_{r}=-2.2\), \(R_{\rm max}=30\), and \(100\) Monte Carlo instances of repeating FRBs from Cat1. All FRBs detected as repeaters are assumed to be localised to their host galaxies, yielding their correct \(z\) and DM values. For each Monte Carlo sample, the likelihood \(p(z,{\rm DM})\) is calculated for all values on the \(\gamma_{r}\), \(R_{\rm max}\) grid. I do not calculate \(p(N_{\rm reps})\), i.e. only the position in \(z\), DM is accounted for, not the number of repeating FRBs or bursts per repeater. Bayesian 1-, 2-, and 3-\(\sigma\) confidence intervals are then constructed for each sample, and the number of MC iterations in which any given value lies in each interval is counted. The result is shown in Figure (18), which shows the expected confidence intervals at each of the three levels. This shows the power of being able to localise repeating FRBs: if all Cat1 repeating FRBs could be localised, the expected 1\(\sigma\) accuracy on \(\gamma_{r}\) would be \(\pm 0.2\), and \(R_{\rm max}\) would be determined to within a factor of \(\sim 10\).
### Predicted effects on other instruments
I now use cases \(b\) and \(d\) to estimate the relative rates of single and repeat observations for a sample of other FRB-hunting instruments. Four systems are considered: ASKAP, in Fly's Eye (FE), incoherent sum (ICS), and coherent (CRACO) mode at 1.3 GHz; and the Five-hundred-meter Aperture Spherical Telescope (FAST). ASKAP/FE and ASKAP/ICS are modelled as per James et al. (2022b), while the model of the CRAFT Coherent Upgrade (CRACO) system is described in James et al. (2022c). The parameters for FAST FRB searches are taken from Niu et al. (2021), namely a detection threshold of \(0.0146\) Jy ms for a 1 ms pulse width at a central frequency of 1.25 GHz, and time- and frequency-resolutions of 196.608 \(\mu\)s and 0.122 MHz respectively. The FAST receiver is a 19-beam multibeam (Li et al., 2018), similar in design to the 13-beam Parkes multibeam (Staveley-Smith et al., 1996). I therefore take the inverse beamshape \(\Omega_{b}\) used for Parkes, scale up by the ratio of the number of beams (19/13 \(\approx 1.46\)), and down by the ratio of effective collecting areas (\(64^{2}/300^{2}\approx 0.0456\)).
All these instruments have searched for FRBs with different dwell times. Here, I consider the longest time spent on any given field for each instrument, which will prove most sensitive to the repeating FRB population: 1338.9 hr for ASKAP/FE (James et al., 2020b), 879.1 hr for ASKAP/ICS to the end of 2022 (Shannon et al, in prep.), and for ASKAP/CRACO predictions, the expected on-source time of \(800\) hr for each of the Deep Investigation of Neutral Gas Origins (DINGO; Rhee et al., 2023, see also [https://dingo-survey.org/](https://dingo-survey.org/)) fields is used. For FAST, it is 59.5 hr when performing follow-up observations on FRB 20121102A (Li et al., 2021). The normalised estimates are given in Figure 19, as a function of DM for surveys with poor localisations, and as a function of \(z\) for those that typically identify host galaxies.
Qualitatively, all predictions are very similar. The total number of single bursts ranges from 27% of the total burst distribution (FAST, case \(b\)) to 70% (ASKAP/FE, case \(d\)). The difference between case \(b\), which models a repeating FRB population spread over a narrow repetition rate, and case \(d\), with a very broad distribution of rates, is marked, predicting 16% (case \(b\)) and 6% (case \(d\)) of FRB progenitors to repeat for all ASKAP models, and 31% and 10% for cases \(b\) and \(d\) for FAST.
The deficit between total burst number and total progenitors in the low-DM range for ASKAP/FE is not sufficient however to explain the observed deficit that has been previously noted by James et al. (2022b), especially when accounting for the average pointing time for that survey being less than that modelled here. Thus I conclude this effect -- which originally motivated this work -- is most likely a statistical fluctuation.
Jankowski et al. (2023) have noted that the FAST FRB rate is much lower than predicted. Here, the progenitor rate is predicted to be 40-60% of the burst rate, which certainly accounts for some, but not all, of the deficit. However, this would have no influence on the observations in drift-scan mode reported by Niu et al. (2021) due to the very short dwell times (\(\sim\)13 s). Thus this deficit must have some other explanation.
A single FRB (20220531A; Shannon et al., in prep) has been discovered in the ASKAP field with 879.1 hr of observations, against a mean ASKAP detection rate of 350 hr/FRB. This could simply be a Poissonian under-fluctuation (p-value of 0.285 on a one-sided test), but repetition offers a partial explanation, which would reduce the expected number of progenitors from 2.5 to 1.75–1.86.

Figure 18: Fraction of Monte Carlo iterations in which trial values of \(\gamma_{r}\), \(R_{\rm max}\) fall within the 3\(\sigma\) confidence interval (C.I.), using a sample of simulated repeating FRBs with truth values \(\gamma_{r}=-2.6\), \(R_{\rm max}=31.62\). The contours correspond to regions that fall within the 1\(\sigma\) (red, dotted), 2\(\sigma\) (white, dash-dot), and 3\(\sigma\) (black, dashed) confidence intervals 50% of the time.

Figure 19: Predictions of the z or DM distributions of repeating FRBs for a selection of past and future FRB surveys for their longest pointing times (see §7.3), for cases (b) and (d). Shown are the distributions of those repeating FRBs detected as single bursts, as repeaters, the total progenitor distributions, and the total burst distributions, as per Figure (1).
Overall, I expect that correct modelling of repeating FRBs will be important for these observations to account for repeater bias in the observed z-DM distribution.
## 8 Discussion of systematic effects
### Non-Poissonian repetition
All FRBs with sufficiently many detected bursts to allow studies of their repetition rates show non-Poissonian behaviour. On timescales of order seconds to hours, bursts from repeaters tend to be clustered (e.g. Gajjar et al., 2018; Zhang et al., 2021; Nimmo et al., 2023), in a process which is often modelled as a Weibull distribution (Oppermann et al., 2018). On longer timescales (\(\sim\)16–160 days), two repeating FRBs appear to have activity cycles (CHIME/FRB Collaboration et al., 2020; Rajwade et al., 2020), with evidence for frequency dependence in the timing of the windows (Pastor-Marazuela et al., 2021). Other behaviours include a rapidly increasing/decreasing event rate (Zhang et al., 2022), or 'turning on' despite several years of monitoring (see the time-dependence of bursts in The CHIME/FRB Collaboration et al., 2023). The true underlying nature of the time distribution of FRB repetition rates is still under debate -- what is sure is that they are most certainly not Poissonian.
Thus it should be asked: what effect does this have on the modelling? On sufficiently long timescales, FRBs will become inactive, and new repeating FRBs will be born. Therefore, the repeating FRB population studied here can only refer to those FRBs which have been active during the approximate year (three years) corresponding to the CHIME Cat1 (Gold25) samples. However, FRB 20121102A has now been studied for over a decade since its first detection (Spitler et al., 2014), and while its properties (DM, RM etc.) do vary (Michilli et al., 2018), no evidence of a systematically reducing rate has been published. Therefore, these considerations likely won't be relevant to current or near-future studies.
Of more relevance are FRBs with inactive windows comparable to, or longer than, the current survey. The CHIME/FRB Collaboration et al. (2023) shows that at least three FRBs -- 20201130A, 20200929C, and 20201124A -- have numerous bursts in the latter six months studied, but none in the first two years. While some of this may be reflective of a changing search sensitivity and analysis methods (it is suspicious that all FRBs chose to activate during recent data-taking, and none at the beginning), it is also suggestive that these objects have very long inactive phases. By generating either many bursts or none, a larger population of such objects would mimic a flatter value of \(\gamma_{r}\) than the true long-term rate, with repeaters being found at larger distances / DM values. Conversely, for a fixed observation, the fitted value of \(\gamma_{r}\) will be flatter than the true value. Since the observed DM distribution of repeaters already favours \(F_{\text{single}}\geq 0.5\) and \(\gamma_{r}\leq-1.4\), allowing for such behaviour would constrain \(F_{\text{single}}\) to higher, and \(\gamma_{r}\) to lower, values. That is, the limits from this work are sensitive to activity windows on yearly timescales. Activity windows significantly shorter than a year however will have no consequence, since CHIME's coverage is spread uniformly in time, unless the period of these windows lies extremely close to a sidereal day.
Finally, the effect of bursty behaviour in general is to reduce the number of singly detected FRBs, and increase the likelihood of viewing zero or many bursts. The effect of time correlation in bursts will be most pronounced when observations occur all in one block -- when observations are individually very short and spaced far apart in time, any intrinsically bursty distribution will exhibit a Poissonian distribution of burst numbers.
To gauge the impact of this effect, I simulate a Weibull distribution of arrival times, using a shape parameter \(k=0.34\), as found for FRB 20121102A by Oppermann et al. (2018). CHIME detections are simulated at three declinations: near the North Celestial Pole (NCP), with sources observed continuously (\(f_{\rm obs}=1\)); approximately \(7^{\circ}\) away from the NCP, where sources will be observed a fraction \(f_{\rm obs}=10\%\) of the time; and \(\sim 30^{\circ}\) from the NCP, which is modelled as a source being observed \(f_{\rm obs}=1\%\) of the time. The expected repetition rates are modelled relative to the total time on-source in a calendar year, and this rate \(R_{\rm obs}\) is varied from \(0.1\) to \(10\). I simulate \(1000\) Weibull sequences over 365 sidereal days, beginning each sequence well before the start of the year to ensure the sequence start time does not influence results. If a burst occurs in the first \(f_{\rm obs}\) fraction of a day, it is counted as detected. The number of simulations resulting in none, one, or multiple detections is recorded.
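This simulation can be reproduced in a few lines; the burn-in length and the use of `numpy`'s standard Weibull sampler are my own choices, not a description of the code actually used.

```python
import numpy as np
from scipy.special import gamma as Gamma

rng = np.random.default_rng(1)

def n_detected(R, k, f_obs, n_days=365, burn=200.0):
    """One Weibull sequence: bursts at mean rate R per day with shape k;
    a burst is detected if it falls in the first f_obs of each day."""
    scale = 1.0 / (R * Gamma(1.0 + 1.0 / k))  # sets the mean wait to 1/R days
    t, ndet = -burn, 0
    while t < n_days:
        t += scale * rng.weibull(k)
        if 0.0 <= t < n_days and (t % 1.0) < f_obs:
            ndet += 1
    return ndet

# P(detected as repeater) for R_obs = 1 relative to the time on source:
f_obs = 0.01
R = 1.0 / (365 * f_obs)   # all-time rate giving one expected detection per year
trials = np.array([n_detected(R, 0.34, f_obs) for _ in range(1000)])
print(np.mean(trials >= 2))
```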
The results are given in Figure (20). As predicted, the fraction of FRBs detected to repeat twice or more is greater than Poissonian for low expected rates, and less for high expected rates. However, only very near the NCP is this effect large, where a deficit of \(\sim 30\%\) of repeating FRBs relative to Poisson rates is found.
Figure 20: Probability of an FRB being detected as a repeater given its true expected rate \(R\), as a function of the fractional observation time \(f_{\rm obs}=T_{\rm obs}/T\), for a Weibull burst time distribution with shape index \(k=0.34\); and for a Poissonian distribution.
A more accurate estimate of the effect of burstiness on these results however is the ratio of single to repeat bursts. Weighting the results by \(R^{-2}\), to represent there being fewer rapidly repeating FRBs than rarely repeating ones, \(f_{\rm obs}=0.01\), \(0.1\), and \(1\) respectively produce \(30\%\), \(120\%\), and \(310\%\) more repeating FRBs relative to single bursts than expected for a Poissonian distribution. This effect would also alter the \(z\)-DM distribution of repeating FRBs, with bursty distributions favouring discoveries in the more distant Universe; and flatten the distribution of observed burst rates, undermining the fitting of §6.3. It is quite possible that the excess of repeating FRBs in Cat1 at high declinations, while not statistically significant, is due to this effect.
Nonetheless, I do not wish to over-emphasise this effect. For the majority of the sky seen by CHIME, the increase in observed repeaters due to burstiness with \(k=0.34\) is tens of percent. Furthermore, this choice of \(k=0.34\) is an extreme example -- analysis of FRB 20121102A has shown that at timescales of seconds to hours, the wait-time distribution is close to Poissonian, with a best-fit Weibull index of \(k=0.79\) (Nimmo et al., 2023).
Given the complexity of the issue, I defer an analysis of repetition that includes bursty behaviour to a future work -- which should also include a fit of such behaviour to CHIME arrival-time data.
### Time-frequency structure of bursts
CHIME has shown that FRBs identified to repeat tend to have bursts which are broader in time by a factor of approximately two, with complex time-frequency structure (CHIME/FRB Collaboration et al., 2021; Pleunis et al., 2021). Whether this is due to two intrinsically different FRB populations, or a smooth transition in the properties of one population (e.g. an aging effect whereby less active -- and presumably older -- objects have modified emission physics, perhaps due to a change in beaming angle, as suggested by Connor et al., 2020) is still up for debate. What is clearly true is that the increased time-width of FRBs more likely to be detected as repeating will make them harder to detect for a given fluence. This effect is not included in the current work.
There are two methods of analysing this effect. The first is to modify the simulated burst width to increase with FRB repetition rate. This will act to suppress the number of repeating FRBs in the \(z\lesssim 1\) range where intrinsic FRB width, rather than dispersion smearing, dominates the apparent FRB width, and hence sensitivity. The result will be that more active repeaters -- which are in any case only observed as such at low \(z\) -- become less detectable. Including this effect would require direct use of the CHIME pulse injection sample, since the published efficiency function analysed in §4.3 has already been averaged over the burst width distribution.
The second method to tackle this problem is to note that repetition rate \(R\) can be thought of as an effective repetition rate. After all, \(R\) can only ever be defined as the rate above a given energy threshold (here, \(10^{39}\) erg), which must also be coupled to some assumption about the time-frequency properties of those bursts which affects their detectability. If more-strongly repeating FRBs tend to produce wider bursts, this reduces their detectability, and hence their apparent rate will decrease. In other words, this will cause \(\gamma_{r}\) to steepen slightly from its true value. In the regime where detectability is limited by intrinsic width \(w\), SNR \(\propto w^{-0.5}\), and hence for a burst luminosity function with cumulative slope \(\gamma=-1\), the detection rate will reduce as \(w^{-0.5}\). If the width \(w\) scales linearly with the intrinsic rate \(R\) (and the real effect is unlikely to be this strong), this would then steepen the apparent value of \(\gamma_{r}\) by an extra factor of \(-0.5\).
Since either method requires an accurate estimate of the relationship between intrinsic FRB rate and measured width \(w\), and this is not currently possible, I consider the second method appropriate, which means that a little care must be taken in interpreting the estimate of \(\gamma_{r}\) found in this work.
### Systematic effects in CHIME data
I conclude the discussion of systematics with an analysis of the data being used. As noted in §4.1, the threshold for identifying a repeat burst in Cat1 was lower than for an initial burst. The best way of removing this effect would be for CHIME to publish an FRB catalog with those repeat bursts that passed threshold only because they were repeaters removed or otherwise identified as such. The dependence on SNR should be overcome by use of the CHIME pulse injection data (Merryfield et al., 2022), or a publication of the parameterised multi-dimensional selection function used in CHIME/FRB Collaboration et al. (2021). It has been checked that excluding FRBs with high Galactic DMs does not significantly change the DM fits, but future analyses with zDM should in any case include an uncertainty term for this contribution. Overall, I do not think that this analysis is currently constrained by such systematic effects.
I have been reluctant to make quantitative comparisons in this work with CHIME's new Gold25 sample of 25 repeating FRBs however. There are several reasons why interpretation is difficult. Firstly, note that that work is in fact a discovery of 33.5 new repeating FRBs, given that the "gold" sample of 25 sources has an estimated contamination of 0.5 from coincidences between two or more unrelated FRBs, and the authors also publish a "silver" sample of 14 repeaters with an estimated contamination rate of 5. It is unknown whether or not CHIME have detected two or more bursts from a true repeater which has a higher contamination fraction, and is thus not included in any sample. This measurement therefore has a \(5.5^{0.5}\sim 2.3\) systematic uncertainty reflecting the uncertainty in the expected contamination, and a \(33.5^{0.5}\sim 5.8\) statistical deviation from the expected mean number of repeater discoveries.
The time period used for the search, from September \(30^{\rm th}\) 2019 to May \(1^{\rm st}\) 2021 (579 days), is approximately \(70\%\) longer than, and does not overlap with, the period of July \(25^{\rm th}\) 2018 to July \(1^{\rm st}\) 2019 used for Cat1 (342 days). Hence, the detection rate has increased by a factor of \(1.24\pm 0.23\), consistent both with the predictions of an increasing discovery rate in §7.1, and with a constant rate. Aside from Poisson error, this increase could also
be due to a higher efficiency of CHIME data-taking, or a lower threshold for including bursts in the analysis. This question should be revisited once it becomes possible to normalise the two samples, for an accurate rate comparison.
Secondly, the identification of repeating FRBs in Gold25 placed a cut on the chance coincidence probability. This cut is strictest where the rate of FRBs in DM-\(\delta\) space is highest: at high declinations, and intermediate DMs. Therefore, the gold sample of 25 repeating FRBs is biased towards very low or high DMs at low declinations. Conversely, the silver sample may have the opposite bias, both because repeaters at high declinations and intermediate DMs will preferentially be placed there, and because this region has more chance coincidences. The CHIME/FRB Collaboration et al. (2023) do not publish estimates of the contamination probability in \(\delta\)-DM space, which would be required to account for this effect.
## 9 Comparison with literature results
### ASKAP follow-up observations
The only other result of which I am aware that has limited \(R_{\rm min}\), \(R_{\rm max}\), \(\gamma_{r}\) is James et al. (2020a). Those authors used the observation of repetition in only one of 27 FRBs detected by ASKAP (Kumar et al., 2019; James et al., 2020b) to constrain \(\gamma_{r}\sim-1.94\) (those authors use \(\bar{\zeta}\) for \(\gamma_{r}\)) and \(R_{\rm min}<10^{-2.9}\,\rm day^{-1}\). Limits on \(R_{\rm max}\) are not published, though values of \(R_{\rm max}\leq 100\) are investigated.
The values of \(R_{\rm min}\) and \(R_{\rm max}\) used in that work are applicable to 1.3 GHz observations, and are measured relative to an energy threshold of \(10^{38}\) erg. I therefore scale to a threshold of \(10^{39}\) erg by reducing those rates by a factor of \(10^{\gamma}\approx 0.13\), and to the mean CHIME frequency of 600 MHz by increasing the rates by a factor of \((600/1300)^{\alpha}\approx 4.4\), for a total adjustment factor of 0.55.
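For reference, the adjustment is simple arithmetic; the slope \(\gamma\approx-0.9\) below is inferred from the quoted factor \(10^{\gamma}\approx 0.13\), and \(\alpha=-1.91\) is the frequency dependence used in the population model.

```python
gamma, alpha = -0.9, -1.91
energy = 10.0 ** gamma             # 1e38 erg -> 1e39 erg threshold: ~0.13
freq = (600.0 / 1300.0) ** alpha   # 1.3 GHz -> 600 MHz: ~4.4
print(energy * freq)               # total adjustment: ~0.55
```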
I extract the likelihoods from that work at the values of \(R_{\rm min}\), \(R_{\rm max}\), and \(\gamma_{r}\) used here, and plot the Bayesian posterior in Figure (21). The best-fit region agrees remarkably well with results from CHIME FRB data, but has a much tighter constraint on \(\gamma_{r}\). That two very different measurements with very different instruments converge to the same properties of the repeating FRB population is strong evidence that this model is a reasonable approximation to the underlying truth. This has implications for FRB progenitor models, as discussed in James et al. (2020a).
The one major discrepancy between the results which is hidden by Figure (21) however is that the best-fit values of \(R_{\rm min}\) found by James et al. (2020a) are a factor of \(\sim 100\) lower than that found for CHIME. Equivalently, the results of James et al. (2020a) would under-predict the number of repeating CHIME FRBs. There may be several causes for this.
Firstly: the results for James et al. (2020a) assumed a Weibull distribution with \(k=0.34\). As discussed in §8.1, a bursty distribution requires less-rapid repeaters to produce the same number of repeating FRBs, and could therefore allow the values of \(R_{\rm min}\) found here to be lower, and more consistent with the ASKAP results. Secondly: the results here are only weakly constraining on \(R_{\rm min}\), and only the probabilities at the best-fit values have been used, rather than marginalising over \(R_{\rm min}\). Thirdly: repetition behaviour at 1.3 GHz and \(600\) MHz might be more different than the simple scaling above would suggest. A fuller investigation will require an improved model of FRB time-frequency structure. Fourthly: the limits from James et al. (2020a) used non-localised repeaters, with conservatively large distance estimates from the Macquart relation assuming no host contribution, thus introducing a bias in those results (albeit one which would push \(R_{\rm min}\) and \(R_{\rm max}\) to lower values). Fifthly and finally: not all FRBs may repeat. The upper limit on \(R_{\rm min}\) found by James et al. (2020a) is driven primarily by the lack of observed repetition from FRB 20171020A. If this FRB is intrinsically a once-off event, then those results weaken significantly, allowing higher values of \(R_{\rm min}\).
The current implementation of repetition in the zDM code does not allow for easy estimation of FRB repetition parameters from follow-up observations. It would be useful to develop such a method to allow the results of follow-up observations to also be fit in a self-consistent manner.
### Absolute number of repeaters and persistent radio sources
The total number of repeating FRBs, \(C_{r}\), is a function of their rate of birth and active lifetime. Estimates of their birth rate vary greatly according to their progenitor model, and range from \(10^{5}\,\rm Gpc^{-3}\,\rm yr^{-1}\) for core-collapse supernova (Taylor et al., 2014), to \(0.02\,\rm Gpc^{-3}\,\rm yr^{-1}\) for NS-NS binary mergers in globular clusters (Ye et al., 2020). Their lifetime is also unknown: while magnetar fields are expected to decay on timescales of order \(10^{4}\,\rm yr\) (Colpi et al., 2000), precisely how this relates to their time as an active emitter of FRBs will depend on the microphysics of emission.
Figure (22) plots the implied value of \(C_{\rm r}\) as a function of \(R_{\rm max}\) and \(\gamma_{r}\), compared to limits on those parameters from Figure (14). By far the largest number of FRB progenitors are found near \(\gamma_{r}=-2\). At flatter (less negative) values of \(\gamma_{r}\), the repeater distribution is dominated by so many rapidly repeating FRBs that very few are required, while at steep (more negative) values, \(R_{\rm min}\) must be relatively high (and hence \(C_{\rm r}\) not too large) to prevent singly detected objects overwhelming the total FRB rate. That this region is just allowed within the current limits means that the total uncertainty on \(C_{\rm r}\) is huge, ranging approximately between \(10^{-5}\) and \(10\,{\rm Mpc}^{-3}\) at \(z=0\). Assuming a lifetime of \(10^{4}\,{\rm yr}\), this corresponds to birth rates of 1–\(10^{6}\) Gpc\({}^{-3}\) yr\({}^{-1}\), which is compatible with most FRB progenitor models.

Figure 21: Bayesian posterior likelihoods, \(P_{\rm ASKAP}\), from follow-up observations of ASKAP repeaters (James et al., 2020a) for the parameter values investigated here.
The total number of strong repeating FRBs is also of interest. Aside from their obvious identification via detection of their bursts, these objects might be identified via their association with persistent radio sources (PRS), which can be identified in radio surveys. There is some evidence that PRS are more likely to be associated with strong repeaters (Chatterjee et al., 2017; Niu et al., 2022), though this evidence is not conclusive (Law et al., 2022).
In this context, Law et al. (2022) estimate the total number density, and differential power-law slope, of the observed repeating FRB population using CHIME data, as in §6.3. Assuming all CHIME FRBs come from intrinsic repeaters, they find the observed distribution of the number of bursts to have a power-law slope of \(-1.5\), calculated via the maximum-likelihood estimator of Crawford et al. (1970). This method thus determines the slope of the 'source counts' distribution of repeaters, which is largely insensitive to the intrinsic value of \(\gamma_{\rm r}\) (see Figure (12)). They also estimate the mean number of observed repeat bursts per source, \(\left<R_{\rm obs}\right>=1.2\,{\rm day}^{-1}\), assuming a minimum repetition rate equal to \(T_{\rm Cat1}^{-1}\). Again, this estimate is based on the observed repetition rates, rather than the intrinsic rates, which explains why the derived mean rate is somewhat higher than the values of \(R^{*}\) found here. Their derived number density of repeaters, being between 22 and \(5.2\cdot 10^{3}\,{\rm Gpc}^{-3}\) in the no-beaming case, is both more constraining, and on-average lower, than that derived here. Law et al. (2022) estimate that the fraction of FRB sources with associated PRS is between 0.06 and 0.36: the much higher uncertainty on the intrinsic source density found in this work suggests that surveys identifying PRS independently of FRBs might be a better tracer of the intrinsic number of FRB sources.
It should be noted that the estimates above assume isotropic emission. However, to first order, beaming should have no effect. Including beaming means that the intrinsic number of bursts per repeater is greater by \(4\pi/\Omega_{\rm FRB}\), where \(\Omega_{\rm FRB}\) is the solid angle subtended by each FRB. However, so is the intrinsic single burst emission rate. Hence, the implied number of repeaters to reproduce the detected rate is identical, and beaming angle has no effect. Only in the case that \(\Omega_{\rm FRB}\) varies with repetition rate \(R\), as suggested by Connor et al. (2020), is this scaling broken.
## 10 Conclusions
I have implemented a model for repeating FRBs in the zDM code, allowing for a power-law distribution of intrinsic FRB repetition rates. A population of repeating FRBs will result in an apparent deficit of progenitors (single and repeat FRBs) in the nearby Universe, with the effect becoming increasingly strong with observation time per pointing. I show that this effect is significant for current observations with ASKAP, FAST, and CHIME, and hence that future FRB population modelling should include a simultaneous fit to FRB repetition parameters as well as the current set of cosmological parameters, host galaxy properties, population evolution, and the luminosity function. Such a fit is computationally infeasible with current methods implemented in the zDM code, and hence nested sampling techniques should be implemented in the future.
I have therefore fit a power-law model of repeating FRB rates, with differential slope \(\gamma_{\rm r}\) between rates \(R_{\rm min}\) and \(R_{\rm max}\) (defined as bursts per day above \(10^{39}\) erg, with a Poissonian distribution of arrival times), to CHIME Catalog 1 data. The model of the CHIME experiment includes beamshape, DM response, and declination-dependent exposure. This model can accurately reproduce the distribution of singly detected CHIME FRBs when using FRB population parameters with a steep dependence of FRB rate on frequency (rate \(\propto\nu^{-1.91}\)), consistent with previous fits to ASKAP and Parkes data at the 90% confidence level. Holding this parameter set fixed, I find that the distribution of repeating FRBs in DM, \(\delta\), and \(N_{\rm burst}\) space, as well as their number compared to apparently once-off bursts, is well-reproduced when assuming the entire FRB population is explained by a single population of repeating FRBs with \(\gamma_{\rm r}=-2.2^{+0.6}_{-0.8}\) (68% C.I.), and \(R_{\rm max}\geq 0.75\). Limits on \(R_{\rm min}\) are less well-constrained. In particular, results are consistent with the DM deficit of repeating FRBs found by The CHIME/FRB Collaboration et al. (2023). This remains the case unless less than 50% of all CHIME single FRBs are due to intrinsic repeaters, at which point the predicted DMs of CHIME repeaters become too high to be consistent with data.
I also make predictions for the effects of repetition in current and future experiments. Localising CHIME repeating FRBs to obtain their redshifts would provide strong constraints on the repeating FRB population, and I urge that optical follow-up observations preferentially target these sources. Further
Figure 22: FRB progenitor population density \(C_{\rm r}\), as a function of \(R_{\rm max}\) and \(\gamma_{\rm r}\), with 68% and 95% contours from Figure (14) overplotted.
more, I predict that the number of repeating FRBs identified by CHIME will continue to increase for at least the next hundred years' worth of observations, and potentially for a much longer timespan. This is currently consistent with the new sample of repeaters released by The CHIME/FRB Collaboration et al. (2023), but systematic effects in that data set make more precise comparisons difficult.
An estimate is made of the systematic effects of correlations between repetition rates and FRB width, non-Poissonian arrival times, and differences between the full CHIME response to FRBs described by Merryfield et al. (2022) and used by Shin et al. (2022), and the method used here. Burstiness may be responsible for the apparent excess of repeating FRBs at high declinations, but the other effects are likely small, though they may alter the best-fit value of \(\gamma_{r}\) away from its true value. Improved modelling of CHIME may be required in the future, though the current implementation already accounts for most effects described by The CHIME/FRB Collaboration et al. (2023).
The most significant outcome is that limits on repetition parameters derived here agree with estimates of repeating FRB parameters produced from follow-up observations to ASKAP FRBs, despite the large differences in detection systems and method of estimation.
In conclusion, I emphasise that while this work is not conclusive evidence for the entire FRB population being explained by a single population of intrinsically repeating progenitors with a broad distribution of repetition rates, it certainly shows that current observations of repeating and single FRBs by CHIME are completely consistent with this scenario in terms of DM, declination, number of bursts from repeaters, and the relative number of FRBs observed once and multiple times.
## Acknowledgement
I thank Evan Keane and J. Xavier Prochaska for helpful comments made on the manuscript, Keith Bannister for the motivation for §7.1, and Casey Law for the motivation for §9.2.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research made use of the Python libraries Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), and SciPy (Virtanen et al., 2020).
## Funding Statement
I acknowledge support by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (project DP210102103).
## Competing Interests
None
The code used in this work is available at [https://github.com/FRBs/zdm](https://github.com/FRBs/zdm).
|
2309.07095 | Directed Homology and Persistence Modules | In this note, we give a self-contained account on a construction for a
directed homology theory based on modules over algebras, linking it to both
persistence homology and natural homology. We study its first properties, among
which some exact sequences. | Eric Goubault | 2023-09-13T17:13:08Z | http://arxiv.org/abs/2309.07095v3 | # Directed Homology and Persistence Modules
###### Abstract
In this note, we give a self-contained account of a construction for a directed homology theory based on modules over algebras, linking it to both persistence homology and natural homology. We study its first properties, among which some exact sequences.
## 1 Introduction
Persistence modules were originally introduced for topological data analysis, in the seminal work [42], and further generalized to multidimensional filtrations in [10]. In topological data analysis, the data set is organized as a filtration of spaces, where the data is seen at different "resolutions" (at least in the Vietoris-Rips complex approach). When this is an actual filtration of topological spaces, i.e. a set of spaces totally ordered by inclusion, the evolution of the geometry, in particular of the homology, of the spaces in the filtration is well studied. This gives rise to a simple persistence module over the path algebra of the (quiver of the) linear order of the filtration.
Generalized persistence modules were introduced in [7], where a persistence module was defined to be any functor from a partially ordered set, or more generally a preordered set, to an arbitrary category (in general, a category of vector spaces). This view is a categorification of modules, as is well known. This is also a natural way to encode representations of these preordered sets, which, by Morita equivalence, are the same as persistence modules over their path algebra [2].
Directed topology has its roots in a very different application area, concurrency and distributed systems theory [16, 25] rather than data analysis. Its interest lies in the study of (multi-dimensional) dynamical systems, discrete (precubical sets, for application to concurrency theory) or continuous time (differential inclusions, for e.g. applications in control), that appear in the study of multi-agent systems. In directed topology, topological spaces under study are "directed", meaning they have "preferred directions", for instance a cone in the
tangent space, if we are considering a manifold, or the canonical local coordinate system in a precubical set. The topological invariants needed for classifying such spaces have been long in the making. Indeed, classical homotopy or homology theories would not distinguish between distinct directed structures on the same underlying topological space. Natural homology has been introduced in [13] to deal with this, by defining a natural system [5] of modules, describing the evolution of the standard (simplicial, or singular) homology of certain path spaces, along their endpoints. Indeed, this is in spirit similar to persistence homology.
This paper makes the case that both frameworks have been introducing tools, for different reasons, that are slowly overlapping, if not converging. Not surprisingly, the topological data analysis community has also taken an interest in dynamical systems recently, see e.g. [34, 3].
First steps in that direction have been published in [9], where natural homology has been shown to be, for a certain class of directed spaces, a colimit of unidimensional persistence modules. Another step was made in [21], where the author constructed a directed homology theory based on the semi-abelian category of (unital) algebras.
In directed topology, a particular effort has been made to axiomatize directed homotopy and homology theories, in the style of algebraic homotopy [4, 25] and the Eilenberg-Steenrod axioms [13]. This has been less the case in persistence, although some recent work [33] is paving the way. In this paper, we discuss Künneth formulas and relative homology sequences for directed homology, and hope they will be of some interest to the persistence homology community.
One fundamental hurdle in both frameworks is the cost of evaluating such homologies in practice. In unidimensional persistence, the representations are "tame", and persistence modules are just modules over a univariate polynomial ring. In multidimensional persistence, or for more complicated preordered set filtrations, representations can be "wild", and similarly for arbitrary directed spaces.
For certain classes of directed spaces, mostly partially ordered topological spaces that are the geometric realization of finite precubical sets, component categories [18, 24] have provided a framework in which to represent, in a finite manner, the essential homology types of the path spaces involved. In topological data analysis, tame persistence modules have been studied in e.g. [33]. A natural question is indeed to get these different points of view to coincide, or at least to be expressed in a joint framework. This would allow in particular the use of advances in persistence module theory to help with practical computations for directed topology, using e.g. local information as in [6].
**Contents.** This note has been written to be mostly self-contained, using only elementary and detailed arguments. As it draws from a variety of mathematical concepts, we begin by introducing them at length in the first few sections. The background on directed topology is given in Section 2, the one on associative algebras and modules in Section 3, and on path algebras and their modules in Section 4.
We then come to the heart of the matter: Section 5 introduces (directed) homology modules as certain bimodules over the underlying path algebra of a precubical set. Of particular importance for the theory are the notions of restriction and extension of scalars, which we recap in Section 3 and particularize to the bimodules we need in Section 5.
We make links with persistence modules in Section 6 and with natural homology, in Section 7. In particular, we show evidence that component categories should provide tame characterizations of the first (directed) homology bimodule, for a class of precubical set, interesting for applications (as they come from the semantics of a well studied concurrent language, the PV language, with no loops, see [16]).
Finally, we prove a few facts about the homology modules of (a certain class of) precubical sets. First, they are "invariants" under homeomorphism of their geometric realization, invariance meaning here "bisimulation equivalence", introduced in [13]. We also prove a Künneth formula which links the bimodule homology of a tensor product of precubical sets, introduced in this paper, with the tensor product of the bimodule homologies of each space, at least for the restricted case of "cubical complexes" (as in [13]). Finally, we give some preliminary results concerning exact sequences in this homology theory, and in particular the relative homology exact sequence.
## 2 Precubical sets and directed spaces
**Precubical sets.** We remind the reader of the structure of precubical set we are going to use in the sequel:
**Definition 1**.: A precubical set is a graded set \(X=(X_{i})_{i\in\mathbb{N}}\) with two families of operators:
\[d_{i}^{0},d_{i}^{1}:\ X_{n}\to X_{n-1}\]
(\(i=0,\ldots,n-1\)) satisfying the relations
\[d_{i}^{k}\circ d_{j}^{l} = d_{j-1}^{l}\circ d_{i}^{k}\]
(\(i<j\), \(k,l=0,1\))
A morphism \(f\) from precubical set \(X\) to precubical set \(Y\) is a graded function \(f:\ X_{n}\to Y_{n}\) for all \(n\in\mathbb{N}\) such that it commutes with all boundary operators \(d_{i}^{\epsilon}\), \(\epsilon\in\{0,1\}\). We denote by \(Precub\) the category of precubical sets.
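As an illustration, Definition 1 can be encoded directly and the precubical relation checked mechanically; the sketch below is not taken from any of the cited works, and the example builds the directed square: one 2-cell, four edges, and four vertices.

```python
from itertools import combinations

class PrecubicalSet:
    """Finite precubical set: graded cells with face maps d(i, eps).
    Faces are stored as face[(cell, i, eps)] -> cell of dimension - 1."""
    def __init__(self):
        self.cells = {}   # cell name -> dimension
        self.face = {}    # (cell, i, eps) -> cell

    def add_cell(self, name, dim, faces=None):
        self.cells[name] = dim
        for (i, eps), f in (faces or {}).items():
            self.face[(name, i, eps)] = f

    def check_relations(self):
        """Verify d_i^k d_j^l = d_{j-1}^l d_i^k for all i < j."""
        for c, n in self.cells.items():
            for i, j in combinations(range(n), 2):  # pairs with i < j
                for k in (0, 1):
                    for l in (0, 1):
                        lhs = self.face[(self.face[(c, j, l)], i, k)]
                        rhs = self.face[(self.face[(c, i, k)], j - 1, l)]
                        assert lhs == rhs, (c, i, j, k, l)

# The directed square: vertices "xy" record the values of the two coordinates.
X = PrecubicalSet()
for v in ("00", "01", "10", "11"):
    X.add_cell(v, 0)
X.add_cell("a", 1, {(0, 0): "00", (0, 1): "10"})  # bottom edge (x1 = 0)
X.add_cell("b", 1, {(0, 0): "00", (0, 1): "01"})  # left edge   (x0 = 0)
X.add_cell("c", 1, {(0, 0): "10", (0, 1): "11"})  # right edge  (x0 = 1)
X.add_cell("d", 1, {(0, 0): "01", (0, 1): "11"})  # top edge    (x1 = 1)
X.add_cell("s", 2, {(0, 0): "b", (0, 1): "c", (1, 0): "a", (1, 1): "d"})
X.check_relations()  # passes: the face assignments satisfy Definition 1
```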
Note that the 1-skeleton \(X_{\leq 1}=(X_{0},X_{1},d_{0}^{0}:X_{1}\to X_{0},d_{0}^{1}:X_{1}\to X_{0})\) of any precubical set \(X\), is a quiver, or directed graph [2].
Precubical sets form a presheaf category on a site \(\square\), whose objects are \([n]\), \(n\in\mathbb{N}\) and whose morphisms are \(\delta_{i}^{\epsilon}:\ [n-1]\to[n]\) for \(n>0\), \(\epsilon\in\{0,1\}\), \(i\in\{1,\ldots,n\}\), such that \(\delta_{j}^{\eta}\delta_{i}^{\epsilon}=\delta_{i}^{\epsilon}\delta_{j-1}^{ \eta}\), for \(i<j\).
We write \(d_{\mathcal{I}}^{0}\) (resp. \(d_{\mathcal{I}}^{1}\)) for the composition \(d_{i_{1}}^{0}\circ\ldots\circ d_{i_{k}}^{0}\) (resp. \(d_{i_{1}}^{1}\circ\ldots\circ d_{i_{k}}^{1}\)) if \(\mathcal{I}=\{i_{1},\ldots,i_{k}\}\) with \(i_{1}<\ldots<i_{k}\). We write \(d^{0}\) (resp. \(d^{1}\)) for \(d_{\{0,\ldots,n-1\}}^{0}\) (resp. \(d_{\{0,\ldots,n-1\}}^{1}\)).
**Directed spaces.** Precubical sets are natural combinatorial models for the directed spaces that we define below.
Let \(I=[0,1]\) denote the unit segment with the topology inherited from \(\mathbb{R}\).
**Definition 2** ([25]).: A directed topological space, or d-space, is a pair \((X,dX)\) consisting of a topological space \(X\) equipped with a subset \(dX\subset X^{I}\) of continuous paths \(p:I\to X\), called directed paths or d-paths, satisfying three axioms:
* every constant map \(I\to X\) is directed
* \(dX\) is closed under composition with continuous non-decreasing maps from \(I\) to \(I\)
* \(dX\) is closed under concatenation.
We shall abbreviate the notation \((X,dX)\) to \(X\).
Note that for a d-space \(X\), the space of d-paths \(dX\subseteq X^{I}\) is a topological space, equipped with the compact-open topology. We denote by \(\overrightarrow{P}(X)\) the topological space of d-paths of \(X\) modulo reparametrization, and call it the trace space. We write \(\overrightarrow{P}(X)_{v}^{w}\) for the sub-topological space of \(\overrightarrow{P}(X)\) made of directed traces from point \(v\) to point \(w\) of \(X\).
A map \(f:X\to Y\) between d-spaces \((X,dX)\) and \((Y,dY)\) is said to be a d-map or directed map if it is continuous and for any d-path \(p\in dX\) the composition \(f\circ p:I\to Y\) belongs to \(dY\). In other words we require that \(f\) preserves d-paths. We write \(df\ :dX\to dY\) for the induced map between directed path spaces. We denote by \(\mathcal{D}\) the category of d-spaces.
A directed homeomorphism or dihomeomorphism is a homeomorphism which is a directed map, and whose inverse is also a directed map. One of the goals of directed topology is to help classify directed spaces up to dihomeomorphisms.
Classical examples of d-spaces arise as geometric realizations of precubical sets, as found e.g. in the semantics of concurrent and distributed systems [16]. We will use some of these classical examples to explain the directed homology theory we are designing in this article.
Let \(\overrightarrow{I}=[0,1]\) with, as directed paths, all (weakly) increasing paths for the usual ordering inherited from the real numbers, and let \(\overrightarrow{I}^{n}\) be the \(n\)th power of \(\overrightarrow{I}\) (for \(n>0\)) as a directed space: as \(\mathcal{D}\) is complete (as well as co-complete), this is well defined. \(\overrightarrow{I}^{n}\) has as directed paths all continuous paths that are increasing on each coordinate.
**Definition 3**.: The geometric realization \(|C|\) of a precubical set \(C\) is the Yoneda extension of the following functor \(F:\ \square\rightarrow\mathcal{D}\):
* \(F([n])=\overrightarrow{I}^{n}\)
* \(F(\delta_{i}^{\epsilon})\) is the map from \(\overrightarrow{I}^{n-1}\) to \(\overrightarrow{I}^{n}\) which sends \((x_{1},\dots,x_{n-1})\in\overrightarrow{I}^{n-1}\) to \((x_{1},\dots,x_{i},\epsilon,x_{i+1},\dots,x_{n-1})\).
For more details, see e.g. [39].
Finally, as we are going to construct a directed homology theory, and compare it with the construction of [13], we recap the definition of the natural homology of a directed space below:
**Definition 4**.: The natural homology of directed space \(X\) is the collection of functors \(HN_{i}(X):F\overrightarrow{P}(X)\to Ab\), \(i\geq 1\), from the factorization category of \(\overrightarrow{P}(X)\) to the category of abelian groups with:
* \(HN_{i}(X)(p)=H_{i}(\overrightarrow{P}(X)_{a}^{b})\), where \(p\in\overrightarrow{P}(X)\) is a trace from \(a\) to \(b\in X\), and \(H_{i}\) is the \(i\)th singular homology functor,
* \(HN_{i}(X)(\langle u,v\rangle)(p)=[v\circ p\circ u]\) where \(u\) is a trace from \(a^{\prime}\) to \(a\), \(v\) a trace from \(b\) to \(b^{\prime}\), and \([x]\) denotes here the class in \(H_{i}(\overrightarrow{P}(X)_{a^{\prime}}^{b^{\prime}})\) of \(x\).
Recall that the factorization category \(F\mathcal{C}\), or twisted arrow category [29], of a category \(\mathcal{C}\) is the category whose objects are the morphisms \(f\) of \(\mathcal{C}\) and whose morphisms from \(f\) to \(g\) are pairs of morphisms \(\langle u,v\rangle\) of \(\mathcal{C}\) such that \(g=v\circ f\circ u\).
In what follows, we will sometimes change the coefficients in which we compute natural homology, without changing its base definition. In particular, in Section 7, we will be considering natural homology with values in \(R\)-vector spaces, for \(R\) a given field.
**Cube chains and trace spaces.** Our results will apply to a particular class of precubical sets, introduced in [40]:
**Definition 5**.: Let \(X\) be a finite precubical set. We say that \(X\) is proper if the map:

\[\coprod_{n\geq 0}X_{n}\to 2^{X_{0}},\qquad c\mapsto\{d^{0}(c),d^{1}(c)\}\]

is an injection, i.e., the cubes of \(X\) can be distinguished by their extreme vertices.
The non-looping length covering of \(X\) is the precubical set \(\tilde{X}\) with \(\tilde{X}_{n}=X_{n}\times\mathbb{Z}\) and \(d_{i}^{\epsilon}(c,k)=(d_{i}^{\epsilon}(c),k+\epsilon)\).
We write \(Cub\) for the full sub-category of \(Precub\), of finite precubical sets with proper non-looping length covering.
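For instance, the filled square above is proper: its nine cells (four vertices, four edges and one square) have pairwise distinct pairs of extreme vertices. By contrast, a precubical set with two parallel 1-cubes \(\alpha\) and \(\beta\) sharing the same endpoints (such as the Kronecker quiver of Example 3 below, seen as a precubical set) is not proper, since

\[\{d^{0}(\alpha),d^{1}(\alpha)\}=\{d^{0}(\beta),d^{1}(\beta)\}.\]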
The following definition is essential in the calculations on trace spaces done by Krzysztof Ziemianski [39]. We twist the original definition a little so that cube chains can be empty, consisting in that case of constant paths on vertices; this convention will prove easier for the rest of the paper:
**Definition 6**.: Let \(X\) be a pre-cubical set and let \(v\), \(w\in X_{0}\) be two of its vertices. A cube chain in \(X\) from \(v\) to \(w\) is a sequence of cubes indexed by \(v\) and \(w\), \(c=(c_{1},\ldots,c_{l})_{v,w}\), where \(c_{k}\in X_{n_{k}}\) and \(n_{k}>0\), \(l\geq 0\), such that either \(l=0\) and \(v=w\) or:
* \(d^{0}(c_{1})=v\),
* \(d^{1}(c_{l})=w\),
* \(d^{1}(c_{i})=d^{0}(c_{i+1})\) for \(i=1,\ldots,l-1\).
We will often omit the index \(v,w\) on cube chains when the context makes it clear what they are.
The sequence \((n_{1},\ldots,n_{l})\) will be called the type of a cube chain \(c\), \(dim(c)=n_{1}+\ldots+n_{l}-l\) the dimension of \(c\), and \(n_{1}+\ldots+n_{l}\), the length of \(c\). By convention, we set \(dim(()_{v,v})=0\): the dimension of constant paths on a vertex is zero. These cube chains in \(X\) from \(v\) to \(w\) will be denoted by \(Ch(X)_{v}^{w}\), and the set of cube chains of dimension equal to \(m\) (resp. less than \(m\), less or equal to \(m\)) by \(Ch^{=m}(X)_{v}^{w}\) (resp. \(Ch^{<m}(X)_{v}^{w}\), \(Ch^{\leq m}(X)_{v}^{w}\)). Note that a cube chain has dimension \(0\) if and only if it contains \(1\)-cubes only, or is an empty cube chain.
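For instance, on the filled square above:

\[dim((F)_{00,11})=2-1=1,\qquad dim((a,b^{\prime})_{00,11})=1+1-2=0,\qquad dim(()_{00,00})=0,\]

the first chain having type \((2)\) and length \(2\), and the second, type \((1,1)\) and length \(2\).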
**Remark 1**.: Cube chains of dimension \(0\) are naturally identified with directed paths (including the constant paths on vertices) of the underlying quiver of the pre-cubical set.
**Remark 2**.: \(1\)-cube chains are necessarily of the form \((c_{1},\ldots,c_{l})\) with a unique \(c_{i}\) of dimension \(2\), all the others being of dimension \(1\). Indeed, each cell of dimension strictly greater than \(1\) strictly increases the dimension of the cube chain.
**Remark 3**.: We note that the dimension of cube chains is additive in the following sense: let \(c=(c_{1},\ldots,c_{l})\) and \(d=(d_{1},\ldots,d_{k})\) be two cube chains. Their concatenation, when defined, i.e. when \(d^{1}(c_{l})=d^{0}(d_{1})\), is the cube chain \(e=(c_{1},\ldots,c_{l},d_{1},\ldots,d_{k})\). The dimension of \(e\) is the sum of the dimensions of each cell \(c_{i}\) and \(d_{j}\) minus \(k+l\), hence is equal to the sum of the dimension of \(c\) with the dimension of \(d\).
For a cube chain \(c=(c_{1},\ldots,c_{l})\in Ch(X)_{v}^{w}\) of type \((n_{1},\ldots,n_{l})\) and dimension \(i=\sum\limits_{j=1}^{l}n_{j}-l\), an integer \(k\in\{1,\ldots,l\}\) and a subset \(\mathcal{I}\subseteq\{0,\ldots,n_{k}-1\}\) having \(r\) elements, where \(0<r<n_{k}\), define a cube chain

\[d_{k,\mathcal{I}}(c)=(c_{1},\ldots,c_{k-1},d_{\overline{\mathcal{I}}}^{0}(c_{k}),d_{\mathcal{I}}^{1}(c_{k}),c_{k+1},\ldots,c_{l})\in Ch(X)_{v}^{w}\]

where \(\overline{\mathcal{I}}=\{0,\ldots,n_{k}-1\}\backslash\mathcal{I}\). This defines a cell complex whose cells of dimension \(i\) are the \(i\)-cube chains \(Ch^{=i}(X)\), with faces given by the \(d_{k,\mathcal{I}}\).
For \(R\) a given commutative ring we define:
**Definition 7**.: Let \(X\) be a precubical set. We call \(R_{i+1}[X]\) for \(i\geq 0\) the free \(R\)-module generated by all cube chains of dimension \(i\).
The formula below is given in [40]:
**Definition 8**.: Define a boundary map from the \(R_{i+1}[X]\) to \(R_{i}[X]\) as follows:
\[\partial c\quad=\quad\sum_{k=1}^{l}\sum_{r=1}^{n_{k}-1}\sum_{\mathcal{I}\subseteq \{0,\ldots,n_{k}-1\}:\ |\mathcal{I}|=r}(-1)^{n_{1}+\ldots+n_{k-1}+k+r+1}sgn(\mathcal{I})d_{k, \mathcal{I}}(c)\]
where
\[sgn(\mathcal{I})=\left\{\begin{array}{ll}1&\text{if }\sum\limits_{i\in \mathcal{I}}i\equiv\sum\limits_{i=1}^{r}i\ mod\ 2\\ -1&\text{otherwise}\end{array}\right.\]
Then \(R_{*}[X]=(R_{i+1}[X],\partial)_{i\in\mathbb{N}}\) is a chain complex. The restriction to cube chains from any \(v\in X_{0}\) to any \(w\in X_{0}\) is a sub-chain complex of \(R_{*}[X]\) that we write \(R_{v,*}^{w}[X]\).
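As a small sanity check on the signs, consider, on the filled square above, the cube chain \((F)_{00,11}\) of dimension \(1\): here \(l=1\), \(n_{1}=2\), the only faces are \(d_{1,\{0\}}\) and \(d_{1,\{1\}}\), and, with \(sgn(\{0\})=-1\) and \(sgn(\{1\})=1\),

\[\partial(F)=(-1)^{3}\left(sgn(\{0\})\,d_{1,\{0\}}(F)+sgn(\{1\})\,d_{1,\{1\}}(F)\right)=(d_{1}^{0}(F),d_{0}^{1}(F))-(d_{0}^{0}(F),d_{1}^{1}(F))=(b,a^{\prime})-(a,b^{\prime}),\]

the difference of the two directed edge paths from \(00\) to \(11\) (compare with Example 1 below).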
The following is a direct consequence of Theorem 1.7 of [40]:
**Lemma 1**.: Let \(X\in Cub\), i.e. a finite precubical set with proper non-looping length covering. Then for all \(v,w\in X_{0}\), the homology of \(R_{v,*}^{w}[X]\) is isomorphic to the homology of the trace space \(\overrightarrow{P}(X)_{v}^{w}\).
In order to illustrate these definitions and constructs, we give below some sample calculations, which will also be useful later.
**Example 1** (2 holes on diagonal/antidiagonal).: We consider the following two precubical sets:
[Figure: two \(3\times 3\) grids of unit squares with vertices numbered \(1\) to \(9\); in the first, the two \(2\)-cells on the diagonal are removed, and in the second, the two \(2\)-cells on the antidiagonal are removed. The \(1\)-cells \(k\), \(a\), \(b\), \(c\), \(d\), \(e\) and the \(2\)-cell \(F\) used below are labeled on the figure.]
The cube chain \((k,F,b)\) is of type \((1,2,1)\). Let us compute \(d_{2,\mathcal{I}}(k,F,b)\) for all \(\mathcal{I}\subseteq\{0,\ldots,1\}\) having \(r\) elements, where \(0<r<2\), i.e. having exactly one element. Hence we have to compute \(d_{2,\{0\}}(k,F,b)\) and \(d_{2,\{1\}}(k,F,b)\). Supposing that the boundaries of 2-cells indexed by 0 are the vertical 1-cells, and that the boundaries indexed by 1 are the horizontal ones:
* \(d_{2,\{0\}}(k,F,b)=(k,d_{\{1\}}^{0}(F),d_{\{0\}}^{1}(F),b)=(k,d,e,b)\)
* \(d_{2,\{1\}}(k,F,b)=(k,d_{\{0\}}^{0}(F),d_{\{1\}}^{1}(F),b)=(k,c,a,b)\)
Then,
\[\begin{array}{rcl}\partial(k,F,b)&=&\sum\limits_{\mathcal{I}\subseteq\{0, \ldots,1\}:\ |\mathcal{I}|=1}(-1)^{5}sgn(\mathcal{I})d_{2,\mathcal{I}}(k,F,b)\\ &=&-sgn(\{0\})d_{2,\{0\}}(k,F,b)-sgn(\{1\})d_{2,\{1\}}(k,F,b)\\ &=&d_{2,\{0\}}(k,F,b)-d_{2,\{1\}}(k,F,b)\\ &=&(k,d,e,b)-(k,c,a,b)\end{array}\]
**Example 2** (3-cube).: Consider now the full 3-cube \(X\) below, with 8 vertices having 0 and 1 coordinates along the three axes (we will note them by concatenating their coordinates: 000, 001, 010, 100, 110, 101, 011, 111) and 12 1-cells \(a01\), \(0b1\), \(a11\), \(1b1\), \(01c\), \(00c\), \(11c\), \(10c\), \(a00\), \(0b0\), \(a10\), \(1b0\), with \(d^{0}(a01)=001\), \(d^{1}(a01)=101\), \(d^{0}(0b1)=001\), \(d^{1}(0b1)=011\), \(d^{0}(a11)=011\), \(d^{1}(a11)=111\), \(d^{0}(1b1)=101\), \(d^{1}(1b1)=111\), \(d^{0}(01c)=010\), \(d^{1}(01c)=011\), \(d^{0}(00c)=000\), \(d^{1}(00c)=001\), \(d^{0}(11c)=110\), \(d^{1}(11c)=111\), \(d^{0}(10c)=100\), \(d^{1}(10c)=101\), \(d^{0}(a00)=000\), \(d^{1}(a00)=100\), \(d^{0}(0b0)=000\), \(d^{1}(0b0)=010\), \(d^{0}(a10)=010\), \(d^{1}(a10)=110\), \(d^{0}(1b0)=100\) and \(d^{1}(1b0)=110\).
The 2-cells are: \(A\) whose boundary is given by \(d_{0}^{0}(A)=a00\), \(d_{1}^{0}(A)=0b0\), \(d_{0}^{1}(A)=a10\) and \(d_{1}^{1}(A)=1b0\); \(B\) whose boundary is given by \(d_{0}^{1}(B)=a01\), \(d_{1}^{0}(B)=00c\), \(d_{0}^{0}(B)=a00\) and \(d_{1}^{1}(B)=10c\); \(C\) whose boundary is given by \(d_{0}^{1}(C)=0b1\), \(d_{1}^{0}(C)=00c\), \(d_{1}^{1}(C)=01c\) and \(d_{0}^{0}(C)=0b0\); \(A^{\prime}\) whose boundary is given by \(d_{0}^{0}(A^{\prime})=a01\), \(d_{1}^{0}(A^{\prime})=0b1\), \(d_{0}^{1}(A^{\prime})=a11\) and \(d_{1}^{1}(A^{\prime})=1b1\); \(B^{\prime}\) whose boundary is given by \(d_{0}^{1}(B^{\prime})=a11\), \(d_{1}^{0}(B^{\prime})=01c\), \(d_{0}^{0}(B^{\prime})=a10\) and \(d_{1}^{1}(B^{\prime})=11c\); \(C^{\prime}\) whose boundary is given by \(d_{0}^{1}(C^{\prime})=1b1\), \(d_{1}^{1}(C^{\prime})=11c\), \(d_{0}^{0}(C^{\prime})=1b0\) and \(d_{1}^{0}(C^{\prime})=10c\).
The unique 3-cell \(S\) has as boundaries \(A\), \(B\), \(C\), \(A^{\prime}\), \(B^{\prime}\) and \(C^{\prime}\) with \(d^{0}_{0}(S)=C\), \(d^{0}_{1}(S)=B\), \(d^{0}_{2}(S)=A\), \(d^{1}_{0}(S)=C^{\prime}\), \(d^{1}_{1}(S)=B^{\prime}\), \(d^{1}_{2}(S)=A^{\prime}\).
The 0-cube chains are all subpaths of the six following maximal paths: \(\zeta=(0b0,a10,11c)\), \(\epsilon=(0b0,01c,a11)\), \(\alpha=(a00,1b0,11c)\), \(\beta=(a00,10c,1b1)\), \(\gamma=(00c,a01,1b1)\), \(\delta=(00c,0b1,a11)\).
The 1-cube chains from \(000\) to \(111\) are \((00c,A^{\prime})\), \((B,1b1)\), \((C,a11)\), \((0b0,B^{\prime})\), \((a00,C^{\prime})\) and \((A,11c)\).
As with Example 1, we have the following calculations for \(\partial\): \(\partial(00c,A^{\prime})=\gamma-\delta\), \(\partial(B,1b1)=\beta-\gamma\), \(\partial(C,a11)=\epsilon-\delta\), \(\partial(0b0,B^{\prime})=\zeta-\epsilon\), \(\partial(a00,C^{\prime})=\alpha-\beta\), \(\partial(A,11c)=\alpha-\zeta\).
There is a unique 2-cube chain, namely \((S)\). We now exemplify the computation of the boundary of this cube chain below.
\[\begin{array}{rcl}\partial(S)&=&\sum\limits_{r=1}^{2}\sum\limits_{\mathcal{ I}\subseteq\{0,1,2\}:|\mathcal{I}|=r,r=1,2}(-1)^{r}sgn(\mathcal{I})d_{1, \mathcal{I}}((S))\\ &=&sgn(\{0,1\})d_{1,\{0,1\}}((S))+sgn(\{0,2\})d_{1,\{0,2\}}((S))\\ &&\quad+sgn(\{1,2\})d_{1,\{1,2\}}((S))-sgn(\{0\})d_{1,\{0\}}((S))\\ &&\quad-sgn(\{1\})d_{1,\{1\}}((S))-sgn(\{2\})d_{1,\{2\}}((S))\\ &=&d_{1,\{0,1\}}((S))-d_{1,\{0,2\}}((S))+d_{1,\{1,2\}}((S))-d_{1,\{0\}}((S))\\ &&\quad+d_{1,\{1\}}((S))-d_{1,\{2\}}((S))\\ &=&(d^{0}_{\{2\}}(S),d^{1}_{\{0,1\}}(S))-(d^{0}_{\{1\}}(S),d^{1}_{\{0,2\}}(S)) +(d^{0}_{\{0\}}(S),d^{1}_{\{1,2\}}(S))\\ &&\quad-(d^{0}_{\{1,2\}}(S),d^{1}_{\{0\}}(S))+(d^{0}_{\{0,2\}}(S),d^{1}_{\{1\}} (S))-(d^{0}_{\{0,1\}}(S),d^{1}_{\{2\}}(S))\\ &=&(A,11c)-(B,1b1)+(C,a11)-(a00,C^{\prime})+(0b0,B^{\prime})-(00c,A^{\prime}) \end{array}\]
## 3 Associative algebras and modules over algebras
**Algebras.** Let \(R\) be a commutative ring.
**Definition 9**.: A (unital) associative algebra over \(R\), or \(R\)-algebra, \(A=(A,+,.,\times)\) is a \(R\)-module \((A,+,.)\) (with external multiplication by elements of the ring \(R\) denoted by \(.\)) that has an internal monoid operation, i.e. an associative operation ("multiplication" or "internal multiplication") \(\times:A\times A\to A\) that is bilinear. We denote by \(0\) the neutral element for \(+\) (which we use also for denoting the \(0\) of the ring \(R\)) and by \(1\) the neutral element (or identity) for \(\times\).
**Definition 10**.: Let \(A\) and \(B\) be two \(R\)-algebras. A morphism \(f:A\to B\) of \(R\)-algebras is a linear map from \(A\) to \(B\) seen as \(R\)-modules, such that it commutes with the internal multiplication:
\[f(a\times a^{\prime})=f(a)\times f(a^{\prime})\]
A \(R\)-submodule \(B\) of a \(R\)-algebra \(A\) is a \(R\)-subalgebra of \(A\) if the identity of \(A\) belongs to \(B\) and \(b_{1}\times b_{2}\in B\) for all \(b_{1},b_{2}\in B\). A \(R\)-submodule \(I\) of a \(R\)-algebra \(A\) is a right ideal of \(A\) (resp. left ideal of \(A\)) if \(x\times a\in I\) (resp. \(a\times x\in I\)) for all \(x\in I\) and \(a\in A\). A two-sided ideal of \(A\) (or simply an ideal of \(A\)) is both a left ideal and a right ideal of \(A\).
It is easy to see that if \(I\) is a two-sided ideal of a \(R\)-algebra \(A\), then the quotient \(R\)-module \(A/I\) has a unique \(R\)-algebra structure such that the canonical surjective linear map \(\pi:A\to A/I\), \(a\mapsto\bar{a}=a+I\), becomes a \(R\)-algebra homomorphism.
If \(I\) is a two-sided ideal of \(A\) and \(m\geq 1\) is an integer, we denote by \(I^{m}\) the two-sided ideal of \(A\) generated by all elements \(x_{1}\times x_{2}\times\ldots\times x_{m}\), where \(x_{1},x_{2},\ldots,x_{m}\in I\), that is, \(I^{m}\) consists of all finite sums of elements of the form \(x_{1}\times x_{2}\times\ldots\times x_{m}\), where \(x_{1},x_{2},\ldots,x_{m}\in I\). We set \(I^{0}=A\).
The (Jacobson) radical \(rad\)\(A\) of a \(R\)-algebra \(A\) is the intersection of all the maximal right ideals in \(A\).
Let \(A\) and \(B\) be \(R\)-algebras. Since \(A\) and \(B\) may both be regarded as \(R\)-modules, their tensor product \(A\otimes_{R}B\) is also an \(R\)-module. This is the \(R\)-module where all elements can be written (non uniquely in general) as finite sums of elements of the form \(\lambda a\otimes b\) with \(\lambda\in R\), \(a\in A\) and \(b\in B\), with the property that \(\otimes\) is bilinear.
The tensor product can be given the structure of a ring by defining the product on elements of the form \(a\otimes b\) by:
\[(a_{1}\otimes b_{1})\times(a_{2}\otimes b_{2})=(a_{1}\times a_{2})\otimes(b_ {1}\times b_{2})\]
and then extending by linearity to all of \(A\otimes_{R}B\). This ring is an \(R\)-algebra, associative and unital with identity element given by \(1_{A}\otimes 1_{B}\) if \(A\) and \(B\) are unital. If \(A\) and \(B\) are commutative \(R\)-algebras, then the tensor product is commutative as well.
The tensor product turns the category of \(R\)-algebras into a symmetric monoidal category (and even monoidal closed, with the internal Hom functor in \(Alg\)).
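A classical example, recalled here for concreteness: for matrix algebras over a commutative ring \(R\),

\[\mathbb{M}_{m}(R)\otimes_{R}\mathbb{M}_{n}(R)\cong\mathbb{M}_{mn}(R),\]

an isomorphism realized by the Kronecker product of matrices; the multiplication rule above then reads \((A_{1}\otimes B_{1})\times(A_{2}\otimes B_{2})=(A_{1}A_{2})\otimes(B_{1}B_{2})\).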
#### Modules over an associative algebra
**Definition 11** ([2]).: Let \(A\) be a (unital) \(R\)-algebra. A right \(A\)-module (or a right module over \(A\)) is a pair \((M,\bullet)\), where \(M\) is a \(R\)-module and \(\bullet:M\times A\to M\), \((m,a)\mapsto m\bullet a\), is a binary operation satisfying the following conditions, for all \(x,y\in M\), \(a,b\in A\) and \(\lambda\in R\):
* \((x+y)\bullet a=x\bullet a+y\bullet a\)
* \(x\bullet(a+b)=x\bullet a+x\bullet b\)
* \(x\bullet(ab)=(x\bullet a)\bullet b\)
* \(x\bullet 1=x\)
* \((x\lambda)\bullet a=x\bullet(a\lambda)=(x\bullet a)\lambda\)
The definition of a left \(A\)-module and of an \(A\)-bimodule is analogous. Throughout, we write \(M\) or \(M_{A}\) instead of \((M,\bullet)\). When in need of disambiguating formulas, we will write explicitly \({}_{A}\bullet_{M}\) for the left action of \(A\) on \(M\) and \({}_{M}\bullet_{A}\) for the right action of \(A\) on \(M\), or simply \({}_{A}\bullet\) (resp. \(\bullet_{A}\)) when the context makes it clear on which module the action is taken, or simply again \(\bullet_{M}\) (resp. \({}_{M}\bullet\)) when the context makes it clear which algebra acts on \(M\). We write \(A_{A}\) and \({}_{A}A\) whenever we view the algebra \(A\) as a right or left \(A\)-module, respectively, with \(\bullet\) being the algebra multiplication. We write \(A\) for the \(A\)-bimodule \(A\). Note that an \(A\)-bimodule can be seen as a left (or right) \(A\otimes A^{op}\)-module, where \(A^{op}\) is the algebra which has the same elements as \(A\) but with the multiplication \(a\times_{A^{op}}b=b\times_{A}a\).
A morphism of \(A\)-modules \(f:\ M\to N\) is a linear map between the underlying \(R\)-modules of \(M\) and \(N\) which preserves the right action of the \(R\)-algebra \(A\).
We write \({}_{R}Mod\ A\) for the category of left \(A\)-modules, \(Mod_{R}\ A\) for the category of right \(A\)-modules, and \({}_{R}Mod_{R}\ A\) for the category of \(A\)-bimodules.
A module \(M\) is said to be finite dimensional if the dimension \(dim_{R}M\) of the underlying \(R\)-vector space of \(M\) is finite. The category of left \(A\)-modules of finite dimension is denoted by \({}_{R}mod\) (resp. \(mod_{R}\) and \({}_{R}mod_{R}\) for right modules and bimodules).
A \(R\)-subspace \(M^{\prime}\) of a right \(A\)-module \(M\) is said to be an \(A\)-submodule of \(M\) if \(m\bullet a\in M^{\prime}\), for all \(m\in M^{\prime}\) and all \(a\in A\). In this case the \(R\)-module \(M/M^{\prime}\) has a natural \(A\)-module structure such that the canonical epimorphism \(\pi:\ M\to M/M^{\prime}\) is an \(A\)-module homomorphism. Similarly for left modules and bimodules.
Let \(M\) be a right \(A\)-module and let \(I\) be a right ideal of \(A\). It is easy to see that the set \(M\bullet I\) consisting of all sums \(m_{1}\bullet a_{1}+\ldots+m_{s}\bullet a_{s}\), where \(s\geq 1\), \(m_{1},\ldots,m_{s}\in M\) and \(a_{1},\ldots,a_{s}\in I\), is a submodule of \(M\). Similarly for left-modules (with left ideals) and bimodules (with two-sided ideals).
A right \(A\)-module \(M\) is said to be generated by the elements \(m_{1},\ldots,m_{s}\) of \(M\) if any element \(m\in M\) has the form \(m=m_{1}\bullet a_{1}+\ldots+m_{s}\bullet a_{s}\) for some \(a_{1},\ldots,a_{s}\in A\). In this case, we write \(M=m_{1}A+\ldots+m_{s}A\). A module \(M\) is said to be finitely generated if it is generated by a finite subset of elements of \(M\).
Finally, let \(M\) be an \(A\)-bimodule and \(N\) be a \(B\)-bimodule. Then the \(R\)-module which is the tensor product of the underlying \(R\)-modules of \(M\) and \(N\) (elements of which are generated by tensors \(m\otimes n\), \(m\in M\) and \(n\in N\), \(\otimes\) being \(R\)-bilinear) can be given the structure of a \(A\otimes B\)-bimodule by setting:
\[(a\otimes b)\bullet m\otimes n\bullet(a^{\prime}\otimes b^{\prime})=(a \bullet m\bullet a^{\prime})\otimes(b\bullet n\bullet b^{\prime})\]
for \(a\), \(a^{\prime}\) in \(A\) and \(b\), \(b^{\prime}\) in \(B\).
**Categories of modules over algebras.** As is well known [12], the categories of right and left \(R\)-modules, where \(R\) is a commutative ring, are Abelian. As we have seen already, \(R\)-bimodules are \(R\otimes R^{op}\)-modules, so they form an Abelian category as well. The adaptation of these results to the case of \(A\)-modules, where \(A\) is an algebra, is immediate (see e.g. [37]); in particular:
* The \(0\) object in the category of \(A\)-bimodules is \(0\) as a \(R\)-bimodule with the \(0\) action of elements of \(A\)
* The coproduct (isomorphic to the cartesian product) of \(A\)-bimodules \(M\) and \(N\) has \(M\coprod N\) the coproduct of the underlying \(R\)-modules, as its underlying \(R\)-module, with action \(a\bullet(m+n)\bullet b=a\bullet_{M}m\bullet_{M}b+a\bullet_{N}n\bullet_{N}b\), with \(m\in M\) and \(n\in N\).
* The kernel of \(f:\ M\to N\), a morphism of \(A\)-bimodule is given by: \[Ker\ f=\{x\in M\ |\ f(x)=0\}\] with action \(a\bullet_{Ker\ f}k_{Ker\ f}\bullet b=a\bullet_{M}k_{M}\bullet b\) (which is valid since \(f(a\bullet_{M}k_{M}\bullet b)=a\bullet_{M}f(k)_{M}\bullet b=0\))
* The cokernel of \(f:\ M\to N\), a morphism of \(A\)-bimodules, is given by: \[coKer\ f=N/f(M)\] with \(a\bullet_{coKer\ f}[k]_{coKer\ f}\bullet b=[a\bullet_{N}k_{N}\bullet b]\). This is well defined since, first, as \(f\) is a morphism of \(A\)-bimodules, \(a\bullet_{N}f(m)_{N}\bullet b=f(a\bullet_{M}m_{M}\bullet b)\) and \(f(M)\) is a \(A\)-submodule of \(N\). Secondly, the definition of the action does not depend on the particular representative chosen for \([k]\): suppose \(k^{\prime}=k+f(m)\); then \(a\bullet_{coKer\ f}[k^{\prime}]_{coKer\ f}\bullet b=[a\bullet_{N}k^{\prime}{}_{N}\bullet b]=[a\bullet_{N}k_{N}\bullet b]+[a\bullet_{N}f(m)_{N}\bullet b]=a\bullet_{coKer\ f}[k]_{coKer\ f}\bullet b+[f(a\bullet_{M}m_{M}\bullet b)]=a\bullet_{coKer\ f}[k]_{coKer\ f}\bullet b\).
Consider now the more general category of modules over possibly varying algebras:
**Definition 12**.: The category of right-modules (resp. left) over algebras has:
* As objects, pairs \((M,A)\) where \(A\) is an algebra and \(M\) is a right (resp. left) \(A\)-module
* As morphisms from \((M,A)\) to \((M^{\prime},A^{\prime})\), pairs of morphisms \((f,g)\), where \(f:\ M\to M^{\prime}\) is a morphism of \(R\)-modules and \(g:\ A\to A^{\prime}\) is a morphism of algebras, such that for all \(m\in M\) and \(a\in A\), \(f(m\bullet a)=f(m)\bullet g(a)\)
We denote this category by \(Mod_{R}\) (resp. \({}_{R}Mod\)).
The definition of the category of bimodules is similar and is denoted by \({}_{R}Mod_{R}\).
**Restriction/extension of scalars.** Definition 12 is linked to the notion of "restriction of scalars", which defines a functor [12] as follows. We spell this out in the case of bimodules, since this is the case of interest in this paper. First:
**Definition 13**.: Let \(g:\ A\to A^{\prime}\) be a morphism of algebras. \(A^{\prime}\) can be considered as a \(A\)-bimodule using the action \(a_{1}\bullet a^{\prime}\bullet a_{2}=g(a_{1})\times a^{\prime}\times g(a_{2})\), for all \(a_{1},a_{2}\in A\), \(a^{\prime}\in A^{\prime}\).
Then the restriction of scalars functor is:
**Definition 14**.: The restriction of scalars functor (along \(g\)) is
\[g^{*}:\ Mod_{A^{\prime}}\to Mod_{A}\]
with
* \(g^{*}(M)\) being \(M\) as a \(R\)-module, and the action of \(A\) on \(m\in M\) is by definition \(a_{1A}\bullet m\bullet_{A}a_{2}=g(a_{1})_{A^{\prime}}\bullet m\bullet_{A^{ \prime}}g(a_{2})\).
* Let \(f:\ M^{\prime}\to N^{\prime}\) be a morphism of \(A^{\prime}\)-bimodules. Then \(g^{*}(f)(m^{\prime})=f(m^{\prime})\).
Indeed, the definition of \(g^{*}(f)\) is valid since \(g^{*}(f)(a_{1A}\bullet m\bullet_{A}a_{2})=f(a_{1A}\bullet m\bullet_{A}a_{2})= g(a_{1})_{A^{\prime}}f(m)\bullet_{A^{\prime}}g(a_{2})=a_{1A}\bullet f(m) \bullet_{A}a_{2}\).
The definition of \(Mod\) above is equivalent to asking for morphisms to be \((f,g)\) with \(f\) being a morphism of right \(A\)-modules from \(M\) to the restriction of scalars of \(M^{\prime}\) along \(g\).
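Two simple examples: for the unit morphism \(g:\ R\to A\), \(g^{*}\) forgets the \(A\)-actions and returns the underlying \(R\)-module; and for an inclusion of algebras \(g:\ B\to A\) (e.g. an inclusion of path algebras induced by a subquiver, as in Definition 23 below), \(g^{*}(M)\) is \(M\) itself with only the elements of \(B\) acting:

\[b_{1}\bullet m\bullet b_{2}=g(b_{1})\bullet m\bullet g(b_{2}),\qquad b_{1},b_{2}\in B,\ m\in M.\]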
**Lemma 2**.: Restriction of scalars has a left adjoint [12]
\[g_{!}:\ {}_{A}Mod\to{}_{A^{\prime}}Mod\]
called the extension of scalars functor along \(g\), which is:
* \(g_{!}(M)\) is, as a \(A\)-bimodule, \(A^{\prime}\otimes_{A}M\otimes_{A^{op}}A^{\prime}\), with \(A^{\prime}\) considered as a \(A\)-bimodule as in Definition 13. Hence as a \(R\)-module, it is generated by \(a^{\prime}_{1}\otimes m\otimes a^{\prime}_{2}\) with \(a^{\prime}_{1},a^{\prime}_{2}\in A^{\prime}\) and \(m\in M\). As a \(A\)-bimodule, it is such that \[a_{1A}\bullet(a^{\prime}_{1}\otimes m\otimes a^{\prime}_{2}) \bullet_{A}a_{2}=(a_{1}\bullet a^{\prime}_{1})\otimes m\otimes(a^{\prime}_{2} \bullet a_{2})\\ =(g(a_{1})\times a^{\prime}_{1})\otimes m\otimes(a^{\prime}_{2} \times g(a_{2}))\] (1) Finally, as a \(A^{\prime}\)-bimodule, the action of \(a^{\prime}_{3}\in A^{\prime}\) on the left and \(a^{\prime}_{4}\in A^{\prime}\) on the right is defined as \(a^{\prime}_{3A^{\prime}}\bullet(a^{\prime}_{1}\otimes m\otimes a^{\prime}_{2} )\bullet_{A^{\prime}}a^{\prime}_{4}=(a^{\prime}_{3}\times a^{\prime}_{1}) \otimes m\otimes(a^{\prime}_{2}\times a^{\prime}_{4})\).
* Let \(h:\ M\to N\) be a morphism of \(A\)-bimodules, then \(g_{!}(h)(a^{\prime}_{1}\otimes m\otimes a^{\prime}_{2})=a^{\prime}_{1}\otimes h (m)\otimes a^{\prime}_{2}\).
Indeed, the action on morphisms is well defined since \(g_{!}(h)(a^{\prime}_{1}\otimes(a_{1}\bullet_{M}m_{M}\bullet a_{2})\otimes a^{\prime}_{2})=a^{\prime}_{1}\otimes h(a_{1}\bullet_{M}m_{M}\bullet a_{2})\otimes a^{\prime}_{2}\) and \(g_{!}(h)((a^{\prime}_{1}\times g(a_{1}))\otimes m\otimes(g(a_{2})\times a^{\prime}_{2}))=(a^{\prime}_{1}\times g(a_{1}))\otimes h(m)\otimes(g(a_{2})\times a^{\prime}_{2})=a^{\prime}_{1}\otimes(a_{1}\bullet_{N}h(m)_{N}\bullet a_{2})\otimes a^{\prime}_{2}=a^{\prime}_{1}\otimes h(a_{1}\bullet_{M}m_{M}\bullet a_{2})\otimes a^{\prime}_{2}=g_{!}(h)(a^{\prime}_{1}\otimes(a_{1}\bullet_{M}m_{M}\bullet a_{2})\otimes a^{\prime}_{2})\), so the two expressions agree.
**Remark 4**.: Restriction of scalars also has a right adjoint called co-extension of scalars, constructed in a similar manner but with the \(Hom\) functor between \(A\)-modules, instead of the tensor product.
**Remark 5**.: An alternative definition of the category of modules would have morphisms \((f,g):\ (M,A)\to(M^{\prime},A^{\prime})\) with \(g:\ A^{\prime}\to A\) morphism of algebras and \(f\) morphism of \(A^{\prime}\)-bimodules from \(g_{!}(M)\) to \(M^{\prime}\). As well-known [35], these two ways of defining categories of modules are equivalent in the case of almost finitely generated projective modules, which includes finitely generated modules, which will be the main case studied in this paper.
## 4 Path algebras and modules over path algebras
Let \(\mathcal{C}\) be a small category.
**Definition 15**.: The category algebra or convolution algebra \(R[\mathcal{C}]\) of \(\mathcal{C}\) over \(R\) is the \(R\)-algebra whose underlying \(R\)-module is the free module \(R[\mathcal{C}_{1}]\) over the set of morphisms of \(\mathcal{C}\) and whose product operation is defined on basis elements \(f\), \(g\in\mathcal{C}_{1}\subseteq R[\mathcal{C}]\) to be their composition if they are composable and zero otherwise:
\[f\times g=\left\{\begin{array}{ll}g\circ f&\mbox{if composable}\\ 0&\mbox{otherwise}\end{array}\right.\]
Alternatively, the category algebra of \(\mathcal{C}\) can be constructed from the linearization of \(\mathcal{C}\) (its algebroid) by somehow forgetting its categorical structure, that is, by replacing composition by the algebra multiplication.
Note that when \(\mathcal{C}\) is a groupoid, \(R[\mathcal{C}]\) is a star-algebra (an algebra equipped with an anti-involution). When \(\mathcal{C}\) is a poset, \(R[\mathcal{C}]\) is the incidence algebra of the poset. When \(\mathcal{C}\) is a quiver, \(R[\mathcal{C}]\) is known as the path algebra of \(\mathcal{C}\). When \(\mathcal{C}\) is a category, \(R[\mathcal{C}]\) is a quotient of the path algebra of its underlying quiver by an "obvious" ideal (the one that has as relations \(f\times g-g\circ f=0\)).
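For instance, the path algebra of the quiver with one vertex \(x\) and one loop \(a\) has, as a \(R\)-module, basis \(e_{x},a,a\times a,a\times a\times a,\ldots\), and is isomorphic to the polynomial algebra in one variable over \(R\) via

\[e_{x}\mapsto 1,\qquad\underbrace{a\times\cdots\times a}_{n\ \text{times}}\mapsto t^{n}.\]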
**Remark 6**.: Finally, note that the category algebra is not a functorial construction. This is because, given a functor \(F\) from \(\mathcal{C}\) to \(\mathcal{D}\), there is no reason that its linearization (the obvious linear map induced by \(F\) on the underlying \(R\)-module of \(R[\mathcal{C}]\)) should preserve the internal multiplication.
In fact, it is easy to see that only injective-on-objects functors are mapped naturally onto algebra morphisms, see e.g. [21].
**Posets and incidence algebras.** (See for instance [2].)
Consider the case of a partial order \(\leq\) over any \(n\)-element set \(S\). We enumerate \(S\) as \(s_{1},\ldots,s_{n}\), and in such a way that the enumeration is compatible with the order \(\leq\) on \(S\), that is, \(s_{i}\leq s_{j}\) implies \(j\leq i\), which is always possible.
Then the incidence algebra can be seen as a subalgebra of the algebra \(\mathbb{M}_{n}(R)\) of \(n\times n\) matrices with coefficients in \(R\) (together with addition, multiplication by scalars, and matrix multiplication), namely of the subalgebra \(\mathbb{T}_{n}(R)\) made of lower triangular matrices.
Consider for instance the case where \(\leq\) is a total order on \(S\), enumerated compatibly as above. Its incidence algebra is then exactly the full algebra of lower triangular matrices, which we denote by:
\[\begin{pmatrix}R&0&\dots&0&0\\ R&R&\dots&0&0\\ \dots&&\\ R&R&\dots&R&0\\ R&R&\dots&R&R\end{pmatrix}\]
**Idempotents and path algebras.** In what follows we will suppose that \(R\) is a field, hence \(R\)-modules are actually \(R\)-vector spaces.
Let \(A\) be a \(R\)-algebra. An element \(e\in A\) is called an idempotent if \(e^{2}=e\). The idempotent \(e\) is said to be central if \(a\times e=e\times a\) for all \(a\in A\). The idempotents \(e_{1},e_{2}\in A\) are called orthogonal if \(e_{1}\times e_{2}=e_{2}\times e_{1}=0\). The idempotent \(e\) is said to be primitive if \(e\) cannot be written as a sum \(e=e_{1}+e_{2}\), where \(e_{1}\) and \(e_{2}\) are nonzero orthogonal idempotents of \(A\).
**Definition 16** ([2]).: We say that \(\{e_{1},\dots,e_{n}\}\) is a complete set of primitive orthogonal idempotents of \(A\) if the \(e_{i}\) are primitive idempotents of \(A\) that are pairwise orthogonal and such that \(e_{1}+\dots+e_{n}\) is the unity of \(A\).
**Remark 7**.: In the case of the path algebra \(R[Q]\) of some quiver \(Q\), the primitive idempotents are the constant paths \(e_{i}\) on each vertex \(i\) of \(Q\). They form a set of orthogonal primitive idempotents, as for \(i\neq j\) two distinct vertices of \(Q\), \(e_{i}\times e_{j}=0\). They form a complete set of primitive orthogonal idempotents if \(Q\) has finitely many vertices.
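For instance, for the quiver with two vertices \(1,2\) and one arrow \(\alpha:\ 1\to 2\), the multiplication of Definition 15 gives

\[e_{1}\times e_{1}=e_{1},\qquad e_{1}\times e_{2}=0,\qquad e_{1}\times\alpha=\alpha=\alpha\times e_{2},\qquad e_{2}\times\alpha=0=\alpha\times e_{1},\]

and \(e_{1}+e_{2}\) is the unity of the path algebra.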
**Definition 17** ([2]).: We say that an algebra \(A\) is connected (or indecomposable) if \(A\) is not isomorphic to a product of two algebras.
\(R[Q]\) is a connected algebra if and only if \(Q\) is connected, as a (not directed) graph.
A last definition on algebras which will be useful later on:
**Definition 18**.: ([2]) A finite dimensional \(R\)-algebra \(A\) is basic if and only if the algebra \(B=A/rad\ A\) is isomorphic to a product \(R\times R\times\cdots\times R\) of copies of \(R\).
A path algebra is basic when it is acyclic (in the standard undirected sense).
Associative algebras are closely linked to quivers: path algebras are not "just" examples of associative algebras; they are central to the theory of associative algebras.
**Definition 19** ([2]).: Let \(A\) be a basic and connected finite dimensional \(R\)-algebra and \(e_{1},e_{2},\dots,e_{n}\) be a complete set of primitive orthogonal idempotents of \(A\). The (ordinary) quiver of \(A\), denoted by \(Q_{A}\), is defined as follows:
* The points of \(Q_{A}\) are the numbers \(1,2,\ldots,n\), which are in bijective correspondence with the idempotents \(e_{1},e_{2},\ldots,e_{n}\).
* Given two points \(a,b\in(Q_{A})_{0}\), the arrows \(\alpha:\ a\to b\) are in bijective correspondence with the vectors in a basis of the \(R\)-vector space \(e_{a}(rad\ A/rad^{2}\ A)e_{b}\).
The fact that the definition above is well posed is proved in [2] (in particular, \(Q_{A}\) does not depend on the basis which is chosen).
**Definition 20** ([2]).: Let \(Q\) be a finite quiver and \(RQ\) be the arrow ideal of the path algebra \(R[Q]\), i.e. the ideal generated by edges of \(Q\) (elements of \(Q_{1}\)). A two-sided ideal \(I\) of \(R[Q]\) is said to be admissible if there exists \(m\geq 2\) such that \(RQ^{m}\subseteq I\subseteq RQ^{2}\). If \(I\) is an admissible ideal of \(R[Q]\), the pair \((Q,I)\) is said to be a bound quiver. The quotient algebra \(R[Q]/I\) is said to be the algebra of the bound quiver \((Q,I)\) or, simply, a bound quiver algebra.
**Theorem 1** ([2]).: Let \(A\) be a basic and connected finite dimensional \(R\)-algebra. There exists an admissible ideal \(I\) of \(R[Q_{A}]\) such that \(A\cong R[Q_{A}]/I\).
**Quivers and path algebras**
**Lemma 3** ([2]).: Let \(Q\) be a connected, finite, and acyclic quiver with \(Q_{0}=\{1,2,\ldots,n\}\) such that, for each \(i,j\in Q_{0}\), \(j\leq i\) whenever there exists a path from \(i\) to \(j\) in \(Q\). Then the path algebra \(R[Q]\) is isomorphic to the triangular matrix algebra
\[\begin{pmatrix}e_{1}(R[Q])e_{1}&0&\ldots&0\\ e_{2}(R[Q])e_{1}&e_{2}(R[Q])e_{2}&\ldots&0\\ \ldots&&&\\ e_{n}(R[Q])e_{1}&e_{n}(R[Q])e_{2}&\ldots&e_{n}(R[Q])e_{n}\end{pmatrix}\]
**Example 3** ([2]).: Consider:
\[1\ \xleftarrow{\alpha\atop\beta}\ 2\]
In that case, the corresponding path algebra is, by Lemma 3, the following matrix algebra:
\[\begin{pmatrix}R&0\\ R^{2}&R\end{pmatrix}\]
known as the Kronecker algebra. Elements of this Kronecker algebra are of the form:
\[\begin{pmatrix}a&0\\ (b,c)&d\end{pmatrix}\]
with the "obvious" addition and external multiplication, and as internal (algebra) multiplication, the "obvious one" as well:
\[\begin{pmatrix}a&0\\ (b,c)&d\end{pmatrix}\times\begin{pmatrix}a^{\prime}&0\\ (b^{\prime},c^{\prime})&d^{\prime}\end{pmatrix}=\begin{pmatrix}aa^{\prime}&0 \\ (ba^{\prime}+db^{\prime},ca^{\prime}+dc^{\prime})&dd^{\prime}\end{pmatrix}\]
**Example 4** (Empty square).: We apply Lemma 3 to the empty square, i.e. the quiver with vertices \(1\), \(2\), \(3\), \(4\) and arrows \(2\to 1\), \(3\to 1\), \(4\to 2\) and \(4\to 3\).
We get the following path algebra:
\[\begin{pmatrix}R&0&0&0\\ R&R&0&0\\ R&0&R&0\\ R^{2}&R&R&R\end{pmatrix}\]
**Quivers and bimodules over path algebras.** Bimodules over path algebras enjoy a number of interesting properties. The first one, which will be useful for representing the homology modules we are going to introduce in Section 5, is that bimodules over path algebras are naturally bigraded over pairs of vertices of the underlying graph, similarly to path algebras (Lemma 3):
**Lemma 4**.: Let \(Q\) be a quiver with finitely many vertices and \(M\) be a \(R[Q]\)-bimodule. Then \(M\) is isomorphic, as a \(R\)-module, to the coproduct of all \(R\)-modules \(e_{a}\bullet M\bullet e_{b}\), for \(a,b\in Q_{0}\).
Proof.: As noted in Remark 7, \(\{e_{a}\mid a\in Q_{0}\}\) is a complete set of primitive orthogonal idempotents of \(R[Q]\), hence for any \(R[Q]\)-bimodule \(M\), \(M=1\bullet M\bullet 1\) with \(1=\sum\limits_{a\in Q_{0}}e_{a}\), so any \(m\in M\) decomposes into \(m=\sum\limits_{a,b\in Q_{0}}e_{a}\bullet m\bullet e_{b}\).
Furthermore, the \(e_{a}\bullet M\bullet e_{b}\) intersect pairwise trivially. Indeed, consider e.g. \(a^{\prime}\neq a\in Q_{0}\) and \(m\in e_{a}\bullet M\bullet e_{b}\cap e_{a^{\prime}}\bullet M\bullet e_{b}\). We can write \(m=e_{a}\bullet m\bullet e_{b}\) and \(m=e_{a^{\prime}}\bullet m\bullet e_{b}\), so \(m=e_{a}\bullet(e_{a^{\prime}}\bullet m\bullet e_{b})\bullet e_{b}\), which, by the axioms of bimodules, is equal to \(e_{a}e_{a^{\prime}}\bullet m\bullet e_{b}e_{b}\). But \(e_{a}\) and \(e_{a^{\prime}}\) are orthogonal, so \(e_{a}e_{a^{\prime}}=0\), hence \(m=0\).
We can similarly prove that for \(b\neq b^{\prime}\in Q_{0}\), \(m=e_{a}\bullet m\bullet e_{b}=e_{a}\bullet m\bullet e_{b^{\prime}}\) is necessarily equal to \(0\).
Therefore \(M\) is isomorphic, as a \(R\)-module, to the coproduct of all \(R\)-modules \(e_{a}\bullet M\bullet e_{b}\), for \(a,b\in Q_{0}\).
This leads us to the natural notation for \(R[Q]\)-bimodules below:
**Remark 8**.: We note that for \(X\) a precubical set, for all \(v\in X_{0}\) and \(w\in X_{0}\), and any \(R[X]\)-bimodule \(M\), \(e_{v}\bullet M\bullet e_{w}\) is a \(R\)-module. We represent any \(R[X]\)-bimodule \(M\) as a matrix of \(R\)-modules whose \((v,w)\) entry is \(e_{v}\bullet M\bullet e_{w}\), similarly to what we have been doing with path algebras in the previous paragraph. We will see the interest of this notation later, e.g. in Example 5.
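For instance, for the Kronecker quiver of Example 3, the path algebra seen as a bimodule over itself decomposes as

\[e_{1}\bullet R[Q]\bullet e_{1}=R\,e_{1},\qquad e_{2}\bullet R[Q]\bullet e_{2}=R\,e_{2},\qquad e_{2}\bullet R[Q]\bullet e_{1}=R\,\alpha\oplus R\,\beta,\qquad e_{1}\bullet R[Q]\bullet e_{2}=0,\]

recovering in matrix notation the triangular matrix algebra of Example 3.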
**Remark 9**.: We also note that if a \(R[X]\)-bimodule \(M\) is represented by a matrix \((M_{i,j})\) whose lines and columns are indexed by the vertices \(x\in X_{0}\), and similarly a \(R[Y]\)-bimodule \(N\) by a matrix \((N_{i^{\prime},j^{\prime}})\) indexed by the vertices \(y\in Y_{0}\), then the \(R[X]\otimes R[Y]\)-bimodule \(M\otimes N\) defined in Section 3 is represented, using the same matrix notation as in Remark 8, as a \(X_{0}\times Y_{0}\) indexed matrix, with entries \(((i,i^{\prime}),(j,j^{\prime}))\) being \(M_{i,j}\otimes N_{i^{\prime},j^{\prime}}\).
## 5 Homology in modules over the path algebra
**Definition 21**.: Let \(X\) be a finite precubical set and \(R\) be a field. We define \(R[X]\) to be the quiver algebra \(R[X_{\leq 1}]\) of the underlying quiver \(X_{\leq 1}\) of \(X\).
Generators of the quiver algebra \(R[X_{\leq 1}]\), as a \(R\)-module, are \(0\)-cube chains indexed by their start and end points \(p=(p_{1},\ldots,p_{l})_{u^{\prime},u}\), possibly empty (\(l=0\)), in which case \(u^{\prime}=u\). Empty \(0\)-cube chains from \(u\) to \(u\) are also written \(e_{u}\) as they are going to be the idempotents of the quiver algebra. When the context makes it clear, we will drop the subscripts on generators of the path algebra.
Let \(p=(p_{1},\ldots,p_{l})_{u^{\prime},u}\) from \(u^{\prime}\) to \(u\) and \(q=(q_{1},\ldots,q_{m})_{v,v^{\prime}}\) from \(v\) to \(v^{\prime}\) be \(0\)-cube chains in \(X\). These are the basis elements of \(R[X]\) as a \(R\)-module. The internal multiplication on these basis elements of \(R[X]\) is:
\[p\times q=\left\{\begin{array}{ll}(p_{1},\ldots,p_{l},q_{1},\ldots,q_{m})_{u ^{\prime},v^{\prime}}&\mbox{if $u=v$}\\ 0&\mbox{otherwise}\end{array}\right.\]
The fact that we are considering finite precubical sets when defining \(R[X]\) allows us to make it a unital associative algebra, as is well known [2]. The unit of the algebra is the sum of all (finitely many) idempotents, i.e. constant paths on each vertex, as we have seen before already.
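For instance, in the 3-cube of Example 2,

\[(0b0)_{000,010}\times(a10,11c)_{010,111}=(0b0,a10,11c)_{000,111}=\zeta,\qquad(a00)_{000,100}\times(a10,11c)_{010,111}=0,\]

the second product vanishing since \(100\neq 010\).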
**Lemma 5**.: Let \(X\) be a finite precubical set. For all \(i\geq 1\), the \(R\)-module \(R_{i}[X]\) of \((i-1)\)-dimensional cube chains of Definition 7 can be given the structure of a \(R[X]\)-bimodule, which is free.
Proof.: The \(R[X]\)-bimodule operation is defined by bilinearity on basis elements as follows.
For \(i=1\) the bimodule operation is just the left concatenation by \(p\) composed with the right concatenation by \(q\). Indeed, as a \(R\)-module, \(R_{1}[X]\) is isomorphic to \(R[X]\) and any algebra is indeed a bimodule over itself. Furthermore, the unit of the algebra \(1=\sum\limits_{a\in X_{0}}e_{a}\) is the unique generator of \(R_{1}[X]\) as a bimodule over \(R[X]\). Indeed, we can generate all of \(R_{1}[X]\) as \(R[X]\bullet 1\bullet R[X]\).
Let \(i\geq 2\). It is easy to see that given \(0\)-dimensional cube chains in \(X\), \(p=(p_{1},\ldots,p_{l})_{u^{\prime},u}\) from \(u^{\prime}\) to \(u\) and \(q=(q_{1},\ldots,q_{m})_{v,v^{\prime}}\) from \(v\) to \(v^{\prime}\), the following is a \(R[X]\)-bimodule action of \(\langle p,q\rangle\) on \(r=(r_{1},\ldots,r_{n})\), an \((i-1)\)-dimensional cube chain in \(X\):
\[p\bullet r\bullet q=\left\{\begin{array}{ll}(p_{1},\ldots,p_{l},r_{1}, \ldots,r_{n},q_{1},\ldots,q_{m})_{u^{\prime},v^{\prime}}&\mbox{if $d^{0}(r_{1})=d^{1}(p_{l})$}\\ &\mbox{and $d^{1}(r_{n})=d^{0}(q_{1})$}\\ 0&\mbox{otherwise}\end{array}\right.\]
Obviously, \((p_{1},\ldots,p_{l},r_{1},\ldots,r_{n},q_{1},\ldots,q_{m})_{u^{\prime},v^{\prime}}\) is an \((i-1)\)-dimensional cube chain when \(p\), \(q\) are \(0\)-dimensional cube chains and \(r\) is an \((i-1)\)-dimensional cube chain, see Remark 3.
Moreover, the neutral element \(1\) is the sum of all empty cube chains on each vertex, and the bimodule operation satisfies \(1\bullet r\bullet 1=r\).
Finally, \(R_{i}[X]\) is free over \(R[X]\), generated by the \((i-1)\)-dimensional cube chains \((p_{1},\ldots,p_{n})\) with \(p_{1}\in X_{\geq 2}\) and \(p_{n}\in X_{\geq 2}\), forming a set of generators we denote by \(G_{i}[X]\).
Suppose \(R[X]\bullet g\bullet R[X]\cap R[X]\bullet g^{\prime}\bullet R[X]\neq\{0\}\), with \(g\) and \(g^{\prime}\) in \(G_{i}[X]\), and consider some non null element \(c\) in the intersection. \(c\) is an \((i-1)\)-dimensional cube chain \((c_{1},\ldots,c_{m})\), with \((c_{i},\ldots,c_{j})=g\) for some indices \(i\) and \(j\), and \((c_{k},\ldots,c_{l})=g^{\prime}\) for some indices \(k\) and \(l\). If \(i\neq k\), suppose for instance \(i<k\); as \(c_{i}\) is in \(X_{\geq 2}\) by hypothesis, the dimension of \((c_{1},\ldots,c_{m})\) is at least \(1\) (because the cell \(c_{i}\in X_{u}\) accounts for \(u-1\) in the computation of the dimension of the cube chain \(c\)) plus the dimension of \(g^{\prime}\) (by additivity of dimension, Remark 3), which is \(i-1\); so the dimension of \(c\) cannot be \(i-1\). Hence \(i\) must be equal to \(k\). Similarly, \(j\) must be equal to \(l\) and \(g=g^{\prime}\). Hence \(R_{i}[X]\) is freely generated by \(G_{i}[X]\) as a \(R[X]\)-bimodule.
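For instance, for the 3-cube of Example 2, \(G_{2}[X]\) consists of the six one-cell chains \((A)\), \((B)\), \((C)\), \((A^{\prime})\), \((B^{\prime})\), \((C^{\prime})\), and every \(1\)-dimensional cube chain decomposes accordingly, e.g.:

\[(A,11c)=1\bullet(A)\bullet(11c),\qquad(a00,C^{\prime})=(a00)\bullet(C^{\prime})\bullet 1.\]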
**Lemma 6**.: The boundary operator \(\partial:\ R_{i+1}[X]\to R_{i}[X]\) (\(i\geq 1\)) of Definition 8 is a morphism of \(R[X]\)-bimodules.
Boundary operators restrict to \(R\)-module morphisms from \(R_{i+1}[X](a,b)\) to \(R_{i}[X](a,b)\) where \(R_{i}[X](a,b)\) is the sub \(R\)-module \(R_{a,i}^{b}[X]=e_{a}\bullet R_{i}[X]\bullet e_{b}\).
Hence \((R_{i}[X],\partial)\) is a chain of bimodules on the algebra \(R[X]\) and for all \(a\), \(b\) in \(X_{0}\), \((e_{a}\bullet R_{i}[X]\bullet e_{b},\partial)\) is a chain of \(R\)-modules.
Proof.: For a cube chain \(c=(c_{1},\ldots,c_{l})\in Ch_{w}^{v}(X)\) of type \((n_{1},\ldots,n_{l})\) and dimension \(i+1\), an integer \(k\in\{1,\ldots,l\}\) and a subset \(\mathcal{I}\subseteq\{0,\ldots,n_{k}-1\}\) having \(r\) elements, where \(0<r<n_{k}\), we have, from Section 2:
\[d_{k,\mathcal{I}}(c)=(c_{1},\ldots,c_{k-1},d_{\overline{\mathcal{I}}}^{0}(c_{ k}),d_{\mathcal{I}}^{1}(c_{k}),c_{k+1},\ldots,c_{l})\in Ch(X)_{v}^{w}\]
where \(\overline{\mathcal{I}}=\{0,\ldots,n_{k}-1\}\backslash\mathcal{I}\).
We first note that for \(c\in Ch_{w}^{v}(X)\), \(d_{k,\mathcal{I}}(c)\in Ch_{w}^{v}(X)\). Because \(\partial c\) is a linear combination of the \(d_{k,\mathcal{I}}(c)\) (see Definition 8), this implies that \(\partial c\) is in \(R_{i}[X](w,v)\). Hence, by linearity of \(\partial\), \(\partial\) maps \(R_{i+1}[X](w,v)\) to \(R_{i}[X](w,v)\).
Now, consider \(p=(p_{1},\ldots,p_{m})\in Ch(X)_{v^{\prime}}^{v}\) and \(q=(q_{1},\ldots,q_{n})\in Ch(X)_{w}^{w^{\prime}}\). Therefore, \(p\bullet c\bullet q=(p_{1},\ldots,p_{m},c_{1},\ldots,c_{l},q_{1},\ldots,q_{n})\) is of type \((s_{1},\ldots,s_{m+l+n})=(1,\ldots,1,n_{1},\ldots,n_{l},1,\cdots,1)\). The only indices \(k\) such that there exists \(r\) with \(0<r<s_{k}\) are \(k=m+1,\ldots,m+l\), and for such \(k\) and \(\mathcal{I}\subseteq\{0,\ldots,s_{k}-1\}\) having \(r\) elements with \(0<r<s_{k}\):
\[\begin{array}{rcl}d_{k,\mathcal{I}}(p\bullet c\bullet q)&=&(p_{1},\ldots,p_{m},c_{1},\ldots,c_{k-1-m},d_{\overline{\mathcal{I}}}^{0}(c_{k-m}),d_{\mathcal{I}}^{1}(c_{k-m}),c_{k+1-m},\ldots,c_{l},q_{1},\ldots,q_{n})\\ &=&p\bullet d_{k-m,\mathcal{I}}(c)\bullet q\end{array}\]
Now, \(\partial(p\bullet c\bullet q)\) is equal to:
\[\begin{array}{l}\sum\limits_{k=m+1}^{m+l}\ \sum\limits_{r=1}^{n_{k-m}-1}\ \sum\limits_{\mathcal{I}\subseteq\{0,\ldots,n_{k-m}-1\}:\ |\mathcal{I}|=r}(-1)^{m+n_{1}+\ldots+n_{k-m-1}+k+r+1}sgn(\mathcal{I})\ d_{k,\mathcal{I}}(p\bullet c\bullet q)\\ =\sum\limits_{k=m+1}^{m+l}\ \sum\limits_{r=1}^{n_{k-m}-1}\ \sum\limits_{\mathcal{I}\subseteq\{0,\ldots,n_{k-m}-1\}:\ |\mathcal{I}|=r}(-1)^{m+n_{1}+\ldots+n_{k-m-1}+k+r+1}sgn(\mathcal{I})\ p\bullet d_{k-m,\mathcal{I}}(c)\bullet q\\ =p\bullet\left(\sum\limits_{k=1}^{l}\ \sum\limits_{r=1}^{n_{k}-1}\ \sum\limits_{\mathcal{I}\subseteq\{0,\ldots,n_{k}-1\}:\ |\mathcal{I}|=r}(-1)^{2m+n_{1}+\ldots+n_{k-1}+k+r+1}sgn(\mathcal{I})\ d_{k,\mathcal{I}}(c)\right)\bullet q\\ =p\bullet\left(\sum\limits_{k=1}^{l}\ \sum\limits_{r=1}^{n_{k}-1}\ \sum\limits_{\mathcal{I}\subseteq\{0,\ldots,n_{k}-1\}:\ |\mathcal{I}|=r}(-1)^{n_{1}+\ldots+n_{k-1}+k+r+1}sgn(\mathcal{I})\ d_{k,\mathcal{I}}(c)\right)\bullet q\\ =p\bullet\partial c\bullet q\end{array}\]
and \(\partial\) is a \(R[X]\)-bimodule morphism.
**Example 5** (2 holes on diagonal/antidiagonal).: We consider the two precubical sets of Example 1. For both precubical sets, we have the following path algebra \(R[X]\) (lines and columns are indexed by vertices \(1\) to \(9\) as depicted in Example 1), as a direct application of Lemma 3:
\[\begin{pmatrix}R&0&0&0&0&0&0&0&0\\ R&R&0&0&0&0&0&0&0\\ R&R&R&0&0&0&0&0&0\\ R&0&0&R&0&0&0&0&0\\ R^{2}&R&0&R&R&0&0&0&0\\ R^{3}&R^{2}&R&R&R&R&0&0&0\\ R&0&0&R&0&0&R&0&0\\ R^{3}&R&0&R^{2}&R&0&R&R&0\\ R^{6}&R^{3}&R&R^{3}&R^{2}&R&R&R&R\end{pmatrix}\]
We note that \(R_{1}[X]\) seen as a \(R[X]\)-bimodule has \(\bullet\) coinciding with the algebra operation of \(R[X]\). Hence, using the notation for \(R[X]\)-bimodules that we introduced earlier on, the \(\bullet\) operation coincides with the matrix multiplication of the matrix algebra above.
**Definition 22**.: The homology module of a finite precubical set \(X\) is defined as the homology \((HM_{i+1})_{i\geq 0}[X]\) in the abelian category of \(R[X]\)-bimodules, of the complex of \(R[X]\)-bimodules defined in Lemma 6, shifted by one.
**Remark 10**.: The reader may rightfully wonder what \(HM_{0}[X]\) should be. Our best guess is that it should be linked to components [24]. This is left for future work. This is indeed linked to the consideration of the bimodule \(R_{0}[X]\) of "reachable" pairs of states in the precubical set \(X\) that we discuss in the paragraph "digression on resolutions of associative algebras and monoids" at the end of this section.
**Structure of the bimodule homology of precubical sets.** In order to make the first example calculations, we describe in a little more detail how such quotients are computed in \(R[X]\)-bimodules:
**Lemma 7**.: Let \(X\) be a finite precubical set. The \(R[X]\)-bimodule \(HM_{i+1}[X]\) has, as underlying \(R\)-module, the coproduct \(\coprod_{a,b\in X_{0}}N_{a,b}\) where the \(R\)-modules \(N_{a,b}\) are:
\[N_{a,b}=Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}/Im\ \partial_{|e_{a} \bullet R_{i+1}[X]\bullet e_{b}}\]
The \(R[X]\)-bimodule operation is defined as follows, for \(u\) a path from \(a^{\prime}\) to \(a\) in \(X_{0}\), and \(v\) a path from \(b\) to \(b^{\prime}\) in \(X_{0}\), and writing \([n]_{a,b}\) for the class of \(n\in Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}\) modulo \(Im\ \partial_{|e_{a}\bullet R_{i+1}[X]\bullet e_{b}}\):
\[u\bullet[n]_{c,d}\bullet v=\left\{\begin{array}{ll}[u\bullet n\bullet v]_{a^ {\prime},b^{\prime}}&\mbox{if $c=a$ and $d=b$}\\ 0&\mbox{otherwise}\end{array}\right.\]
Proof.: Consider \(c\in Ker\ \partial\), a sub-\(R[X]\)-bimodule of \(R_{i}[X]\). From Lemma 4, \(c\) decomposes over the coproduct \(\coprod_{a,b\in X_{0}}e_{a}\bullet Ker\ \partial\bullet e_{b}\) as \(\sum\limits_{a,b\in X_{0}}e_{a}\bullet c\bullet e_{b}\), and \(\partial c=\sum\limits_{a,b\in X_{0}}\partial(e_{a}\bullet c\bullet e_{b})\). As seen in Lemma 6, \(\partial\) is a \(R[X]\)-bimodule morphism, so \(\partial(e_{a}\bullet c\bullet e_{b})=e_{a}\bullet\partial(c)\bullet e_{b}\) and therefore it is equal to zero, since \(\partial(c)=0\). This means that \(c\) can be seen as belonging to the coproduct of the \(R\)-modules \(Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}\), for all \(a,b\in X_{0}\).
Similarly, as \(\partial\) is a \(R[X]\)-bimodule morphism, \(Im\ \partial_{|R_{i+1}[X]}\) can be identified, when it comes to its \(R\)-module structure, with the coproduct of all \(R\)-modules \(Im\ \partial_{|e_{a}\bullet R_{i+1}[X]\bullet e_{b}}\).
Hence the quotient \(HM_{i}[X]=Ker\ \partial_{|R_{i}[X]}/Im\ \partial_{|R_{i+1}[X]}\) can be identified, as a \(R\)-module, with the quotient of the underlying \(R\)-modules (see Section 3), which by the above can be identified with the coproduct of all \(R\)-modules \(Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}/Im\ \partial_{|e_{a}\bullet R_{i+1}[X]\bullet e_{b}}\). Therefore \(e_{a}\bullet HM_{i}[X]\bullet e_{b}\) is the \(R\)-module \(Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}/Im\ \partial_{|e_{a}\bullet R_{i+1}[X]\bullet e_{b}}\).
Finally, as already noted in Section 3, the \(R[X]\)-bimodule operation is the one making the projection map \(\pi:\ Ker\ \partial_{|R_{i}[X]}\to Ker\ \partial_{|R_{i}[X]}/Im\ \partial_{|R_{i+1}[X]}\) a \(R[X]\)-bimodule homomorphism, hence we necessarily have the formula of the lemma for the \(R[X]\)-bimodule actions.
This will allow us to use the matrix notation of Remark 8 for computing and presenting the homology bimodules.
**Functoriality and non-functoriality.** As we have remarked already, the construction of the quiver algebra is in general non-functorial, except for precubical maps \(f\) from \(X\) to \(Y\) that are injective on vertices (i.e. which restrict to injective maps from \(X_{0}\) to \(Y_{0}\)), see e.g. [21].
In what follows, \(Cub_{I}\) stands for the full subcategory of precubical sets with proper non-looping length covering, with maps which are injective on vertices.
Let \(c=(c_{1},\ldots,c_{l})\) be an \(i\)-cube chain in \(X\); then, for \(f:\ X\to Y\) a morphism of precubical sets, \(f(c)=(f(c_{1}),\ldots,f(c_{l}))\) is an \(i\)-cube chain of the same type and length (hence of the same dimension). 1-cube chains in \(X\) (resp. \(Y\)), plus the identities on vertices, are identified with basis elements of \(R[X]\) (resp. \(R[Y]\)). Suppose that \(f\) is injective on vertices; then this assignment is functorial from \(R[X]\) to \(R[Y]\).
Furthermore, as \(f\) is a morphism of precubical sets, it commutes with the boundary operators. For a cube chain \(c=(c_{1},\ldots,c_{l})\in Ch(X)_{v}^{w}\) of type \((n_{1},\ldots,n_{l})\), an integer \(k\in\{1,\ldots,l\}\) and a subset \(\mathcal{I}\subseteq\{0,\ldots,n_{k}-1\}\) having \(r\) elements, where \(0<r<n_{k}\), we recall that:
\[d_{k,\mathcal{I}}(c)=(c_{1},\ldots,c_{k-1},d_{\mathcal{I}}^{0}(c_{k}),d_{ \mathcal{I}}^{1}(c_{k}),c_{k+1},\ldots,c_{l})\in Ch(X)_{v}^{w}\]
Now,
\[\begin{array}{rcl}d_{k,\mathcal{I}}(f(c))&=&(f(c_{1}),\ldots,f(c_{k-1}),d_{ \mathcal{I}}^{0}(f(c_{k})),d_{\mathcal{I}}^{1}(f(c_{k})),f(c_{k+1}),\ldots,f(c _{l}))\\ &=&(f(c_{1}),\ldots,f(c_{k-1}),f(d_{\mathcal{I}}^{0}(c_{k})),f(d_{\mathcal{I}}^ {1}(c_{k})),f(c_{k+1}),\ldots,f(c_{l}))\\ &=&f(c_{1},\ldots,c_{k-1},d_{\mathcal{I}}^{0}(c_{k}),d_{\mathcal{I}}^{1}(c_{k }),c_{k+1},\ldots,c_{l})\end{array}\]
Therefore \(\partial(f(c))\) is equal to:
\[\sum_{k=1}^{l}\sum_{r=1}^{n_{k}-1}\sum_{\mathcal{I}\subseteq\{1, \ldots,n_{k}\}:\ |\mathcal{I}|=r}(-1)^{n_{1}+\ldots+n_{k-1}+k+r+1}sgn(\mathcal{I})\ d_{k, \mathcal{I}}(f(c))\] \[= \sum_{k=1}^{l}\sum_{r=1}^{n_{k}-1}\sum_{\mathcal{I}\subseteq\{1, \ldots,n_{k}\}:\ |\mathcal{I}|=r}(-1)^{n_{1}+\ldots+n_{k-1}+k+r+1}sgn(\mathcal{I})\ f(d_{k, \mathcal{I}}(c))\] \[= f(\partial c)\]
and \(f\) induces a map of chain complexes from \(R_{i}[X]\) to \(R_{i}[Y]\) seen as \(R\)-modules.
Now, as we have seen, when \(f\) is injective on vertices, \(f\) induces an algebra map from \(R[X]\) to \(R[Y]\) and a map of chain complexes from \(R_{i}[X]\) to \(R_{i}[Y]\), the first one seen as a \(R[X]\)-bimodule, and the second one as a \(R[Y]\)-bimodule. This carries over to the homology construction of Section 5. We thus have proved:
**Lemma 8**.: The directed homology module construction defines a functor for each \(n\in\mathbb{N}\), \(HM_{n}:\ Cub_{I}\to{}_{R}Mod_{R}\).
**Relative homology.** Given the particular interest in maps which are injective on vertices, it is reasonable to ask how to reason about "relative spaces":
**Definition 23**.: Let \(Y\) be a sub-precubical set of a finite precubical set \(X\) and \(M\) be a \(R[Y]\)-bimodule. This inclusion defines an inclusion of algebras \(g:\ R[Y]\to R[X]\), since inclusions are injective on vertices. We define \({}^{X}M\) to be equal to \(g_{!}(M)\), the extension of coefficients of \(M\) along \(g\).
We have the following characterization of \({}^{X}M\) under some mild conditions:
**Lemma 9**.: Let \(X\) be a finite precubical set and \(Y\) a sub-precubical set of \(X\). Then, for any free \(R[Y]\)-bimodule \(M\), freely generated by \((m_{j})_{j\in J}\) as a \(R[Y]\)-bimodule, \({}^{X}M\) is the free \(R[X]\)-bimodule, freely generated by \((m_{j})_{j\in J}\) as a \(R[X]\)-bimodule.
Proof.: Let \((m_{j})_{j\in J}\) be a free family of generators for \(M\) as a \(R[Y]\)-bimodule. As \(Ch^{=1}(Y)\) freely generates the algebra \(R[Y]\), we have for each \(j\in J\) families \((p^{\prime}_{i,j})_{i\in I}\) and \((q^{\prime}_{i,j})_{i\in I}\) with \(p^{\prime}_{i,j},q^{\prime}_{i,j}\in Ch^{=1}(Y)\) such that the elements \(p^{\prime}_{i,j}\bullet m_{j}\bullet q^{\prime}_{i,j}\), \(j\in J\), \(i\in I\), freely generate \(M\) as a \(R\)-module.
Also, \(1\)-cube chains are free generators of the algebra \(R[X]\) as a \(R\)-module, thus by Lemma 2, we know that elements of the form \(p\otimes(p^{\prime}_{i,j}\bullet m_{j}\bullet q^{\prime}_{i,j})\otimes q\), \(p,q\in Ch^{=1}(X)\), \(i\in I\) and \(j\in J\) generate \(g_{!}(M)\) as a \(R\)-module. Still, they do not form a free family in \(g_{!}(M)\) because of the relations of Lemma 2.
These relations make elements of the form \(p\otimes(p^{\prime}_{i,j}\bullet m_{j}\bullet q^{\prime}_{i,j})\otimes q\) equal to zero in \(g_{!}(M)\) if \(p=(c_{1},\ldots,c_{k})_{u,v}\), \(q=(d_{1},\ldots,d_{l})_{w,z}\), \(p^{\prime}_{i,j}=(c^{\prime}_{1},\ldots,c^{\prime}_{m})_{u^{\prime},v^{\prime}}\) and \(q^{\prime}_{i,j}=(d^{\prime}_{1},\ldots,d^{\prime}_{n})_{w^{\prime},z^{\prime}}\) with \(v\neq u^{\prime}\) or \(z^{\prime}\neq w\).
The proof is as follows. By the relations in Lemma 2, \(p\otimes(p^{\prime}_{i,j}\bullet m_{j}\bullet q^{\prime}_{i,j})\otimes q=(p\times g(p^{\prime}_{i,j}))\otimes m_{j}\otimes(g(q^{\prime}_{i,j})\times q)\). But \(g(p^{\prime}_{i,j})=p^{\prime}_{i,j}\), with \(p^{\prime}_{i,j}\in R[Y]\) identified with a \(1\)-cube chain of \(R[X]\), and \(g(q^{\prime}_{i,j})=q^{\prime}_{i,j}\). And \(p\times g(p^{\prime}_{i,j})\) is \(0\) if \(v\neq u^{\prime}\), and is equal to \((c_{1},\ldots,c_{k},c^{\prime}_{1},\ldots,c^{\prime}_{m})_{u,v^{\prime}}\) otherwise; similarly, \(g(q^{\prime}_{i,j})\times q=(d^{\prime}_{1},\ldots,d^{\prime}_{n},d_{1},\ldots,d_{l})_{w^{\prime},z}\) if \(z^{\prime}=w\), and is equal to \(0\) otherwise, by Definition 21.
Thus \({}^{X}M\) is freely generated as a \(R[X]\)-bimodule by \((m_{j})_{j\in J}\).
In the case of \(i\)-cube chain bimodules, we get even more:
**Lemma 10**.: Let \(X\in Cub\) and \(Y\) a sub-precubical set of \(X\). Then \({}^{X}R_{i}[Y]\) is a sub-\(R[X]\)-bimodule of \(R_{i}[X]\), for \(i\geq 2\).
Proof.: Lemma 9 applies to \(X\), \(Y\) and \(M=R_{i}[Y]\) with generators \(G_{i}(Y)\). Thus \({}^{X}R_{i}[Y]\) is the free \(R[X]\)-bimodule generated by \(G_{i}(Y)\). Also \(R_{i}[X]\) is the free \(R[X]\)-bimodule generated by \(G_{i}(X)\). But the elements of \(G_{i}(Y)\), which are certain \(i\)-cube chains of \(Y\), are in particular \(i\)-cube chains of \(X\), and in fact elements of \(G_{i}(X)\). Hence the result.
**Remark 11**.: The proof could have been made a little more pedantic. Indeed, applying \(g_{!}\) to a \(A\)-bimodule \(M\) is tensoring it with some algebra \(A^{\prime}\) (seen as a \(A\)-bimodule, using \(g\)). But \(M\) being a free \(A\)-bimodule implies in particular that \(M\) is flat, hence that the tensor product by \(M\) is exact. Since the inclusion of algebras of \(R[Y]\) in \(R[X]\) is indeed a monomorphism, its tensor by \(M\) over \(A\) is a monomorphism as well. This means that, at least as a \(A\)-bimodule, \(M\) is a sub-bimodule of \({}^{X}M\). It is easy to see this holds as well for the \(A^{\prime}\)-bimodule structures.
**Remark 12**.: More explicitly, \({}^{X}R_{i}[Y]\) (for \(i\geq 2\)) is freely generated, as a \(R\)-module, by those \(i\)-cube chains in \(X\) that are concatenations of a 0-cube chain in \(X\), a generating \((i-1)\)-cube chain in \(Y\), and a 0-cube chain in \(X\). Hence all \(y\in{}^{X}R_{i}[Y]\) can be uniquely written as:
\[y=\sum_{j\in J}\lambda_{j}p_{j}\otimes g_{j}\otimes q_{j}\]
with \(\lambda_{j}\in R\), \(p_{j}=(c_{1},\ldots,c_{m})\) in \(Ch^{=0}(X)\), \(g_{j}=(d_{1},\ldots,d_{n})\) in \(Ch^{=i-1}(Y)\), \(q_{j}=(e_{1},\ldots,e_{l})\) in \(Ch^{=0}(X)\) with \(d^{1}(c_{m})=d^{0}(d_{1})\) and \(d^{1}(d_{n})=d^{0}(e_{1})\).
For \({}^{X}R_{1}[Y]\) we need to be a bit more cautious, since the unique generator of \(R_{1}[Y]\) is \(1_{Y}=\sum\limits_{a\in Y_{0}}e_{a}\), which may be different from the unique generator of \(R_{1}[X]\), namely \(1_{X}=\sum\limits_{b\in X_{0}}e_{b}\), at least when \(Y_{0}\neq X_{0}\); so the argument of Lemma 10 does not apply directly. Indeed, extra requirements are needed for the lemma to hold also in the case \(i=1\):
**Example 6**.: Consider the precubical set \(X\) of Example 1 (the left one, or "two holes on the diagonal"). Take \(Y\) the sub-precubical set of \(X\) generated by the edges \(i\) and \(f\). Then \(i\otimes(h,f,g)\) is a generator (as a \(R\)-module) of \({}^{X}R_{1}[Y]\), as well as \((i,h)\otimes(f)\otimes(g)\). These are distinct elements of \({}^{X}R_{1}[Y]\), whereas they should somehow represent the same 0-cube chain \((i,h,f,g)\) of \(R_{1}[X]\). This discrepancy forbids the identification of \({}^{X}R_{1}[Y]\) as a sub-\(R[X]\)-bimodule of \(R_{1}[X]\) in general.
We thus need an extra condition on the inclusion of \(Y\) into \(X\) to make sense of a well behaved \({}^{X}R_{1}[Y]\) with respect to \(R_{1}[X]\):
**Definition 24**.: Let \(X\in Cub\) and \(Y\) a sub-precubical set of \(X\). We say that \((X,Y)\) is a relative pair (of precubical sets) if for all \(c\in Ch^{=0}(X)\) with \(c=(c_{1},\ldots,c_{m})\), there exist indices \(j\) and \(k\) such that \(c_{l}\not\in Y\) for \(l=1,\ldots,j-1\), \(c_{l}\in Y\) for \(l=j,\ldots,k\), and \(c_{l}\not\in Y\) for \(l=k+1,\ldots,m\). Said differently, \((X,Y)\) is a relative pair if every directed path of \(X\) can only enter \(Y\) once, and exit it once; this can be checked mechanically, as in the sketch below.
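A minimal sketch of this check, under hypothetical encodings (a 0-cube chain as a list of edge names, \(Y\) given by its set of edges):

```python
def enters_and_exits_once(path, Y_edges):
    """True iff the edges of `path` lying in Y form one contiguous block,
    i.e. the path enters (and hence exits) Y at most once (Definition 24)."""
    flags = [e in Y_edges for e in path]
    entries = sum(1 for i, f in enumerate(flags)
                  if f and (i == 0 or not flags[i - 1]))  # count False -> True transitions
    return entries <= 1

# the 0-cube chain (i, h, f, g) of Example 6, with Y generated by the edges i and f:
print(enters_and_exits_once(["i", "h", "f", "g"], {"i", "f"}))  # False: Y is entered twice
```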
**Lemma 11**.: Let \((X,Y)\) be a relative pair of precubical sets as in Definition 24. Then \({}^{X}R_{1}[Y]\) is a sub-\(R[X]\)-bimodule of \(R_{1}[X]\). More precisely, \({}^{X}R_{1}[Y]\) is generated as a \(R\)-module by the 0-cube chains \(x\) in \(X\) that are concatenations of a 0-cube chain \(p\) of \(X\) with a 0-cube chain \(c\) in \(Y\) with a 0-cube chain \(q\) of \(X\), identified with \(p\otimes c\otimes q\) (modulo Equation 1). Furthermore, all \(y\in{}^{X}R_{1}[Y]\) can be written in a unique manner as:
\[y=\sum_{j\in J}\lambda_{j}p_{j}\otimes c_{j}\otimes q_{j}\]
with \(\lambda_{j}\in R\), \(c_{j}=(c_{j},\ldots,c_{m},d_{1},\ldots,d_{k})\) 0-cube chains in \(Y\) and \(p_{j}=(c_{1},\ldots,\)\(c_{j-1})\), \(q_{j}=(d_{k+1},\ldots,d_{n})\), 0-cube chains in \(X\) with no cell in \(Y\), with \(d^{1}(c_{j-1})=d^{0}(c_{j})\) and \(d^{1}(d_{k})=d^{0}(d_{k+1})\).
Proof.: By Lemma 9, we know that \({}^{X}R_{1}[Y]\) is the \(R[X]\)-bimodule generated by \(1_{Y}\). Consider a generator of \({}^{X}R_{1}[Y]\) as a \(R\)-module; it is of the form \(p^{\prime}\otimes 1_{Y}\otimes q^{\prime}\) with \(p^{\prime}=(c_{1},\ldots,c_{m})\in Ch^{=0}(X)\) and \(q^{\prime}=(d_{1},\ldots,d_{n})\in Ch^{=0}(X)\), with \(d^{1}(c_{m})\in Y_{0}\) and \(d^{0}(d_{1})\in Y_{0}\). Let us now suppose that some edges in \(p^{\prime}\) or \(q^{\prime}\) are not only in \(X_{1}\), but also in \(Y_{1}\). Then these edges in \(Y_{1}\) must be consecutive \(1\)-cells \(c_{j},c_{j+1},\ldots,c_{m}\) of \(p^{\prime}\), and consecutive \(1\)-cells \(d_{1},d_{2},\ldots,d_{k}\) of \(q^{\prime}\), otherwise we would have exhibited a path in \(X\) that either enters or leaves \(Y\) more than once, which is impossible because \((X,Y)\) is a relative pair, see Definition 24.
By Equation 1, \(p\otimes 1_{Y}\otimes q\) is in that case non zero and equal to \(p\otimes c\otimes q\) with \(p=(c_{1},\ldots,c_{j-1})\), \(c=(c_{j},\ldots,c_{m},d_{1},\ldots,d_{k})\) and \(q=(d_{k+1},\ldots,d_{n})\). This representative is a canonical form for \(p^{\prime}\otimes 1_{Y}\otimes q^{\prime}\) modulo Equation 1: these equations oriented from left to right strictly increase the length of \(c\), and this representative has maximal length for \(c\).
We can now define the relative homology modules for relative pairs of precubical spaces:
**Definition 25**.: Let \((X,Y)\) be a relative pair of precubical sets. We define the relative \(i\)th homology \(R[X]\)-bimodule \(HM_{i}[X,Y]\) as the homology of the quotient of \(R_{i}[X]\) by the sub-\(R[X]\)-bimodule \({}^{X}R_{i}[Y]\), within the category of \(R[X]\)-bimodules, with boundary operator defined in the quotient as \(\partial[x]=[\partial x]\), where \([x]\) denotes a class with representative \(x\) in \(R_{i}[X]/^{X}R_{i}[Y]\).
The definition is valid, as we now check. Consider an element \([x]\) of \(R_{i}[X]/^{X}R_{i}[Y]\), and suppose \([y]=[x]\); then there exists \(z\in{}^{X}R_{i}[Y]\) with \(y=x+z\). Thus \(\partial[y]=[\partial y]=[\partial x+\partial z]=[\partial x]+[\partial z]=[\partial x]\), since \(\partial z\in{}^{X}R_{i-1}[Y]\), hence \([\partial z]=0\). Therefore the class \(\partial([x])\) does not depend on the particular representative chosen for \([x]\).
Let us now exemplify the bimodule homology we have been defining on a few classical examples coming from concurrency theory [16]. We postpone examples on relative homology to Section 9, Example 14.
**Example 7**.: For the "2 holes on the anti-diagonal" precubical set, Example 5, we quotient the \(R[X]\)-bimodule \(R_{1}[X]\) by \(Im\ \partial\), which is the sub-\(R[X]\)-bimodule of \(R_{1}[X]\) generated by \(ih-kd\) and \(fg-eb\), i.e. the \(R\)-module generated by \((ih-kd)fg\in e_{9}\bullet R_{1}[X]\bullet e_{1}\), \((ih-kd)f\in e_{9}\bullet R_{1}[X]\bullet e_{4}\), \((ih-kd)e\in e_{9}\bullet R_{1}[X]\bullet e_{2}\), \((ih-kd)eb\in e_{9}\bullet R_{1}[X]\bullet e_{1}\), \(ih-kd\in e_{9}\bullet R_{1}[X]\bullet e_{5}\), \(fg-eb\in e_{5}\bullet R_{1}[X]\bullet e_{1}\), \(h(fg-eb)\in e_{8}\bullet R_{1}[X]\bullet e_{1}\), \(ih(fg-eb)\in e_{9}\bullet R_{1}[X]\bullet e_{1}\), \(d(fg-eb)\in e_{6}\bullet R_{1}[X]\bullet e_{1}\), \(kd(fg-eb)\in e_{9}\bullet R_{1}[X]\bullet e_{1}\).
Indeed, within \(e_{9}\bullet R_{1}[X]\bullet e_{1}\), \(x_{1}=(ih-kd)fg\), \(x_{2}=ih(fg-eb)\), \(x_{3}=kd(fg-eb)\), and \(x_{4}=(ih-kd)eb\) are \(R\)-linearly dependent: \(x_{1}-x_{4}=x_{2}-x_{3}\), and there are no other (independent) dependencies. This means that we have to quotient (using our notations, see Remark 8) entry \((9,1)\) of the matrix of \(R\)-modules of Example 1 by \(R^{3}\), leading to entry \(R^{3}\) in position \((9,1)\) below, which represents the \(R[X]\)-bimodule, and the other modified entries are \((9,4)\), \((9,2)\), \((9,5)\), \((5,1)\), \((8,1)\), and \((6,1)\):
\[\begin{pmatrix}R&0&0&0&0&0&0&0&0\\ R&R&0&0&0&0&0&0&0\\ R&R&R&0&0&0&0&0&0\\ R&0&0&R&0&0&0&0&0\\ R&R&0&R&R&0&0&0&0\\ R^{3}&R^{2}&R&R&R&R&0&0&0\\ R&0&0&R&0&0&R&0&0\\ R^{3}&R&0&R^{2}&R&0&R&R&0\\ R^{3}&R^{2}&R&R^{2}&R&R&R&R&R\end{pmatrix}\]
Let us exemplify the \(R[X]\)-bimodule action now. The entry \((9,1)\) is \(R^{3}\), generated, as a \(R\)-module, by \([ihfg]_{9,1}\), \([ijlg]_{9,1}\) and \([kcab]_{9,1}\). Consider entry \((9,5)\), which is \(R\); it is generated as a \(R\)-module by \([kd]_{9,5}\). The right action of \(fg\) on it gives \([kdfg]_{9,1}=[ihfg]_{9,1}\).
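The linear dependency invoked in this example can be verified mechanically by writing each \(x_{k}\) in the path basis \((ihfg,kdfg,iheb,kdeb)\) of the subspace of \(e_{9}\bullet R_{1}[X]\bullet e_{1}\) that the \(x_{k}\) span; a sketch over \(R=\mathbb{Q}\):

```python
from sympy import Matrix

# coordinates in the path basis (ihfg, kdfg, iheb, kdeb)
rows = [[1, -1,  0,  0],   # x1 = (ih - kd)fg
        [1,  0, -1,  0],   # x2 = ih(fg - eb)
        [0,  1,  0, -1],   # x3 = kd(fg - eb)
        [0,  0,  1, -1]]   # x4 = (ih - kd)eb
M = Matrix(rows)
print(M.rank())   # 3: exactly one linear dependency among x1, ..., x4
x1, x2, x3, x4 = (M.row(i) for i in range(4))
print(x1 - x4 == x2 - x3)   # True: the dependency x1 - x4 = x2 - x3
```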
**Example 8**.: For the "two holes on the diagonal" cubical set of Example 1, we quotient the matrix of \(R\)-modules by \(Im\ \partial\), which is the \(R[X]\)-bimodule generated by \(jl-hf\) and \(de-ca\), giving, by the same argument as above, the following matrix of \(R\)-modules:
\[\begin{pmatrix}R&0&0&0&0&0&0&0&0\\ R&R&0&0&0&0&0&0&0\\ R&R&R&0&0&0&0&0&0\\ R&0&0&R&0&0&0&0&0\\ R^{2}&R&0&R&R&0&0&0&0\\ R^{3}&R&R&R&R&R&0&0&0\\ R&0&0&R&0&0&R&0&0\\ R^{3}&R&0&R&R&0&R&R&0\\ R^{6}&R^{3}&R&R^{3}&R^{2}&R&R&R&R\end{pmatrix}\]
which is indeed not isomorphic, as a \(R[X]\)-bimodule, to the one obtained for \(HM_{1}\) in Example 7 (the anti-diagonal case).
**Example 9** (Empty cube).: We consider here the boundary of the 3-cube of Example 2.
By Lemma 3, \(R[X]\) is the matrix algebra:
\[\left(\begin{array}{c|ccccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&R&0&0&0&0&0&0&0&0\\ 110&R&R&0&0&0&0&0&0\\ 101&R&0&R&0&0&0&0&0\\ 100&R^{2}&R&R&R&0&0&0&0\\ 011&R&0&0&0&R&0&0&0\\ 010&R^{2}&R&0&0&R&R&0&0\\ 001&R^{2}&0&R&0&R&0&R&0\\ 000&R^{6}&R^{2}&R^{2}&R&R^{2}&R&R&R\end{array}\right)\]
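The exponents in this matrix are counts of 0-cube chains: the entry in row \(a\) and column \(b\) is \(R^{k}\), with \(k\) the number of monotone edge paths from \(a\) to \(b\) in the 1-skeleton of the cube. A quick sketch reproducing them, with vertices encoded as bit triples:

```python
from itertools import product
from math import factorial

def path_count(a, b):
    """monotone edge paths from a to b in the 1-skeleton of the 3-cube:
    0 unless a <= b coordinatewise, else (Hamming distance)! orderings of the flips"""
    if any(s > t for s, t in zip(a, b)):
        return 0
    return factorial(sum(t - s for s, t in zip(a, b)))

verts = sorted(product((0, 1), repeat=3), reverse=True)  # 111, 110, ..., 000
for a in verts:
    print(*(path_count(a, b) for b in verts))  # exponents of the matrix above, row by row
```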
\(R_{1}[X]\), as a \(R[X]\)-bimodule, has the same matrix representation (see Remark 8). We have seen in Example 2 that the 1-cube chains are generated by the following elements, plus sub-1-cube chains (made of just one 2-cell):
(00c,A') (B,1b1) (C,a11) (0b0,B') (a00,C') (A,11c)
Therefore, \(R_{2}[X]\) is:
\[\left(\begin{array}{c|cccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&0&0&0&0&0&0&0&0\\ 110&0&0&0&0&0&0&0&0\\ 101&0&0&0&0&0&0&0&0\\ 100&R&0&0&0&0&0&0&0\\ 011&0&0&0&0&0&0&0&0\\ 010&R&0&0&0&0&0&0&0\\ 001&R&0&0&0&0&0&0&0\\ 000&R^{6}&R&R&0&R&0&0&0\end{array}\right)\]
Finally, \(HM_{1}[X]=R_{1}[X]/Im\ \partial_{|R_{2}[X]\to R_{1}[X]}\), hence:
\[\left(\begin{array}{c|cccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&R&0&0&0&0&0&0&0\\ 110&R&R&0&0&0&0&0&0\\ 101&R&0&R&0&0&0&0&0\\ 100&R&R&R&R&0&0&0&0\\ 011&R&0&0&0&R&0&0&0\\ 010&R&R&0&0&R&R&0&0\\ 001&R&0&R&0&R&0&R&0\\ 000&R&R&R&R&R&R&R&R\end{array}\right)\]
Now \(Ker\ \partial_{|R_{2}[X]\to R_{1}[X]}\) is easily seen to be generated by \((00c,A^{\prime})-(C,a11)-(0b0,B^{\prime})-(A,11c)+(a00,C^{\prime})+(B,1b1)\), an element of the \(R\)-module \(e_{000}\bullet R_{2}[X]\bullet e_{111}\), thanks to the calculation of \(\partial\) on \(R_{2}[X]\) in Example 2. As \(R_{3}[X]=0\), \(HM_{2}[X]\) is:
\[\left(\begin{array}{c|cccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&0&0&0&0&0&0&0&0\\ 110&0&0&0&0&0&0&0&0\\ 101&0&0&0&0&0&0&0&0\\ 100&0&0&0&0&0&0&0&0\\ 011&0&0&0&0&0&0&0&0\\ 010&0&0&0&0&0&0&0&0\\ 001&0&0&0&0&0&0&0&0\\ 000&R&0&0&0&0&0&0&0\end{array}\right)\]
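Both entries computed in this example reduce to the rank of the restricted boundary matrix \(\partial:\ e_{000}\bullet R_{2}[X]\bullet e_{111}\to e_{000}\bullet R_{1}[X]\bullet e_{111}\): the six 1-cube chains and six 0-cube chains form a hexagon around the empty cube, so \(\partial\) is, up to signs, the incidence matrix of a 6-cycle. A sketch over \(R=\mathbb{Q}\), with an illustrative cyclic ordering of paths and chains:

```python
from sympy import zeros

n = 6                       # six 0-cube chains and six 1-cube chains from 000 to 111
d = zeros(n, n)             # column j: boundary of the j-th 1-cube chain,
for j in range(n):          # the difference of two cyclically adjacent 0-cube chains
    d[j, j] = 1
    d[(j + 1) % n, j] = -1

print(d.cols - d.rank())    # 1 = dim Ker d: the HM_2 generator above (R_3[X] = 0)
print(d.rows - d.rank())    # 1 = dim R^6/Im d: the entry (000, 111) of HM_1[X]
```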
In the case of the full 3-cube, \(R_{3}[X]\) is the following \(R[X]\)-bimodule:
\[\left(\begin{array}{c|cccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&0&0&0&0&0&0&0&0\\ 110&0&0&0&0&0&0&0&0\\ 101&0&0&0&0&0&0&0&0\\ 100&0&0&0&0&0&0&0&0\\ 011&0&0&0&0&0&0&0&0\\ 010&0&0&0&0&0&0&0&0\\ 001&0&0&0&0&0&0&0&0\\ 000&R&0&0&0&0&0&0&0\end{array}\right)\]
with, as only generator (in entry (000,111)) the 3-cell \(S\), with boundary \((A,11c)-(B,1b1)+(C,a11)-(a00,C^{\prime})+(0b0,B^{\prime})-(00c,A^{\prime})\) as computed in Example 2 and \(HM_{2}[X]=0\).
**Example 10** (Matchbox example).: Let us now consider Fahrenberg's matchbox example [14], which is the empty cube of Example 9 minus the lower face \(A\). The path algebra \(R[X]\) is the same as for the empty cube, but \(R_{2}[X]\) is slightly different, in that it does not include as generators (as a \(R\)-module) the 1-cube chain \((A,11c)\) from 000 to 111 and the 1-cube chain \((A)\) from 000 to 110. Hence \(R_{2}[X]\) is the \(R[X]\)-bimodule:
\[\left(\begin{array}{c|cccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&0&0&0&0&0&0&0&0\\ 110&0&0&0&0&0&0&0&0\\ 101&0&0&0&0&0&0&0&0\\ 100&R&0&0&0&0&0&0&0\\ 011&0&0&0&0&0&0&0&0\\ 010&R&0&0&0&0&0&0&0\\ 001&R&0&0&0&0&0&0&0\\ 000&R^{5}&0&R&0&R&0&0&0\end{array}\right)\]
Finally, \(HM_{1}[X]=R_{1}[X]/Im\ \partial_{|R_{2}[X]\to R_{1}[X]}\), hence:
\[\left(\begin{array}{c|cccccccc}&111&110&101&100&011&010&001&000\\ \hline 111&R&0&0&0&0&0&0&0\\ 110&R&R&0&0&0&0&0&0\\ 101&R&0&R&0&0&0&0&0\\ 100&R&R&R&R&0&0&0&0\\ 011&R&0&0&0&R&0&0&0\\ 010&R&R&0&0&R&R&0&0\\ 001&R&0&R&0&R&0&R&0\\ 000&R&R^{2}&R&R&R&R&R&R\end{array}\right)\]
Finally, \(R_{3}[X]=0\) and \(Ker\ \partial_{|R_{2}[X]\to R_{1}[X]}=0\) so \(HM_{2}[X]=0\).
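The same linear-algebra check explains the difference with the empty cube: removing the face \(A\) removes one side of the hexagon of paths, leaving a path graph, whose incidence matrix has trivial kernel. A sketch:

```python
from sympy import zeros

d = zeros(6, 5)              # six 0-cube chains, five remaining 1-cube chains:
for j in range(5):           # the hexagon with one side removed
    d[j, j], d[j + 1, j] = 1, -1

print(d.cols - d.rank())     # 0 = dim Ker d, hence HM_2[X] = 0
print(d.rows - d.rank())     # 1 = dim R^6/Im d: entry (000, 111) of HM_1[X] is still R
```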
**A digression on resolutions of associative algebras and monoids.** We note that the chain complex we have been defining in Lemma 6 has a natural augmentation, which will actually link to classical tools used in the construction of the homology of associative algebras:
**Definition 26**.: Let \(R_{0}[X]\) be the \(R[X]\)-bimodule whose underlying \(R\)-module is generated by pairs of reachable vertices of \(X\), i.e. by \((x,y)\in X_{0}\times X_{0}\) such that there exists a \(0\)-cube path from \(x\) to \(y\), and whose \(R[X]\)-action is generated by the relations:
\[p_{u,v}\bullet(x,y)\bullet q_{u^{\prime},v^{\prime}}=\left\{\begin{array}{ll}(u,v^{\prime})&\mbox{if $v=x$ and $y=u^{\prime}$}\\ 0&\mbox{otherwise}\end{array}\right.\]
**Lemma 12**.: The morphism of \(R\)-modules \(\epsilon:R_{1}[X]\to R_{0}[X]\) defined on generators \(c=(c_{1},\ldots,c_{n})\), \(c_{i}\in X_{1}\) and \(e_{a}\), \(a\in X_{0}\) as:
\[\begin{array}{rcl}\epsilon(c_{1},\ldots,c_{n})&=&(d^{0}(c_{1}),d^{1}(c_{n}))\\ \epsilon(e_{a})&=&(a,a)\end{array}\]
is an epimorphism of \(R[X]\)-bimodules, which is such that \(\epsilon\circ\partial:R_{2}[X]\to R_{0}[X]\) is equal to zero. Therefore, \(\epsilon\) defines an augmentation of the chain complex \((R_{i}[X],\partial)\).
Proof.: Consider \(p=(p_{1},\ldots,p_{k})\), \(c=(c_{1},\ldots,c_{l})\) and \(q=(q_{1},\ldots,q_{m})\) in \(R[X]\), and suppose (the other case is obvious) that \(d^{1}(c_{l})=d^{0}(q_{1})\) and \(d^{1}(p_{k})=d^{0}(c_{1})\), we have:
\[\begin{array}{rcl}\epsilon(p\bullet c\bullet q)&=&\epsilon(p_{1},\ldots,p_{ k},c_{1},\ldots,c_{l},q_{1},\ldots,q_{m})\\ &=&(d^{0}(p_{1}),d^{1}(q_{m}))\\ &=&(p_{1},\ldots,p_{k})\bullet(d^{0}(c_{1}),d^{1}(c_{l}))\bullet(q_{1}, \ldots,q_{m})\\ &=&p\bullet\epsilon(c)\bullet q\end{array}\]
The fact that this defines an epimorphism of \(R[X]\)-bimodule is obvious, by definition of \(R_{0}[X]\).
Finally, we compute, for a \(1\)-cube chain \(c=(c_{1},\ldots,c_{l})\) where \(c_{k}\) is the unique \(2\)-cell (by Remark 2) and all other \(c_{i}\)s are \(1\)-cells:
\[\begin{array}{rcl}\epsilon(\partial(c))&=&\epsilon(d_{k,\{0\}}(c)-d_{k,\{1\}}(c))\\ &=&\epsilon(d_{k,\{0\}}(c))-\epsilon(d_{k,\{1\}}(c))\\ &=&(d^{0}(c_{1}),d^{1}(c_{l}))-(d^{0}(c_{1}),d^{1}(c_{l}))\\ &=&0\end{array}\]
The usual construction of the homology of associative algebras [28], for instance of monoids \(\mathcal{M}\) in the case we are considering here, is done through a free resolution of \(R\) (generally taken to be \(\mathbb{Z}\)) considered as the trivial \(R\mathcal{M}\)-module (the action is the identity), by some free \(R\mathcal{M}\)-modules, where \(R\mathcal{M}\) is the monoid algebra of \(\mathcal{M}\). In general, only left (or right) modules would be considered here, but this could as well use bimodules as we do in this paper.
The monoid algebra is indeed the categorical algebra of the monoid \(\mathcal{M}\) considered as the one-object category (with object called \(*\)), whose morphisms are (left) multiplications by elements of \(\mathcal{M}\). And \(R\) as a \(R\mathcal{M}\)-bimodule is just the analogue of our construction \(R_{0}[\mathcal{M}]\): it is generated as a \(R\)-module only by the pair \((*,*)\), and the left and right actions of \(R\mathcal{M}\) are the identity.
Suppose \(\mathcal{M}\) is generated by \(\Sigma\), a given finite alphabet. Groves and Anick [26, 1] constructed a cubical complex \(\mathcal{C}(\mathcal{M})\) out of the directed graph of reductions of a rewrite system presenting \(\mathcal{M}\), with vertices being words in \(\Sigma^{*}\). Indeed, when \(\mathcal{M}\) can be presented by a finite canonical rewrite system [11], the cubical complex \(\mathcal{C}(\mathcal{M})\) provides a free resolution of \(R\) by (left) \(R\mathcal{M}\)-modules, which is finite dimensional. This is Squier's property \(FP_{\infty}\) [38].
Consider the algebra epimorphism \(q:\ R\mathcal{M}\to R\) which maps \(\sum r_{i}m_{i}\) (where \(r_{i}\) are coefficients in \(R\) and \(m_{i}\) are elements of the monoid \(\mathcal{M}\)) to \(\sum r_{i}\in R\), and the extension of scalars \(q_{!}:\ {}_{R\mathcal{M}}Mod\rightarrow{}_{R}Mod\). This extension of scalars is the classical tensor operation with \(R\) seen as a trivial (left) \(R\mathcal{M}\)-module, see Section 3, which is not left exact in general (as a left adjoint, it is right exact), and creates homology. By the \(FP_{\infty}\) property, this implies that the monoid homology (with \(R\)-module coefficients) is finite dimensional. Hence a monoid \(\mathcal{M}\) cannot be presented by a finite canonical rewrite system if the homology of \(\mathcal{M}\) is not finite dimensional (as a \(R\)-module).
We believe that Groves' cubical-complex-based free resolution of \(R\) is the same as our construction of a precubical-set-based free resolution of \(R\), with critical \(n\)-pairs generating \(n\)-cells; this will be developed elsewhere. Note that it needs a substantial generalization of the framework developed in this paper, to deal with modules over non-unital algebras in particular, along the lines of what is sketched in Section 8.
The interest of this link is twofold. First, we can look at resolutions of \(R_{0}[\mathcal{M}]\) and not just \(R\), which would give a finer view of the geometry of rewriting "between two words in \(\mathcal{M}\)". Second, instead of using the extension of scalars to \(R\), we could use any finer invariants, using extension of scalars to e.g. \(R[Y]\), with \(Y\) a path in the derivation graph of \(\mathcal{M}\) presented by a given finite canonical
system, or any filtration such as the one based on the length of derivations, creating instead a potentially insightful unidimensional persistent homology.
## 6 Bimodule homology and persistence modules
The objective of this section is to understand and represent effectively the previous homological constructions in the category of bimodules over the associative algebra of directed paths. Due to the classical Morita equivalence between representations of quivers and modules over their path algebras, our homology modules \(HM\) can be considered as representations of a particular quiver, showing a link between our homology modules and persistence over the underlying quiver of a directed space (the "space of parameters"). We review below the elements of this equivalence.
**Remark 13**.: From this section on, we restrict our study to the case where \(R\) is a field, hence \(R\)-modules are \(R\)-vector spaces.
**Quiver representations**
**Definition 27** ([2]).: Let \(Q\) be a finite quiver. A \(R\)-linear representation or, more briefly, a representation \(M\) of \(Q\) is defined by the following data:
* To each point \(a\) in \(Q_{0}\) is associated a \(R\)-vector space \(M_{a}\).
* To each arrow \(\alpha:\ a\to b\in Q_{1}\) is associated a \(R\)-linear map \(\phi_{\alpha}:\ M_{a}\to M_{b}\).
Such a representation is denoted as \(M=(M_{a},\phi_{\alpha})_{a\in Q_{0},\alpha\in Q_{1}}\), or simply \(M=(M_{a},\phi_{\alpha})\). It is called finite dimensional if each vector space \(M_{a}\) is finite dimensional.
**Definition 28** ([2]).: Let \(M=(M_{a},\phi_{\alpha})\) and \(M^{\prime}=(M^{\prime}_{a},\phi^{\prime}_{\alpha})\) be two representations of \(Q\). A morphism of representations \(f:\ M\to M^{\prime}\) is a family \(f=(f_{a})_{a\in Q_{0}}\) of \(R\)-linear maps \((f_{a}:M_{a}\to M^{\prime}_{a})_{a\in Q_{0}}\) that are compatible with the structure maps \(\phi_{\alpha}\), that is, for each arrow \(\alpha:\ a\to b\), we have \(\phi^{\prime}_{\alpha}f_{a}=f_{b}\phi_{\alpha}\), i.e. the evident square commutes.
We have thus defined a category \(Rep(Q)\) of \(R\)-linear representations of \(Q\). We denote by \(rep(Q)\) the full subcategory of \(Rep(Q)\) consisting of the finite dimensional representations.
Let \((M,(\phi_{\alpha}))\) be a representation of some quiver \(Q\). Consider a path \(p=(p_{1},\ldots,p_{l})\) in \(Q\), from \(a\) to \(b\). The evaluation (see [2]) \(eval_{(M,(\phi_{\alpha}))}(p)\) of \(p\) in the representation \((M,(\phi_{\alpha}))\) is the \(R\)-linear map from \(M_{a}\) to \(M_{b}\) given by \(\phi_{p_{l}}\circ\ldots\circ\phi_{p_{1}}\).
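Computationally, a finite dimensional representation is a pair of tables, and evaluation is composition of the structure maps along the path. A minimal numpy sketch; the quiver, arrow names and matrices are hypothetical:

```python
import numpy as np

# a representation of the quiver  a --alpha--> b --beta--> c
dims = {"a": 1, "b": 2, "c": 2}
phi = {"alpha": np.array([[1.0], [0.0]]),            # M_a -> M_b
       "beta":  np.array([[0.0, 1.0], [1.0, 0.0]])}  # M_b -> M_c

def evaluate(path, phi):
    """eval of p = (p_1, ..., p_l): the composite phi_{p_l} o ... o phi_{p_1}"""
    out = None
    for arrow in path:
        out = phi[arrow] if out is None else phi[arrow] @ out
    return out

print(evaluate(("alpha", "beta"), phi))  # the R-linear map M_a -> M_c, as a 2x1 matrix
```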
**Definition 29**.: ([2]) A representation of a bound quiver \((Q,I)\) (where \(I\) is an admissible ideal of \(Q\), see Definition 20) is a representation \((M,(\phi)_{\alpha})\) of \(Q\) for which, given any element \(\sum\limits_{i=1}^{k}\alpha_{i}p_{i}\in I\), \(\alpha_{i}\in R\), and \(p_{i}\) being paths in \(Q\), \(\sum\limits_{i=1}^{k}\alpha_{i}eval_{(M,(\phi_{\alpha}))}(p_{i})=0\).
As is well known [2], when the algebra \(A\) is \(R[Q]/I\) for some finite and connected quiver \(Q\), and \(I\) an admissible ideal of \(R[Q]\), the category \(mod\ A\) of (left) \(A\)-modules is equivalent to the category \(rep_{R}(Q,I)\).
This can be generalized to \(A\)-bimodules as follows:
**Lemma 13**.: Let \(A=R[Q]/I\) be an algebra, with \(Q\) a finite connected quiver and \(I\) an admissible ideal of \(R[Q]\). Construct the graph \(FQ\) as follows:
* vertices of \(FQ\) are pairs of vertices \((x,y)\) of \(Q\) such that there exists a path from \(x\) to \(y\) in \(Q\)
* arrows from \((x,y)\) to \((x^{\prime},y^{\prime})\) in \(FQ\) are pairs of arrows \((u,v)\) in \(Q\) where \(u\) goes from \(x^{\prime}\) to \(x\) and \(v\) goes from \(y\) to \(y^{\prime}\)
Then given \(M\) a \(A\)-bimodule, we construct a representation of \(FQ\) bound by \(I\) as follows:
* to each vertex \((a,b)\in FQ\), define the \(R\)-module \(M_{a,b}\) to be \((e_{a}+I)\bullet M\bullet(e_{b}+I)\),
* and define \(\phi_{u,v}^{M}:\ M_{a,b}\to M_{a^{\prime},b^{\prime}}\) for \(u\) an arrow from \(a^{\prime}\) to \(a\) and \(v\) an arrow from \(b\) to \(b^{\prime}\) in \(Q\), to be the map which associates to each \(x\in M_{a,b}\), \(u\bullet x\bullet v=e_{a^{\prime}}u\bullet x\bullet ve_{b^{\prime}}=e_{a^{ \prime}}\bullet(u\bullet x\bullet v)\bullet e_{b^{\prime}}\in M_{b}\in M_{a^{ \prime},b^{\prime}}\).
**Remark 14**.: A natural question, which we leave for future work, is the existence and characterization of the analogue of Moore spaces for our module homology. Indeed, by Theorem 1, given any basic and connected finite dimensional \(R\)-algebra \(A\), there exists an admissible ideal \(I\) and a quiver \(Q_{A}\) such that \(A\) is isomorphic to \(R[Q_{A}]/I\). Given a particular class of bimodules \(M\) over \(A\), can we complete this quiver into a precubical set \(X_{A}^{M}\), by adding suitable cells, so that, for some \(i\geq 1\), \(HM_{i}[X_{A}^{M}]\) is isomorphic to \(M\), as a \(A\)-bimodule? It is shown in [41] that we can find a precubical set (generated by the semantics of a PV program [16]) for which the space of dipaths between beginning and end point has a given homology. This would pave the way for, at least, realizing a certain class of \(A\)-bimodules as the bimodule homology of some precubical set.
The proof of Lemma 13 is straightforward, and implies the following equivalence of categories:
**Lemma 14**.: Let \(A=R[Q]/I\), where \(Q\) is a finite connected quiver and \(I\) an admissible ideal of \(R[Q]\). Then there exists an equivalence of categories \(F:\ {}_{R}mod_{R}\ A\to rep_{R}(FQ,I)\).
This is a direct consequence of Theorem 1.6 in Chapter 3 of [2] and of the fact that \(A\)-bimodules are \(A\otimes A^{op}\)-left modules. We make explicit below how the equivalence works on objects and morphisms:
For \(M\) an \(A\)-bimodule, define \(F(M)\) as in Lemma 13. Now, consider \(f:M\to N\) a morphism of \(A\)-bimodules and \((a,b)\) a vertex in \(FQ\). Let \(x\in M_{a,b}\), it is therefore of the form \((e_{a}+i)\bullet m\bullet(e_{b}+j)\) where \(m\in M\), \(i\in I\) and \(j\in I\). Define \(F(f)_{a,b}(x)\) to be \((e_{a}+i)\bullet f(m)\bullet(e_{b}+j)\in N_{a,b}\). We have to check now that for all arrows \(u\) from \(a^{\prime}\) to \(a\) and \(v\), arrow from \(b\) to \(b^{\prime}\) in \(Q\), \(\phi^{N}_{u,v}F(f)_{a,b}=F(f)_{a^{\prime},b^{\prime}}\phi^{M}_{u,v}\). We compute, for \(x\in M_{a,b}\) of the form \(x=(e_{a}+i)\bullet m\bullet(e_{b}+j)\) for some \(m\in M\), \(i\in I\) and \(j\in I\):
\[\begin{array}{rcl}\phi^{N}_{u,v}F(f)_{a,b}(x)&=&\phi^{N}_{u,v}((e_{a}+i) \bullet f(m)\bullet(e_{b}+j))\\ &=&u\bullet((e_{a}+i)\bullet f(m)\bullet(e_{b}+j))\bullet v\\ &=&u(e_{a}+i)\bullet f(m)\bullet(e_{b}+j)v\\ &=&u\bullet f(m)\bullet v\end{array}\]
since \(u\) (resp. \(v\)) is an arrow from \(a^{\prime}\) to \(a\) (resp. from \(b\) to \(b^{\prime}\)) in \(Q\), making \(u(e_{a}+i)=u\) (resp. \((e_{b}+j)v=v\)) in the quiver algebra \(R[Q]/I\), whereas:
\[\begin{array}{rcl}F(f)_{a^{\prime},b^{\prime}}\phi^{M}_{u,v}(x)&=&F(f)_{a^{ \prime},b^{\prime}}(u\bullet x\bullet v)\\ &=&F(f)_{a^{\prime},b^{\prime}}(e_{a^{\prime}}u\bullet m\bullet ve_{b^{\prime }})\\ &=&e_{a^{\prime}}f(u\bullet m\bullet v)e_{b^{\prime}}\\ &=&e_{a^{\prime}}u\bullet f(m)\bullet ve_{b^{\prime}}\\ &=&u\bullet f(m)\bullet v\\ &=&\phi^{N}_{u,v}F(f)_{a,b}(x)\end{array}\]
Now, define the following transform \(G:\ rep_{R}(FQ,I)\to{}_{R}mod_{R}\ A\).
Let \((M_{a,b},\phi_{u,v})\) be a representation of \(FQ\). Define \(G(M_{a,b},\phi_{u,v})\) to be the \(A\)-bimodule which is, as a \(R\)-module, \(\coprod_{(a,b)\in FQ}M_{a,b}\), with the following left and right action of elements \(a\) and \(b\) in \(A\).
As \(A=R[Q]/I\), \(a\) (resp. \(b\)) is of the form \(u_{1}u_{2}\ldots u_{m}+i\) (resp. \(v_{1}v_{2}\ldots v_{n}+j\)) where \(i\in I\) (resp. \(j\in I\)) and \(u_{k}\) (resp. \(v_{l}\)) is an arrow in \(Q\) from \(x_{k}\) to \(x_{k+1}\) (resp. from \(y_{l}\) to \(y_{l+1}\)), with \(x_{1}=a^{\prime}\) and \(x_{m+1}=a\) (resp. \(y_{1}=b\) and \(y_{n+1}=b^{\prime}\)).
Define the action \(\bullet\) of \(u=u_{1}u_{2}\ldots u_{m}\) and \(v=v_{1}v_{2}\ldots v_{n}\) on \(m\in M_{a,b}\):
\[\begin{array}{rcl}u\bullet m&=&\phi_{u_{1},e_{b}}\phi_{u_{2},e_{b}}\ldots \phi_{u_{m},e_{b}}(m)\\ m\bullet v&=&\phi_{e_{a},v_{1}}\phi_{e_{a},v_{2}}\ldots\phi_{e_{a},v_{n}}(m) \end{array}\]
hence \(u\bullet m\bullet v=eval_{(M_{a,b},(\phi_{u,v}))}(u,v)\).
Now we see that, for any \(A\)-bimodule \(M\), \(G(F(M))\) is the \(R\)-module
\[\coprod_{(a,b)\in FQ}(e_{a}+I)\bullet M\bullet(e_{b}+I)\]
which is isomorphic to \(M\), as the \(e_{a}+I\), \(e_{b}+I\), for \(a,b\in Q_{0}\), form a complete set of idempotents of \(A\).
Similarly, for any representation \((M_{a,b},\phi_{u,v}^{M})\) of \(FQ\) bound by \(I\), \(F(G(M_{a,b},\)\(\phi_{u,v}))\) is the representation \((N,\phi_{u,v}^{N})\) with \(N_{a,b}\) being \((e_{a}+I)\bullet\coprod_{(a,b)\in FQ}M_{a,b}\bullet(e_{b}+I)\) which is isomorphic to \(M_{a,b}\).
This is of course akin to the construction of the factorization category (on the category of traces) and of natural systems on this factorization category, see the next paragraph. Also, Lemma 14 portrays our homology modules over the path algebra as a persistence homology of the total path space \(|X|^{\overrightarrow{I}}\) (\(X\) a precubical set), along the decomposition into path spaces from \(a\) to \(b\), with \(a\) and \(b\) varying in \(X_{0}\).
**Example 11**.: Consider again the following precubical set \(X\) of Example 4, on the left below (an empty square).
On the right hand side, we represented the quiver \(FX_{\leq 1}\) (which is \(FX\) since \(X\) is actually a quiver). The representation of \(FX\) which corresponds to \(HM_{1}[X]\) is:
where \(i_{1}\) is the inclusion of \(R\) into the first component of \(R^{2}\) and \(i_{2}\) is the inclusion of \(R\) into the second component of \(R^{2}\).
In general, there are infinitely many indecomposable \(FQ\)-modules when \(FQ\) is not of a very particular form (e.g. a Dynkin diagram \(A_{n}\), as for unidimensional persistence), and in order to get tractable invariants, we need to resort to simpler characterizations, such as rank invariants, which we briefly detail in the next paragraph. Still, we will claim in the last paragraph of this section that our homology modules, for an interesting class of precubical sets, give rise to "tame" bimodules of some sort.
**Rank invariants.** Let \(M\) be a \(A\)-bimodule, where \(A\) is a basic, connected, finite dimensional \(R\)-algebra. By Theorem 1, \(A\) is isomorphic to \(R[Q]/I\) where
\(Q\) is some finite connected quiver (the one of Definition 19), and by Lemma 13, we get a representation of \(FQ\) bound by \(I\) (\(M_{\alpha},\phi_{\alpha}\)) (\(\alpha\in FQ\)). We define now a categorical view on representations and modules, that will be useful in this section and in next section, Section 7:
**Definition 30**.: The functor \(\mathcal{F}_{M}\) from \(FQ\), seen as the free category on \(FQ\), to the category of \(R\)-vector spaces, which has \(\mathcal{F}_{M}(x,y)=M_{x,y}\) and \(\mathcal{F}_{M}(u,v)=\phi_{u,v}\) where \((x,y)\) is a vertex of \(FQ\) and \((u,v)\) is an arrow in \(FQ\), is called the categorical presentation of \(M\).
This categorical presentation of modules is well-known in category theory and used also in persistence, e.g. in the framework [33].
Rank invariants or generalized rank invariants have been defined as computable invariants for multi-dimensional, poset or even DAG persistence modules as in e.g. [27].
Here we will consider only simple rank invariants. Consider all intervals \(\mathcal{I}\) within \(FQ\). Now, the categorical presentation \(\mathcal{F}_{M}\) of \(M\) can be restricted to any of these intervals, giving the functor \(\mathcal{F}_{M|\mathcal{I}}\), which can be viewed as a diagram in the complete and co-complete category of \(R\)-vector spaces.
**Remark 15**.: Note that any interval \(\mathcal{I}\) in \(FQ\) corresponds to the maximal chains of the trace poset of a partially-order space, as defined in [9]. These are the ones that define unidimensional persistence modules within natural homology (that we are going to recap in next Section).
We can thus consider the canonical map \(\Phi_{\mathcal{I}}:\ \lim\limits_{\leftarrow}\mathcal{F}_{M|\mathcal{I}}\rightarrow\lim\limits_{\rightarrow}\mathcal{F}_{M|\mathcal{I}}\) from the limit of the diagram \(\mathcal{F}_{M|\mathcal{I}}\) to the colimit of the same diagram. The dimensions of \(Im\ \Phi_{\mathcal{I}}\), for all intervals \(\mathcal{I}\) in \(FQ\), define the rank invariant of \(M\).
**Example 12**.: Intervals in \(FQ\) for Example 11 are: \((4,4)\), \((3,3)\), \((2,2)\), \((1,1)\), \((4,4)\leq(4,2)\), \((4,4)\leq(4,3)\), \(\ldots\), \((4,4)\leq(4,2)\leq(4,1)\), and correspond to sequences of extensions within some given maximal path.
Indeed, the rank invariants corresponding to singletons \(q=(x,y)\in(FQ)_{0}\) are just the ranks of the corresponding modules \(M_{q}=e_{x}\bullet M\bullet e_{y}\); over larger intervals, they are computed from the structure maps of the representation, such as \(i_{1}:\ R\to R^{2}\) in Example 11.
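When the interval \(\mathcal{I}\) is a chain \(v_{0}\to v_{1}\to\cdots\to v_{k}\), the limit of \(\mathcal{F}_{M|\mathcal{I}}\) is \(M_{v_{0}}\), the colimit is \(M_{v_{k}}\), and \(\Phi_{\mathcal{I}}\) is the composite of the structure maps, so the rank invariant is a single matrix rank. A sketch; only \(i_{1}\) comes from Example 11, the second matrix is illustrative:

```python
import numpy as np

def rank_invariant_along_chain(maps):
    """rank of the canonical map lim -> colim over a chain interval v_0 -> ... -> v_k:
    for a linear diagram this is just the rank of the composite phi_k ... phi_1."""
    composite = maps[0]
    for m in maps[1:]:
        composite = m @ composite
    return np.linalg.matrix_rank(composite)

i1 = np.array([[1.0], [0.0]])       # R -> R^2, inclusion of the first component
proj = np.array([[0.0, 1.0]])       # an illustrative map R^2 -> R
print(rank_invariant_along_chain([i1]))        # 1
print(rank_invariant_along_chain([i1, proj]))  # 0: the composite is zero
```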
## 7 Bimodule homology and natural homology
We begin by showing that the homology module framework we have been setting up measures, at least locally, the homology of the path spaces of precubical sets between two endpoints:
**Lemma 15**.: Let \(X\) be a finite precubical set with proper non-looping length covering, \(a,b\in X_{0}\), and \(R\) a field. Then the \(R\)-vector space \(e_{a}\bullet HM_{n}[X]\bullet e_{b}\), \(n\geq 1\), is the standard \((n-1)\)st homology of the trace space \(\overrightarrow{P}(X)_{a}^{b}\) from \(a\) to \(b\).
Proof.: By Lemma 7, the quotient \(HM_{i}[X]=Ker\ \partial_{|R_{i}[X]}/Im\ \partial_{|R_{i+1}[X]}\) can be identified, as a \(R\)-module, with the coproduct of all \(R\)-vector spaces
\[Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}/Im\ \partial_{|e_{a} \bullet R_{i+1}[X]\bullet e_{b}}\]
with bimodule action being defined by \(u\bullet[n]_{c,d}\bullet v=[u\bullet n\bullet v]_{a^{\prime},b^{\prime}}\) if \(c=a\) and \(d=b,0\) otherwise. Hence \(e_{a}\bullet HM_{i}[X]\bullet e_{b}\) is \(Ker\ \partial_{|e_{a}\bullet R_{i}[X]\bullet e_{b}}/Im\ \partial_{|e_{a} \bullet R_{i+1}[X]\bullet e_{b}}\) as a \(R\)-vector space.
This last quotient is the homology of the chain complex \(Ch(X)_{a}^{b}\), and by Lemma 1, this is isomorphic to the (singular) homology of the trace space \(\overrightarrow{P}(X)_{a}^{b}\).
We are now going to make the link between our bimodule homology and natural homology, at least for a certain class of precubical sets, that has been used in [13]:
**Definition 31** ([13]).: A (\(d\)-dimensional) cubical complex \(X\) is a finite set of _cubes_\((D,\vec{x})\), where \(D\subseteq\{1,2,\cdots,d\}\) and \(\vec{x}\in\mathbb{Z}^{d}\), which is closed under taking past and future faces (to be defined below). The cardinality of \(D\) is the _dimension_ of the cube \((D,\vec{x})\). Let \(\vec{1}_{k}\) be the \(d\)-tuple whose \(k\)th component is \(1\), all others being \(0\). Each cube \((D,\vec{x})\) is realized as the geometric cube \(\iota(D,\vec{x})=I_{1}\times I_{2}\times\cdots\times I_{d}\) where \(I_{k}=[x_{k},x_{k}+1]\) if \(k\in D\), \(I_{k}=[x_{k},x_{k}]\) otherwise.
When \(card\ D=n\), we write \(D[i]\) for the \(i\)th element of \(D\). For example, if \(D=\{3,4,7\}\), then \(D[1]=3\), \(D[2]=4\), \(D[3]=7\). We also write \(\partial_{i}D\) for \(D\) minus \(D[i]\). Every \(n\)-dimensional cube \((D,\vec{x})\) has \(n\) past faces \(\partial_{i}^{0}(D,\vec{x})\), defined as \((\partial_{i}D,\vec{x})\), and \(n\) future faces \(\partial_{i}^{1}(D,\vec{x})\), defined as \((\partial_{i}D,\vec{x}+\vec{1}_{D[i]})\), \(1\leq i\leq n\).
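Definition 31 is directly implementable; a short sketch computing past and future faces of a cube \((D,\vec{x})\), with hypothetical encodings (\(D\) as a set of directions, \(\vec{x}\) as a \(d\)-tuple):

```python
def past_face(D, x, i):
    """i-th past face of the cube (D, x): drop the direction D[i] (1-indexed)."""
    D = sorted(D)
    return (tuple(D[:i - 1] + D[i:]), tuple(x))

def future_face(D, x, i):
    """i-th future face: drop the direction D[i] and translate by 1 along it."""
    D = sorted(D)
    y = list(x)
    y[D[i - 1] - 1] += 1      # add the d-tuple with 1 in component D[i]
    return (tuple(D[:i - 1] + D[i:]), tuple(y))

# a square ({3, 4}, (0, 0, 5, 2)) in dimension d = 4:
print(past_face({3, 4}, (0, 0, 5, 2), 1))    # ((4,), (0, 0, 5, 2))
print(future_face({3, 4}, (0, 0, 5, 2), 1))  # ((4,), (0, 0, 6, 2))
```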
Note that a cubical complex is in particular a precubical set such that it has proper non-looping length covering.
The comparison between natural homology and bimodule homology will be made through bisimulation between categorical diagrams [13], that we particularize to diagrams in the category of \(R\)-vector spaces \(Vect\), and that we recall below:
**Definition 32** ([13]).: A bisimulation \(\mathcal{B}\) between two diagrams \(F:\mathcal{C}\longrightarrow Vect\) and \(G:\mathcal{D}\longrightarrow Vect\) is a set of triples \((c,f,d)\) where \(c\) is an object of \(\mathcal{C}\), \(d\) is an object of \(\mathcal{D}\) and \(f:F(c)\longrightarrow G(d)\) is an isomorphism of \(Vect\), such that for all \((c,f,d)\) in \(\mathcal{B}\):
* if there exists \(i:c\longrightarrow c^{\prime}\in\mathcal{C}\) then there exists \(j:d\longrightarrow d^{\prime}\in\mathcal{D}\) and \(g:F(c^{\prime})\longrightarrow G(d^{\prime})\in Vect\) such that \(g\circ F(i)=G(j)\circ f\) and \((c^{\prime},g,d^{\prime})\in\mathcal{B}\)
* if there exists \(j:d\longrightarrow d^{\prime}\in\mathcal{D}\) then there exists \(i:c\longrightarrow c^{\prime}\in\mathcal{C}\) and \(g:F(c^{\prime})\longrightarrow G(d^{\prime})\in Vect\) such that \(g\circ F(i)=G(j)\circ f\) and \((c^{\prime},g,d^{\prime})\in\mathcal{B}\)
and such that:
* for all \(c\in\mathcal{C}\), there exists \(d\) and \(f\) such that \((c,f,d)\in\mathcal{B}\)
* for all \(d\in\mathcal{D}\), there exists \(c\) and \(f\) such that \((c,f,d)\in\mathcal{B}\)
Now we can prove, identifying \(HM_{i}[X]\) with its categorical presentation \(\mathcal{F}_{HM_{i}[X]}\):
**Theorem 2**.: Let \(X\) be a cubical complex. Then \(HM_{i}[X]\) is bisimulation equivalent to the natural homology \(HN_{i}[|X|]\), for all \(i\geq 1\).
Proof.: The proof relies on the fact that the combinatorial natural homology of \(X\) is bisimulation equivalent to the natural homology of its geometric realization \(|X|\) when \(X\) is a cubical complex. And that the bimodule homology of this paper, \(HM\), is bisimulation equivalent to combinatorial natural homology under the same hypotheses, as we will see. We first need to recap some definitions from [13].
Let \(x\) and \(y\in X\). We say that \(x\) is a future boundary (resp. a past boundary) of \(y\) if there exist \(k\geq 0\) and \(i_{0}\),..., \(i_{k}\) such that \(x=\partial^{1}_{i_{k}}\circ\cdots\circ\partial^{1}_{i_{0}}(y)\) (resp. \(x=\partial^{0}_{i_{k}}\circ\cdots\circ\partial^{0}_{i_{0}}(y)\)). We note \(x\preceq y\) when:
* either \(x\) is a past boundary of \(y\)
* or \(y\) is a future boundary of \(x\)
A _discrete trace_ from \(x\) to \(y\) in \(X\) is a sequence \(c_{0}\),..., \(c_{n}\) of cells of \(X\) (with \(n\geq 0\)) such that \(c_{0}=x\), \(c_{n}=y\) and, for all \(i\in\{1,...,n\}\), \(c_{i-1}\preceq c_{i}\). An extension of a discrete trace \(c_{0}\),..., \(c_{n}\) is a pair \(((d_{0},...,d_{m}),(e_{0},...,e_{p}))\) of discrete traces such that \(d_{m}=c_{0}\) and \(c_{n}=e_{0}\).
Discrete traces of a pre-cubical set \(X\) form a category \(\mathcal{T}^{d}_{X}\) whose objects are discrete traces, and whose morphisms are extensions of discrete traces which are discrete traces.
When \(X\) is a cubical complex, a discrete trace can be realized as a trace in the geometric realization \(|X|\). For \(x\) in \(X\), write \(\hat{x}\) the point \([x,\star]\) of \(|X|\) where \(\star=(\frac{1}{2},\ldots,\frac{1}{2})\). When \(x\) is a past boundary of \(y\), i.e. \(x=\partial^{0}_{i_{k}}\circ\cdots\circ\partial^{0}_{i_{0}}(y)\), write \(\widehat{xy}\) the path in \(|X|\) from \(\hat{x}\) to \(\hat{y}\) defined by
\[\widehat{xy}(t)=[y,t\star+(1-t)\rho^{0}_{i_{0}}\circ\ldots\circ\rho^{0}_{i_{k} }(\star)]\]
When \(y\) is a future boundary of \(x\), i.e. \(y=\partial_{i_{k}}^{1}\circ\dots\circ\partial_{i_{0}}^{1}(x)\), write \(\widehat{xy}\) the path in \(|X|\) from \(\hat{x}\) to \(\hat{y}\) defined by
\[\widehat{xy}(t)=[x,(1-t)\star+t\rho_{i_{0}}^{1}\circ\dots\circ\rho_{i_{k}}^{1}( \star)]\]
\(\widehat{xy}\) is uniquely defined when \(X\) is a non-self-linked pre-cubical set (which is the case for the cubical complexes of Definition 31 [17]), and it is a dipath also in that case.
Then, a discrete trace \(c_{0}\),..., \(c_{n}\) can be realized as the concatenation of the traces \(\widehat{c_{0}c_{1}},\dots\) with \(\widehat{c_{n-1}c_{n}}\) in \(|X|\), noted \(\widehat{c_{0},...,c_{n}}\).
Define the category \(\mathcal{T}_{X}^{d}\) to be:
* objects are discrete traces of \(X\)
* morphisms from \(c_{0}\),..., \(c_{n}\) (discrete trace from \(x\) to \(y\)) to \(d_{0}\),..., \(d_{m}\) (discrete trace from \(x^{\prime}\) to \(y^{\prime}\)) are pairs of discrete traces \(((e_{0},...,e_{p}),(f_{0},...,f_{k}))\) from \((x^{\prime},y)\) to \((x,y^{\prime})\) (so \(c_{0}=e_{p}=x\) and \(c_{n}=f_{0}=y\)) such that \(d_{0},...,d_{m}=e_{0},...,e_{p-1},c_{0},...,c_{n},f_{1},...,f_{k}\).
Then \(\widehat{\ }\) extends to a functor from \(\mathcal{T}_{X}^{d}\) to \(\mathcal{T}_{|X|}\).
For \(i\geq 1\), define the \(i\)-th natural homology of \(X\), or discrete natural homology, \(\overrightarrow{h}_{i}(X):\mathcal{T}_{X}^{d}\longrightarrow Mod_{R}\), as
\[\overrightarrow{h}_{i}(X)=\overrightarrow{H}_{i}(|X|)\circ\widehat{\ }.\]
Now, we are going to construct a set \(\mathcal{B}\) of triples \(((x,y),f,(c_{0},\dots,c_{n}))\) where \((c_{0},\dots,c_{n})\) is a discrete trace in \(\mathcal{T}_{X}^{d}\) from \(x\) to \(y\), and \((x,y)\in FX_{\leq 1}\); we postpone the exact definition of \(f\) for the moment. This will form a bisimulation equivalence between \(HM_{i}[X]\) and \(\overrightarrow{h}_{i}(X)\), as we are going to see. First, as a consequence of Lemma 15, \(\mathcal{F}_{HM_{i}[X]}(x,y)=e_{x}\bullet HM_{i}[X]\bullet e_{y}\), \(i\geq 1\), is isomorphic to the homology of the trace space \(\overrightarrow{P}(X)_{x}^{y}\) from \(x\) to \(y\). But \(\overrightarrow{h}_{i}(X)(x,y)\) is also isomorphic to this homology, and we set \(f\) to be the composition of the two isomorphisms.
Now, for each morphism from \((c_{0},\dots,c_{n})\) (discrete trace from \(x\) to \(y\)) to \((d_{0},\cdots,d_{m})\) (discrete trace from \(x^{\prime}\) to \(y^{\prime}\)), which is a pair of discrete traces \(((e_{0},...,e_{p}),(f_{0},...,f_{k}))\) from \((x^{\prime},y)\) to \((x,y^{\prime})\) (so \(c_{0}=e_{p}=x\) and \(c_{n}=f_{0}=y\)) such that \((d_{0},...,d_{m})=(e_{0},...,e_{p-1},c_{0},...,c_{n},f_{1},...,f_{k})\), consider any morphism \((u,v)\) from \((x,y)\) to \((x^{\prime},y^{\prime})\) in \(FX_{\leq 1}\), made of a \(0\)-cube chain \(u\) along the boundary of \((c_{0},\dots,c_{n})\) (resp. \(v\), along the boundary of \((d_{0},\cdots,d_{m})\)). As the geometric realization of \((c_{0},\dots,c_{n})\) is dihomotopic to \(u\) (resp. that of \((d_{0},\cdots,d_{m})\), to \(v\)), pre- and post-composition by \(u\) and \(v\) is, in homology, equal to pre- and post-composition by \((c_{0},\dots,c_{n})\) and \((d_{0},\cdots,d_{m})\), hence they generate the same map.
Conversely, given any edge \((u,v)\) in \(FX_{\leq 1}\), this is also a morphism in \(\mathcal{T}_{X}^{d}\), and they generate the same map in homology. Hence \(\mathcal{B}\) is a bisimulation between \(HM_{i}[X]\) and \(\overrightarrow{h}_{i}(X)\).
**Remark 16**.: This, and what we saw in the previous section, should be linked to [8], which constructs natural homology as a certain colimit of persistence over directed paths of a directed space. Indeed, for \(X\) a finite precubical set with underlying graph which is a DAG, \(|X|\) is a partially-ordered space, for which the construction of [8] applies. It is shown there that the natural homology of \(|X|\) is the colimit of the unidimensional persistence homologies based on the restricted categorical representations \(\mathcal{F}_{HM_{i}[X]|\mathcal{I}}\), over all interval inclusions of \(\mathcal{I}\) within \(FX_{\leq 1}\).
A natural question now is: do we have a counterpart of the bimodule homology that we defined on combinatorial structures (precubical sets), for continuous structures (a form of directed space)? And do we have a form of tameness in that case, as we would like in any persistence-oriented theory? The results of first investigations are shown in the next section.
## 8 Bimodule homology for directed spaces and infinite precubical sets
In this section, we see how we would extend our homology bimodules from finitely presented directed spaces (finite precubical sets) to infinite discrete or continuous directed spaces, and try to make some connections. In order to do so, we restrict the class of directed spaces we consider to so-called po-spaces:
**Definition 33** ([16]).:
1. A po-space is a topological space \(X\) with a (global) closed partial order \(\leq\) (i.e. \(\leq\) is a closed subset of \(X\times X\)). The unit segment \([0,1]\) with the usual topology and the usual order, is a po-space denoted \(\overrightarrow{I}\) and called the directed interval.
2. A dimap \(f:X\to Y\) between po-spaces \(X\) and \(Y\) is a continuous map that respects the partial orders (is non-decreasing).
3. A dipath \(f:\overrightarrow{I}\to X\) is a dimap whose source is the interval \(\overrightarrow{I}\).
When \(X\) is a finite precubical set whose underlying graph is a DAG, the geometric realization \(|X|\) of \(X\) is a po-space. But the path algebra, introduced in [21] as the counterpart of \(R[X]\), would definitely not be a finite dimensional algebra, and the notion of module over the path algebra is more difficult to define, as the path algebra is not unital in general.
Therefore we take advantage of the Morita equivalence between (bi)modules on the (unital) algebra \(R[X]\) and the representations of the underlying quiver \(FX_{\leq 1}\), to take as definition of a bimodule, in the case of the path (non-unital) algebra of a po-space \(X\), any functor:
\[M:\ P_{X}\times P_{X}^{op}\rightarrow{}_{R}Mod_{R}\]
where \(P_{X}\) is the category whose objects are points \(x\) of \(X\), and morphisms from \(x\) to \(y\) are any dipath from \(x\) to \(y\), modulo increasing reparameterisations (this
is called a trace in e.g. [21]). \(M\) is a bimodule on the path algebra in the sense that it is the categorical representation of it as in Definition 30. To each pair of points \((x,y)\) in \(X\), it associates \(M(x,y)\), a \(R\)-bimodule, and to each extension morphism \(\langle u,v\rangle:(x,y)\to(x^{\prime},y^{\prime})\), where \(u\) is a trace from \(x^{\prime}\) to \(x\), and \(v\) a trace from \(y\) to \(y^{\prime}\), it associates a \(R\)-bimodule homomorphism from \(M(x,y)\) to \(M(x^{\prime},y^{\prime})\).
Indeed, this point of view also allows us to extend the approach we took here to infinite precubical sets.
**Tameness issues.** In [24], a way to reduce the fundamental category \(\overrightarrow{\pi}_{1}(X)\) of a po-space \(X\) has been defined, through the concept of component category. The main property is that \(\mathcal{C}=\overrightarrow{\pi}_{1}(X)\) enjoys Proposition 7 of [18], which we recall below (since for a po-space, \(\overrightarrow{\pi}_{1}(X)\) is a loop-free category):
**Proposition 1**.: Let \(\mathcal{C}\) be a category in which all endomorphisms are identities (this is true in particular if \(\mathcal{C}\) is loop-free). Let \(\Sigma\) be any pure left and right calculus of _Yoneda_ morphisms on \(\mathcal{C}\) and \(C_{1},C_{2}\subset Ob(\mathcal{C})\) be two objects of \(\mathcal{C}[\Sigma^{-1}]\) such that the set of morphisms between them (in \(\mathcal{C}[\Sigma^{-1}]\)) is finite. Then, for every \(x_{1}\in C_{1}\) there exists \(x_{2}\in C_{2}\) such that the quotient map
\[\mathcal{C}(x_{1},x_{2})\to\mathcal{C}/_{\Sigma}(C_{1},C_{2}),\;f\mapsto[f]\]
is bijective.
Consider a bimodule \(M:\ P_{X}\times P_{X}^{op}\to{}_{R}Mod_{R}\) that factors through a bimodule \(\overrightarrow{\pi}_{1}(X)\times\overrightarrow{\pi}_{1}(X)^{op}\to{}_{R}Mod_{R}\) by a map \(f\), meaning, in plain English, that the homomorphisms of \(R\)-modules \(M(u,v)\), images of the morphisms \(\langle u,v\rangle\) of \(P_{X}\times P_{X}^{op}\), where \(u\) and \(v\) are traces, only depend on the dihomotopy classes of \(u\) and \(v\). Then the component categories provide a map:
\[\pi:\;\overrightarrow{\pi}_{1}(X)\times\overrightarrow{\pi}_{1}(X)^{op}\to \overrightarrow{\pi}_{1}(X)/_{\Sigma}\times\overrightarrow{\pi}_{1}(X)^{op}/ _{\Sigma}\]
and a bimodule \(N\) on \(\overrightarrow{\pi}_{1}(X)/_{\Sigma}\) such that
\[M=(\pi\circ f)_{*}N=\underset{q\in P_{X}\times P_{X}^{op}}{\oplus}N(\pi\circ f (q))\]
since for po-spaces, geometric realizations of finite precubical sets, the dihomotopy classes between two points are finitely many (by cubical approximation, [15]).
This is the notion of encoding of [33] of a \(\overrightarrow{\pi}_{1}(X)\times\overrightarrow{\pi}_{1}(X)^{op}\)-module \(M\) by morphism \(\pi\circ f\), generalized to a categorical setting (we are considering modules on categories, not just on posets).
\(HM_{1}[X]\) provides such bimodules, hence the category of components provides an encoding of \(HM_{1}[X]\). It has been shown that for certain classes of precubical sets \(X\), in particular those generated by PV terms without loops, \(|X|\) has the property that it has finitely many components [23], and the bimodule \(HM_{1}[X]\) has then a finite encoding, in the sense of [33], which is a notion of tameness, amenable to actual computations.
We conjecture all \(HM_{i}[X]\) are tame in that sense, for all \(i\geq 2\) and for at least all finite precubical sets which are the semantics of PV terms without loops. We also believe the fringe representations of [32] should be useful for computational characterizations of the homology bimodules in that case.
## 9 Exact sequences and isomorphisms
**Invariance under dihomeomorphism.** Let \(X\) and \(Y\) be two cubical complexes. Then we have:
**Theorem 3**.: Suppose \(|X|\) and \(|Y|\) are dihomeomorphic, i.e. isomorphic in the category of directed spaces. Then for all \(i\geq 1\), \(HM_{i}[X]\) is bisimilar to \(HM_{i}[Y]\).
Proof.: First, \(X\) and \(Y\), being cubical complexes, are finite precubical sets whose underlying graphs are DAGs, so we can define \(HM_{*}[X]\) and \(HM_{*}[Y]\).
By Theorem 2, for all \(i\geq 1\), \(HM_{i}[X]\) (resp. \(HM_{i}[Y]\)) is bisimulation equivalent to the natural homology \(HN_{i}\) of \(|X|\) (resp. of \(|Y|\)). As \(|X|\) and \(|Y|\) are dihomeomorphic, and natural homology is an invariant under dihomeomorphism [13], \(HN_{i}(|X|)\) and \(HN_{i}(|Y|)\) are isomorphic, as natural systems of \(R\)-vector spaces.
From this and the bisimulation above, it is straightforward to construct a bisimulation directly between \(HM_{i}[X]\) and \(HM_{i}[Y]\).
**Dicontractibility.** A po-space is called dicontractible, as in [22] (Theorem 1), if the dipath space map \(\chi:\ \overrightarrow{P}(X)\to X\times X\), which associates to each path (or trace) \(p\) its pair of endpoints \(\chi(p)=(p(0),p(1))\), has a global section over the subspace \(\Gamma_{X}=\{(x,y)\in X\times X\ |\ \exists p\in dX,\ p(0)=x,\ p(1)=y\}\subseteq X\times X\) of reachable pairs of points.
Then we have:
**Lemma 16**.: A po-space which is the geometric realization of some \(X\in Cub\) is dicontractible if and only if \(HM_{1}[X]=R\), with the trivial \(R[X]\)-bimodule structure, and \(HM_{i}[X]=0\) for all \(i\geq 2\).
Proof.: By Theorem 1 of [22], \(|X|\) is such that all dipath spaces \(\overrightarrow{P}(X)_{a}^{b}\) are contractible, for all \((a,b)\in\Gamma_{X}\), hence, by Lemma 15, \(e_{a}\bullet HM_{1}\bullet e_{b}=R\) and \(e_{a}\bullet HM_{i}\bullet e_{b}=0\), for \(i\geq 2\). Therefore the homology modules are all constant, either with value \(R\) or with value \(0\).
**Remark 17**.: It is a simple exercise to see that the relation between the categorification (Definition 30) of \(HM_{1}[X]\) (resp. of \(HM_{i}[X]\), \(i\geq 2\)) and the categorification of \(R\) (resp. of \(0\)), seen as a bimodule over \(R\) with trivial (identity) action, which relates all objects of the former to the only object of the latter, is a bisimulation equivalence.
**Change of coefficients.** Restricting the algebra which acts on the homology bimodules can be seen as a well-behaved filtering of the homological information originally present, in two important ways. In what follows, \(X\) is a finite precubical set with proper length covering.
**Lemma 17**.: Let \(Y\) be a sub-precubical set of \(X\in Cub\); as inclusions are injective on objects, this induces a map of algebras \(g:\ R[Y]\to R[X]\). Let \({}_{Y}R_{*}[X]\) be the chain of \(R[Y]\)-bimodules which is, in dimension \(i\), equal to \(g^{*}(R_{i}[X])\). Its homology \(H_{i}({}_{Y}R_{*}[X])\) is isomorphic to the \(R[Y]\)-bimodule \({}_{Y}HM_{i}[X]=g^{*}(HM_{i}[X])\).
Proof.: The \(i\)th homology of the chain of \(R[Y]\)-bimodules \({}_{Y}R_{*}[X]\) is \(Ker\ g^{*}\partial^{i}/Im\ g^{*}\partial^{i+1}\), which is the co-kernel of \(g^{*}f\), where \(f\) is the map \({}_{Y}R_{i+1}[X]\to Ker\ \partial^{i}\) induced canonically by \(g^{*}\partial^{i+1}:\ _{Y}R_{i+1}[X]\to{}_{Y}R_{i}[X]\) through the kernel diagram.
As the restriction of scalars functor \(g^{*}\) has a left and a right adjoint, it commutes with limits and colimits. Furthermore, as \(g^{*}(0)=0\), it commutes with kernels and co-kernels.
Hence \(Ker\ g^{*}\partial^{i}/Im\ g^{*}\partial^{i+1}\) is isomorphic to \(g^{*}(Ker\ \partial^{i}/Im\ \partial^{i+1})\), which is \({}_{Y}HM_{i}[X]\).
Indeed, the left (resp. right) action of \(y\in R[Y]\) on \({}_{Y}M\), for \(M\) any \(R[X]\)-bimodule, is the action of \(y\) seen as an element of \(R[X]\).
**Remark 18**.: Consider the extension of scalars \(g_{!}\) induced by the \(g\) of Lemma 17. By definition, \({}^{X}R_{i}[Y]=g_{!}(R_{i}[Y])\) and \({}^{X}HM_{i}[Y]=g_{!}(HM_{i}[Y])\). By Lemma 11, as \(g_{!}\) is a left adjoint and \(g_{!}(0)=0\), it preserves co-limits, hence co-kernels, but not necessarily all kernels, a priori.
Still, in the case of \(\partial\), from the free \(R[Y]\)-bimodule \(R_{i+1}[Y]\) to the free \(R[Y]\)-bimodule \(R_{i}[Y]\), we have enough properties to get a result similar to Lemma 17:
**Lemma 18**.: Given \(X\) and \(Y\) in \(Cub\) such that \((X,Y)\) is a relative pair of precubical sets, \({}^{X}HM_{i}[Y]\) is isomorphic to \(H_{i}({}^{X}\!R_{*}[Y])\) for all \(i\in\mathbb{N}\), \(i\geq 1\).
Proof.: Because of Remark 18, we only have to check that the extension of scalars \(g_{!}\) preserves the kernel of \(\partial\) (we know that it preserves all co-kernels).
It is straightforward to show that \({}^{X}Ker\ \partial_{|R_{i+1}[Y]\to R_{i}[Y]}\) is a submodule of \(Ker\ {}^{X}\partial_{|R_{i+1}[Y]\to R_{i}[Y]}\). Indeed, the \(p\otimes c\otimes q\) with \(p,q\in R[X]\) and \(c\in R_{i+1}[Y]\) with \(\partial(c)=0\) generate \({}^{X}Ker\ \partial\) and are also such that \(\partial(p\otimes c\otimes q)=p\otimes\partial(c)\otimes q=0\).
First, consider the case \(i=1\). In that case \(HM_{i}[Y]\) is the co-kernel of \(\partial:\ R_{2}[Y]\to R_{1}[Y]\) and we do not have to compute any kernel; the result of the lemma holds, trivially.
Then, let us consider \(x\in Ker\,{}^{X}\partial\subseteq{}^{X}R_{2}[Y]\). Then \(x=\sum\limits_{j\in J}\lambda_{j}p_{j}\otimes g_{j}\otimes q_{j}\) for some \(\lambda_{j}\in R\), and \(p_{j}=(c_{1},\ldots,c_{m})\), \(g_{j}=(e_{1},\ldots,e_{l})\), \(q_{j}=(d_{1},\ldots,d_{n})\), with \(d^{1}(c_{m})=d^{0}(e_{1})\) and \(d^{1}(e_{l})=d^{0}(d_{1})\) by Lemma 10. Let
us now write \(\partial(g_{j})\) as an \(R\)-linear combination of paths (0-cube chains) in \(Y\): \(\partial(g_{j})=\sum\limits_{k\in K}\mu_{k}d_{k}\) with \(d_{k}\in Ch^{=0}(Y)\) and \(\mu_{k}\in R\). Then \(\sum\limits_{j\in J,k\in K}\lambda_{j}\mu_{k}p_{j}\otimes d_{k}\otimes q_{j}\) is the canonical decomposition of \({}^{X}\partial(x)\) because of Lemma 11. Hence it is equal to 0 if and only if \(\sum\limits_{k\in K}\mu_{k}d_{k}=0=\partial(g_{j})\). Hence \(x\in{}^{X}Ker\ \partial\).
Now, consider \(x\in Ker^{X}\partial\subseteq{}^{X}R_{i+1}[Y]\), \(i\geq 2\). Then we can write
\[x=\sum\limits_{j\in J,k\in K,l\in L}\lambda_{j,k,l}p_{j}\otimes g_{k}\otimes q _{l}\]
for some \(\lambda_{j,k,l}\in R\), and \(p_{j}=(c_{1},\ldots,c_{m})\), distinct for distinct \(j\), \(g_{k}=(e_{1},\ldots,e_{o})\), distinct for distinct \(k\), \(q_{l}=(d_{1},\ldots,d_{n})\), distinct for distinct \(l\), and such that \(d^{1}(c_{m})=d^{0}(e_{1})\) and \(d^{1}(e_{o})=d^{0}(d_{1})\), by Lemma 10.
Let us write \(\partial(g_{k})\) as a \(R[Y]\)-linear combination of \(h_{k^{\prime}}\), \(k^{\prime}\in K^{\prime}\), where \(h_{k^{\prime}}\in G_{i-1}(Y)\), i.e. a \(R\)-linear combination of \(p^{\prime}_{j^{\prime}}\bullet h_{k^{\prime}}\bullet q^{\prime}_{l^{\prime}}\) with distinct \(h_{k^{\prime}}\) for distinct \(k^{\prime}\), distinct \(p^{\prime}_{j^{\prime}}\) for distinct \(j^{\prime}\), distinct \(q^{\prime}_{l^{\prime}}\) for distinct \(l^{\prime}\), and where \(p^{\prime}_{j^{\prime}}=(c^{\prime}_{1},\ldots,c^{\prime}_{m^{\prime}})\in Ch ^{=0}(Y)\), \(q^{\prime}_{l^{\prime}}=(d^{\prime}_{1},\ldots,d^{\prime}_{n^{\prime}})\in Ch ^{=0}(Y)\), \(h_{k^{\prime}}=(e^{\prime}_{1},\ldots,e^{\prime}_{o^{\prime}})\in G_{i-1}(Y)\) with \(d^{1}(c^{\prime}_{m^{\prime}})=d^{0}(e^{\prime}_{1})\) and \(d^{1}(e^{\prime}_{o^{\prime}})=d^{0}(d^{\prime}_{1})\): \(\partial(g_{k})=\sum\limits_{j^{\prime}\in J^{\prime},k^{\prime}\in K^{\prime},l^{\prime}\in L^{\prime}}\mu_{k,j^{\prime},k^{\prime},l^{\prime}}p^{\prime}_ {j^{\prime}}\bullet h_{k^{\prime}}\bullet q^{\prime}_{l^{\prime}}\). So
\[\sum\limits_{j\in J,k\in K,l\in L,j^{\prime}\in J^{\prime},k^{\prime}\in K^{ \prime},l^{\prime}\in L^{\prime}}\lambda_{j,k,l}\mu_{k,j^{\prime},k^{\prime}, l^{\prime}}p_{j}\otimes(p^{\prime}_{j^{\prime}}\bullet h_{k^{\prime}}\bullet q ^{\prime}_{l^{\prime}})\otimes q_{l}=\]
\[\sum\limits_{j\in J,l\in L,j^{\prime}\in J^{\prime},k^{\prime}\in K^{\prime},l\in L^{\prime}}(\sum\limits_{k\in K}\lambda_{j,k,l}\mu_{k,j^{\prime},k^{ \prime},l^{\prime}})p_{j}p^{\prime}_{j^{\prime}}\otimes h_{k^{\prime}}\otimes q ^{\prime}_{l^{\prime}}q_{l} \tag{2}\]
is the canonical decomposition, in the sense of Lemma 10, of the element \({}^{X}\partial(x)\).
As \({}^{X}\partial(x)=0\), this means that all coefficients of \(p_{j}p^{\prime}_{j^{\prime}}\otimes h_{k^{\prime}}\otimes q^{\prime}_{l^{ \prime}}q_{l}\) are zero, i.e., we have the following equality in \(R\):
\[\forall j\in J,\ l\in L,\ j^{\prime}\in J^{\prime},\ k^{\prime}\in K^{\prime},\ l^{\prime}\in L^{\prime},\ \sum\limits_{k\in K}\lambda_{j,k,l}\mu_{k,j^{\prime},k^{\prime},l^{\prime}}=0\]
Now, consider for all \(j\in J\) and \(l\in L\),
\[y_{j,l}=\sum\limits_{k\in K}\lambda_{j,k,l}g_{k}\]
We have:
\[\begin{array}{rcl}\partial(y_{j,l})&=&\sum\limits_{k\in K}\lambda_{j,k,l}\partial(g_{k})\\ &=&\sum\limits_{k\in K,j^{\prime}\in J^{\prime},k^{\prime}\in K^{\prime},l^{\prime}\in L^{\prime}}\lambda_{j,k,l}\mu_{k,j^{\prime},k^{\prime},l^{\prime}}p^{\prime}_{j^{\prime}}\otimes h_{k^{\prime}}\otimes q^{\prime}_{l^{\prime}}\\ &=&\sum\limits_{j^{\prime}\in J^{\prime},k^{\prime}\in K^{\prime},l^{\prime}\in L^{\prime}}(\sum\limits_{k\in K}\lambda_{j,k,l}\mu_{k,j^{\prime},k^{\prime},l^{\prime}})p^{\prime}_{j^{\prime}}\otimes h_{k^{\prime}}\otimes q^{\prime}_{l^{\prime}}\\ &=&0\end{array}\]
Also,
\[\begin{array}{rcl}x&=&\sum\limits_{j\in J,k\in K,l\in L}\lambda_{j,k,l}p_{j} \otimes g_{k}\otimes q_{l}\\ &=&\sum\limits_{j\in J,l\in L}\big{(}\sum\limits_{k\in K}\lambda_{j,k,l}p_{j} \otimes g_{k}\otimes q_{l}\big{)}\\ &=&\sum\limits_{j\in J,l\in L}p_{j}\otimes(\sum\limits_{k\in K}\lambda_{j,k,l}g _{k})\otimes q_{l}\\ &=&\sum\limits_{j\in J,l\in L}p_{j}\otimes y_{j,l}\otimes q_{l}\end{array}\]
Hence \({}^{X}\partial(x)=0\) implies \(x\) can be written as a sum of \(p_{j}\otimes y_{j,l}\otimes q_{l}\), with \(y_{j,l}\in Ker\ \partial\), \(p_{j}\in R[X]\) and \(q_{l}\in R[X]\).
Hence \(x\in{}^{X}Ker\ \partial\) and the lemma is proved.
**Remark 19**.: Indeed, the proof above is typical of the flat nature of the (free) modules of \(i\)-cube chains we consider.
There are other changes of coefficients of interest in our case; we only briefly describe one of them, since it will not be used in the rest of the paper:
**Lemma 19**.: Let \(Y\) be a sub-precubical set of \(X\). We have:
* \(R[Y]\) is isomorphic to the algebra \(R[X]\) quotiented by the two-sided ideal \(I_{Y}^{X}=R[X](X_{\leq 1}\backslash Y_{\leq 1}\cup X_{0}\backslash Y_{0})R[X]\)
* The canonical map of algebras \(h:\ R[X]\to R[X]/I_{Y}^{X}\) induces, by restriction of scalars, a functor \(h^{*}:{}_{R[Y]}Mod_{R[Y]}\to{}_{R[X]}Mod_{R[X]}\)
* Calling \(R_{*}^{X}[Y]\) the chain of \(R[X]\)-bimodules which is, in dimension \(i\), equal to \(h^{*}(R_{i}[Y])\), its homology \(H_{i}(R_{*}^{X}[Y])\) is isomorphic to the \(R[X]\)-bimodule \(HM_{i}^{X}[Y]=h^{*}(HM_{i}[Y])\).
Proof.: Consider the following map \(R[Y]\to R[X]/I_{Y}^{X}\), which sends a path \(q\) in \(Y_{\leq 1}\) to the class \([q]\) of \(q\) in \(R[X]/I_{Y}^{X}\). This is well defined since a path in \(Y_{\leq 1}\) is in particular a path in \(X_{\leq 1}\). Now consider the map from \(R[X]/I_{Y}^{X}\) to \(R[Y]\) which sends the class \([p]\) of a path \(p\) in \(X_{\leq 1}\) modulo \(I_{Y}^{X}\) to \(0\) if \(p\) is a path in \(X_{\leq 1}\) that is not a path in \(Y_{\leq 1}\), and to \(p\) otherwise. In the first case, \(p\) is either a constant path \(p=e_{a}\) with \(a\in X_{0}\backslash Y_{0}\) or a path \(p=(p_{1},\ldots,p_{m})\) with at least one \(p_{i}\in X_{1}\backslash Y_{1}\): in both cases \(p\) is zero modulo \(I_{Y}^{X}\). These two maps are inverse to one another, which proves the isomorphism between \(R[Y]\) and \(R[X]/I_{Y}^{X}\).
The second statement of the lemma comes from the definition of the restriction of scalars functor, Section 3. The last statement is proven in the same way as for Lemma 17.
Indeed, the action on \(c\in HM_{i}^{X}[Y]\) of paths of \(X_{\leq 1}\) which are not paths in \(Y_{\leq 1}\) is \(0\).
**Remark 20**.: For intervals \(\mathcal{I}\) in \(F_{X_{\leq 1}}\) as in Section 6, the categorical presentation of \(HM_{i}^{X}[\mathcal{I}]\) is the restriction \(\mathcal{F}_{HM_{i}[\mathcal{I}]_{|\mathcal{I}}}\).
As \(X_{\leq 1}\) is the direct limit of the diagram made of the inclusion of its (not necessarily maximal) paths (or intervals), \(R[X]\) is the direct limit of \(R[\mathcal{I}]\) over
the diagram of path algebra inclusions of paths into the quiver \(X_{\leq 1}\), by Lemma 2.5 of [20]. We believe that \(HM_{i}[X]\) is also the colimit of all \(HM_{i}^{X}[\mathcal{I}]\), the direct limit of the unidimensional homologies on all paths of \(X_{\leq 1}\), as proved, for natural homology, in [9].
**Remark 21**.: In general, we do not have inclusions between \(R_{i}^{X}[Y]\) and \(R_{i}[X]\). Indeed, if \(Y_{\leq 1}\) is strictly included in \(X_{\leq 1}\) then, e.g. for \(x\in X_{1}\backslash Y_{1}\), we will have \(x\bullet m=0\) for all \(m\in R_{i}^{X}[Y]\), whereas \(x\bullet m\) may not be zero if we see \(m\) as an element of \(R_{i}[X]\) through the inclusion of \(Y\) in \(X\).
**Kunneth formula.** The Kunneth formula is an important tool in algebraic topology [30], and it is natural to consider it now, since we examined in the previous section the effect of simple changes of coefficients.
Such formulas have only relatively recently been considered in the context of persistent homology; see e.g. [19], which links some form of product of filtrations of spaces with a tensor product of the respective persistence modules, at least in the context of unidimensional persistence.
Here we consider instead the problem of linking the module homology of a product space, with the tensor product of the respective module homologies. This is a deeply "multidimensional" persistence type of result, since for this, we need to consider the product of the path algebras, i.e. some form of tensor product of the underlying quivers, as a "multidimensional" persistence parameter space.
**Definition 34**.: Let \(X\) and \(Y\) be two precubical sets. Their tensor product is the precubical set \(X\otimes Y\) with:
* \((X\otimes Y)_{n}=\coprod\limits_{i+j=n}X_{i}\times Y_{j}\)
* for \((c,d)\in X_{i}\times Y_{j}\,\text{with}\,\,i+j=n\), \(d_{k}^{\epsilon}(c,d)=\left\{\begin{array}{ll}(d_{k}^{\epsilon}(c),d)&\text {if}\,\,0\leq k\leq i-1\\ (c,d_{k-i}^{\epsilon}(d))&\text{if}\,\,i\leq k\leq n-1\end{array}\right.\)
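To make the indexing in the face maps concrete, here is a minimal Python sketch instantiating Definition 34 on the directed interval \(\overrightarrow{I}\) (one 1-cube \(a\) with faces \(d^{0}(a)=0\) and \(d^{1}(a)=1\)); the encoding of cubes as (name, dimension) pairs is an illustrative assumption, not notation from the paper.

```python
# Face maps of the directed interval I: its only 1-cube is "a",
# with d^0(a) = "0" and d^1(a) = "1".
def dI(c, k, eps):
    assert c == "a" and k == 0
    return "1" if eps == 1 else "0"

# k-th eps-face of a product cube ((c, i), (d, j)), per Definition 34:
# the first i indices hit the X factor, the remaining j hit the Y factor.
def face(dX, dY, cube, k, eps):
    (c, i), (d, j) = cube
    if 0 <= k <= i - 1:
        return ((dX(c, k, eps), i - 1), (d, j))
    return ((c, i), (dY(d, k - i, eps), j - 1))

square = (("a", 1), ("a", 1))   # the unique 2-cube of I (x) I
for k in (0, 1):
    for eps in (0, 1):
        print(k, eps, face(dI, dI, square, k, eps))
```

Running this lists the four faces of the square: two faces lower the first factor to a vertex, the other two lower the second factor, as expected.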
This definition actually extends to the tensor product of CW-complexes, see e.g. Chapter VI of [30], and in particular Theorem 2.1. For such cell complexes, we use the same notations as in the definition above. The chain complex of \(R\)-modules freely generated by the tensor product of two cell complexes is then the (chain) tensor product of the chain complexes freely generated by the factors, as defined in Definition 2.1 of [30] and recapped below for chain complexes of \(R\)-modules, then extended to chain complexes of \(A\)-bimodules (\(A\) being an algebra) in Definition 35:
**Definition 35**.: Let \(C=(C_{*},\partial_{C})\) and \(D=(D_{*},\partial_{D})\) be two chain complexes of \(R\)-modules. Their tensor product \(C\otimes D\) is defined as the chain complex with:
* \((C\otimes D)_{n}=\mathop{\oplus}\limits_{i+j=n}C_{i}\otimes D_{j}\)
* \(\partial_{C\otimes D}^{n}:\ (C\otimes D)_{n}\to(C\otimes D)_{n-1}\) is such that \(\partial_{C\otimes D}^{n}(c\otimes d)=\partial_{C}^{i}(c)\otimes d+(-1)^{i}c \otimes\partial_{D}^{j}(d)\) where \(c\in C_{i}\) and \(d\in D_{j}\) with \(i+j=n\)
For chain complexes of bimodules over algebras, each \(R\)-module \((C\otimes D)_{n}\) is given the structure of a \(A\otimes B\)-bimodule with (as seen in Section 3):
\[(a\otimes b)\bullet c\otimes d\bullet(a^{\prime}\otimes b^{\prime})=(a\bullet c \bullet a^{\prime})\otimes(b\bullet d\bullet b^{\prime})\]
for \(a\), \(a^{\prime}\) in \(A\) and \(b\), \(b^{\prime}\) in \(B\).
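The sign \((-1)^{i}\) is precisely what makes \(\partial_{C\otimes D}\circ\partial_{C\otimes D}=0\). A small numpy check, for two hypothetical complexes concentrated in degrees 0 and 1 (the boundary matrices below are arbitrary illustrative choices):

```python
import numpy as np

dC1 = np.array([[1.0], [-1.0]])             # C_1 -> C_0
dD1 = np.array([[1.0, 1.0], [-1.0, -1.0]])  # D_1 -> D_0
nC0, nC1 = dC1.shape
nD0, nD1 = dD1.shape

# (C (x) D)_2 = C_1 (x) D_1 and (C (x) D)_1 = C_1 (x) D_0 (+) C_0 (x) D_1.
d2 = np.vstack([
    -np.kron(np.eye(nC1), dD1),  # the term -c (x) dD(d), in C_1 (x) D_0
    np.kron(dC1, np.eye(nD1)),   # the term dC(c) (x) d, in C_0 (x) D_1
])
d1 = np.hstack([
    np.kron(dC1, np.eye(nD0)),   # boundary on the C_1 (x) D_0 block
    np.kron(np.eye(nC0), dD1),   # boundary on the C_0 (x) D_1 block
])
assert np.allclose(d1 @ d2, 0)   # -dC (x) dD + dC (x) dD = 0
```

By the mixed-product property of Kronecker products, the two contributions to \(d_{1}d_{2}\) are \(\pm(\partial_{C}\otimes\partial_{D})\) and cancel exactly.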
Consider now two precubical sets \(X\in Cub\) and \(Y\in Cub\), and their directed geometric realization \(|X|\) and \(|Y|\). Let \(x\) and \(x^{\prime}\) (resp. \(y\) and \(y^{\prime}\)) be two points in \(|X|\) (resp. in \(|Y|\)) and \(p\) (resp. \(q\)) be a dipath from \(x\) to \(x^{\prime}\) (resp. from \(y\) to \(y^{\prime}\)) in \(|X|\) (resp. in \(|Y|\)). As well known [30], the geometric realization \(|X\otimes Y|\) of \(X\otimes Y\) is isomorphic to the cartesian product of the geometric realizations. This easily extends to the directed geometric realization \(|X|\times|Y|\).
Consider the \(i\)th natural homology \(HN_{i}(|X\otimes Y|)(p,q)\) of \(|X\otimes Y|=|X|\times|Y|\) at \((p,q)\). It is equal to the \((i-1)\)th homology of the path space (or equivalently the trace space) from \((x,y)\) to \((x^{\prime},y^{\prime})\) in \(|X|\times|Y|\).
Dipaths in \(|X|\times|Y|\) are directed maps from \(\overrightarrow{I}\) to \(|X|\times|Y|\), i.e. are pairs of directed maps from \(\overrightarrow{I}\) to \(|X|\) and from \(\overrightarrow{I}\) to \(|Y|\). This means dipaths in \(|X\otimes Y|\) can be identified with pairs of dipaths in \(d|X|\) and in \(d|Y|\), and dipaths from \((x,y)\) to \((x^{\prime},y^{\prime})\) in \(d|X\otimes Y|\) are pairs of dipaths, from \(x\) to \(x^{\prime}\) and from \(y\) to \(y^{\prime}\), i.e. are the elements of \(d|X|(x,x^{\prime})\times d|Y|(y,y^{\prime})\). Thus \(H_{i-1}(d|X\otimes Y|((x,y),(x^{\prime},y^{\prime})))\) is isomorphic to \(H_{i-1}(d|X|(x,x^{\prime})\times d|Y|(y,y^{\prime}))\), which, by the Kunneth formula [30], for field coefficients, is isomorphic to \((H_{*}(d|X|(x,x^{\prime}))\otimes H_{*}(d|Y|(y,y^{\prime})))_{i-1}\).
Therefore we proved that:
\[HN_{*}(|X\times Y|)(p,q)=HN_{*}(|X|)(p)\boxtimes HN_{*}(|Y|)(q)\]
where \(\boxtimes\) is a shifted version of the tensor product of chain complexes of Definition 35, namely \((C_{*}\boxtimes D_{*})_{n}=(C_{*}\otimes D_{*})_{n+1}\). This is due to the fact that our homology modules have shifted indices with respect to the (singular) homology of the path spaces, as defined in Definition 22.
Suppose now that \(X\) and \(Y\) are cubical complexes, as in Definition 31. It is straightforward to see that \(X\otimes Y\) is a cubical complex as well.
By Lemma 15, we know that \(e_{(x,y)}\bullet HM_{i}[X\otimes Y]\bullet e_{(x^{\prime},y^{\prime})}\) is isomorphic to \(HN_{i}(|X\otimes Y|)(p,q)\), for any \(p\) (resp. \(q\)) dipath from \(x\) to \(x^{\prime}\) in \(|X|\) (resp. from \(y\) to \(y^{\prime}\) in \(|Y|\)), which is isomorphic to \((HN_{*}(|X|)(p)\boxtimes HN_{*}(|Y|)(q))_{i}\) by the above.
By Lemma 15 again, we know that \(HN_{i}(|X|)(p)\) (resp. \(HN_{i}(|Y|)(q)\)) is isomorphic to \(e_{x}\bullet HM_{i}[X]\bullet e_{x^{\prime}}\) (resp. \(e_{y}\bullet HM_{i}[Y]\bullet e_{y^{\prime}}\)). As this is true for all \(x\), \(x^{\prime}\), \(y\), \(y^{\prime}\), and as, as a \(R\)-module, \(HM_{i}[X]\) (resp. \(HM_{i}[Y]\)) is the coproduct of all the \(e_{x}\bullet HM_{i}[X]\bullet e_{x^{\prime}}\) (resp. of all the \(e_{y}\bullet HM_{i}[Y]\bullet e_{y^{\prime}}\)), we get that, as a chain of \(R\)-modules, \(HM_{*}[X\otimes Y]\) is isomorphic to \(HM_{*}[X]\boxtimes HM_{*}[Y]\). The tensor product in the latter formula is the chain complex tensor product of
Definition 35, where \(HM_{*}[X]\) and \(HM_{*}[Y]\) are considered as chain complexes with zero differential.
Now this isomorphism is actually an isomorphism of \(R[X\otimes Y]\)-bimodules. First, \(HM_{*}[X]\boxtimes HM_{*}[Y]\) defines not only a chain of \(R\)-modules but a chain of \(R[X]\otimes R[Y]\)-bimodules, since \(HM_{*}[X]\) (resp. \(HM_{*}[Y]\)) is a chain of \(R[X]\)-bimodules (resp. \(R[Y]\)-bimodules), using Definition 35.
The algebra \(R[X]\otimes R[Y]\) is not isomorphic to \(R[X\otimes Y]\) in general, but is rather a (generally strict) sub-algebra of \(R[X\otimes Y]\). Indeed, generators of \(R[X]\otimes R[Y]\) are \(p\otimes q\), with \(p=(p_{1},\ldots,p_{k})_{x,x^{\prime}}\) path from \(x\) to \(x^{\prime}\) and \(q=(q_{1},\ldots,q_{l})_{y,y^{\prime}}\) from \(y\) to \(y^{\prime}\), and generators of \(R[X\otimes Y]\) are shuffles of such paths, i.e. are of the form \((r_{1},\ldots,r_{k+l})_{(x,y),(x^{\prime},y^{\prime})}\) with \(r_{i}\) equal to some \(p_{\alpha(i)}\otimes e_{b}\) or \(e_{a}\otimes q_{\beta(i)}\) where \(\alpha\) (resp. \(\beta\)) is an increasing map from \(\{1,\ldots,k+l\}\) to \(\{1,\ldots,k\}\) (resp. \(\{1,\ldots,l\}\)) and \(b\) is the boundary of some \(q_{j}\), \(a\), of some \(p_{i}\). This is made more explicit in the lemma below:
**Lemma 20**.: Let \(X\) and \(Y\) be two precubical sets. The path algebra \(R[X\otimes Y]\) is generated by elements of the form \((u,e_{b})\) and \((e_{a},v)\) where \(u\in X_{1}\), \(b\in Y_{0}\), \(a\in X_{0}\) and \(v\in Y_{1}\), with the following multiplication:
\[\begin{array}{rcl}(u,e_{b})(e_{a},v)&=&0,\,\mbox{if $d^{1}(u)\neq a$ or $d^{0}(v)\neq b$}\\ (e_{a},v)(u,e_{b})&=&0,\,\mbox{if $d^{0}(u)\neq a$ or $d^{1}(v)\neq b$}\\ (e_{a},v)(e_{c},z)&=&\left\{\begin{array}{ll}(e_{a},vz)&\mbox{if $a=c$}\\ 0&\mbox{otherwise}\end{array}\right.\\ (u,e_{b})(w,e_{d})&=&\left\{\begin{array}{ll}(uw,e_{b})&\mbox{if $b=d$}\\ 0&\mbox{otherwise}\end{array}\right.\end{array}\]
Proof.: This comes directly from the definition of the product of two precubical sets, Definition 34. Elements of dimension one in \(X\otimes Y\) are of the form \((a,d)\) or \((c,b)\) with \(a\in X_{0}\), \(b\in Y_{0}\), \(c\in X_{1}\) and \(d\in Y_{1}\).
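As a sanity check of these relations, here is a small Python sketch; the tuple encoding of generators and the endpoint callables `d0`, `d1` are illustrative assumptions.

```python
# ("h", u, b) stands for the generator (u, e_b) with u in X_1, b in Y_0;
# ("v", a, v) stands for (e_a, v) with a in X_0, v in Y_1.  d0 and d1
# return the start and end vertices of a 1-cube; 0 encodes the zero of
# the algebra, and a tuple of generators encodes their product path.
def mult(g1, g2, d0, d1):
    t1, x1, y1 = g1
    t2, x2, y2 = g2
    if (t1, t2) == ("h", "v"):                      # (u, e_b)(e_a, v)
        return (g1, g2) if d1(x1) == x2 and d0(y2) == y1 else 0
    if (t1, t2) == ("v", "h"):                      # (e_a, v)(u, e_b)
        return (g1, g2) if d0(x2) == x1 and d1(y1) == y2 else 0
    if (t1, t2) == ("v", "v"):                      # (e_a, v)(e_c, z)
        return ("v", x1, (y1, y2)) if x1 == x2 else 0
    return ("h", (x1, x2), y1) if y1 == y2 else 0   # (u, e_b)(w, e_d)
```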
Overall, this proves that \({}_{R[X]\otimes R[Y]}HM_{*}[X\otimes Y]=HM_{*}[X]\boxtimes HM_{*}[Y]\) (the change of coefficients being applied on each element of the chain complex of bimodules), but we can in fact prove that there is a natural action of \(R[X\otimes Y]\) on \(HM_{*}[X]\boxtimes HM_{*}[Y]\) which makes it isomorphic, as a \(R[X\otimes Y]\)-bimodule, to \(HM_{*}[X\otimes Y]\).
Indeed, as we saw, all shuffles of \(p\) and \(q\) go from \((x,y)\) to \((x^{\prime},y^{\prime})\), as does \(p\otimes q\). But the action of any of these shuffles (on the left, as well as on the right) on any element of \(HM_{i}[X\otimes Y]\) depends only on \((x,y)\) and \((x^{\prime},y^{\prime})\), since all shuffles from \((x,y)\) to \((x^{\prime},y^{\prime})\) are dihomotopic in \(X\otimes Y\). Hence we can make \(HM_{*}[X]\boxtimes HM_{*}[Y]\) into a \(R[X\otimes Y]\)-bimodule, isomorphic to \(HM_{*}[X\otimes Y]\), by setting the left (resp. right) action of \((r_{1},\ldots,r_{k+l})\) on \(x\otimes y\) to be equal to \((p\bullet x)\otimes(q\bullet y)\) (resp. \((x\bullet p)\otimes(y\bullet q)\)).
In summary, we proved a version of Kunneth's formula in the field case, and for cubical complexes:
**Theorem 4**.: Let \(X\) and \(Y\) be cubical complexes and \(R\) a field. Then
\[HM_{*}[X\otimes Y]=HM_{*}[X]\boxtimes HM_{*}[Y]\]
with the action of shuffles of paths \(p\) of \(X\) with \(q\) of \(Y\) on \(HM_{*}[X]\boxtimes HM_{*}[Y]\) being the action of \(p\) on the first component and the action of \(q\) on the second component.
**Remark 22**.: Indeed, the \(R[X\otimes Y]\)-bimodule structure that we defined on the \(R[X]\otimes R[Y]\)-bimodule \((HM_{*}[X]\boxtimes HM_{*}[Y])_{n}\) is derived from the algebra map:
\[h:\quad R[X\otimes Y]\ \ \rightarrow\ \ R[X]\otimes R[Y]\]
which to any generator \((u,v)\) of \(R[X\otimes Y]\) (see Lemma 20) associates \((u,v)\), this time ruled by the algebra multiplication of \(R[X]\otimes R[Y]\), which is \((u,v)(u^{\prime},v^{\prime})=(uu^{\prime},vv^{\prime})\).
It is easy to prove that the structure of \(R[X\otimes Y]\)-bimodule we put on \((HM_{*}[X]\boxtimes HM_{*}[Y])_{n}\) in Theorem 4 is the restriction of scalars \(h^{*}(HM_{*}[X]\boxtimes HM_{*}[Y])_{n}\).
**Remark 23**.: Although in order to prove the theorem, we went through the geometric realization, the isomorphism is simple to describe, directly in terms of cubical complexes.
Let \(i\), \(j\) and \(n\) be integers such that \(n=i+j-1\). There is a natural map:
\[\begin{array}{lll}HM_{i}[X]\times HM_{j}[Y]&\rightarrow&HM_{n}[X\otimes Y] \\ f:\quad([(c_{1},\ldots,c_{k})_{x,x^{\prime}}],[(d_{1},\ldots,d_{l})_{y,y^{ \prime}}])&\rightarrow&[(c_{1}\otimes e_{y},\ldots,c_{k}\otimes e_{y},\\ &&e_{x^{\prime}}\otimes d_{1},\ldots,e_{x^{\prime}}\otimes d_{l})_{(x,y),(x^{ \prime},y^{\prime})}]\end{array}\]
Then, obviously, as \((c_{1}\otimes e_{y},\ldots,c_{k}\otimes e_{y})\) is an \((i-1)\)-chain and \((e_{x^{\prime}}\otimes d_{1},\ldots,e_{x^{\prime}}\otimes d_{l})\) is a \((j-1)\)-chain, \((c_{1}\otimes e_{y},\ldots,c_{k}\otimes e_{y},e_{x^{\prime}}\otimes d_{1}, \ldots,e_{x^{\prime}}\otimes d_{l})\) is a \((i+j-2)\)-cube chain by Remark 3, which makes the definition correct given the convention for tensor products of bimodules \(\boxtimes\) we used.
We have to check that the definition does not depend on the class chosen for \(c\) and \(d\). Indeed, if \(c^{\prime}=c+\partial\gamma\) and \(d^{\prime}=d+\partial\zeta\), where \(\gamma=(\gamma_{1},\ldots,\gamma_{k})\) and \(\zeta=(\zeta_{1},\ldots,\zeta_{l})\), then \((c^{\prime}_{1}\otimes e_{y},\ldots,c^{\prime}_{k}\otimes e_{y},e_{x^{\prime}} \otimes d^{\prime}_{1},\ldots,e_{x^{\prime}}\otimes d^{\prime}_{l})=(c_{1} \otimes e_{y},\ldots,c_{k}\otimes e_{y},e_{x^{\prime}}\otimes d_{1},\ldots,e_{x ^{\prime}}\otimes d_{l})+\partial(\gamma_{1}\otimes e_{y},\ldots,\gamma_{k} \otimes e_{y},e_{x^{\prime}}\otimes\zeta_{1},\ldots,e_{x^{\prime}}\otimes \zeta_{l})\) since, by Definition 34, all boundaries of \(e_{a}\otimes u\) or \(u\otimes e_{a}\) are \(e_{a}\) tensor (on the left or on the right) \(u\) (the boundaries of any \(0\)-cube chain, including the empty one, are zero).
Showing that the map \(f\) is injective is simple. \(f([c],[d])=f([c^{\prime}],[d^{\prime}])\) implies \((c^{\prime}_{1}\otimes e_{y},\ldots,c^{\prime}_{k}\otimes e_{y},e_{x^{\prime}}\otimes d^{\prime}_{1},\ldots,e_{x^{\prime}}\otimes d^{\prime}_{l})=(c_{1}\otimes e_{y},\ldots,c_{k}\otimes e_{y},e_{x^{\prime}}\otimes d_{1},\ldots,e_{x^{\prime}}\otimes d_{l})+\partial(\gamma_{1}\otimes e_{y},\ldots,\gamma_{k}\otimes e_{y},e_{x^{\prime}}\otimes\zeta_{1},\ldots,e_{x^{\prime}}\otimes\zeta_{l})\), which in turn implies \(c^{\prime}=c+\partial\gamma\) and \(d^{\prime}=d+\partial\zeta\), where \(\gamma=(\gamma_{1},\ldots,\gamma_{k})\) and \(\zeta=(\zeta_{1},\ldots,\zeta_{l})\). Hence \([c^{\prime}]=[c]\) and \([d^{\prime}]=[d]\).
Surjectivity of \(f\) is harder to prove, though, without resorting to the geometric realization and the classical Kunneth theorem, as we did.
We proved the Kunneth formula for cubical complexes, but we believe this should be true for all precubical sets that we are considering here, that is, finite precubical sets with proper length covering, at least.
**Example 13**.: Consider again Example 4, \(\overrightarrow{C}\):
[Figure: the precubical set \(\overrightarrow{C}\) of Example 4.]
We get the following path algebra:
\[\begin{pmatrix}R&0&0&0\\ R&R&0&0\\ R&0&R&0\\ R^{2}&R&R&R\end{pmatrix}\]
and indeed \(HM_{1}[\overrightarrow{C}]\) is the bimodule \(R[\overrightarrow{C}]\) itself, with the same matrix representation as the one for \(R[\overrightarrow{C}]\) above, whereas \(HM_{i}[\overrightarrow{C}]\) is \(0\) for all \(i\geq 2\).
The tensor product \(\overrightarrow{C}\otimes\overrightarrow{C}\) is, up to classical homotopy, a torus.
Now, \(HM_{1}[\overrightarrow{C}\otimes\overrightarrow{C}]\) is \((HM_{*}[\overrightarrow{C}]\boxtimes HM_{*}[\overrightarrow{C}])_{1}\), that is, \(HM_{1}[\overrightarrow{C}]\otimes HM_{1}[\overrightarrow{C}]\), which reads, using the block-wise notation of Remark 9:
\[\begin{pmatrix}R\otimes B&0&0&0\\ R\otimes B&R\otimes B&0&0\\ R\otimes B&0&R\otimes B&0\\ R^{2}\otimes B&R\otimes B&R\otimes B&R\otimes B\end{pmatrix},\qquad\text{where }B=\begin{pmatrix}R&0&0&0\\ R&R&0&0\\ R&0&R&0\\ R^{2}&R&R&R\end{pmatrix},\]
which, on the base \(4\otimes 4^{\prime}\), \(4\otimes 3^{\prime}\), \(4\otimes 2^{\prime}\), \(4\otimes 1^{\prime}\), \(3\otimes 4^{\prime}\), \(3\otimes 3^{\prime}\), \(3\otimes 2^{\prime}\), \(3\otimes 1^{\prime}\), \(2\otimes 4^{\prime}\), \(2\otimes 3^{\prime}\), \(2\otimes 2^{\prime}\), \(2\otimes 1^{\prime}\), \(1\otimes 4^{\prime}\), \(1\otimes 3^{\prime}\), \(1\otimes 2^{\prime}\), \(1\otimes 1^{\prime}\) (where the primes decorate the vertex names of the second copy of \(\overrightarrow{C}\)), in that order, is:
\[\left(\begin{array}{cccccccccccccccc}R&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ R&R&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ R&0&R&0&0&0&0&0&0&0&0&0&0&0&0&0\\ R^{2}&R&R&R&0&0&0&0&0&0&0&0&0&0&0&0\\ R&0&0&0&R&0&0&0&0&0&0&0&0&0&0&0\\ R&R&0&0&R&R&0&0&0&0&0&0&0&0&0&0\\ R&0&R&0&R&0&R&0&0&0&0&0&0&0&0&0\\ R^{2}&R&R&R&R^{2}&R&R&R&0&0&0&0&0&0&0&0\\ R&0&0&0&0&0&0&0&R&0&0&0&0&0&0&0\\ R&R&0&0&0&0&0&0&R&R&0&0&0&0&0&0\\ R&0&R&0&0&0&0&0&R&0&R&0&0&0&0&0\\ R^{2}&R&R&R&0&0&0&0&R^{2}&R&R&R&0&0&0&0\\ R^{2}&0&0&0&R&0&0&0&R&0&0&0&R&0&0&0\\ R^{2}&R^{2}&0&0&R&R&0&0&R&R&0&0&R&R&0&0\\ R^{2}&0&R^{2}&0&R&0&R&0&R&0&R&0&R&0&R&0\\ R^{4}&R^{2}&R^{2}&R^{2}&R^{2}&R&R&R&R^{2}&R&R&R&R^{2}&R&R&R\end{array}\right)\]
In particular, there are four dipaths modulo homology from \(4\otimes 4^{\prime}\) to \(1\otimes 1^{\prime}\). Indeed, there are two connected components for the dipath space from \(4\) to \(1\) in \(\overrightarrow{C}\) and two connected components for the dipath space from \(4^{\prime}\) to \(1^{\prime}\) in the second copy of \(\overrightarrow{C}\), hence \(2\times 2=4\) components for the product dipath space.
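The same count can be read off mechanically from the dimension table; a numpy sketch (dimensions only, forgetting the module structure, with the base ordering used above):

```python
import numpy as np

# R-dimensions of the entries of HM_1[C], over the base (4, 3, 2, 1).
b = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [2, 1, 1, 1]])
bb = np.kron(b, b)   # dimension table of HM_1[C] (x) HM_1[C], 16 x 16
print(bb[15, 0])     # 4 = 2 x 2 dipath classes from 4(x)4' to 1(x)1'
```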
Similarly, by the Kunneth theorem, \(HM_{2}[\overrightarrow{C}\otimes\overrightarrow{C}]=HM_{2}[\overrightarrow{C}]\otimes HM_{1}[\overrightarrow{C}]\oplus HM_{1}[\overrightarrow{C}]\otimes HM_{2}[\overrightarrow{C}]=0\).
One may wonder why we do not have a non-trivial second homology module, since classically \(H_{2}\) of a torus is \(R\). This is due to the fact that the directed homology theory we have been constructing is highly non-abelian. We exemplify this on the product of two copies of the directed graph of Example 3, \(\overrightarrow{S}^{1}\otimes\overrightarrow{S}^{1}\). We recall that \(\overrightarrow{S}^{1}\) consists of two vertices \(1\) and \(2\) and two 1-cubes \(a\) and \(b\), both from \(1\) to \(2\).
The tensor product \(\overrightarrow{S}^{1}\otimes\overrightarrow{S}^{1}\) consists of the four vertices \(i\otimes j^{\prime}\) (\(i,j\in\{1,2\}\)), the eight 1-cubes \(a\otimes 1^{\prime}\), \(a\otimes 2^{\prime}\), \(b\otimes 1^{\prime}\), \(b\otimes 2^{\prime}\), \(1\otimes a^{\prime}\), \(1\otimes b^{\prime}\), \(2\otimes a^{\prime}\), \(2\otimes b^{\prime}\), plus the four 2-cells \(a\otimes a^{\prime}\), \(a\otimes b^{\prime}\), \(b\otimes a^{\prime}\) and \(b\otimes b^{\prime}\), which generate the \(R\)-module of 1-cube paths from \(1\otimes 1^{\prime}\) to \(2\otimes 2^{\prime}\). As \(\overrightarrow{S}^{1}\) is not a cubical complex, we have to compute the homology modules by hand, without resorting to our Kunneth formula, which we have so far proved only for cubical complexes.
We compute their boundaries:
* \(\partial(a\otimes a^{\prime})=(1\otimes a^{\prime},a\otimes 2^{\prime})-(a \otimes 1^{\prime},2\otimes a^{\prime})\)
* \(\partial(a\otimes b^{\prime})=(1\otimes b^{\prime},a\otimes 2^{\prime})-(a \otimes 1^{\prime},2\otimes b^{\prime})\)
* \(\partial(b\otimes a^{\prime})=(1\otimes a^{\prime},b\otimes 2^{\prime})-(b \otimes 1^{\prime},2\otimes a^{\prime})\)
* \(\partial(b\otimes b^{\prime})=(1\otimes b^{\prime},b\otimes 2^{\prime})-(b \otimes 1^{\prime},2\otimes b^{\prime})\)
And there is no linear combination of these four 2-cells whose boundary sums up to zero, hence \(Ker\ \partial_{1}=0\) and \(HM_{2}[\overrightarrow{S}^{1}\otimes\overrightarrow{S}^{1}]=0\). Abelianizing the cube paths to interpret them in the \(R\)-module generated by 1-cells, as we would do in ordinary homology, we would get:
* \(\partial(a\otimes a^{\prime})=1\otimes a^{\prime}+a\otimes 2^{\prime}-a \otimes 1^{\prime}-2\otimes a^{\prime}\)
* \(\partial(a\otimes b^{\prime})=1\otimes b^{\prime}+a\otimes 2^{\prime}-a \otimes 1^{\prime}-2\otimes b^{\prime}\)
* \(\partial(b\otimes a^{\prime})=1\otimes a^{\prime}+b\otimes 2^{\prime}-b \otimes 1^{\prime}-2\otimes a^{\prime}\)
* \(\partial(b\otimes b^{\prime})=1\otimes b^{\prime}+b\otimes 2^{\prime}-b \otimes 1^{\prime}-2\otimes b^{\prime}\)
And indeed, \(\partial(a\otimes a^{\prime}-a\otimes b^{\prime}-b\otimes a^{\prime}+b\otimes b ^{\prime})=0\) and \(a\otimes a^{\prime}-a\otimes b^{\prime}-b\otimes a^{\prime}+b\otimes b^{\prime}\) would be the generator of \(H_{2}(\overrightarrow{S}^{1}\otimes\overrightarrow{S}^{1})\).
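This abelianized kernel is easy to confirm numerically; a numpy sketch (the ordering of the 1-cell basis is an arbitrary choice):

```python
import numpy as np

# Columns: abelianized boundaries of a(x)a', a(x)b', b(x)a', b(x)b',
# in the 1-cell basis [1(x)a', 1(x)b', a(x)2', b(x)2',
#                      a(x)1', b(x)1', 2(x)a', 2(x)b'].
D = np.array([
    [ 1,  0,  1,  0],
    [ 0,  1,  0,  1],
    [ 1,  1,  0,  0],
    [ 0,  0,  1,  1],
    [-1, -1,  0,  0],
    [ 0,  0, -1, -1],
    [-1,  0, -1,  0],
    [ 0, -1,  0, -1],
], dtype=float)

_, s, Vt = np.linalg.svd(D)
rank = int((s > 1e-10).sum())
print(Vt[rank:])   # one row, proportional to (1, -1, -1, 1)
```

The one-dimensional kernel recovers the class of \(a\otimes a^{\prime}-a\otimes b^{\prime}-b\otimes a^{\prime}+b\otimes b^{\prime}\), i.e. the classical \(H_{2}\) of the torus, which the directed (non-abelianized) computation above does not see.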
**Exact sequence of relative homology.** The category \({}_{R}Mod_{R}\) is not Abelian but we have the following exact sequence, which is close to classical relative homology sequences for simplicial or singular homology theories:
**Proposition 2**.: Suppose \((X,Y)\) is a relative pair of precubical sets. We have the following relative homology sequence:
\[\cdots\longrightarrow{}^{X}HM_{i}[Y]\longrightarrow HM_{i}[X]\longrightarrow HM_{i}[X,Y]\stackrel{\partial^{*}}{\longrightarrow}{}^{X}HM_{i-1}[Y]\longrightarrow HM_{i-1}[X]\longrightarrow HM_{i-1}[X,Y]\longrightarrow\cdots\]
Proof.: As we have seen already, for all \(i\in\mathbb{N}\), \({}^{X}R_{i}[Y]\) is a sub-\(R[X]\)-bimodule of \(R_{i}[X]\), the inclusion map \(f_{i}:\ ^{X}R_{i}[Y]\to R_{i}[X]\) being indeed injective, hence a monomorphism in the category of \(R[X]\)-bimodules.
We also have the projection map \(\pi_{i}\) from the \(R[X]\)-bimodule \(R_{i}[X]\) onto \(R_{i}[X,Y]=R_{i}[X]/^{X}R_{i}[Y]\), which is surjective by construction. Finally, the kernel of \(\pi_{i}\) is the image of \(f_{i}\) by construction as well, giving us the following short exact sequence of \(R[X]\)-bimodules:
\[0\longrightarrow{}^{X}R_{i}[Y]\stackrel{f_{i}}{\longrightarrow}R_{i}[X]\stackrel{\pi_{i}}{\longrightarrow}R_{i}[X,Y]\longrightarrow 0\]
Consider \(\partial\), which we have seen already is a morphism of \(R[X]\)-bimodules from \(R_{i}[X]\) to \(R_{i-1}[X]\). As \(Y\) is a sub-precubical set of \(X\), all maps \(d_{k,B}:Ch^{=i-1}(X)_{v}^{w}\to Ch^{=i-2}(X)_{v}^{w}\) restrict to \(d_{k,B}:Ch^{=i-1}(Y)_{v}^{w}\to Ch^{=i-2}(Y)_{v}^{w}\) and similarly with \(\partial\), which restricts to a map from the \(R\)-module generated by \((i-1)\)-cube chains of \(Y\) to \((i-2)\)-cube chains of \(Y\). Since \(\partial\) is a morphism of \(R[X]\)-bimodules, this actually induces a morphism from \({}^{X}R_{i}[Y]\) to \({}^{X}R_{i-1}(Y)\). Finally, we have already seen that \(\partial\) induces also a map from \(R_{i}[X,Y]\) to \(R_{i-1}[X,Y]\).
Hence the short exact sequence of \(R[X]\)-bimodules above is actually a short exact sequence of chains of \(R[X]\)-bimodules. The category of \(R[X]\)-bimodules being an Abelian category, this induces the following long exact sequence of \(R[X]\)-bimodules:
\[\cdots\longrightarrow H_{i}({}^{X}R_{*}[Y])\longrightarrow H_{i}(R_{*}[X])\longrightarrow H_{i}(R_{*}[X,Y])\stackrel{\partial^{*}}{\longrightarrow}H_{i-1}({}^{X}R_{*}[Y])\longrightarrow\cdots\]
But by Lemma 18 we know that \(H_{i}({}^{X}R_{*}[Y])={}^{X}HM_{i}[Y]\) hence the exact sequence of the proposition.
Let us now look more in detail at elements of \(HM_{i}[X,Y]\). First, a cycle \([c]_{{}^{X}R_{i}[Y]}\in Ker\ \partial_{|R_{i}[X]/^{X}R_{i}[Y]\to R_{i-1}[X]/^{X}R_{i-1}[Y]}\) is a class modulo \({}^{X}R_{i}[Y]\) of \(c\in R_{i}[X]\) such that \(\partial c\in{}^{X}R_{i-1}[Y]\), and in that sense, it is called, as in the classical case, a \(Y\)-relative cycle.
Such a \(Y\)-relative cycle is trivial in \(HM_{i}[X,Y]\) if it is a \(Y\)-relative boundary, i.e. is such that \(c=\partial b+a\) with \(b\in R_{i+1}[X]\) and \(a\in{}^{X}R_{i}[Y]\).
As in the classical case of singular or simplicial homology, the connecting homomorphism \(\partial^{*}\) sends an element \([c]\in HM_{i+1}[X,Y]\) represented by an \(Y\)-relative cycle \(c\in R_{i+1}[X]\), to the class represented by the boundary \(\partial c\in{}^{X}R_{i}[Y]\subseteq R_{i}[X]\).
**Example 14**.: Consider \(\overrightarrow{D}^{n}=\overrightarrow{I}^{n}\) the cartesian product of \(n\) copies of the precubical set representing a directed segment, which is a version of a directed \(n\)-disc. We write \(\overrightarrow{S}^{n-1}\) for the boundary of \(\overrightarrow{D}^{n}\), which is a version of a directed \((n-1)\)-sphere, or "hollow \(n\)-hypercube". As for the cube example, the vertices of \(\overrightarrow{D}^{n}\) are numbered using a binary word of length \(n\).
Let us compute the corresponding homology modules when \(n=2\); \(\overrightarrow{D}^{2}\) is:
[Figure: \(\overrightarrow{D}^{2}\), the filled square with vertices \(00\), \(01\), \(10\), \(11\), 1-cubes \(a0\), \(a1\), \(0b\), \(1b\), and a unique 2-cube \(C\).]
The algebra \(R[X]\) is the matrix algebra:
\[\left(\begin{array}{c|cccc}&11&10&01&00\\ \hline 11&R&0&0&0\\ 10&R&R&0&0\\ 01&R&0&R&0\\ 00&R^{2}&R&R&R\end{array}\right)\]
Now, \(HM_{1}[\overrightarrow{D}^{2}]\) can be represented by the matrix:
\[\left(\begin{array}{c|cccc}&11&10&01&00\\ \hline 11&R&0&0&0\\ 10&R&R&0&0\\ 01&R&0&R&0\\ 00&R&R&R&R\end{array}\right)\]
and \(HM_{2}[\overrightarrow{D}^{2}]\) can be represented by:
\[\left(\begin{array}{c|cccc}&11&10&01&00\\ \hline 11&0&0&0&0\\ 10&0&0&0&0\\ 01&0&0&0&0\\ 00&R&0&0&0\end{array}\right)\]
all other \(HM_{i}[\overrightarrow{D}^{2}]\), \(i\geq 3\) being zero.
We first compute the homology modules of \(\overrightarrow{S}^{1}\): \({}^{\overrightarrow{D}^{2}}HM_{1}[\overrightarrow{S}^{1}]\) is:
\[\left(\begin{array}{c|cccc}&11&10&01&00\\ \hline 11&R&0&0&0\\ 10&R&R&0&0\\ 01&R&0&R&0\\ 00&R^{2}&R&R&R\end{array}\right)\]
since \(R[\overrightarrow{D}^{2}]\) is actually the same as \(R[\overrightarrow{S}^{1}]\), and \(H_{i}({}^{\overrightarrow{D}^{2}}R_{*}[\overrightarrow{S}^{1}])\) is zero for \(i\geq 2\).
So \(HM_{i}[\overrightarrow{D}^{2},\overrightarrow{S}^{1}]=0\) for \(i\geq 3\) and the long exact sequence in relative homology is:
\(0\xrightarrow{}HM_{2}[\overrightarrow{D}^{2}]\xrightarrow{}HM_{2}[\overrightarrow{D} ^{2},\overrightarrow{S}^{1}]\xrightarrow{}HM_{1}[\overrightarrow{S}^{1}] \xrightarrow{}HM_{1}[\overrightarrow{D}^{2}]\xrightarrow{}HM_{1}[ \overrightarrow{D}^{2},\overrightarrow{S}^{1}]\xrightarrow{}0\)
We can now easily check \(HM_{2}[\overrightarrow{D}^{2},\overrightarrow{S}^{1}]\) to be:
\[\left(\begin{array}{c|cccc}&11&10&01&00\\ \hline 11&0&0&0&0\\ 10&0&0&0&0\\ 01&0&0&0&0\\ 00&R&0&0&0\end{array}\right)\]
and \(HM_{1}[\overrightarrow{D}^{2},\overrightarrow{S}^{1}]=0\) (which can be computed directly as well). The interesting entry of this exact sequence of matrices of modules is the one corresponding to line \(00\) and column \(11\).
## 10 Conclusion and future work
In this paper, we studied a simple directed homology theory for certain precubical sets, based on the category of bimodules over the underlying path algebra. Doing so, we made links to other homology theories, first with persistent homology and then to natural homology, used as a directed topology invariant.
Doing so, we unveiled the importance of the restriction of scalars functor between abelian categories of bimodules based on related algebras, and its homological properties. This showed up when discussing the Kunneth theorem, and other exact sequences. We will, in a forthcoming paper, axiomatize homology theories that take values in such "varying abelian categories", making use of slightly "deformed" exact sequences.
One open question still concerns the right notion of \(HM_{0}\) of a precubical set. This should definitely be linked to (pair, see [36]) component categories in general, but this still has to be properly studied. This is linked to "tameness" issues in persistence that we believe are central to both persistent homology and directed homology theories.
We also plan on using the nice computational methods (such as fringe representations, see e.g. [31]) used in the context of persistence modules to make actual practical computations of directed invariants for precubical sets.
|
2309.16198 | Adiabatic theorem for classical stochastic processes | We apply adiabatic theorems developed for quantum mechanics to stochastic
annealing processes described by the classical master equation with a
time-dependent generator. When the instantaneous stationary state is unique and
the minimum decay rate g is nonzero, the time-evolved state is basically
relaxed to the instantaneous stationary state. By formulating an asymptotic
expansion rigorously, we derive conditions for the annealing time T that the
state is close to the instantaneous stationary state. Depending on the time
dependence of the generator, typical conditions are written as T> const/g^a
with 1<a<2. We also find that a rigorous treatment gives the scaling T>const|ln
g|/g^2. | Kazutaka Takahashi | 2023-09-28T06:47:55Z | http://arxiv.org/abs/2309.16198v2 | # Adiabatic theorem for classical stochastic processes
###### Abstract
We apply adiabatic theorems developed for quantum mechanics to stochastic annealing processes described by the classical master equation with a time-dependent generator. When the instantaneous stationary state is unique and the minimum decay rate \(g\) is nonzero, the time-evolved state is basically relaxed to the instantaneous stationary state. By formulating an asymptotic expansion rigorously, we derive conditions for the annealing time \(\tau\) that the state is close to the instantaneous stationary state. Depending on the time dependence of the generator, typical conditions are written as \(\tau>\text{const.}\times g^{-\alpha}\) with \(1\leq\alpha\leq 2\). We also find that a rigorous treatment gives the scaling \(\tau>\text{const.}\times g^{-2}|\ln g|\).
## I Introduction
One efficient method for changing a state into a desired one is to control the system slowly by an external operation. The system itself evolves by a time evolution law, and the problem is represented by a differential equation for a nonautonomous system. Since the stationary state changes as a function of time, it is generally hard to find the solution of the differential equation. When the system is changed very slowly, we rely on the adiabatic approximation. The adiabatic theorem of quantum mechanics describes how the evolved state deviates from the ideal one [1; 2]. Rigorous treatments of the theorem have a long history, and we can find many variations depending on settings [3; 4; 5; 6; 7; 8]. Those studies revealed that the naive condition discussed in standard textbooks on quantum mechanics is not necessarily correct. A careful analysis of the asymptotic expansion gives nontrivial contributions.
One of the most prominent applications of the adiabatic theorem is quantum annealing/adiabatic quantum computation [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. When we want to know the ground state of an Ising-spin Hamiltonian, we drive the system by a transverse field. The adiabatic theorem basically evaluates the annealing time that suppresses the error. The minimum annealing time is determined from the change rate of the Hamiltonian and the instantaneous energy gap between the ground state and the excited states.
Historically, the idea of quantum annealing stems from a classical optimization method called simulated annealing [19]. The stationary state is described by the Gibbs distribution, and we decrease the temperature of the system slowly to find the configuration optimizing the free energy. In that case, Geman and Geman found a protocol that guarantees convergence to the optimized solution [20].
Simulated annealing is a classical process. When the dynamics is assumed to be Markovian, the time evolution is generally described by the classical master equation. Since the master equation is formally equivalent to the imaginary-time Schrodinger equation, it is not difficult to apply the adiabatic theorems developed for the Schrodinger equation [21]. However, the imaginary time gives a relaxation dynamics, which implies that the process does not depend strongly on the history of the time evolution. Rigorous treatments of the adiabatic theorem in quantum mechanics give contributions which are represented by integrals over the whole time. We expect that such contributions are absent in classical stochastic processes and that the theorem is greatly simplified. In the classical master equation with a time-dependent generator, we need to discuss how the relaxation and annealing dynamics affect the state of the system.
The classical master equation treats a time evolution of a probability distribution. Although the equation is formally equivalent to the imaginary-time Schrodinger equation, the probability nature restricts possible patterns of dynamics. As a result, we expect that we can find some simplifications on the adiabatic theorem.
In the present study, we treat the case where the instantaneous stationary state is defined uniquely. This corresponds to the standard setting used in the quantum cases [2]. We do not assume explicit form of the stationary state such as the Gibbs distribution and aim at finding a general adiabatic theorem which is derived under several fundamental conditions described below.
The organization of the paper is as follows. In Sec. II, we formulate the problem and describe the settings used throughout the present study. We introduce the adiabatic dynamics in Sec. III. It is used to develop adiabatic theorems in Sec. IV. In Sec. V, we treat several examples to examine the general results. The last section, Sec. VI, is devoted to the conclusion.
## II System
In stochastic dynamical processes, the state of the system is specified by a time-dependent probability distribution. We assume that the dynamics is a Markov process and that the transition rates are inhomogeneous in time. We introduce a scaled time \(s=t/\tau\), where \(t\) is the physical time and \(\tau\) is the annealing time. The time evolution is carried out from \(t=0\) to \(t=\tau\); correspondingly, \(s\) runs from \(0\) to \(1\). The probability distribution, denoted by \(|p_{\tau}(s)\rangle=\sum_{n=0}^{N-1}|n\rangle\langle n|p_{\tau}(s)\rangle\), where \(\langle n|p_{\tau}(s)\rangle\) represents the \(n\)th component of the probability distribution and \(N\) is the number of events, obeys the master equation
\[|\dot{p}_{\tau}(s)\rangle=\tau W(s)|p_{\tau}(s)\rangle. \tag{1}\]
\(W(s)\) represents the transition-rate matrix. We denote the derivative with respect to the scaled time \(s\) by the dot symbol. Since the probability nature must be maintained throughout the time evolution, each off-diagonal component of the transition-rate matrix is nonnegative and the matrix satisfies
\[\langle L_{0}|W(s)=0, \tag{2}\]
where \(\langle L_{0}|=\sum_{n}\langle n|\) with \(\langle m|n\rangle=\delta_{m,n}\).
Throughout this study, we assume that there exists a unique instantaneous stationary state \(|p^{\rm(st)}(s)\rangle\) defined from the relation
\[W(s)|p^{\rm(st)}(s)\rangle=0. \tag{3}\]
When we operate the system slowly, the state of the system is relaxed to the instantaneous stationary state. For simplicity, we set that the initial probability distribution \(|p_{\tau}(0)\rangle=|p_{0}\rangle\) is given by the stationary state with respect to the initial transition-rate matrix \(W(0)=W_{0}\). We are interested in a long-time behavior and the final state is expected to be insensitive to the choice of the initial state.
The decay rate is characterized by the instantaneous eigenvalues of \(W(s)\). When \(N\) is finite, we can use the spectral representation
\[W(s)=\sum_{n=0}^{N-1}\Lambda_{n}(s)|R_{n}(s)\rangle\langle L_{n}(s)|, \tag{4}\]
where the left and right eigenstates satisfy \(\langle L_{m}(s)|R_{n}(s)\rangle=\delta_{m,n}\) and \(\sum_{n}|R_{n}(s)\rangle\langle L_{n}(s)|=1\). The stationary state is denoted by the component \(n=0\). That is, we have \(|p^{\rm(st)}(s)\rangle=|R_{0}(s)\rangle\), \(|p_{0}\rangle=|p^{\rm(st)}(0)\rangle=|R_{0}(0)\rangle\), and \(\Lambda_{0}(s)=0\). The assumption of the unique stationary state denotes that the minimum decay rate is positive:
\[g(s)=\min_{n\neq 0}|{\rm Re}\,\Lambda_{n}(s)|>0. \tag{5}\]
The time-evolved state \(|p_{\tau}(s)\rangle\) is different from the instantaneous stationary state \(|p^{\rm(st)}(s)\rangle\). One of the standard quantities for measuring the deviation is given by the trace distance
\[d_{\tau}(s)=\frac{1}{2}\sum_{n}|\langle n|p_{\tau}(s)\rangle-\langle n|p^{\rm( st)}(s)\rangle|. \tag{6}\]
When the annealing time \(\tau\) is large enough, \(|p_{\tau}(s)\rangle\) is close to \(|p^{\rm(st)}(s)\rangle\). The main problem discussed in the present study is to estimate the magnitude of \(\tau\) that results in a small \(d_{\tau}(s)\).
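As a concrete illustration, the following Python sketch integrates Eq. (1) for a two-state system with hypothetical rates and evaluates Eq. (6) at \(s=1\); everything in it (the rates, the annealing times) is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state transition-rate matrix; columns sum to zero, so Eq. (2) holds.
def W(s):
    a, b = 1.0 + s, 2.0 - s
    return np.array([[-a, b], [a, -b]])

# Unique instantaneous stationary state, Eq. (3); here g(s) = a + b = 3.
def p_st(s):
    a, b = 1.0 + s, 2.0 - s
    return np.array([b, a]) / (a + b)

for tau in (25.0, 100.0, 400.0):
    sol = solve_ivp(lambda s, p: tau * W(s) @ p, (0.0, 1.0), p_st(0.0),
                    rtol=1e-10, atol=1e-12)
    d = 0.5 * np.abs(sol.y[:, -1] - p_st(1.0)).sum()   # Eq. (6) at s = 1
    print(tau, d, tau * d)
```

The product \(\tau d_{\tau}(1)\) approaches a constant, anticipating the leading \(\tau_{1}(s)/\tau\) term of Eq. (9) below.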
The master equation basically describes relaxation dynamics. When the time-evolved state is different from a fixed stationary state, it is relaxed to the stationary state. The decay rate can be estimated by \(g(s)\). The state is relaxed to the stationary state immediately if the annealing time satisfies the condition
\[\tau\geq{\rm const.}\times\frac{1}{g(s)}. \tag{7}\]
In the present case, the stationary state is slowly varied as a function of time and we need to discuss adiabatic dynamics. If we naively apply the simplest version of the quantum adiabatic theorem to the present system, the condition for a small deviation from the adiabatic state is given by
\[\tau\geq{\rm const.}\times\frac{\|\dot{W}(s)\|}{g^{2}(s)}, \tag{8}\]
where \(\|\cdot\|\) is a proper norm of matrices. Since we use the transition-rate matrix for the generator of the time evolution instead of the Hamiltonian, the minimum decay rate \(g(s)\) plays the role of the energy gap. To develop the adiabatic theorem for this system, we discuss the scaling of the annealing time at \(g(s)\to 0\). In that case, the relaxation condition in Eq. (7) is automatically satisfied when the naive adiabatic condition in Eq. (8) holds, and we can focus on the adiabatic condition in a small-\(g\) regime.
A careful analysis of the adiabatic theorem in quantum mechanics indicates that the condition corresponding to Eq. (8) is not necessarily correct [7; 8]. The main aim of the present study is to find a condition that is valid in the present system. We examine an asymptotic behavior of the trace distance. It is expected to be written as
\[d_{\tau}(s)\sim\frac{\tau_{1}(s)}{\tau}+\frac{\tau_{2}(s)}{\tau^{2}}+\cdots+e^ {-\tau\int_{0}^{s}ds^{\prime}\,g(s^{\prime})}(\cdots). \tag{9}\]
In the regime we are mainly interested in, the last exponential term is negligibly small and the adiabatic condition is obtained from the power-law part. Each term of the expansion is bounded from above and we estimate a minimum annealing time such that the distance is bounded by a specified maximum error as \(d_{\tau}(s)\leq\delta\).
## III Adiabatic dynamics
The stationary state \(|p^{\rm(st)}(s)\rangle\) has nothing to do with real-time dynamics and we introduce a virtual dynamical process that results in \(|p^{\rm(st)}(s)\rangle\). Differentiating the stationary state with \(s\), we can write
\[|\dot{p}^{\rm(st)}(s)\rangle=\left(\tau W(s)+\dot{P}(s)\right)|p^{\rm(st)}(s)\rangle, \tag{10}\]
where \(P(s)\) represents the projection onto the stationary state:
\[P(s)=|R_{0}(s)\rangle\langle L_{0}|. \tag{11}\]
Since the left eigenstate \(\langle L_{0}|\) is time independent, we can write
\[\dot{P}(s)=\dot{P}(s)P(s)=Q(s)\dot{P}(s)P(s), \tag{12}\]
where \(Q(s)=1-P(s)\). This simple structure of the projection operator is one of the major differences from the quantum case [2]. We have introduced \(W(s)\) in Eq. (10) for later convenience. Since \(W(s)|p^{\rm(st)}(s)\rangle=0\), it does not give any contribution to the stationary state.
Generalizing this time-evolution law to arbitrary states, we introduce the time-evolution operator \(U_{\tau}(s,s^{\prime})\). It obeys the equation of motion
\[\partial_{s}U_{\tau}(s,s^{\prime})=\left(\tau W(s)+\dot{P}(s)\right)U_{\tau}( s,s^{\prime}), \tag{13}\]
with the boundary condition \(U_{\tau}(s,s)=1\) and the associative law \(U_{\tau}(s,s^{\prime})U_{\tau}(s^{\prime},s^{\prime\prime})=U_{\tau}(s,s^{ \prime\prime})\). We can write \(|p^{\rm(st)}(s)\rangle=U_{\tau}(s,0)|p_{0}\rangle\) and
\[U_{\tau}(s,s^{\prime})P(s^{\prime})=P(s)U_{\tau}(s,s^{\prime}). \tag{14}\]
This relation can be verified by applying the time derivative to this expression.
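Both Eq. (12) and Eq. (14) are easy to verify numerically on a small example; a sketch for the two-state rates used earlier (finite differences for the \(s\)-derivative, a matrix ODE for \(U_{\tau}\); all choices are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def W(s):
    a, b = 1.0 + s, 2.0 - s
    return np.array([[-a, b], [a, -b]])

def p_st(s):
    a, b = 1.0 + s, 2.0 - s
    return np.array([b, a]) / (a + b)

def P(s):                          # P(s) = |R_0(s)><L_0|, Eq. (11)
    return np.outer(p_st(s), np.ones(2))

def Pdot(s, eps=1e-6):             # central finite difference
    return (P(s + eps) - P(s - eps)) / (2 * eps)

s0 = 0.4
Q = np.eye(2) - P(s0)
assert np.allclose(Pdot(s0), Q @ Pdot(s0) @ P(s0), atol=1e-6)   # Eq. (12)

tau = 30.0                          # integrate Eq. (13) for U_tau(s, 0)
rhs = lambda s, u: ((tau * W(s) + Pdot(s)) @ u.reshape(2, 2)).ravel()
U = solve_ivp(rhs, (0.0, 1.0), np.eye(2).ravel(),
              rtol=1e-10, atol=1e-12).y[:, -1].reshape(2, 2)
assert np.allclose(U @ P(0.0), P(1.0) @ U, atol=1e-6)           # Eq. (14)
```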
We use a Volterra integral form to derive the adiabatic theorem. Generally, for two time-evolved states \(|p^{(1)}(t)\rangle\) and \(|p^{(2)}(t)\rangle\), we can write
\[|p^{(1)}(t)\rangle-|p^{(2)}(t)\rangle=\int_{0}^{t}dt^{\prime}\,U^{(2)}(t,t^{ \prime})(W^{(1)}(t^{\prime})-W^{(2)}(t^{\prime}))|p^{(1)}(t^{\prime})\rangle, \tag{15}\]
where \(W^{(1)}(t)\) is the generator for the first state \(|p^{(1)}(t)\rangle\), \(W^{(2)}(t)\) is the generator for the second, and \(U^{(2)}(t,t^{\prime})\) is the time-evolution operator of the second state. Applying this general formula to the present case, we have
\[|p_{\tau}(s)\rangle-|p^{\rm(st)}(s)\rangle=-\int_{0}^{s}ds^{\prime}\,U_{\tau}( s,s^{\prime})\dot{P}(s^{\prime})|p_{\tau}(s^{\prime})\rangle=-\int_{0}^{s} ds^{\prime}\,U_{\tau}(s,s^{\prime})|\dot{p}^{\rm(st)}(s^{\prime})\rangle. \tag{16}\]
Remarkably, the last expression is independent of \(|p_{\tau}(s)\rangle\). This is due to the property in Eq. (12).
We note that \(U_{\tau}(s,s^{\prime})\) is the time evolution operator of the stationary state \(|p^{\rm(st)}(s)\rangle\) and not of the time-evolved state \(|p_{\tau}(s)\rangle\). In quantum mechanics, the corresponding operator was introduced in Ref. [2]. Also, \(U_{\tau}(s,s^{\prime})\) is different from the time evolution operator of the adiabatic state defined from
\[\partial_{s}U_{\tau}^{\rm(ad)}(s,s^{\prime})=\left(\tau W(s)+W^{\rm(cd)}(s) \right)U_{\tau}^{\rm(ad)}(s,s^{\prime}), \tag{17}\]
where
\[W^{\rm(cd)}(s)=\sum_{n}(1-|R_{n}(s)\rangle\langle L_{n}(s)|)|\dot{R}_{n}(s) \rangle\langle L_{n}(s)|. \tag{18}\]
When we use this time-evolution law, arbitrary initial states written as
\[|p_{0}\rangle=\sum_{n}c_{n}|R_{n}(0)\rangle, \tag{19}\]
are transformed to the adiabatic state
\[U_{\tau}^{\rm(ad)}(s,0)|p_{0}\rangle=\sum_{n}c_{n}|R_{n}(s)\rangle\exp\left( \tau\int_{0}^{s}ds^{\prime}\,\Lambda_{n}(s^{\prime})-\int_{0}^{s}ds^{\prime} \,\langle L_{n}(s^{\prime})|\dot{R}_{n}(s^{\prime})\rangle\right). \tag{20}\]
\(W^{\rm(cd)}(s)\) is the counterpart of the counterdiabatic term defined in quantum mechanics [22; 23; 24] and was used in classical stochastic processes [25; 26; 27]. The generator \(\dot{P}(s)\) for \(U_{\tau}(s,s^{\prime})\) is obtained from \(W^{\rm(cd)}(s)\) as \(\dot{P}(s)=Q(s)W^{\rm(cd)}(s)P(s)\). The driving by \(U_{\tau}^{\rm(ad)}\) with the initial condition \(|p_{0}\rangle=|R_{0}(0)\rangle\) gives a time evolution identical to that generated by \(U_{\tau}\). Since we can use some useful properties of the projection operator, such as the last expression in Eq. (16), we use \(U_{\tau}\) rather than \(U_{\tau}^{\rm(ad)}\).
## IV Adiabatic theorem
### Asymptotic expansion
The integral in Eq. (16) is written as
\[-\int_{0}^{s}ds^{\prime}\,U_{\tau}(s,s^{\prime})|\dot{p}^{\rm(st)}(s^{\prime} )\rangle=-\int_{0}^{s}ds^{\prime}\,Q(s)U_{\tau}(s,s^{\prime})Q(s^{\prime})| \dot{p}^{\rm(st)}(s^{\prime})\rangle. \tag{21}\]
That is, the time-evolution operator \(U_{\tau}\) in Eq. (16) acts on the projected space excluding the stationary state. To find an asymptotic form of this expression, we introduce
\[|\phi^{(1)}(s)\rangle=G(s)\partial_{s}|p^{\rm(st)}(s)\rangle, \tag{22}\]
where
\[G(s)=Q(s)\frac{1}{-W(s)}Q(s). \tag{23}\]
Equation (16) is written as
\[|p_{\tau}(s)\rangle-|p^{\rm(st)}(s)\rangle=\int_{0}^{s}ds^{\prime}\,U_{\tau}(s,s^{\prime})W(s^{\prime})|\phi^{(1)}(s^{\prime})\rangle. \tag{24}\]
By noting
\[\partial_{s^{\prime}}\left(U_{\tau}(s,s^{\prime})|\phi^{(1)}(s^{\prime}) \rangle\right)=-\tau U_{\tau}(s,s^{\prime})W(s^{\prime})|\phi^{(1)}(s^{\prime })\rangle+U_{\tau}(s,s^{\prime})|\dot{\phi}^{(1)}(s^{\prime})\rangle, \tag{25}\]
we rewrite the integral as
\[|p_{\tau}(s)\rangle-|p^{\rm(st)}(s)\rangle=-\frac{1}{\tau}\left(|\phi^{(1)}(s )\rangle-U_{\tau}(s,0)|\phi^{(1)}(0)\rangle\right)+\frac{1}{\tau}\int_{0}^{s} ds^{\prime}\,U_{\tau}(s,s^{\prime})|\dot{\phi}^{(1)}(s^{\prime})\rangle. \tag{26}\]
The last term has a similar form as the integral in Eq. (16) and we can apply similar transformations repeatedly. We introduce
\[|\phi^{(k)}(s)\rangle=G(s)\partial_{s}|\phi^{(k-1)}(s)\rangle=\left(G(s)\partial_{ s}\right)^{k}|p^{(\rm st)}(s)\rangle, \tag{27}\]
for an integer \(k\) to write
\[|p_{\tau}(s)\rangle-|p^{(\rm st)}(s)\rangle=\sum_{k=1}^{M}\left(\frac{-1}{ \tau}\right)^{k}\left(|\phi^{(k)}(s)\rangle-U_{\tau}(s,0)|\phi^{(k)}(0)\rangle \right)-\left(\frac{-1}{\tau}\right)^{M}\int_{0}^{s}ds^{\prime}\,U_{\tau}(s,s^ {\prime})|\dot{\phi}^{(M)}(s^{\prime})\rangle, \tag{28}\]
where \(M\) is an arbitrary integer.
This technique is essentially the same as that used in quantum adiabatic theorems [3; 6; 7]. The projection operator is written in an integral form as
\[P(s)=\oint\frac{dz}{2\pi i}\,\frac{1}{z-W(s)}. \tag{29}\]
The integral contour in the complex-\(z\) plane encloses the origin and the other eigenvalues of \(W(s)\) denoted by points in the complex plane are outside the contour. We note that the condition \(g(s)>0\) is required to make the integral well-defined. In a similar way, we can define a transformation
\[\tilde{X}(s)=\oint\frac{dz}{2\pi i}\,\frac{1}{z-W(s)}X(s)\frac{1}{z-W(s)}, \tag{30}\]
for an arbitrary operator \(X(s)\). We can show that \(\Phi^{(k)}(s)=\tilde{\dot{\Phi}}^{(k-1)}(s)\) with \(\Phi^{(0)}(s)=P(s)\) is written as \(\Phi^{(k)}(s)=|\phi^{(k)}(s)\rangle\langle L_{0}|\) with \(|\phi^{(0)}(s)\rangle=|R_{0}(s)\rangle=|p^{\rm(st)}(s)\rangle\). Due to the property in Eq. (12), the result takes a considerably simpler form than in the quantum case.
We can perform the expansion up to a desired order \(M\). Truncating the expansion is accomplished by neglecting the last term in Eq. (28). In rigorous treatments of the adiabatic theorems, the last term contribution is kept to derive a nontrivial result.
### Bounds of expansion coefficients (1)
Equation (28) consists of three parts. The first part is a simple expansion with respect to \(1/\tau\) and each term is characterized by \(|\phi^{(k)}(s)\rangle\). The second part involves \(U_{\tau}(s,0)\). It acts on a projected space as \(Q(s)U_{\tau}(s,0)Q(0)\), which implies that it involves exponentially-decaying factors like \(\exp(-\tau\int_{0}^{s}ds^{\prime}\,g(s^{\prime}))\). Then, \(U_{\tau}(1,0)|\phi^{(k)}(0)\rangle\) is negligibly small for \(\tau\int_{0}^{1}ds\,g(s)\gg 1\). This property is reasonable since the final state must be insensitive to the choice of the initial state, if the ergodicity condition is satisfied [21]. The last part is an integral over the whole time period and represents the correction when the expansion in the first part is truncated at a finite \(M\).
In this subsection, by taking \(M\to\infty\) formally, we examine
\[|p_{\tau}(s)\rangle-|p^{(\rm st)}(s)\rangle\sim\sum_{k=1}^{\infty}\left(\frac {-1}{\tau}\right)^{k}|\phi^{(k)}(s)\rangle. \tag{31}\]
When we use the trace distance in Eq. (6) as a measure, it is bounded by using the relation
\[\frac{1}{2}\sum_{n=0}^{N-1}|\langle n|\phi\rangle|=\frac{1}{2}\sum_{n}{\rm sgn }\left(\langle n|\phi\rangle\right)\cdot\langle n|\phi\rangle\leq\frac{1}{2} \sqrt{N\sum_{n}(\langle n|\phi\rangle)^{2}}. \tag{32}\]
We obtain
\[d_{\tau}(1)\sim\frac{1}{2}\sum_{n=0}^{N-1}\left|\sum_{k=1}^{\infty}\left(\frac {-1}{\tau}\right)^{k}\langle n|\phi^{(k)}(1)\rangle\right|\leq\frac{\sqrt{N}} {2}\sum_{k=1}^{\infty}\frac{1}{\tau^{k}}\||\phi^{(k)}(1)\rangle\|, \tag{33}\]
where \(\|\cdot\|\) denotes the vector norm. We use \(\|c_{1}|\phi_{1}\rangle+c_{2}|\phi_{2}\rangle\|\leq|c_{1}|\,\||\phi_{1}\rangle\|+|c_{2}|\,\||\phi_{2}\rangle\|\).
To evaluate the norm of each term, we need to know explicit forms of \(|\phi^{(k)}(s)\rangle\). Using the relation \(W(s)|\dot{p}^{(\rm st)}(s)\rangle=-\dot{W}(s)|p^{(\rm st)}(s)\rangle\), we write \(|\phi^{(1)}(s)\rangle\) as
\[|\phi^{(1)}(s)\rangle=G^{2}(s)\dot{W}(s)|p^{(\rm st)}(s)\rangle. \tag{34}\]
This form has a bound at \(s=1\) as
\[\||\phi^{(1)}(1)\rangle\|\leq\frac{\|\dot{W}(1)\|}{g^{2}(1)}, \tag{35}\]
where \(\|\cdot\|\) on the right hand side denotes the operator norm. Similarly, we have
\[|\phi^{(2)}(s)\rangle=\left(2G^{3}(s)\dot{W}(s)G(s)\dot{W}(s)+G^{2}(s)\dot{W}( s)G^{2}(s)\dot{W}(s)+G^{3}(s)\ddot{W}(s)\right)|p^{(\rm st)}(s)\rangle, \tag{36}\]
and
\[\||\phi^{(2)}(1)\rangle\|\leq 3\frac{\|\dot{W}(1)\|^{2}}{g^{4}(1)}+\frac{\| \ddot{W}(1)\|}{g^{3}(1)}. \tag{37}\]
These calculations indicate that at each order \(k\) the leading term is proportional to \((\|\dot{W}(s)\|/g^{2}(s))^{k}\). That is, we can write
\[\frac{1}{\tau^{k}}\||\phi^{(k)}(1)\rangle\|\leq(2k-1)!!\left(\frac{\|\dot{W}( 1)\|}{\tau g^{2}(1)}\right)^{k}+\cdots+\frac{\|\partial^{k}W(1)\|}{\tau^{k}g^ {k+1}(1)}. \tag{38}\]
In the simplest case where \(W(s)\) is linear in \(s\), only the first term remains. We obtain the naive result in Eq. (8) as the adiabatic condition. This conclusion is changed when \(\dot{W}(1)=0\). For example, when \(\partial^{k}W(1)=0\) for \(k\neq 2\), the expansion parameter is \(\|\ddot{W}(1)\|/\tau^{2}g^{3}(1)\) and the adiabatic condition is changed to
\[\tau\geq\mbox{const.}\times\frac{\sqrt{\|\ddot{W}(1)\|}}{g^{3/2}(1)}. \tag{39}\]
Generally, many terms contribute to the bound and we would find a complicated behavior \(\tau\sim\mbox{const.}\times g^{-\alpha}(1)\) ranging from \(\alpha=1\) to \(\alpha=2\). In a regime where \(g(1)\) is small, the contribution with \(\alpha=2\) gives the worst case bound.
We note that the result is only dependent on quantities at \(s=1\) and is independent of the history of the time evolution. This is because the relaxation occurs quickly in the present time evolution.
### Bounds of expansion coefficients (2)
The first term on the right hand side of Eq. (38) involves a factor \((2k-1)!!\), which becomes very large for \(k\to\infty\), so it is not clear whether the infinite series expansion makes sense. In rigorous treatments of adiabatic theorems, we take \(M\) finite and keep the last term in Eq. (28).
\(U_{\tau}(s,s^{\prime})\) in the last term acts on a projected space as \(Q(s)U_{\tau}(s,s^{\prime})Q(s^{\prime})\) and involves an exponentially-small factor. We decompose the integral at \(s=1\) as
\[\int_{0}^{1}ds\,U_{\tau}(1,s)|\dot{\phi}^{(M)}(s)\rangle=\int_{0}^{1-\delta} ds\,U_{\tau}(1,s)|\dot{\phi}^{(M)}(s)\rangle+\int_{1-\delta}^{1}ds\,U_{\tau}(1,s)| \dot{\phi}^{(M)}(s)\rangle. \tag{40}\]
We set \(\frac{1}{\tau g(1)}\ll\delta\ll 1\). The first term is exponentially suppressed as
\[\int_{0}^{1-\delta}ds\,e^{-\tau\int_{s}^{1}ds^{\prime}\,g(s^{\prime})}<\int_ {0}^{1-\delta}ds\,e^{-\tau\int_{1-\delta}^{1}ds^{\prime}\,g(s^{\prime})}\sim \left(1-\delta\right)e^{-\tau g(1)\delta}\ll 1. \tag{41}\]
The second term is evaluated as
\[\int_{1-\delta}^{1}ds\,U_{\tau}(1,s)|\dot{\phi}^{(M)}(s)\rangle\sim\int_{1- \delta}^{1}ds\,U_{\tau}(1,1)|\dot{\phi}^{(M)}(1)\rangle\sim\delta|\dot{\phi}^{ (M)}(1)\rangle. \tag{42}\]
Now we obtain
\[d_{\tau}(1) \sim \frac{1}{2}\sum_{n=0}^{N-1}\left|\sum_{k=1}^{M}\left(\frac{-1}{\tau} \right)^{k}\langle n|\phi^{(k)}(1)\rangle-\delta\left(\frac{-1}{\tau}\right)^{M }\langle n|\dot{\phi}^{(M)}(1)\rangle\right| \tag{43}\] \[\leq \frac{\sqrt{N}}{2}\left(\sum_{k=1}^{M}\frac{1}{\tau^{k}}\||\phi^{( k)}(1)\rangle\|+\frac{\delta}{\tau^{M}}\||\dot{\phi}^{(M)}(1)\rangle\|\right).\]
We examine the simplest case where \(W(s)\) is linear in \(s\). In that case, we obtain
\[\sum_{k=1}^{M}\frac{1}{\tau^{k}}\||\phi^{(k)}(1)\rangle\|+\frac{\delta}{\tau^{ M}}\||\dot{\phi}^{(M)}(1)\rangle\|=\sum_{k=1}^{M}(2k-1)!!\left(\frac{\|\dot{W}(1) \|}{\tau g^{2}(1)}\right)^{k}+\delta(2M+1)!!\left(\frac{\|\dot{W}(1)\|}{\tau g ^{2}(1)}\right)^{M+1}\tau g(1). \tag{44}\]
We now discuss the condition under which the last term on the right hand side of Eq. (44) is small. We optimize \(M\) so that this term is minimized [8]. When \(M\) is large enough, we can write
\[(2M+1)!!\delta\left(\frac{\|\dot{W}(1)\|}{\tau g^{2}(1)}\right)^{M+1}\tau g(1 )\sim\delta\left(\frac{2M\|\dot{W}(1)\|}{\tau g^{2}(1)}\right)^{M}\frac{\|\dot {W}(1)\|}{g(1)}, \tag{45}\]
and the optimized value \(M=M_{\rm opt}\) is obtained as
\[M_{\rm opt}\sim\frac{e^{-1}\tau g^{2}(1)}{2\|\dot{W}(1)\|}. \tag{46}\]
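For completeness, the optimization behind Eq. (46) can be made explicit: treating \(M\) as continuous and minimizing the logarithm of the right-hand side of Eq. (45),
\[\frac{d}{dM}\left[M\ln\frac{2M\|\dot{W}(1)\|}{\tau g^{2}(1)}\right]=\ln\frac{2M\|\dot{W}(1)\|}{\tau g^{2}(1)}+1=0\quad\Rightarrow\quad\frac{2M_{\rm opt}\|\dot{W}(1)\|}{\tau g^{2}(1)}=e^{-1},\]
which reproduces Eq. (46).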
Then, we have
\[\delta\left(\frac{2M_{\rm opt}\|\dot{W}(1)\|}{\tau g^{2}(1)}\right)^{M_{\rm opt }}\frac{\|\dot{W}(1)\|}{g(1)}\sim\delta\exp\left(-\frac{e^{-1}\tau g^{2}(1)}{2 \|\dot{W}(1)\|}\right)\frac{\|\dot{W}(1)\|}{g(1)}. \tag{47}\]
This expression implies that the condition in Eq. (8) expected from the naive adiabatic theorem is not justified. To make this quantity small for \(g(1)\to 0\) we need
\[\tau\geq 2e\alpha\frac{\|\dot{W}(1)\|}{g^{2}(1)}\left|\ln\frac{g(1)}{g_{0}} \right|, \tag{48}\]
with \(\alpha\geq 1\). Here, \(g_{0}\) represents a reference scale that makes \(g(1)/g_{0}\) dimensionless.
Finally, we show that the first term in Eq. (44) is negligible when we use Eqs. (46) and (48). It can be shown as
\[\sum_{k=1}^{M_{\rm opt}}(2k-1)!!\left(\frac{\|\dot{W}(1)\|}{\tau g ^{2}(1)}\right)^{k} \sim \sum_{k=1}^{M_{\rm opt}}\left(2k\frac{\|\dot{W}(1)\|}{\tau g^{2} (1)}\right)^{k} \tag{49}\] \[\leq \sum_{k=1}^{\sqrt{M_{\rm opt}}}\left(\sqrt{2e^{-1}\frac{\|\dot{W} (1)\|}{\tau g^{2}(1)}}\right)^{k}+\sum_{k=\sqrt{M_{\rm opt}}+1}^{M_{\rm opt}} \left(e^{-1}\right)^{k}\] \[\leq 2\sqrt{2e^{-1}\frac{\|\dot{W}(1)\|}{\tau g^{2}(1)}}+2e^{-\sqrt{ M_{\rm opt}}}.\]
In the last line, we use \(\sum_{k=1}^{M}r^{k}\leq 2r\) for \(r\ll 1\). The last expression approaches zero for \(g(1)\to 0\).
In conclusion, the distance in Eq. (6) is kept small as \(g(1)\to 0\) if, but not only if, the condition in Eq. (48) is satisfied. We can apply a similar analysis when \(W(s)\) is nonlinear in \(s\). We note that the analysis in the present subsection gives the worst-case bound. The logarithmic factor in Eq. (48) becomes large only when \(g(1)\) takes a considerably small value.
## V Examples
### Two-state system
As the simplest nontrivial case, we treat two-state processes. The transition-rate matrix is generally parametrized as
\[W(s)=g(s)\left(\begin{array}{cc}-(1-p(s))&p(s)\\ 1-p(s)&-p(s)\end{array}\right). \tag{50}\]
\(p(s)\) represents a probability and the stationary distribution is given by
\[|p^{\rm(st)}(s)\rangle=\left(\begin{array}{c}p(s)\\ 1-p(s)\end{array}\right). \tag{51}\]
\(g(s)\) is positive and represents the decay rate in Eq. (5). In the present case, the decay rate appears in the transition-rate matrix as an overall factor and the stationary state is independent of \(g(s)\).
The time-evolved state is written as
\[|p_{\tau}(s)\rangle-|p^{\rm(st)}(s)\rangle=-\int_{0}^{s}ds^{\prime}\,\dot{p}(s^ {\prime})e^{-\tau\int_{s^{\prime}}^{s}ds^{\prime\prime}\,g(s^{\prime\prime})} \left(\begin{array}{c}1\\ -1\end{array}\right), \tag{52}\]
and the asymptotic expansion gives
\[d_{\tau}(1) = \left|\sum_{k=1}^{M}\left(-\frac{1}{\tau}\right)^{k}\left[\left. \left(\frac{1}{g(s)}\partial_{s}\right)^{k}p(s)\right|_{s=1}-e^{-\tau\int_{0} ^{1}ds\,g(s)}\left.\left(\frac{1}{g(s)}\partial_{s}\right)^{k}p(s)\right|_{s=0 }\right]\right. \tag{53}\] \[\left.-\left(-\frac{1}{\tau}\right)^{M}\int_{0}^{1}ds\,e^{-\tau \int_{s}^{1}ds^{\prime}\,g(s^{\prime})}\partial_{s}\left(\frac{1}{g(s)}\partial_{s}\right)^{M}p (s)\right|.\]
In the following, we treat the case \(p(s)=s\). The stationary state changes linearly from \(|p^{\rm(st)}(0)\rangle=(0,1)^{\rm T}\) to \(|p^{\rm(st)}(1)\rangle=(1,0)^{\rm T}\). As for the decay rate \(g(s)\), we consider the linear and quadratic cases
\[g(s)=\left\{\begin{array}{ll}g_{0}[1-(1-r)s]&\mbox{linear}\\ g_{0}[r+(1-r)(1-s)^{2}]&\mbox{quadratic}\end{array},\right. \tag{54}\]
where \(g_{0}\) and \(r\) are positive. We mainly discuss the domain with \(r\ll 1\). Then, the decay rate decreases as a function of \(s\) from \(g(0)=g_{0}\) to \(g(1)=g_{0}r\). The parameter \(g_{0}\) determines the overall scale of the dynamics and the result is dependent on \(\tau\) and \(r\). The relaxation condition gives \(\tau g(1)=\tau g_{0}r\gg 1\). We show \(d_{\tau}(s)\) for several values of \(\tau\) and \(r\) in Fig. 1.
Figure 1: The trace distance \(d_{\tau}(s)\) for a two-state process. We consider the linear case in the panel (a) and the quadratic case in (b). The inset in each panel represents \(g(s)\).
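The behavior in Fig. 1 is straightforward to reproduce numerically. The following is a minimal sketch (ours, not part of the original analysis), assuming the scaled master equation \(\partial_{s}|p_{\tau}(s)\rangle=\tau W(s)|p_{\tau}(s)\rangle\) with the parametrization of Eqs. (50) and (54); all variable names are our own.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state annealing process: p(s) = s, with the linear and quadratic
# decay-rate profiles of Eq. (54). We integrate d|p>/ds = tau * W(s) |p>
# from the stationary state at s = 0 and evaluate the trace distance
# d_tau(1) = (1/2) * sum_n |p_tau,n(1) - p_st,n(1)|.

def W(s, g):
    p = s  # p(s) = s
    return g(s) * np.array([[-(1.0 - p), p],
                            [1.0 - p, -p]])

def d_tau_final(tau, g):
    sol = solve_ivp(lambda s, v: tau * (W(s, g) @ v),
                    (0.0, 1.0), np.array([0.0, 1.0]),  # |p(0)> = |p_st(0)>
                    rtol=1e-10, atol=1e-12)
    p_st_1 = np.array([1.0, 0.0])                      # stationary state at s = 1
    return 0.5 * np.abs(sol.y[:, -1] - p_st_1).sum()

g0, r = 1.0, 0.01
g_linear = lambda s: g0 * (1.0 - (1.0 - r) * s)
g_quadratic = lambda s: g0 * (r + (1.0 - r) * (1.0 - s) ** 2)

for tau in (1e2, 1e3, 1e4):
    print(tau, d_tau_final(tau, g_linear), d_tau_final(tau, g_quadratic))
```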
Figure 2: \(d_{\tau}(1)\) as a function of \(r=g(1)/g(0)\) for the linear case (a) and the quadratic case (b). The inset in each panel shows that all curves collapse into a single curve. We use notations \(g=g(1)\), \(\Delta=|\dot{g}(1)|=g_{0}(1-r)\) in the panel (a), and \(\Delta=\ddot{g}(1)=2g_{0}(1-r)\) in the panel (b).
In the linear case, neglecting exponentially-decaying contributions, we can write Eq. (53) as
\[d_{\tau}(1)\sim\left|\sum_{k=1}^{M}(2k-3)!!\left(-\frac{|\dot{g}(1)|}{\tau g^{2}( 1)}\right)^{k}\frac{g(1)}{|\dot{g}(1)|}-(2M-1)!!\int_{0}^{1}ds\,e^{-\tau\int_{s }^{1}ds^{\prime}\,g(s^{\prime})}\left(-\frac{|\dot{g}(1)|}{\tau g^{2}(s)}\right)^{M}\right|. \tag{55}\]
This expression is slightly different from Eq. (44). In the present case, the stationary state is independent of \(g(s)\), which gives a difference from the general argument. The form of the first term implies that \(|\dot{g}(1)|d_{\tau}(1)/g(1)\) is a function of \(|\dot{g}(1)|/\tau g^{2}(1)\). In a similar way, we can find the asymptotic form of the quadratic case as
\[d_{\tau}(1)\sim\left|\frac{\tau g^{2}(1)}{\ddot{g}(1)}\left[\frac{\ddot{g}( 1)}{\tau^{2}g^{3}(1)}-\left(\frac{\ddot{g}(1)}{\tau^{2}g^{3}(1)}\right)^{2}+ 10\left(\frac{\ddot{g}(1)}{\tau^{2}g^{3}(1)}\right)^{3}+\cdots\right]\right|. \tag{56}\]
This implies that \(\ddot{g}(1)d_{\tau}(1)/\tau g^{2}(1)\) is a function of \(\ddot{g}(1)/\tau^{2}g^{3}(1)\). We can confirm these expectations in Fig. 2.
Applying the method discussed in Sec. IV.3, we find the adiabatic condition
\[\tau\geq\left\{\begin{array}{ll}\mbox{const.}\times\frac{|\dot{g}(1)|}{g^{2 }(1)}&\mbox{linear}\\ \mbox{const.}\times\frac{|\tilde{g}(1)|^{1/2}}{g^{3/2}(1)}&\mbox{quadratic} \end{array}\right.. \tag{57}\]
We note that a logarithmic factor is not required in this case. To discuss the scaling relations, we calculate the minimum annealing time \(\tau_{\rm min}\) satisfying the condition \(d_{\tau}(1)\leq\delta\) with a small constant \(\delta\). The result is plotted by solid lines in Fig. 3. The inset of each panel implies that the scaling relation is different from Eq. (57). Scaled annealing time \(\tau_{\rm min}g^{\alpha}(1)\) with \(\alpha=2\) or \(3/2\) approaches zero at \(g(1)\to 0\). This behavior is due to an overall factor in Eqs. (55) and (56). To remove the effect of the factor, we consider modified conditions \(|\dot{g}(1)|d_{\tau}(1)/g(1)\leq\delta\) for the linear case and \(\ddot{g}(1)d_{\tau}(1)/\tau g^{2}(1)\leq\delta\) for the quadratic case. The result is plotted by dashed lines in Fig. 3. The plot in each panel clearly indicates \(\tau_{\rm min}g^{\alpha}(1)\sim\mbox{const.}\) with \(\alpha=2\) or \(3/2\). Since the modified condition guarantees \(d_{\tau}(1)\ll 1\), we can conclude the scaling relations (57).
### Three-state system
We treat a three-state system depicted in Fig. 4. The transition-rate matrix is linear in \(s\) and is given by
\[W(s)=g_{0}\left(\begin{array}{ccc}-2s&1-s&\epsilon_{1}\\ s&-(1-s)&\epsilon_{2}\\ s&0&-(\epsilon_{1}+\epsilon_{2})\end{array}\right), \tag{58}\]
where \(g_{0}\), \(\epsilon_{1}\), and \(\epsilon_{2}\) are positive.
Figure 4: A three-state inhomogeneous process. The state is basically transferred from the node 1 to 2. Each transition rate is characterized by the quantity attached to each arrow.
The state is in the node 1 at \(s=0\) and is basically transferred to the node 2. The stationary state is given by
\[|p^{\rm(st)}(s)\rangle=\frac{1}{\epsilon_{1}+\epsilon_{2}(1+s)+s(1-s)}\left( \begin{array}{c}(\epsilon_{1}+\epsilon_{2})(1-s)\\ (\epsilon_{1}+2\epsilon_{2})s\\ s(1-s)\end{array}\right). \tag{59}\]
Transitions to the node 3 give deviations from the stationary state. The minimum decay rate is given by
\[g(s)=\frac{g_{0}}{2}\left[1+s+\epsilon_{1}+\epsilon_{2}-\sqrt{(1+s+\epsilon_{ 1}+\epsilon_{2})^{2}-4[\epsilon_{1}+\epsilon_{2}(1+s)+s(1-s)]}\right]. \tag{60}\]
This function becomes minimum at \(s=1\) as shown in the inset of Fig. 5. \(g(1)\) takes a small value when \(\epsilon_{1}\ll 1\) and \(\epsilon_{2}\ll 1\). As we show in Fig. 5, \(d_{\tau}(s)\) deviates from zero when \(g(s)\) takes a small value.
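As a sanity check, the stationary state in Eq. (59) and the decay rate in Eq. (60) follow directly from the spectrum of Eq. (58); a minimal numerical verification (ours, with illustrative parameter values) reads:

```python
import numpy as np

# Three-state process of Eq. (58): verify that Eq. (59) is a zero mode of
# W(s) and that Eq. (60) matches the smallest nonzero decay rate (minus the
# real part of the eigenvalues of W).

g0, eps1, eps2 = 1.0, 0.05, 0.0

def W(s):
    return g0 * np.array([[-2*s, 1 - s, eps1],
                          [s, -(1 - s), eps2],
                          [s, 0.0, -(eps1 + eps2)]])

def p_st(s):
    v = np.array([(eps1 + eps2)*(1 - s), (eps1 + 2*eps2)*s, s*(1 - s)])
    return v / (eps1 + eps2*(1 + s) + s*(1 - s))

for s in (0.25, 0.5, 0.9):
    assert np.allclose(W(s) @ p_st(s), 0.0, atol=1e-12)   # Eq. (59)
    rates = np.sort(-np.linalg.eigvals(W(s)).real)        # [0, g(s), ...]
    b = 1 + s + eps1 + eps2
    c = eps1 + eps2*(1 + s) + s*(1 - s)
    g_eq60 = 0.5 * g0 * (b - np.sqrt(b**2 - 4*c))         # Eq. (60)
    print(s, rates[1], g_eq60)
```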
Figure 5: The trace distance \(d_{\tau}(s)\) for a three-state process. The inset represents \(g(s)\).
Figure 6: (a) \(d_{\tau}(1)\) as a function of \(g=g(1)\). (b) The same data is plotted as a function of \(1/\tau g^{2}\).
Since the effects of \(\epsilon_{1}\) and \(\epsilon_{2}\) are basically similar, we set \(\epsilon_{2}=0\) in the following analysis. Figure 6 shows \(d_{\tau}(1)\) as a function of \(g(1)\). In the present case, the transition-rate matrix is linear in \(s\) and we can use Eq. (44). When \(d_{\tau}(1)\) is plotted as a function of \(1/\tau g^{2}(1)\), all curves collapse into a single curve if \(\tau g(1)\) is large enough, as we see in panel (b) of Fig. 6.
Figure 7 shows the minimum annealing time \(\tau_{\rm min}\) that satisfies \(d_{\tau}(1)\leq\delta\). The plots in panel (b) show that the scaling \(\tau_{\rm min}\sim 1/g^{2}(1)\) holds approximately. Since the general discussion uses several inequality relations, it is generally difficult to fit the data with the scaling of the upper bound.
## VI Conclusion
In conclusion, we have developed the adiabatic theorem for classical stochastic processes. The formulation parallels the theorem for quantum systems, and we find a great simplification due to several properties of the probability distribution. Since the classical master equation describes relaxation to the stationary state, the time-evolved state is insensitive to the whole history of the time evolution. As a result, the theorem can be described basically by using quantities defined instantaneously.
By using a rigorous treatment, we find Eq. (48) as a worst-case bound. It is a sufficient condition, and the logarithmic factor becomes large only when the minimum decay rate takes a considerably small value. In fact, we found in several examples that the naive scaling in Eq. (8) is enough to guarantee the error suppression.
In the present study, we treated the generic form of the classical master equation. One of the most interesting applications is simulated annealing, where the stationary distribution is given by the Gibbs distribution of an Ising Hamiltonian. To reproduce the Geman-Geman formula [20] from the quantum-classical mapping [28], we need to discuss the energy gap of the effective Hamiltonian. Combining with the condition in Eq. (8), we can find a time dependence of the temperature [29; 21]. It is not clear how the result is changed when the condition in Eq. (8) is modified, and we leave the problem as an open one.
Figure 7: The minimum annealing time \(\tau_{\rm min}\) that satisfies \(d_{\tau}(1)\leq\delta\) with \(\delta=0.01\). All curves in panels (a) and (b) represent the same data.
###### Acknowledgements.
The author is grateful to Yasuhiro Utsumi for useful discussions. This work was supported by JSPS KAKENHI Grants No. JP20H05666 and No. JP20H01827.
|
2307.16434 | Entangling quantum logic gates in neutral atoms via the microwave-driven
spin-flip blockade | The Rydberg dipole-blockade has emerged as the standard mechanism to induce
entanglement between neutral atom qubits. In these protocols, laser fields that
couple qubit states to Rydberg states are modulated to implement entangling
gates. Here we present an alternative protocol to implement entangling gates
via Rydberg dressing and a microwave-field-driven spin-flip blockade [Y.-Y. Jau
et al, Nat. Phys. 12, 71 (2016)]. We consider the specific example of qubits
encoded in the clock states states of cesium. An auxiliary hyperfine state is
optically dressed so that it acquires partial Rydberg character. It thus acts
as a proxy Rydberg state, with a nonlinear light-shift that plays the role of
blockade strength. A microwave-frequency field coupling a qubit state to this
dressed auxiliary state can be modulated to implement entangling gates. Logic
gate protocols designed for the optical regime can be imported to this
microwave regime, for which experimental control methods are more robust. We
show that unlike the strong dipole-blockade regime usually employed in Rydberg
experiments, going to a moderate-spin-flip-blockade regime results in faster
gates and smaller Rydberg decay. We study various regimes of operation that
can yield high-fidelity two-qubit entangling gates and characterize their
analytical behavior. In addition to the inherent robustness of microwave
control, we can design these gates to be more robust to thermal fluctuations in
atomic motion as well as to laser amplitude, and other noise sources such as stray
background fields. | Vikas Buchemmavari, Sivaprasad Omanakuttan, Yuan-Yu Jau, Ivan Deutsch | 2023-07-31T06:41:26Z | http://arxiv.org/abs/2307.16434v2 | # Entangling quantum logic gates in neutral atoms via the microwave-driven spin-flip blockade
###### Abstract
The Rydberg dipole-blockade has emerged as the standard mechanism to induce entanglement between neutral atom qubits. In these protocols, laser fields that couple qubit states to Rydberg states are modulated to implement entangling gates. Here we present an alternative protocol to implement entangling gates via Rydberg dressing and a microwave-field-driven spin-flip blockade [1]. We consider the specific example of qubits encoded in the clock states of cesium. An auxiliary hyperfine state is optically dressed so that it acquires partial Rydberg character. It thus acts as a proxy Rydberg state, with a nonlinear light-shift that plays the role of blockade strength. A microwave-frequency field coupling a qubit state to this dressed auxiliary state can be modulated to implement entangling gates. Logic gate protocols designed for the optical regime can be imported to this microwave regime, for which experimental control methods are more robust. We show that unlike the strong dipole-blockade regime usually employed in Rydberg experiments, going to a moderate-spin-flip-blockade regime results in faster gates and smaller Rydberg decay. We study various regimes of operation that can yield high-fidelity two-qubit entangling gates and characterize their analytical behavior. In addition to the inherent robustness of microwave control, we can design these gates to be more robust to thermal fluctuations in atomic motion as well as to laser amplitude, and other noise sources such as stray background fields.
## I Introduction
Arrays of optically trapped atoms have emerged as a powerful platform for quantum information processing (QIP) [2; 3; 4; 5]. This architecture has a number of unique capabilities including the potential to scale to arrays with hundreds of atoms [6; 7], the ability to reconfigure the geometry through atom transport [8; 9; 10], and multiple atomic species and mechanisms with which to encode, manipulate, and measure quantum information [11; 12; 13; 14; 15; 16; 17; 18]. Applications include quantum metrology [19; 20; 21; 22; 23], simulation of many-body physics [6; 7; 24; 21], optimization of combinatoric problems [25; 26; 27], and universal quantum computing with potential paths to fault-tolerant error correction [28; 29; 10]. The development of neutral atom quantum computing has paralleled the development of atomic clocks, both in the traditional alkali atoms (cesium and rubidium) that define microwave frequency standards and in alkaline earth-like atoms (strontium and ytterbium) that define optical frequency standards [30; 31; 32]. While the latter provides a path to ultra-precise clocks, manipulating qubits at microwave frequencies offers potential advantages for coherent control required for QIP, and will be the focus of this work.
The standard entangling mechanism for neutral-atom QIP is the Rydberg dipole-blockade, arising from the strong Van der Waals interaction \(V_{rr}\) [3]. Typically, protocols for entangling gates are achieved through the phases that are accumulated as Rabi oscillations between ground and Rydberg states trace closed-loop trajectories on the Bloch sphere [12]. Progress has been made in recent years to improve and optimize these gates, especially using quantum optimal control algorithms [33; 34], which have been demonstrated in experiments [35; 15]. While the fidelity is fundamentally limited by the lifetime of Rydberg states, the current gate infidelities are dominated by other noise sources including inhomogeneities due to atomic thermal motion, laser noise, and imperfect shielding of stray electric fields. In principle, some of these effects can be mitigated using dynamical decoupling or other composite pulse schemes and robust control techniques [36; 37].
Alternative protocols for implementing entangling gates employ "Rydberg dressing" whereby one of the qubit states is optically dressed with some Rydberg character [38]. Due to the strong interaction between the atoms when both are in the Rydberg state, the resulting light shift of two dressed atoms is different from that of two independently dressed atoms. The difference in the light shift energies, \(J\), provides an entangling mechanism [19; 39; 40]. Of particular interest is the phenomenon of the spin-flip blockade for qubits encoded in the clock states of alkali atoms [1]. In the presence of Rydberg dressing, a microwave photon can flip the qubit spin from the lower to upper hyperfine state, but the spin of two qubits is blockaded when \(J\) is sufficiently large. Like its optical Rydberg-blockade counterpart [13], the microwave spin-flip blockade provides a direct route to creating Bell states [1].
In this paper, we extend the use of the spin-flip blockade from Bell-state preparation to full two-qubit logic
gates. A new ingredient is the introduction of an auxiliary dressed hyperfine level that plays the role of the Rydberg state but for microwave-driven transitions. Importantly, this allows us to map the dipole-blockade physics implemented in the optical regime to the spin-flip blockade in the microwave regime. Hence, any gate protocols that can be implemented in the optical regime, using an optical/UV field coupling to the Rydberg state, can be implemented in the microwave regime. This can potentially lead to more robust implementations as we can employ the mature tools of microwave control, and reduce other inhomogeneities such as noise from residual Doppler shifts.
The remainder of this paper is organized as follows. In Sec. (II), we show how to map the physics of the Rydberg blockade to the spin-flip blockade using dressed states, which forms the basis of our gate protocols. In Sec. (III) we consider specific gate protocols and the use of quantum control to optimize performance. In Sec. (IV) we consider the most important sources of noise due to thermal fluctuations in the atomic motion and show how robust optimal control can be used to safeguard against this. Sec. (V) summarizes the results and presents an outlook for future work.
## II Mapping from optical to microwave control
To begin we describe how coherent control associated with the Rydberg-blockade in the optical regime can be mapped to the microwave regime via Rydberg-dressing and the spin-flip blockade. For concreteness, we consider \({}^{133}\)Cs where qubits are stored in the hyperfine levels of the electronic ground state \(6S_{1/2}\), though this easily generalizes to other species for both alkali and alkaline-earth-like atoms. We take the logical states to be the clock states, \(\left|0\right\rangle=\left|F=4,m_{F}=0\right\rangle\), \(\left|1\right\rangle=\left|F=3,m_{F}=0\right\rangle\).
In the presence of a bias magnetic field, when a two-qubit gate is to be performed, the \(\left|0\right\rangle\)-state of each atom can be shelved in the \(\left|F=3,m_{F}=1\right\rangle\) state, where it is effectively removed from further interactions. The state \(\left|F=4,m_{F}=0\right\rangle\equiv\left|a\right\rangle\) is then an auxiliary state, which is coupled to a Rydberg state \(\left|r\right\rangle\) via a near-resonance optical/UV laser field with detuning \(\Delta_{\mathrm{L}}\) and Rabi frequency \(\Omega_{\mathrm{L}}\), as shown in Fig. 1a. For a single atom this results in a dressed Autler-Townes doublet
\[\begin{split}&\left|\tilde{a}\right\rangle=\cos\frac{\theta}{2} \left|a\right\rangle+\sin\frac{\theta}{2}\left|r\right\rangle\\ &\left|\tilde{r}\right\rangle=\cos\frac{\theta}{2}\left|r\right\rangle -\sin\frac{\theta}{2}\left|a\right\rangle\end{split} \tag{1}\]
where \(\tan\theta=-\Omega_{\mathrm{L}}/\Delta_{\mathrm{L}}\). These states are light-shifted by energies \(-\Delta_{\mathrm{L}}/2\pm\sqrt{\Omega_{\mathrm{L}}^{2}+\Delta_{\mathrm{L}}^{2 }}/2\) (here and throughout \(\hbar=1\)). Due to the admixture, the dressed \(\left|\tilde{a}\right\rangle\) state then plays the role of the Rydberg state for entangling the spin qubits. The states \(\left|1\right\rangle\) and \(\left|\tilde{a}\right\rangle\) can be coupled by a microwave/Raman field with an effective Rabi frequency \(\tilde{\Omega}_{\mu w}=\cos(\theta/2)\Omega_{\mu w}\), where \(\Omega_{\mu w}\) is the Rabi frequency coupling \(\left|1\right\rangle\) to the bare state \(\left|a\right\rangle\). When driving this transition resonantly, if \(\sqrt{\Omega_{\mathrm{L}}^{2}+\Delta_{\mathrm{L}}^{2}}\gg\tilde{\Omega}_{\mu w}\), we can neglect the coupling from \(\left|1\right\rangle\) to \(\left|\tilde{r}\right\rangle\).
To entangle two qubits we consider symmetrically dressing the atoms by the same uniform laser and microwave/Raman fields. Interactions between the qubits only occur when both atoms are initially in the \(\left|1\right\rangle\) state and then excited by microwaves. To describe the entangling interaction, we need only to consider the symmetric subspace, \(\left|11\right\rangle\), \(\left|1a\right\rangle_{+}\), and \(\left|aa\right\rangle\); here and throughout \(\left|xy\right\rangle_{\pm}\equiv(\left|xy\right\rangle\pm\left|yx\right\rangle )/\sqrt{2}\). The \(\left|1a\right\rangle_{+}\) state is coupled to \(\left|1r\right\rangle_{+}\) with Rabi frequency \(\Omega_{\mathrm{L}}\), yielding the same single atom dressed states as Eq. (1), denoted \(\left|1\tilde{a}\right\rangle_{+}\) and \(\left|1\tilde{r}\right\rangle_{+}\). The state \(\left|aa\right\rangle\) is coupled to \(\left|ar\right\rangle_{+}\) with Rabi frequency \(\sqrt{2}\Omega_{\mathrm{L}}\). For conditions well-approximated by a perfect
Figure 1: Dressing scheme to map the optical Rydberg blockade to the microwave spin-flip blockade. (a) Single atom level structure. Electronic ground states \(\left|a\right\rangle\) and \(\left|1\right\rangle\) (e.g., hyperfine clock states) can be coupled by a microwave/Raman field with Rabi frequency \(\Omega_{\mu w}\) and detuning \(\Delta_{\mu w}\). The “auxiliary” state \(\left|a\right\rangle\) is dressed through optical coupling to Rydberg state \(\left|r\right\rangle\) with Rabi frequency \(\Omega_{\mathrm{L}}\) and detuning \(\Delta_{L}\), leading to a one-atom light shift \(E_{LS}^{(1)}\). The admixture of Rydberg character modifies the microwave coupling Rabi frequency \(\tilde{\Omega}_{\mu w}=\langle a|\tilde{a}\rangle\Omega_{\mu w}\) and detuning, \(-\tilde{\Delta}_{\mu w}=-\Delta_{\mu w}+E_{LS}^{(1)}\). (b) Level diagram in the two-atom symmetric subspace, including atom-atom interactions. In the symmetric state \(\left|1a\right\rangle_{+}=(\left|1a\right\rangle+\left|a1\right\rangle)/\sqrt{2}\), only one atom can be optically dressed through coupling to \(\left|1r\right\rangle_{+}\) with Rabi frequency \(\Omega_{\mathrm{L}}\), leading to dressed state \(\left|1\tilde{a}\right\rangle\). The state \(\left|aa\right\rangle\), in which two atoms can be optically excited, is coupled to \(\left|ar\right\rangle_{+}\) with Rabi frequency \(\sqrt{2}\Omega_{L}\), but this state is blockaded from excitation to \(\left|rr\right\rangle\) by the Van der Waals energy \(V_{rr}\). The result is a two-atom light shift \(E_{LS}^{(2)}=2E_{LS}^{(1)}+J\), leading to the dressed ground-state \(\left|\widetilde{a}\widetilde{a}\right\rangle\) shown on the right. The state \(\left|11\right\rangle\) is coupled to \(\left|1\tilde{a}\right\rangle_{+}\) by microwave/Raman photons, but the state \(\left|1\tilde{a}\right\rangle_{+}\) is blockaded from excitation to \(\left|\widetilde{a}\widetilde{a}\right\rangle\) due to the nonlinear light shift \(J\) (the spin-flip blockade). The level diagram for the optically-coupled triplet \(\{\left|aa\right\rangle,\left|ar\right\rangle_{+},\left|rr\right\rangle\}\) maps directly to the microwave-coupled dressed triplet \(\{\left|11\right\rangle,\left|1\tilde{a}\right\rangle_{+},\left|\widetilde{a} \widetilde{a}\right\rangle\}\) level diagram.
Rydberg blockade, the state \(\left|rr\right\rangle\) can be ignored and the resulting dressed states are denoted \(\left|\widetilde{a}\widetilde{a}\right\rangle\), \(\left|\widetilde{a}\widetilde{r}\right\rangle_{+}\). This doublet is split by \(\sqrt{2\Omega_{\mathrm{L}}^{2}+\Delta_{\mathrm{L}}^{2}}\). The nonlinear behavior of the light shift, due to the Rydberg blockade, implies that the energy of the two interacting atoms is not equal to the sum of the energies of the noninteracting atoms. The difference is the "entangling energy," defined as \(J=E_{\widetilde{a}\widetilde{a}}-2E_{1\widetilde{a}_{+}}=\frac{1}{2}\left[\Delta_{ \mathrm{L}}\pm\left(\sqrt{2\Omega_{\mathrm{L}}^{2}+\Delta_{\mathrm{L}}^{2}}-2 \sqrt{\Omega_{\mathrm{L}}^{2}+\Delta_{\mathrm{L}}^{2}}\right)\right]\). The \(\pm\) sign corresponds to two branches of the light shift, adiabatically connected to the red or blue side of resonance[40].
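Concretely, both the single- and two-atom light shifts, and hence \(J\), can be obtained by direct diagonalization. The following is a minimal sketch (ours) in the bases \(\{|a\rangle,|r\rangle\}\) and \(\{|aa\rangle,|ar\rangle_{+},|rr\rangle\}\), with a large numerical \(V_{rr}\) standing in for a perfect blockade; the parameter values are the Sec. III example.

```python
import numpy as np

# Entangling energy J = E_aa~ - 2*E_a~ from the dressing Hamiltonians.
# In each case we pick the eigenstate with the largest overlap with the
# bare state, i.e., the branch adiabatically connected to |a> or |aa>.

def dressed_energy(H, bare_index=0):
    E, V = np.linalg.eigh(H)
    return E[np.argmax(np.abs(V[bare_index, :]))]

def entangling_energy(OmL, DL, Vrr=1e6):
    H1 = np.array([[0.0, OmL/2],
                   [OmL/2, -DL]])                        # {|a>, |r>}
    H2 = np.array([[0.0, OmL/np.sqrt(2), 0.0],
                   [OmL/np.sqrt(2), -DL, OmL/np.sqrt(2)],
                   [0.0, OmL/np.sqrt(2), -2*DL + Vrr]])  # {|aa>, |ar>_+, |rr>}
    return dressed_energy(H2) - 2 * dressed_energy(H1)

# Example (the parameters used later in Sec. III, in units of 2*pi MHz):
print(entangling_energy(10.0, -5.9))   # ~1.0, i.e., J ~ 2*pi x 1 MHz
```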
Critically, in the presence of dressing, when driving the microwave transition \(\left|11\right\rangle\rightarrow\left|1\widetilde{a}\right\rangle_{+}\) with Rabi frequency \(\tilde{\Omega}_{\mu w}\), the transition \(\left|1\widetilde{a}\right\rangle_{+}\rightarrow\left|\widetilde{a} \widetilde{a}\right\rangle\) will be blockaded when \(J\gg\tilde{\Omega}_{\mu w}\). This is the spin-flip blockade, as first observed in [1]. As seen in Fig. 1, the level structure of the dressed states mimics that of the optically driven ground-Rydberg system, with the dressed auxiliary state as a proxy for the Rydberg state and the spin-flip blockade replacing the Rydberg blockade. This allows us to map all gate protocols from the optical regime to the microwave regime.
While the essential physics is well understood assuming a perfect Rydberg blockade, this may not be the optimal operating point given the fundamental limit on the gate fidelity determined by the decay rate of the Rydberg state \(\Gamma_{r}\). More generally, when two interacting atoms participate in dressing, including some admixture of doubly-excited Rydberg states, the result is the dressed triplet,
\[\left|\widetilde{a}\widetilde{a}\right\rangle =\alpha\left|aa\right\rangle+\beta\left|ar\right\rangle_{+}+\gamma \left|rr\right\rangle,\] \[\left|\widetilde{a}\widetilde{r}\right\rangle_{+} =\alpha_{2}\left|ar\right\rangle_{+}+\beta_{2}\left|aa\right\rangle +\gamma_{2}\left|rr\right\rangle, \tag{2}\] \[\left|\widetilde{rr}\right\rangle =\alpha_{3}\left|rr\right\rangle+\beta_{3}\left|aa\right\rangle +\gamma_{3}\left|ar\right\rangle_{+},\]
with dressed energies \(E_{\widetilde{a}\widetilde{a}}\), \(E_{\widetilde{a}\widetilde{r}_{+}}\) and \(E_{\widetilde{rr}}\) respectively. In this case the Rabi frequency for the coupling \(\left|11\right\rangle\leftrightarrow\left|1\widetilde{a}\right\rangle_{+}\) is given by \(\sqrt{2}\tilde{\Omega}_{\mu w}=\sqrt{2}(\cos\frac{\theta}{2})\Omega_{\mu w}\) and for \(\left|1\widetilde{a}\right\rangle_{+}\leftrightarrow\left|\widetilde{a} \widetilde{a}\right\rangle\), \(\sqrt{2}\tilde{\Omega}_{\mu w}^{\prime}=\sqrt{2}(\alpha\cos\frac{\theta}{2}+ \beta\sin\frac{\theta}{2}/\sqrt{2})\Omega_{\mu w}\). Again, for appropriate microwave detuning we can neglect coupling of \(\left|1\widetilde{a}\right\rangle_{+}\) to \(\left|\widetilde{a}\widetilde{r}\right\rangle_{+}\) and \(\left|\widetilde{rr}\right\rangle\).
\[\Gamma_{\widetilde{a}} =\sin^{2}\frac{\theta}{2}\Gamma_{r} \tag{3}\] \[\Gamma_{\widetilde{a}\widetilde{a}} =(\left|\beta\right|^{2}+2|\gamma|^{2})\Gamma_{r} \tag{4}\]
where we assumed that the state \(\left|rr\right\rangle\) decays twice as fast as state \(\left|r\right\rangle\).
In summary, through Rydberg dressing the optical physics of the dipole blockade is mapped to the microwave spin-flip blockade, with \(\left|\tilde{a}\right\rangle\) playing the role of \(\left|r\right\rangle\) according to mapping:
\[\Omega_{\mathrm{L}} \rightarrow\tilde{\Omega}_{\mu w},\tilde{\Omega}_{\mu w}^{\prime} \tag{5}\] \[V_{rr} \to J \tag{6}\] \[\Gamma_{r} \rightarrow\Gamma_{\widetilde{a}} \tag{7}\] \[\Gamma_{rr} \rightarrow\Gamma_{\widetilde{a}\widetilde{a}} \tag{8}\]
One key difference is the ability to control the entangling energy, \(J\), by controlling the laser parameters \(\Omega_{\mathrm{L}},\Delta_{\mathrm{L}}\), and the well-defined nature of the state \(\left|\widetilde{a}\widetilde{a}\right\rangle\). This is in contrast to the optical regime, where the doubly-excited Rydberg spectrum can be complex and \(V_{rr}\) is strongly dependent on the interatomic distance. We will use this to our advantage below by operating in the moderate-blockade regime where \(J\approx\Omega_{\mu w}\).
## III Entangling gate protocol
In this section, we use the optical-to-microwave mapping described above as a vehicle for implementing two-qubit entangling gates. Previous work was based on adiabatic dressing [39; 40; 41], which is intrinsically robust to certain forms of noise. Here we demonstrate the versatility of Rydberg dressing and microwave control in its application to the most successful gate to date, the Levine-Pichler (LP) gate [12]. The LP-gate is also well suited to optimal control, as was theoretically studied in [33; 34] and recently demonstrated in [15; 35] based on optical excitations. Microwave control will open the door to additional robust protocols that can leverage well-developed technologies.
In the original LP-gate [12], a sequence of optical pulses with carefully chosen Rabi frequency, detuning, and relative phase is used to implement a CZ gate based on the phases accumulated by the computational basis states as rotations occur on the Bloch sphere. We focus here on its generalization based on quantum optimal control [33; 34]. Not only are these protocols faster and higher in fidelity, but they can also be made robust to some inhomogeneities. In addition, we can design waveforms beyond the perfect blockade regime, which can lead to further optimization.
In the case of cesium considered here, the qubit is encoded in the clock states \(\left|0\right\rangle\equiv\left|F=4,m_{F}=0\right\rangle\) and \(\left|1\right\rangle\equiv\left|F=3,m_{F}=0\right\rangle\), and a pulse maps \(\left|F=4,m_{F}=0\right\rangle\leftrightarrow\left|F=3,m_{F}=1\right\rangle\) to shelve all of the qubit computational states in the \(F=3\) manifold similar to [42]. The unpopulated state \(\left|F=4,m_{F}=0\right\rangle\) is then designated as the auxiliary state \(\left|a\right\rangle\), which can be dressed using a Rydberg laser without coupling any population in the computational states. This results in the creation of a level diagram as shown in Fig. 2. A microwave/Raman laser field is used to couple the qubit state \(\left|1\right\rangle\) to the dressed state \(\left|\tilde{a}\right\rangle\), with effective Rabi frequency \(\tilde{\Omega}_{\mu w}\), detuning \(\tilde{\Delta}_{\mu w}\) and phase \(\xi_{\mu w}\). We set \(\Omega_{\mu w}\) to be a constant and detuning \(\tilde{\Delta}_{\mu w}=0\) with respect to the dressed
state \(\ket{\tilde{a}}\) and modulate the phase \(\xi_{\mu w}(t)\) of the microwave field over a period of time \(\tau\) to implement a CZ gate.
As shown in Fig. 2d, the computational basis states \(\ket{k};k\equiv\{00,01,10,11\}\) do not couple to each other during the evolution. Thus we consider only diagonal gates, a process in which all the basis states evolve as \(\ket{k}\rightarrow\ket{\psi_{k}(t)}\) and the populations return to these initial states at the end of the pulse, \(t=\tau\). The net result is an accumulation of phase, \(\ket{k}\rightarrow\ket{\psi_{k}(\tau)}=e^{i\phi_{k}}\ket{k}\). If these phases satisfy the condition \(\phi_{11}-2\phi_{01}=\pm\pi\), the gate that is implemented is equivalent to a CZ-gate up to local \(e^{-i\phi_{01}\sigma_{z}/2}\) gates on both qubits. We will refer to this family of gates as "CZ gate" hereon for simplicity. Hence, for a unitary two-qubit gate that we implement, \(U\), we define the CZ gate fidelity as
\[\mathcal{F}=\left|1+e^{-i\phi_{01}}\langle 01|U|01\rangle+e^{-i\phi_{10}}\langle 10|U \rangle|10\rangle\right.\kern-18.0pt\left.+e^{-i\phi_{10}}\right.\]
\[\mathcal{F}=\left|1+e^{-i\phi_{01}}\langle 01|U|01\rangle+e^{-i\phi_{10}}\langle 10|U|10\rangle-e^{-i(\phi_{01}+\phi_{10})}\langle 11|U|11\rangle\right|^{2}/16. \tag{9}\]
Note that \(\phi_{00}=0\) and \(\phi_{01}=\phi_{10}\). Also note that because all the population begins and ends in the \(F=3\) hyperfine manifold, the dressing lasers can be turned on and off rapidly, without the need for adiabatic ramping.
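To make the bookkeeping concrete, here is a small helper (our own sketch; the variable names and sanity check are ours) that evaluates Eq. (9) for a numerically obtained diagonal unitary, reading the local phases off the realized gate itself:

```python
import numpy as np

# CZ-gate fidelity of Eq. (9) for a diagonal two-qubit unitary U
# (basis order |00>, |01>, |10>, |11>).
def cz_fidelity(U):
    u01, u10, u11 = U[1, 1], U[2, 2], U[3, 3]
    phi01, phi10 = np.angle(u01), np.angle(u10)
    amp = (1 + np.exp(-1j*phi01)*u01 + np.exp(-1j*phi10)*u10
             - np.exp(-1j*(phi01 + phi10))*u11)
    return np.abs(amp)**2 / 16

# Sanity check: any member of the CZ family, phi_11 = 2*phi_01 + pi, gives F = 1.
phi = 0.3
U = np.diag([1, np.exp(1j*phi), np.exp(1j*phi), np.exp(1j*(2*phi + np.pi))])
print(cz_fidelity(U))   # 1.0
```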
To implement a CZ gate, we design a control waveform to modulate the microwave phase \(\xi_{\mu w}(t)\). As a proof of principle, we choose the waveform to be \(N\) piecewise-constant pulses such that \(\xi_{\mu w}(t)=\xi_{i}\) if \(t\in[i\frac{\tau}{N},(i+1)\frac{\tau}{N})\). This waveform implements a unitary \(U(\vec{\xi})=\prod_{i}U(\xi_{i})\). Our objective is to find a \(\vec{\xi}\) such that \(U[\vec{\xi}]\) is locally equivalent to a CZ-gate unitary. We cast this as a minimization problem in which we minimize the cost function \(-\mathcal{F}[\vec{\xi}]\). The "Gradient Ascent Pulse Engineering" (GRAPE) algorithm helps us efficiently calculate the gradient \(-\nabla_{\vec{\xi}}\mathcal{F}\), which greatly speeds up minimization algorithms [43]. Using GRAPE, we can find a waveform that implements a CZ-gate with fidelity arbitrarily close to unity as long as the gate time \(\tau\) is larger
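The following is a minimal optimization sketch (ours, not the paper's implementation) in the dressed picture of Fig. 2d, assuming a perfect blockade. The dressed rates `Om`, `Omp`, the shift `J`, the gate time `tau`, and the segment count `N` are our own stand-ins, taken from the example worked out below; for brevity it relies on scipy's default finite-difference gradients rather than GRAPE's analytic ones.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# |01> block: resonant two-level system with dressed Rabi rate Om.
# |11> block: ladder {|11>, |1a~>_+, |a~a~>} whose top level is shifted by
# the entangling energy J. Rates in units of 2*pi MHz, times in microseconds.

Om, Omp, J = 0.85, 0.90, 1.0
tau, N = 1.3, 40
w = 2 * np.pi

def gate_amplitudes(xis):
    dt = tau / N
    U1 = np.eye(2, dtype=complex)
    U2 = np.eye(3, dtype=complex)
    for xi in xis:
        c = np.exp(1j * xi)                     # microwave phase factor
        H1 = w * np.array([[0, Om/2*c],
                           [Om/2*np.conj(c), 0]])
        H2 = w * np.array([[0, Om/np.sqrt(2)*c, 0],
                           [Om/np.sqrt(2)*np.conj(c), 0, Omp/np.sqrt(2)*c],
                           [0, Omp/np.sqrt(2)*np.conj(c), J]])
        U1 = expm(-1j * H1 * dt) @ U1
        U2 = expm(-1j * H2 * dt) @ U2
    return U1[0, 0], U2[0, 0]                   # <01|U|01>, <11|U|11>

def infidelity(xis):
    u01, u11 = gate_amplitudes(xis)
    phi01 = np.angle(u01)                       # phi_01 = phi_10 by symmetry
    amp = 1 + 2*np.exp(-1j*phi01)*u01 - np.exp(-2j*phi01)*u11   # Eq. (9)
    return 1 - np.abs(amp)**2 / 16

rng = np.random.default_rng(1)
res = minimize(infidelity, rng.uniform(-np.pi, np.pi, N), method="L-BFGS-B")
print("CZ infidelity:", res.fun)   # should approach 0 for tau above the QSL
```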
Figure 2: Level diagram for implementing entangling gates in the microwave regime. (a) The hyperfine states of the cesium electronic ground state, labeled \(\left|F,M_{F}\right\rangle\), are shown in the presence of a bias magnetic field, near the clock states with \(M_{F}=0\) in which a qubit is encoded. Before implementing an entangling gate, the logical state \(\left|0\right\rangle\) is shelved in \(\left|3,1\right\rangle\) through a microwave resonance. (b) \(\left|4,0\right\rangle\) is now an auxiliary state \(\left|a\right\rangle\), dressed with a Rydberg laser (Rabi frequency \(\Omega_{\mathrm{L}}\), detuning \(\Delta_{\mathrm{L}}\)). (c) The phase of a microwave/Raman field of the correct frequency and polarization, which couples \(\left|1\right\rangle\) and the dressed state \(\left|\tilde{a}\right\rangle\) while leaving \(\left|0\right\rangle\) untouched, is modulated to implement the gate. The microwave here is shown tuned to resonance with the dressed state, and the Rabi frequency, \(\tilde{\Omega}_{\mu w}(t)\), is modified from its bare value due to the admixture of \(\left|r\right\rangle\) in \(\left|\tilde{a}\right\rangle\). (d) Two-atom symmetric basis showing the coupling of the logical basis to the Rydberg-dressed states. The state \(\left|01\right\rangle(\left|10\right\rangle)\) couples to the dressed state \(\left|0\tilde{a}\right\rangle(\left|\tilde{a}0\right\rangle)\) with modified Rabi frequency \(\tilde{\Omega}_{\mu w}\). The two-atom state \(\left|11\right\rangle\) couples to the bright state \(\left|1\tilde{a}\right\rangle_{+}\) and it to \(\left|\widetilde{a}\widetilde{a}\right\rangle\) with modified Rabi frequencies \(\sqrt{2}\tilde{\Omega}_{\mu w},\sqrt{2}\tilde{\Omega}^{\prime}_{\mu w}\). The state \(\left|\widetilde{a}\widetilde{a}\right\rangle\) has a nonlinear light shift that we call the entangling energy, \(J\), which determines the strength of the atom-atom coupling.
than the so-called "quantum speed limit" (QSL), denoted by \(\tau_{*}\). We seek to choose parameters such that \(\tau_{*}\) is as small as possible, given physical constraints. We will show that this is achieved by going beyond the regime of a perfect spin-flip blockade, and choosing \(J\approx\Omega_{\mu w}\).
### Optimal dressing strength
In the absence of errors, one can reach arbitrary fidelity using optimal control. The fundamental source of error is decoherence due to the finite lifetime on the Rydberg state, \(1/\Gamma_{r}\), which is of the order of \(100\mu s\) for typical principal quantum numbers, \(n\sim 50\), including decay stimulated by blackbody photons. Because the branching ratio for the decay of population back to the computational subspace is small, we can approximate the master equation simply by a non-Hermitian effective Hamiltonian \(H_{\rm eff}=H-i\frac{\Gamma_{r}}{2}\sum\left|r\right\rangle\left\langle r\right|\), giving a decoherence limited fidelity \(\mathcal{F}_{r}\). Furthermore, we define the time spent in the Rydberg state as \(T_{r}=(T_{01}+T_{10}+T_{11})/4\), where \(T_{k}=\int_{0}^{\tau}\sum_{i=1,2}\left|\left\langle r_{i}|\psi_{k}(t)\right\rangle \right|^{2}\!dt\) and \(\left|r_{1}\right\rangle,\left|r_{2}\right\rangle\) are the Rydberg states of the two atoms. We shall call \(T_{r}\) Rydberg time hereon for brevity. These are related as \(\mathcal{F}_{r}\approx 1-\Gamma_{r}T_{r}\). \(T_{r}\) and \(\mathcal{F}_{r}\) thus act as figures of merit with which to compare various protocols.
In the protocol above, the detuning of the Rydberg laser affects the strength of the entangling energy, \(J\), as well as the decay rates \(\Gamma_{\bar{a}},\Gamma_{\overline{a}\overline{a}}\) and the effective Rabi frequencies \(\tilde{\Omega}_{\mu w},\tilde{\Omega}^{\prime}_{\mu w}\). In the context of adiabatic dressing gates, we have shown that the optimal operating point was to tune the Rydberg laser toward resonance. Far off resonance, the rapid decrease in \(\left|J\right|\) outweighed the decrease in decoherence [40]. For this diabatic optimal control protocol based on the spin-flip blockade, there are additional considerations: the cost of a smaller entangling energy for larger detunings can be offset by larger \(\tilde{\Omega}_{\mu w},\tilde{\Omega}^{\prime}_{\mu w}\) and smaller \(\Gamma_{\bar{a}},\Gamma_{\overline{a}\overline{a}}\). Below, we will show that if \(J_{\rm max}>\Omega_{\mu w}\), there is an optimal point and when the dressing is such that \(J\sim\Omega_{\mu w}\) we get the minimum \(T_{r}\). The waveforms that minimize \(T_{r}\) also generally minimize the total gate time \(\tau_{*}\). If \(J_{\rm max}<\Omega_{\mu w}\), which is the experimentally less likely scenario, resonant dressing remains optimal.
We demonstrate this optimal dressing for an example parameter set: \(\Omega_{\rm L}/2\pi=10\) MHz, \(\Omega_{\mu w}/2\pi=1\) MHz and a perfect Rydberg blockade, \(V_{rr}\approx\infty\). For resonant Rydberg dressing at \(\Delta_{\rm L}=0\), the value of \(\left|J\right|\) peaks at \(J_{\rm max}=(2-\sqrt{2})\Omega_{\rm L}/2\approx 0.29\Omega_{\rm L}\)[1] and the dressed state \(\left|\bar{a}\right\rangle=(\left|a\right\rangle+\left|r\right\rangle)\sqrt{2}\) has the strongest Rydberg character leading to the weakest effective microwave Rabi frequencies, \(\tilde{\Omega}_{\mu w}=\Omega_{\mu w}/\sqrt{2},\tilde{\Omega}^{\prime}_{\mu w }=\frac{1}{2}(1+\frac{1}{\sqrt{2}})\Omega_{\mu w}\), and largest effective decay rates \(\Gamma_{\bar{a}}=\Gamma_{\overline{a}\overline{a}}=\Gamma_{r}/2\). As we increase the laser detuning, reducing the dressing strength, we reduce the value of \(J\) along with the Rydberg character in the dressed state \(\left|\left\langle r\bar{a}\right\rangle\right|^{2}\), thus reducing the photon scattering rate and increasing \(\tilde{\Omega}_{\mu w}\). At various dressing strengths we plot the effective Rabi frequencies \(\tilde{\Omega}_{\mu w},\tilde{\Omega}^{\prime}_{\mu w}\), and decay rates \(\Gamma_{\bar{a}},\Gamma_{\overline{a}\overline{a}}\) in Fig. 3a,b.
We use GRAPE to find the QSL for each of these scenarios and plot the QSL gate time \(\tau_{*}\) and Rydberg time \(T_{r}\) in Fig. 3c,d. We can see the best operating point at \(\Delta_{\rm L}/2\pi=-5.9\) MHz, where \(J=\Omega_{\mu w}=2\pi\times 1\) MHz. This corresponds to a blockade ratio of \(J/\Omega_{\mu w}=1\). We find the gate time in this scenario to be \(\tau_{*}=1.3\mu s=1.3\times 2\pi/\Omega_{\mu w}\), which is approximately 25% faster than the resonant dressing scenario. The phase waveform \(\xi_{\mu w}(t)\) of this gate shown in Fig. 4a is similar to that found in [33] for optical control. The gate properties are given in Fig. 4b,c, which shows plots of the population in the various levels as a function of time. For the \(\left|11\right\rangle\) state, during the central part of the of waveform, a majority of the population is held is the doubly-excited dressed state \(\left|\widetilde{a}\overline{a}\right\rangle\). This leads to faster gates.
For these parameters, the dressed states are given by \(\left|\tilde{a}\right\rangle=0.85\left|a\right\rangle+0.52\left|r\right\rangle\) and \(\left|\widetilde{a}\overline{a}\right\rangle=0.81\left|aa\right\rangle+0.59 \left|ar\right\rangle_{+}\). This results in the effective Rabi rates \(\tilde{\Omega}_{\mu w}/2\pi\approx 0.85\) MHz, \(\tilde{\Omega}^{\prime}_{\mu w}/2\pi\approx 0.9\) MHz. The effective decay rates in this case are \(\Gamma_{\bar{a}}=0.27\Gamma_{r}\), \(\Gamma_{\overline{a}\overline{a}}=0.35\Gamma_{r}\). For a Rydberg lifetime of \(150\,\mu s\), we get a decay-limited-fidelity of \(99.92\%\). This is the best \(\mathcal{F}_{r}\) achievable in this protocol with our chosen Rabi frequencies as shown in Fig. 3d, due to a combination of fast gate time, moderately large effective Rabi frequencies and moderately small dressed decay rates.
This particular speedup in the moderate blockade regime can be understood from the foundations of Lie-algebraic control theory. Control in the Hilbert space occurs due to the mixing of noncommuting Hamiltonians whose effect on the dynamics is seen in the Magnus expansion through nested commutators [44]. Here the two noncommuting Hamiltonians are the microwave-driven Rabi oscillations and the nonlinear light shift. The speed limit is fastest when these effects are on the same order, \(J\approx\Omega_{\mu w}\). A similar effect is seen in the optical regime, when \(V_{rr}\approx\Omega_{\rm L}\) for the optical gates (see Appendix), but \(V_{rr}\) is not easily controlled and the doubly excited Rydberg state often leads to other inelastic processes. In contrast, \(J\) can be accurately controlled by tuning the Rydberg laser detuning, and has a soft-core dependence on distance [1; 38].
More generally, Fig. 5a shows a plot of the QSL as a function of the ratio of the laser and microwave Rabi frequencies \(\Omega_{\rm L}/\Omega_{\mu w}\) and laser detuning, \(\Delta_{\rm L}/\Omega_{\rm L}\). The valley running across the plot corresponds to the optimal dressing strength that minimizes the gate time. This valley reliably corresponds to dressing strengths at which \(J=\Omega_{\mu w}\) in the regime where \(J_{\rm max}>\Omega_{\mu w}\). If \(\Omega_{\rm L}<3.5\Omega_{\mu w}\implies J_{\rm max}<\Omega_{\mu w}\), we can never achieve \(J\sim\Omega_{\mu w}\). In this scenario, resonant dressing is the optimal, similar to [40], as seen in Fig. 5a. We look at the properties of these optimally dressed gates for different limits of \(\Omega_{\rm L}/\Omega_{\mu w}\).
In a realistic experimental scenario, we do not have access to unlimited field strengths. For the \({}^{133}\)Cs system under consideration, we are typically limited by the microwave/Raman Rabi frequency (\(\Omega_{\mu w}<J_{\rm max}\approx 0.3\Omega_{\rm L}\)). For the limiting case \(\Omega_{\rm L}\gg\Omega_{\mu w}\), the optimal dressing strength is achieved by detuning the laser such that \(J\approx\Omega_{\mu w}\). In this case, \(\Delta_{\rm L}\gg\Omega_{\rm L}\), the weak dressing regime (wdr), where [1]
\[J^{wdr}\approx\frac{\Omega_{\rm L}^{4}}{8\Delta_{\rm L}^{3}}. \tag{10}\]
Setting \(J^{\rm wdr}=\Omega_{\mu w}\), for weak dressing we find the optimal dressing detuning,
\[\Delta_{\rm L}^{\rm wdr}=\Omega_{\rm L}^{4/3}/(2\Omega_{\mu w}^{1/3}). \tag{11}\]
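These asymptotics are easy to check against the exact light shifts; a quick numerical sketch (ours), using the closed form for \(J\) from Sec. II with the sign choice appropriate to red detuning:

```python
import numpy as np

# Check of Eqs. (10)-(11): for |DL| >> OmL, J approaches OmL**4/(8*|DL|**3),
# and the detuning of Eq. (11) indeed yields J ~ Omega_uw.
def J_closed(Om, D):
    return 0.5 * (D - (np.sqrt(2*Om**2 + D**2) - 2*np.sqrt(Om**2 + D**2)))

OmL, Omuw = 10.0, 0.1                        # units of 2*pi MHz
D_wdr = -(OmL**(4/3)) / (2 * Omuw**(1/3))    # Eq. (11)
print(J_closed(OmL, D_wdr), "target:", Omuw)             # agree to leading order
print(J_closed(OmL, -50.0), OmL**4 / (8 * abs(50.0)**3)) # Eq. (10) asymptote
```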
Also in this regime, \(\tilde{\Omega}_{\mu w},\tilde{\Omega}_{\mu w}^{\prime}\rightarrow\Omega_{\mu w}\) (up to first order in \(\Omega_{\rm L}/\Delta_{\rm L}\)) and \(\Gamma_{\tilde{a}}\rightarrow\frac{\Omega_{\rm L}^{2}}{4\Delta_{\rm L}^{2}} \Gamma_{r},\Gamma_{\tilde{a}\widetilde{a}}\rightarrow\frac{\Omega_{\rm L}^{2 }}{2\Delta_{\rm L}^{2}}\Gamma_{r}\). By finding the CZ gate waveform for this scenario, we find that
\[T_{r}^{\rm wdr}=\frac{7\Gamma_{\tilde{a}}}{\Omega_{\mu w}\Gamma_{r}}=\frac{3. 5}{\Omega_{\rm L}^{2/3}\Omega_{\mu w}^{1/3}}. \tag{12}\]
When we are not limited by the microwave Rabi frequency (\(\Omega_{\mu w}>J_{\rm max}\)), and \(V_{rr}\gg\Omega_{\rm L}\), the best regime of operation
Figure 4: Simulation of a CZ-gate implemented through microwave phase modulation driving Rydberg-dressed hyperfine states. Parameters: \(\Omega_{\rm L}/2\pi=10\,\mathrm{MHz},\Delta_{\rm L}/2\pi=-5.9\,\mathrm{MHz}, \Omega_{\mu w}/2\pi=1\,\mathrm{MHz}\). For these dressing parameters, \(J=2\pi\times 0.999\,\mathrm{MHz}\approx\Omega_{\mu w}\), corresponding to the optimal point in Fig. 3. (a) The phase waveform of the microwave field, \(\xi_{\mu w}(t)\), found using GRAPE optimization. (b) The populations in the various levels during the evolution of a state initially in the logical \(|01\rangle\) state (the same for \(|10\rangle\)), when only one atom can be excited by the microwave. (c) The populations during the evolution of a state initially in the logical \(|11\rangle\) state, when two atoms can be excited by the microwave. We work here beyond the regime of the perfect spin-flip blockade. In the middle portion of the gate, most of the population gets pumped into \(|\overline{a}\overline{a}\rangle\) where it accumulates the entangling phase due to the nonlinear light shift, \(J\).
Figure 3: The effects of the Rydberg-laser detuning on the entangling gate implemented for \(\Omega_{\rm L}/2\pi=10\,\mathrm{MHz}\), \(\Omega_{\mu w}/2\pi=1\,\mathrm{MHz}\). (a) The behavior of \(J\) as a function of Rydberg detuning \(\Delta_{\rm L}\). \(J_{\rm max}=0.29\Omega_{\rm L}\) at resonant dressing and falls rapidly with the magnitude of the detuning. Around \(\Delta_{\rm L}/2\pi=-5.9\,\mathrm{MHz}\), we satisfy the condition \(J\approx\Omega_{\mu w}\). (b) As the magnitude of laser detuning is increased, the effective Rabi frequencies, \(\tilde{\Omega}_{\mu w}\) and \(\tilde{\Omega}_{\mu w}^{\prime}\) increase and the effective decay rates, \(\Gamma_{\tilde{a}}\) and \(\Gamma_{\tilde{a}\tilde{a}}\), decrease due to decreased admixture of the Rydberg state in the dressed states \(|\tilde{a}\rangle\,,|\widetilde{a}\overline{a}\rangle\). (c) Using optimal control, a CZ gate waveform can be found at all laser detunings, as shown by the unit theoretical gate fidelity orange dots. The Rydberg decay limited fidelity is shown in blue dots, with the highest occurring around the point where \(J\approx\Omega_{\mu w}\), at \(\mathcal{F}_{r}=0.9992\) for \(1/\Gamma_{r}=150\,\mu s\). (d) The gate time is also shortest around \(J\approx\Omega_{\mu w}\) where \(\tau\approx 1.26\,\mu\mathrm{s}\). This speedup is due to a combination of larger effective Rabi frequencies compared to resonant dressing and the moderate blockade speedup effect shown in
is on Rydberg resonance (\(\Delta_{\rm L}=0\)), the strong dressing regime (sdr), similar to [19; 40]. The gate time \(\tau\) is also minimized for this scenario. For this resonant dressing, \(\tilde{\Omega}_{\mu w}=\Omega_{\mu w}/\sqrt{2}\), \(\tilde{\Omega}^{\prime}_{\mu w}=\frac{1}{2}(1+1/\sqrt{2})\Omega_{\mu w}\), and \(\Gamma_{\tilde{a}}=\Gamma_{\widetilde{a}\widetilde{a}}=\frac{\Gamma_{r}}{2}\). We consider the limit of very large \(\Omega_{\mu w}\), i.e., \(\Omega_{\mu w}\gg\Omega_{\rm L}\), for which the gate time asymptotically reaches \(\pi/J_{\rm max}\). Hence, we find
\[T_{r}^{\rm sdr}\sim\frac{\pi}{J_{\rm max}}\frac{(2\Gamma_{\tilde{a}}+\Gamma_{ \widetilde{a}\widetilde{a}})}{4\Gamma_{r}}\approx 1.66\times\frac{\pi}{ \Omega_{\rm L}}. \tag{13}\]
In the wdr, the time spent in the Rydberg state \(\sim\Omega_{\rm L}^{-2/3}\Omega_{\mu w}^{-1/3}\) is generally longer than that achievable by an optically driven gate at the same UV Rabi frequency (\(\sim\Omega_{\rm L}^{-1}\)). Currently, the state-of-the-art gate fidelities are far from the limit set by the Rydberg lifetime, owing to experimental noise. Hence, protecting against the inhomogeneous noise sources is more important in the near future. In the following section, we explain how the major sources of noise in our gate boil down to uncertainty in \(J\), and how this can be mitigated.
## IV Robust control
The fidelity of entangling gates is fundamentally limited by the finite lifetime of the Rydberg state. Another important source of error is thermal fluctuations in atomic motion. This leads to inhomogeneities in the atomic positions and momenta which affect their coupling to external fields and the atom-atom interactions. Deleterious effects include Doppler shifts and uncertainty in the Rabi frequencies due to position uncertainty of an atom across a tightly focused beam profile. The microwave-driven gates have some intrinsic robustness to such noise, and using the tools of robust optimal control one can further reduce the residual effects of inhomogeneities.
To study this we include atomic motion with the following Hamiltonian associated with two atoms that can be excited to the Rydberg state [39],
\[H =H_{0}+H_{\rm rel}+H_{\rm com}. \tag{14}\] \[H_{0} =-\Delta\,|B\rangle\!\langle B|-(2\Delta-V_{rr})\,|rr\rangle\! \langle rr|+\frac{\Omega_{L}^{+}}{2}(|aa\rangle\!\langle B|+|B\rangle\!\langle rr|+{\rm h.c.}). \tag{15}\] \[H_{\rm rel} =-\Delta\,|D\rangle\!\langle D|+\frac{\Omega_{L}^{-}}{2}(|aa \rangle\!\langle D|+|D\rangle\!\langle rr|+{\rm h.c.})-\frac{k_{L}p_{\rm rel}}{m}(|B\rangle\!\langle D|+|D\rangle\!\langle B|).\] \[H_{\rm com} =\frac{k_{L}P_{\rm com}}{2m}(|D\rangle\!\langle D|+|B\rangle\! \langle B|+2\,|rr\rangle\!\langle rr|). \tag{16}\]
Here \(H_{0}\) is the ideal Hamiltonian, in which the two-atom ground state \(|aa\rangle\) is coupled to the singly-excited "bright state,"
\[|B\rangle=(|ra\rangle+|ar\rangle)/\sqrt{2} \tag{17}\]
via Rabi frequency
\[\Omega_{\rm L}^{+}\equiv(\Omega_{\rm L1}+\Omega_{\rm L2})/\sqrt{2}. \tag{18}\]
where \(\Omega_{\rm L1},\Omega_{\rm L2}\) are the Rydberg Rabi frequencies of the two atoms.
Due to unequal Rabi frequencies, the so-called "dark state" participates in the dynamics
\[|D\rangle=(|ra\rangle-|ar\rangle)/\sqrt{2}. \tag{19}\]
which couples to \(|aa\rangle\) with Rabi frequency
\[\Omega_{\rm L}^{-}\equiv(\Omega_{\rm L1}-\Omega_{\rm L2})/\sqrt{2}. \tag{20}\]
In addition, the bright and dark states are coupled to one another by the relative momentum (\(p_{\rm rel}=(p_{1}-p_{2})/2\)), as this affects the relative phase of the laser seen by the two atoms. Here, \(k_{L}\) is the laser wavenumber, \(m\) is the mass of an atom, and \(p_{1}\) and \(p_{2}\) are the momenta of the two atoms in the direction of laser propagation.
Finally, in \(H_{\rm com}\), thermal fluctuations lead to a random Doppler shift due to the center of mass momentum (\(P_{\rm com}=p_{1}+p_{2}\)) of the two atoms. We note that for single-atom excitations driven from the logical states \(|01\rangle\) and \(|10\rangle\), we will have Doppler shifts and potential uncertainty in the laser Rabi frequency. These effects are not included in the Hamiltonian above as they play a minor role in gate infidelity.
The net effect of thermal motion on dressing and the microwave-driven gates is thus as follows. Firstly, Doppler shifts at microwave frequencies are negligible due to the much smaller microwave wavenumber \(k_{\mu w}=2\pi/\lambda_{\mu w}\). The major source of Doppler noise is the dressing laser itself, with a wavenumber \(k_{L}=2\pi/\lambda_{\rm L}\) which is roughly \(10^{5}\) times larger. This affects the system in two ways: (1) the dressed states themselves are modified; (2) the light shifts and associated entangling energies are modified. The dressed states \(|0\widetilde{a}\rangle\), \(|\widetilde{a}0\rangle\), and \(|1\widetilde{a}\rangle_{+}\) receive a one-atom light shift, only slightly modified due to the Doppler shift when the excitation to the Rydberg state is sufficiently far from resonance, \(\Delta_{\rm L}\gg P_{\rm com}k_{L}/(2m)\). The dressed state \(|\widetilde{a}\widetilde{a}\rangle\) receives a two-atom light shift. This arises as \(|aa\rangle\) is coupled to the bright and dark states, \(|B\rangle\) and \(|D\rangle\), which are themselves coupled together through the relative motion as shown in Fig. 6a; here we assume a perfect Rydberg blockade. This leads to a slightly different dressed state \(|\widetilde{a}\widetilde{a}\rangle\) as compared to the ideal case, which is now an admixture of \(|aa\rangle\), \(|B\rangle\), and \(|D\rangle\) with amplitudes that depend on the random motion of the particles. The strength of the entangling interaction \(J\) is modified as well. The change in the dressed state also modifies the dressed Rabi frequencies \(\tilde{\Omega}_{\mu w},\tilde{\Omega}^{\prime}_{\mu w}\), but this effect is much weaker than the change in \(J\), as seen in Fig. 3. The dominant noise effect due to thermal motion is thus the uncertainty in the value of \(J\).
Given how the dressed states are modified by inhomogeneities, we can determine their effect on gate fidelity. As discussed in Sec. II, there are two possible regimes of operation. Under the condition of a strong spin-flip blockade, when \(J\gg\tilde{\Omega}^{\prime}_{\mu w}\), the actual value of \(J\) is unimportant, and its small uncertainty is negligible. The residual effect of inhomogeneities is on the one-atom light shifts. This, again, is negligible when the laser detuning is large compared to the Doppler width. In this regime, the microwave-driven dressed gates are intrinsically robust to small thermal fluctuations. Note, however, that this is not necessarily the regime of the highest fidelity, due to the finite Rydberg lifetime. In Sec. II we found that the fastest gates with the smallest integrated Rydberg time occur when \(J\approx\tilde{\Omega}^{\prime}_{\mu w}\). In that regime, we must consider the uncertainty in \(J\). To do so we can employ the methods of "robust optimal control."
Figure 6: Effects of Rydberg inhomogeneities and mitigating them via robust control methods. (a) Level diagram for two-atom dressing in the presence of atomic motion and imperfect Rabi frequencies. The asymmetric dressing of the two atoms results in unequal Rydberg-laser Rabi frequencies, \(\Omega_{\rm L1},\Omega_{\rm L2}\), and coupling from \(|aa\rangle\) to both the bright state \(|B\rangle=|ar\rangle_{+}\) and the dark state \(|D\rangle=|ar\rangle_{-}\), with Rabi frequencies \(\Omega_{\rm L}^{+}=\frac{\Omega_{\rm L1}+\Omega_{\rm L2}}{\sqrt{2}}\) and \(\Omega_{\rm L}^{-}=\frac{\Omega_{\rm L1}-\Omega_{\rm L2}}{\sqrt{2}}\), respectively. The center-of-mass motion results in a Doppler-shifted detuning of \(\frac{k_{L}P_{\rm com}}{2m}\) per atom in the Rydberg state. The relative momentum results in a coupling between the bright and dark states with Rabi frequency \(\frac{k_{L}p_{\rm rel}}{m}\). Because the dressed state is an eigenstate of the total Hamiltonian in this subspace, the net effect of all motional couplings is uncertainty in the light shifts and the dressed microwave Rabi frequencies. The dominant effect is the uncertainty in the entangling energy, \(J\). (b) Sensitivity to uncertainty in \(\Omega_{\rm L1}\) for the optimal gate (orange) of Fig. 4 and the robust gate (blue) found by using robust optimal control. The dashed lines correspond to the ideal fidelity as a function of inhomogeneity, assuming unitary evolution, while the solid lines include decoherence due to Rydberg decay. We choose \(\Omega_{\rm L1}/\Omega_{\rm L}=\Omega_{\rm L2}/\Omega_{\rm L}\) (\(\Omega_{\rm L}\) is the ideal Rabi frequency). The robust waveform achieves significant resilience to uncertainty in \(\Omega_{\rm L}\) at the price of longer gate times. This implies a trade-off with Rydberg decay. The regime in which robust control is useful is seen for large inhomogeneities, where the solid blue curve is above the orange curve. (c) Sensitivity to uncertainty in \(\Delta_{\rm L1}\), with \(\Delta_{\rm L2}=0\). The fidelity as a function of \(\Delta_{\rm L1}/\Delta_{\rm L}\), using the same waveform that optimizes robustness against inhomogeneity in \(\Omega_{\rm L1}\), shows significant robustness. Typically, a gate optimized to be robust against one noise source has increased sensitivity to other noises. In this case, however, since both noises are effectively the same uncertainty in the dressed state \(|\widetilde{a}\widetilde{a}\rangle\), we are able to achieve robustness against both major noises.
We consider the method of inhomogeneous control originally introduced by Khaneja [45; 46]. The key idea is to optimize the average fidelity \(\bar{\mathcal{F}}=\int d\vec{\lambda}\,P(\vec{\lambda})\mathcal{F}(\vec{\lambda})\), where \(\mathcal{F}(\vec{\lambda})\) is the fidelity at a fixed set of the uncertain parameters and \(P(\vec{\lambda})\) is their probability distribution. We again use GRAPE to optimize the modified cost function \(\bar{\mathcal{F}}=\{2\mathcal{F}(\Omega_{1}=\Omega_{2}=\Omega_{\mathrm{L}})+\mathcal{F}(\Omega_{1}=\Omega_{2}=1.02\Omega_{\mathrm{L}})+\mathcal{F}(\Omega_{1}=\Omega_{2}=0.98\Omega_{\mathrm{L}})\}/4\), where \(\Omega_{\mathrm{L}}\) is the ideal Rydberg Rabi frequency. We are effectively optimizing the fidelity in a \(\pm 2\%\) window of the ideal Rabi frequency, but with a greater weight at the ideal parameters. We expect to get better robustness as we increase the gate time, but this generally increases the sensitivity to other noise sources. In optical control protocols, waveforms that are robust against both Rabi-frequency uncertainty and Doppler noise require even longer gate times, as shown in [36; 37]. In the dressed-state case we find that pulses that are robust against Rabi-frequency noise are also robust against Doppler detuning noise, as shown in Fig. 6b,c. We attribute this to the fact that, effectively, both noises manifest as uncertainty in \(J\): we are actually engineering robustness against this one quantity, which manifests as reduced sensitivity to uncertainty in both \(\Omega_{\mathrm{L}}\) and \(\Delta_{\mathrm{L}}\).
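In pseudocode terms, the modified cost is just a weighted average of fidelities evaluated at sampled Rabi frequencies. The sketch below illustrates the construction; `fidelity` is a hypothetical stand-in for a full gate simulation (e.g., integrating the dynamics under the phase waveform), not a library call:

```python
import numpy as np

# Minimal sketch of the inhomogeneous-control cost described above:
# a weighted average of fidelities at Rabi-frequency scales {1.00, 1.02, 0.98},
# with double weight at the ideal value.
def robust_cost(waveform, fidelity):
    samples = [(1.00, 2.0), (1.02, 1.0), (0.98, 1.0)]  # (scale, weight)
    avg = sum(w * fidelity(waveform, s) for s, w in samples) / 4.0
    return 1.0 - avg  # GRAPE descends on the infidelity

# Toy usage: a fake fidelity peaked at the ideal Rabi frequency.
toy_fidelity = lambda wf, s: float(np.exp(-((s - 1.0) / 0.05) ** 2))
print(robust_cost(waveform=None, fidelity=toy_fidelity))
```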
## V Summary and Outlook
We proposed a new method to implement entangling gates between neutral atoms based on the spin-flip blockade. This has the potential advantage that all coherent control can be performed robustly with microwave fields. We showed, for the specific case of alkali atoms such as cesium, that by optically dressing an auxiliary hyperfine state with Rydberg character, the level structure associated with the optically-excited Rydberg blockade can be mapped to the microwave-excited spin-flip blockade. In this way, any protocol that could be implemented with optical control can be mapped into the microwave regime.
We studied the specific example of the spin-flip version of the generalized Levine-Pichler gate, implemented through modulation of the microwave phase, whose waveform is found by quantum optimal control. In contrast to the optical regime, where the doubly-excited Rydberg potential \(V_{rr}\) is not well defined at short interatomic distances, in the microwave regime we consider going beyond the perfect blockade regime. This is profitable as the doubly spin-excited state is well defined, and the blockade strength is a much more stable function of interatomic motion due to its soft-core nature. We find that the optimal quantum speed limit occurs when the entangling energy \(J\) is approximately equal to the driving microwave Rabi frequency \(\Omega_{\mu w}\). As \(J\) is controllable by laser detuning, we find that for a laser Rabi frequency \(\Omega_{L}\gg\Omega_{\mu w}\), we can achieve high fidelity in the off-resonance regime. For \(\Omega_{\mu w}/2\pi=1\) MHz, we can achieve a fidelity \(>99.9\%\), limited solely by the finite lifetime of the Rydberg state. While higher fidelity is in principle possible with optically controlled gates, in practice the fidelity is far from the theoretical maximum, limited instead by technical noise sources. The robustness of microwave control may allow us to reach closer to the fundamental limits.
Of particular importance is noise arising from atomic thermal motion, which gives rise to Doppler shifts and breaks the addressing symmetry of the ideal protocol. The laser detuning and amplitude are then uncertain. We show that the dominant net effect of thermal motion boils down to uncertainty in one parameter, the entangling energy \(J\). This allows us to use the method of inhomogeneous control to engineer robustness against both laser amplitude and phase noise simultaneously, with only a modest increase in gate time.
These conclusions extend to other forms of noise, such as fluctuations in laser intensity (Rabi frequency), phase (detuning), and uncontrolled electric fields that lead to random Stark shifts (detuning). Such noise leads to inhomogeneous broadening on the timescale \(T_{2}^{*}\). When \(T_{2}^{*}\ll 1/\Gamma_{r}\), off-resonance dressing, \(\Delta_{L}\gg 1/T_{2}^{*}\), enables higher-fidelity gates so long as there is sufficient laser power that we are not too far off resonance, e.g., \(\Delta_{L}\sim\Omega_{L}\), and \(J\) remains large compared to the dressed-state decay rate.
While we have focused here on the microwave version of the generalized LP gate, in principle any protocol developed in the optical regime can be mapped into the microwave regime. For example, the recently studied protocol involving adiabatic rapid passage beyond the perfect blockade [47] can be mapped to a microwave/Raman adiabatic passage. Such gates might achieve even higher fidelity as they can take full advantage of the entangling energy \(J\). Overall, the scheme proposed here should enable exploration of different tradeoffs in speed, noise, and robustness in order to obtain the highest-fidelity entangling gates.
## VI Acknowledgements
We thank Matt Chow, Bethany Little, Anupam Mitra, Sven Jandura and Bharath Hebbe Madhusudhana for helpful discussions. This work is supported by Sandia Laboratory Directed Research and Development (LDRD) and Quantum Systems through Entangled Science and Engineering (Q-SEnSE) funded by the National Science Foundation Quantum Leap Challenge Institute (NSF QLCI). Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
## Appendix
In our studies of the microwave-driven gates on dressed states, we have seen that optimal gates can be designed when \(J/\Omega_{\mu w}\approx 1\). This maps to \(V_{rr}/\Omega_{r}\approx 1\) in the optical regime. While this moderate-blockade optical protocol is not experimentally common, due to the difficulty in controlling \(V_{rr}\), analyzing it further provides valuable insight into our gate protocol. In Fig. 7 we show the QSL gate time as a function of the blockade ratio for the optical protocols. We use GRAPE to find \(\xi_{L}(t)\) laser-phase waveforms that implement CZ gates similar to [33, 34] at various \(V_{rr}/\Omega_{\mathrm{L}}\). Perhaps surprisingly, we see that a blockade ratio as low as \(V_{rr}/\Omega_{\mathrm{L}}\approx 1\) does not make the gate much slower; in fact, the gates are slightly faster at this moderate blockade ratio. At large blockade ratios, the state \(\ket{rr}\) is essentially absent from the dynamics, which explains the saturation of the gate time at \(7.6/\Omega_{\mathrm{L}}\); these gates were studied in [33, 34]. At the other extreme, for very weak blockade ratios, the entangling energy is simply too small, and a longer gate time is needed to compensate. At very small \(V_{rr}/\Omega_{\mathrm{L}}\), the gates approach a protocol similar to [39], where the populations are driven from the computational states \(\ket{01},\ket{11}\) into the Rydberg states \(\ket{0r},\ket{1r}_{+}\) and kept there until they accumulate the right amount of dynamical phase, \(\phi_{11}-2\phi_{01}\approx(E_{rr}-2E_{r})\tau=\pi\). The gate time asymptotically approaches \(\pi/V_{rr}\) in this limit (similar to the weak-blockade limit in [47]). Near the optimal point of \(V_{rr}\approx\Omega_{\mathrm{L}}\), the entirety of the nonlinear phase accumulated, \(\phi_{11}-2\phi_{01}\), is geometrical in nature, with no contribution from dynamical phases. This explains the fast gate time, as such geometrical gates tend to follow geodesic paths in the Hilbert space, resulting in optimal gate times [49]. This counter-intuitive result explains why choosing a dressing regime where \(J\approx\Omega_{\mu w}\) gives the optimal gate times, which is borne out by the results in Fig. 3c.
2309.09053 | Well-posedness and optimal control for a viscous Cahn-Hilliard-Oono system with dynamic boundary conditions | In this paper we consider a nonlinear system of PDEs coupling the viscous Cahn-Hilliard-Oono equation with dynamic boundary conditions enjoying a similar structure on the boundary. After proving well-posedness of the corresponding initial boundary value problem, we study an associated optimal control problem related to a tracking-type cost functional, proving first-order necessary conditions of optimality. The controls enter the system in the form of a distributed and a boundary source. We can account for general potentials in the bulk and in the boundary part under the common assumption that the boundary potential is dominant with respect to the bulk one. For example, the regular quartic potential as well as the logarithmic potential can be considered in our analysis. | Gianni Gilardi, Elisabetta Rocca, Andrea Signori | 2023-09-16T17:24:16Z | http://arxiv.org/abs/2309.09053v1 |

# Well-posedness and optimal control for a viscous Cahn-Hilliard-Oono system with dynamic boundary conditions
###### Abstract
In this paper we consider a nonlinear system of PDEs coupling the viscous Cahn-Hilliard-Oono equation with dynamic boundary conditions enjoying a similar structure on the boundary. After proving well-posedness of the corresponding initial boundary value problem, we study an associated optimal control problem related to a tracking-type cost functional, proving first-order necessary conditions of optimality. The controls enter the system in the form of a distributed and a boundary source. We can account for general potentials in the bulk and in the boundary part under the common assumption that the boundary potential is dominant with respect to the bulk one. For example, the regular quartic potential as well as the logarithmic potential can be considered in our analysis.
**Keywords:** Cahn-Hilliard-Oono equation, dynamic boundary conditions, bulk-surface mass dynamics, well-posedness, optimal control.
**AMS (MOS) Subject Classification:** 35K55, 35K61, 49J20, 49J50, 49K20.
## 1 Introduction
In this paper, we study the following nonlinear PDE system, referred to in the literature as the viscous Cahn-Hilliard-Oono (CHO) system (cf. [43, 44, 45]), which is of great interest in the study of pattern formation in phase-separating materials
\[\partial_{t}\varphi-\Delta\mu=\gamma(u-\varphi)\quad\text{and}\quad\tau \partial_{t}\varphi-\Delta\varphi+F^{\prime}(\varphi)=\mu\quad\text{in }Q:=\Omega\times(0,T). \tag{1.1}\]
Here the unknowns are the order parameter \(\varphi\) and the chemical potential \(\mu\), while \(\Omega\) is the spatial domain where the evolution takes place and \(T\) is a prescribed final time. In the above equations, \(\gamma\) is a positive constant, \(u\) is a source/sink term that later on will play the role of (distributed) control, \(\tau\) is a positive viscosity coefficient, and \(F^{\prime}\) is the derivative of a double-well potential \(F\).
Typical and important examples of the latter are the so-called classical _regular potential_ and the _logarithmic_ double-well potential. They are given, in this order, by
\[F_{\text{reg}}(r) :=\frac{1}{4}\,(r^{2}-1)^{2}\,,\quad r\in\mathbb{R}, \tag{1.2}\] \[F_{\text{log}}(r) :=((1+r)\ln(1+r)+(1-r)\ln(1-r))-c_{1}r^{2}\,,\quad r\in(-1,1), \tag{1.3}\]
where \(c_{1}>1\) is such that \(F_{\text{log}}\) is nonconvex (then, \(F_{\text{log}}\) is extended outside the physical interval \((-1,1)\) in the usual way, i.e., by continuity at \(\pm 1\) and as \(+\infty\) elsewhere, in order to preserve semicontinuity). Another example is the following _double obstacle potential_, where \(c_{2}>0\),
\[F_{2\text{obs}}(r):=-c_{2}r^{2}\quad\text{if }|r|\leq 1\quad\text{and}\quad F _{2\text{obs}}(r):=+\infty\quad\text{if }|r|>1. \tag{1.4}\]
In cases like (1.4), one has to split \(F\) into a nondifferentiable convex part (the indicator function of \([-1,1]\) in the present example) and a smooth perturbation (typically quadratic). Accordingly, one has to replace the derivative of the convex part of the potential by its subdifferential and interpret the second identity in (1.1) as a differential inclusion.
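For instance, to fix ideas: for the logarithmic potential (1.3) one can take
\[\widehat{\beta}(r)=(1+r)\ln(1+r)+(1-r)\ln(1-r)\quad\text{and}\quad\widehat{\pi}(r)=-c_{1}r^{2},\quad\text{so that}\quad\beta(r)=\widehat{\beta}^{\,\prime}(r)=\ln\frac{1+r}{1-r}\,;\]
for the double obstacle potential (1.4) one takes \(\widehat{\beta}=I_{[-1,1]}\), the indicator function of \([-1,1]\), and \(\widehat{\pi}(r)=-c_{2}r^{2}\), and the subdifferential \(\beta=\partial I_{[-1,1]}\) is the graph given by \(\beta(r)=\{0\}\) for \(|r|<1\), \(\beta(1)=[0,+\infty)\), \(\beta(-1)=(-\infty,0]\), and \(\beta(r)=\emptyset\) for \(|r|>1\).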
System (1.1) can be seen as a Cahn-Hilliard (CH) equation with reaction and turns out to be useful in several applications such as biological models [32], inpainting algorithms [3], and polymers [1]. The main feature that makes the CHO equation different from the standard CH one is the fact that in (1.1) the conservation of mass is not guaranteed even in presence of "standard" Neumann homogeneous boundary conditions for the chemical potential \(\mu\). The reaction term in (1.1) takes into account long-range interactions and, of course, more general terms could be considered, in particular, nonlinearities (cf. for example, [38], where the nonlocal CH equation with nonlinear reaction term has been analyzed). Furthermore, we note that interest in including reaction terms in CH-type systems is particularly growing due to the development of diffuse interface models of tumor growth that couple CH-type systems with nutrient diffusion and other equations. In fact, in this class of models, the parameter \(\varphi\) represents the concentration of the tumor phase, and therefore, in this case, it is of particular interest not to have mass conservation, but to observe a possible growth or decrease of the tumor mass due to cell proliferation or death. In this regard, one can see, e.g., [7, 9, 10, 18, 22, 25, 26, 27, 33, 40, 46, 47, 48]
and the references therein for results on well-posedness, long-time behavior of solutions, asymptotic analyses and optimal control.
The solvability and the existence of global and exponential attractors for the CHO equation with the regular potential (1.2) were studied in [39], while in [30] the authors investigated the case of the singular logarithmic potential (1.3), establishing some regularization properties of the unique solution in finite time. In both two and three dimensions, the existence of a global attractor has been proved. Moreover, the authors of [30], exploiting the regularization effects they established in two dimensions and the associated strict separation property, proved the existence of an exponential attractor and the convergence to a single equilibrium.
In the present work, we couple (1.1) with the following dynamic boundary condition for both \(\mu\) and \(\varphi\). Namely, we complement system (1.1) with
\[\partial_{t}\varphi_{\Gamma}+\partial_{\boldsymbol{\nu}}\mu-\Delta_{\Gamma} \mu_{\Gamma}=\gamma(u^{\Gamma}-\varphi_{\Gamma}),\quad\tau\partial_{t}\varphi _{\Gamma}+\partial_{\boldsymbol{\nu}}\varphi-\Delta_{\Gamma}\varphi_{\Gamma} +F^{\prime}_{\Gamma}(\varphi_{\Gamma})=\mu_{\Gamma}\quad\text{on }\Sigma, \tag{1.5}\]
with \(\Sigma:=\partial\Omega\times(0,T)\) and \(\Gamma:=\partial\Omega\), where \(\mu_{\Gamma}\) and \(\varphi_{\Gamma}\) stand for the traces of \(\mu\) and \(\varphi\), respectively, \(\Delta_{\Gamma}\) is the Laplace-Beltrami operator on the boundary, \(u^{\Gamma}\) is a boundary source/sink term that will later play the role of (boundary) control on the boundary \(\Gamma:=\partial\Omega\) of the domain \(\Omega\), \(F^{\prime}_{\Gamma}\) is the derivative of another double-well shaped potential \(F_{\Gamma}\), and \(\partial_{\boldsymbol{\nu}}\) denotes the outward normal derivative to \(\Gamma\). Let us notice that here we consider the same viscosity coefficient \(\tau\) in the bulk (cf. (1.1)) and on the boundary (cf. (1.5)) for simplicity. However, up to redefining the scalar product and the functional spaces, different values of \(\tau\) could be encompassed in the analysis.
This is the first time that the CHO equation is coupled with dynamic boundary conditions. Indeed, in most works, system (1.1) is endowed with homogeneous Neumann boundary conditions for both \(\mu\) and \(\varphi\), that is,
\[\partial_{\boldsymbol{\nu}}\mu=\partial_{\boldsymbol{\nu}}\varphi=0\quad \text{on }\Sigma.\]
These have, in order, a physical connection with mass conservation when no source terms are considered (i.e., when \(\gamma=0\)), and with the contact angle problem, as the no-flux condition for \(\varphi\) entails that the interface is orthogonal to the boundary.
In the last decades, physicists have introduced the so-called dynamic boundary conditions, in the sense that the kinetics, i.e., \(\partial_{t}\varphi\) appears explicitly in the boundary conditions, in order to account for the interaction of the components with the walls for a confined system (see, e.g., [20, 21] and also [31, 37] where numerical simulations are performed). Notice that for \(\tau>0\), we obtain the viscous CH equation introduced in [42] (see [4], [19], and [41] for the mathematical analysis of the viscous CH equation with classical boundary conditions and [24] for dynamic boundary conditions and regular potentials). The interest in the analysis of CH-type equations coupled with dynamic boundary conditions has grown more and more in the recent literature. In this regard, we can quote [28] and [12], where both the viscous and the nonviscous CH equations with possibly singular potentials, combined with these kinds of boundary conditions have been investigated (see also [29] for the longtime behavior). In the former paper it is assumed that the bulk potential is dominant with respect to the boundary one, while the assumption in the opposite direction is made in the latter, as well as in the present paper. Furthermore, we have to mention [8, 13, 17, 23, 24, 32, 34, 36, 41] and references therein, where problems related
to the CH equation combined with different types of dynamic boundary conditions have been studied.
Regarding the main scope of the manuscript, since we are interested in the optimal control problem related to the previous PDE system coupled with dynamic boundary conditions, it is crucial to study the dependence of the solution on the variables \(u\) and \(u^{\Gamma}\) that will play the role of controls in the second part of the work. Indeed, a major part of the paper is devoted to the well-posedness of the system and the continuous dependence of the solutions with respect to \(u\) and \(u^{\Gamma}\). These results can be proved under quite general assumptions on the potentials \(F\) and \(F_{\Gamma}\), requiring that the boundary potential \(F_{\Gamma}\) is dominant with respect to the bulk one \(F\) (see also [12] for similar assumptions). The case of the smooth (1.2) and the singular (1.3) potentials can be included in the analysis, while the double obstacle potential (1.4) can not be considered.
Up to our knowledge, the only other reference dealing with optimal control for a CHO equation is [11] where, however, the standard Neumann homogeneous boundary conditions were considered for both \(\varphi\) and \(\mu\). Hence, well-posedness cannot be deduced from [11] and the preexisting literature. Being our final aim the application to the aforementioned control problem, we need first to prove well-posedness, regularity of solutions (including the separation property for \(\varphi\) in case of singular potentials), as well as the Frechet differentiability of the control to-state-mapping. These are indeed preliminary results which will let us to prove first-order necessary optimality conditions for the minimization problem associated to the following tracking-type cost functional
\[\mathcal{J}(u,u^{\Gamma};\varphi,\varphi_{\Gamma}):=\frac{\alpha _{1}}{2}\int_{Q}|\varphi-\phi^{Q}|^{2}+\frac{\alpha_{2}}{2}\int_{\Sigma}| \varphi_{\Gamma}-\phi^{\Sigma}|^{2}\] \[\quad+\frac{\alpha_{3}}{2}\int_{\Omega}|\varphi(T)-\phi^{\Omega}|^{2} +\frac{\alpha_{4}}{2}\int_{\Gamma}|\varphi_{\Gamma}(T)-\phi^{\Gamma}|^{2}+ \frac{\alpha_{5}}{2}\int_{Q}|u|^{2}+\frac{\alpha_{6}}{2}\int_{\Sigma}|u^{ \Gamma}|^{2},\]
where \((u,u^{\Gamma})\) belongs to a proper set of admissible controls (cf. (2.32)) and \(\varphi\) and \(\varphi_{\Gamma}\) are the components of the solution to problem (1.1) and (1.5) (with suitable initial conditions) corresponding to the pair \((u,u^{\Gamma})\). Above, \(\phi^{Q}\), \(\phi^{\Sigma}\), \(\phi^{\Omega}\), and \(\phi^{\Gamma}\) are given target functions, defined in \(Q\), on \(\Sigma\), in \(\Omega\), and on \(\Gamma\), respectively, whereas \(\alpha_{i}\), \(i=1,\ldots,6\), are nonnegative constants.
After showing the existence of (at least) one optimal strategy, we investigate the first-order optimality conditions. In this direction, the main difficulty is to prove the Frechet differentiability of the control-to-state operator between suitable functional spaces, which allows us to establish first-order necessary optimality conditions in terms of the solution to a proper adjoint problem. The proof of solvability and uniqueness of solutions for the resulting adjoint system is one of the main difficulties of the paper, due to the fact that we are dealing with a CH-type equation with an inhomogeneous mass source (the CHO equation) as well as with dynamic boundary conditions of the same type.
The paper is organized as follows. In the next section, we list our assumptions and notation and state our results. The proofs of those regarding the well-posedness of the problem, the regularity and the continuous dependence of its solution on the variables \(u\) and \(u^{\Gamma}\) are given in Sections 3 and 4. Finally, Section 5 is devoted to the analysis of the control problem and the corresponding derivation of the first-order necessary optimality conditions.
## 2 Statement of the problem and results
In this section, we state precise assumptions and notations and present our results. First of all, the set \(\Omega\subset\mathbb{R}^{3}\) is assumed to be bounded, connected and smooth (the lower dimensional cases can be treated in a similar way). As in the Introduction, \(\partial_{\boldsymbol{\nu}}\) and \(\Delta_{\Gamma}\) stand for the outward normal derivative on \(\Gamma:=\partial\Omega\) and the Laplace-Beltrami operator, respectively. Furthermore, we denote by \(\nabla_{\Gamma}\) the surface gradient and write \(|\Omega|\) and \(|\Gamma|\) for the volume of \(\Omega\) and the area of \(\Gamma\), respectively. Next, if \(X\) is a Banach space, \(\|\,\cdot\,\|_{X}\) denotes its norm, with the only exception for the norms in the \(L^{\infty}\) spaces, which are denoted by \(\|\,\cdot\,\|_{\infty}\,\). Moreover, the symbol \(\|\cdot\,\|_{X}\) will also stand for the norm in \(X^{3}\). Similarly, when no confusion can arise, we simply write, e.g., \(L^{2}(0,T;X)\) in place of \(L^{2}(0,T;X^{3})\). Moreover, for every Banach space \(X\), the symbols \(X^{*}\) and \(\langle\,\cdot\,,\,\cdot\,\rangle_{X}\) denote the dual space of \(X\) and the dual pairing between \(X^{*}\) and \(X\), respectively. Furthermore, we introduce the shorthand
\[H:=L^{2}(\Omega)\,,\quad V:=H^{1}(\Omega)\quad\text{and}\quad W:= H^{2}(\Omega), \tag{2.1}\] \[H_{\Gamma}:=L^{2}(\Gamma)\,,\quad V_{\Gamma}:=H^{1}(\Gamma)\quad \text{and}\quad W_{\Gamma}:=H^{2}(\Gamma),\] (2.2) \[\mathcal{H}:=H\times H_{\Gamma}\,,\quad\mathcal{V}:=\{(v,v_{ \Gamma})\in V\times V_{\Gamma}:\ v_{\Gamma}=v_{|\Gamma}\}\] \[\text{and}\quad\mathcal{W}:=\big{(}W\times W_{\Gamma}\big{)} \cap\mathcal{V}\,. \tag{2.3}\]
Finally, we define
\[\text{mean}(z,z^{\Gamma})=\frac{\int_{\Omega}z+\int_{\Gamma}z^{\Gamma}}{| \Omega|+|\Gamma|}\quad\text{for }(z,z^{\Gamma})\in\mathcal{H} \tag{2.4}\]
and term it extended mean value of \((z,z^{\Gamma})\).
To help the reader, we adopt the subscript \(\Gamma\) in pairs of type \((v,v_{\Gamma})\) only when the second component is the trace of the first one, while we use superscripts in the opposite case (as in (2.4)).
Now, we list our assumptions. As for the structure of our system, we postulate that
\[\tau\ \text{and}\ \gamma\ \text{are positive real numbers} \tag{2.5}\]
\[F,F_{\Gamma}:\mathbb{R}\to(-\infty,+\infty]\quad\text{admit the decompositions}\quad F=\widehat{\beta}+\widehat{\pi}\quad\text{and}\quad F_{\Gamma}=\widehat{\beta}_{\Gamma}+\widehat{\pi}_{\Gamma},\quad\text{where} \tag{2.6}\]
\[\widehat{\beta},\,\widehat{\beta}_{\Gamma}:\mathbb{R}\to[0,+\infty]\quad\text{are convex and l.s.c. with}\quad\widehat{\beta}(0)=\widehat{\beta}_{\Gamma}(0)=0 \tag{2.7}\]
\[\widehat{\pi},\,\widehat{\pi}_{\Gamma}:\mathbb{R}\to\mathbb{R}\quad\text{are of class }C^{2}\text{ with Lipschitz continuous derivatives.} \tag{2.8}\]
We set, for convenience,
\[\beta:=\partial\widehat{\beta}\,,\quad\beta_{\Gamma}:=\partial\widehat{\beta }_{\Gamma}\,,\quad\pi:=\widehat{\pi}^{\prime}\quad\text{and}\quad\pi_{ \Gamma}:=\widehat{\pi}^{\prime}_{\Gamma} \tag{2.9}\]
and require the compatibility condition
\[D(\beta_{\Gamma})\subseteq D(\beta)\quad\text{and}\quad|\beta^{ \circ}(r)|\leq C^{*}\big{(}|\beta_{\Gamma}^{\circ}(r)|+1\big{)}\] \[\quad\text{for some $C^{*}>0$ and every $r\in D(\beta_{\Gamma})$}\,. \tag{2.10}\]
In the above lines, the symbol \(\partial\) indicates the subdifferential operator: we refer to [5] for more details. Here, the symbols \(D(\beta)\) and \(D(\beta_{\Gamma})\) denote the domains of \(\beta\) and \(\beta_{\Gamma}\)
respectively. More generally, we use the notation \(D(\sigma)\) for every maximal monotone graph \(\sigma\) in \(\mathbb{R}\times\mathbb{R}\). Moreover, for \(r\in D(\sigma)\), \(\sigma^{\circ}(r)\) stands for the element of \(\sigma(r)\) having minimum modulus. Finally, we still term \(\sigma\) the maximal monotone operators induced on \(L^{2}\) spaces.
**Remark 2.1**.: Condition (2.10) prescribes that the boundary potential is dominant compared to the one in the bulk. This condition has been assumed in several papers (see [11] among others). A condition in the opposite direction has instead been postulated in [28].
For the data, we make the assumptions listed below. They involve the constant \(M\) that is related to the optimal control problem we state later on. At the present stage, one can think that \(M\) is just a given constant. We assume that
\[u\in L^{\infty}(Q),\quad u^{\Gamma}\in L^{\infty}(\Sigma)\quad \text{with}\quad\|u\|_{\infty}\leq M\quad\text{and}\quad\|u^{\Gamma}\|_{ \infty}\leq M \tag{2.11}\] \[(\varphi_{0}\,,\,\varphi_{0|\Gamma})\in\mathcal{V}\,,\quad \widehat{\beta}(\varphi_{0})\in L^{1}(\Omega)\quad\text{and}\quad\widehat{ \beta}_{\Gamma}(\varphi_{0|\Gamma})\in L^{1}(\Gamma) \tag{2.12}\] \[\text{if}\quad m_{0}:=\text{mean}(\varphi_{0},\varphi_{0|\Gamma} )\quad\text{and}\quad\rho:=M\quad\text{then}\] \[\quad-m_{0}^{-}-\rho\ \text{ and }\ m_{0}^{+}+\rho\ \text{ belong to }\ \text{int}\,D(\beta_{\Gamma}) \tag{2.13}\]
where \(m_{0}^{-}\) and \(m_{0}^{+}\) denote the negative and positive part of \(m_{0}\), respectively.
Let us now come to our notion of solution. It corresponds to the weak form of the problem presented in the Introduction that one deduces this way: given an arbitrary \((v,v_{\Gamma})\in\mathcal{V}\), one formally multiplies the equations in (1.1) by \(v\) and those in (1.5) by \(v_{\Gamma}\) and integrates by parts in space. Then, the normal derivatives \(\partial_{\boldsymbol{\nu}}\mu\) and \(\partial_{\boldsymbol{\nu}}\varphi\) simplify. However, although a lower level of regularity would be sufficient to make the definition meaningful, we require that the solution is smoother. Thus, we look for a six-tuple \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma},\xi,\xi^{\Gamma})\) (or a triplet of pairs) that satisfies the regularity requirements
\[(\mu,\mu_{\Gamma})\in L^{2}(0,T;\mathcal{W}) \tag{2.14}\] \[(\varphi,\varphi_{\Gamma})\in H^{1}(0,T;\mathcal{H})\cap L^{ \infty}(0,T;\mathcal{V})\cap L^{2}(0,T;\mathcal{W})\] (2.15) \[(\xi,\xi^{\Gamma})\in L^{2}(0,T;\mathcal{H}) \tag{2.16}\]
and solves
\[\int_{\Omega}\partial_{t}\varphi\,v+\int_{\Gamma}\partial_{t} \varphi_{\Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\mu\cdot\nabla v+\int_{\Gamma }\nabla_{\Gamma}\mu_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\gamma\int_{\Omega}(u-\varphi)v+\gamma\int_{\Gamma}(u^{\Gamma}- \varphi_{\Gamma})v_{\Gamma}\] \[\quad\text{a.e. in }(0,T)\text{ and for every }(v,v_{\Gamma})\in \mathcal{V}, \tag{2.17}\] \[\tau\int_{\Omega}\partial_{t}\varphi\,v+\tau\int_{\Gamma} \partial_{t}\varphi_{\Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\varphi\cdot \nabla v+\int_{\Gamma}\nabla_{\Gamma}\varphi_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[\quad+\int_{\Omega}\bigl{(}\xi+\pi(\varphi)\bigr{)}v+\int_{\Gamma }\bigl{(}\xi^{\Gamma}+\pi_{\Gamma}(\varphi_{\Gamma})\bigr{)}v_{\Gamma}=\int_ {\Omega}\mu v+\int_{\Gamma}\mu_{\Gamma}v_{\Gamma}\] \[\quad\text{a.e. in }(0,T)\text{ and for every }(v,v_{\Gamma})\in \mathcal{V},\] (2.18) \[\xi\in\beta(\varphi)\quad\text{a.e. in }Q\quad\text{and}\quad\xi^{ \Gamma}\in\beta_{\Gamma}(\varphi_{\Gamma})\quad\text{a.e. on }\Sigma\] (2.19) \[\varphi(0)=\varphi_{0}\quad\text{a.e. in }\Omega\,. \tag{2.20}\]
We did not explicitly assume that \(\varphi_{\Gamma}(0)=\varphi_{0|\Gamma}\) a.e. on \(\Gamma\) since this condition is clearly implied by (2.20).
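To spell out the formal computation described above for the first equations: multiplying the first equation in (1.1) by \(v\) and integrating by parts in \(\Omega\) gives
\[\int_{\Omega}\partial_{t}\varphi\,v+\int_{\Omega}\nabla\mu\cdot\nabla v-\int_{\Gamma}\partial_{\boldsymbol{\nu}}\mu\,v_{\Gamma}=\gamma\int_{\Omega}(u-\varphi)v,\]
while multiplying the first equation in (1.5) by \(v_{\Gamma}\) and integrating by parts on the closed surface \(\Gamma\) gives
\[\int_{\Gamma}\partial_{t}\varphi_{\Gamma}\,v_{\Gamma}+\int_{\Gamma}\partial_{\boldsymbol{\nu}}\mu\,v_{\Gamma}+\int_{\Gamma}\nabla_{\Gamma}\mu_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}=\gamma\int_{\Gamma}(u^{\Gamma}-\varphi_{\Gamma})v_{\Gamma}\,;\]
summing the two identities cancels the normal derivative \(\partial_{\boldsymbol{\nu}}\mu\) and yields (2.17), and the same computation applied to the second equations produces (2.18).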
**Remark 2.2**.: One can show that every solution to (2.17)-(2.20) in the sense specified above actually solves the problem in a strong form, i.e., equations (1.1) and (1.5) are satisfied with \(F^{\prime}(\varphi)\) and \(F^{\prime}_{\Gamma}(\varphi_{\Gamma})\) replaced by \(\xi+\pi(\varphi)\) and \(\xi^{\Gamma}+\pi_{\Gamma}(\varphi_{\Gamma})\), respectively. This could be proved by suitably applying the forthcoming Lemma 2.9.
The first results of ours regard the existence of a solution and a stability estimate.
**Theorem 2.3**.: _Assume (2.5)-(2.10) on the structure of the system and (2.11)-(2.13) on the data. Then, problem (2.17)-(2.20) has at least one solution \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma},\xi,\xi^{\Gamma})\) satisfying the regularity requirements (2.14)-(2.16) as well as the stability estimate_
\[\|(\mu,\mu_{\Gamma})\|_{L^{2}(0,T;\mathcal{W})}+\|(\varphi,\varphi_{\Gamma})\| _{H^{1}(0,T;\mathcal{H})\cap L^{\infty}(0,T;\mathcal{V})\cap L^{2}(0,T; \mathcal{W})}+\|(\xi,\xi^{\Gamma})\|_{L^{2}(0,T;\mathcal{H})}\leq K_{1} \tag{2.21}\]
_with a positive constant \(K_{1}\) that depends only on the structure of the system, \(\Omega\), \(T\), the constant \(M\) appearing in (2.11) and the initial datum \(\varphi_{0}\). In particular, \(K_{1}\) is independent of the pair \((u,u^{\Gamma})\)._
Under further regularity assumptions on the data, we can prove the existence of a more regular solution.
**Theorem 2.4**.: _Let assumptions (2.5)-(2.10) on the structure hold and, in addition to (2.11)-(2.13) for the data, suppose that \((u,u^{\Gamma})\) and \(\varphi_{0}\) satisfy for some positive constant \(M^{\prime}\)_
\[(u,u^{\Gamma})\in H^{1}(0,T;\mathcal{H})\quad\text{with}\quad \max\{\|\partial_{t}u\|_{L^{2}(0,T;H)},\|\partial_{t}u^{\Gamma}\|_{L^{2}(0,T;H _{\Gamma})}\}\leq M^{\prime} \tag{2.22}\] \[(\varphi_{0},\varphi_{0|\Gamma})\in\mathcal{W}\quad\text{and}\quad (\beta^{\circ}(\varphi_{0}),\beta^{\circ}_{\Gamma}(\varphi_{0|\Gamma}))\in \mathcal{H}. \tag{2.23}\]
_Then, problem (2.17)-(2.20) has at least one solution \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma},\xi,\xi^{\Gamma})\) that also satisfies_
\[(\mu,\mu_{\Gamma})\in L^{\infty}(0,T;\mathcal{W})\,,\quad( \varphi,\varphi_{\Gamma})\in W^{1,\infty}(0,T;\mathcal{H})\cap H^{1}(0,T; \mathcal{V})\cap L^{\infty}(0,T;\mathcal{W})\] \[\text{and}\quad(\xi,\xi^{\Gamma})\in L^{\infty}(0,T;\mathcal{H}) \tag{2.24}\] \[\|(\mu,\mu_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{W})}+\|(\varphi, \varphi_{\Gamma})\|_{W^{1,\infty}(0,T;\mathcal{H})\cap H^{1}(0,T;\mathcal{V}) \cap L^{\infty}(0,T;\mathcal{W})}\] \[\quad+\|(\xi,\xi^{\Gamma})\|_{L^{\infty}(0,T;\mathcal{H})}\leq K _{2} \tag{2.25}\]
_with a positive constant \(K_{2}\) that depends only on the structure of the system, \(\Omega\), \(T\), the constants \(M\) and \(M^{\prime}\), and the initial datum \(\varphi_{0}\). In particular, as a consequence of the continuous embedding \(\mathcal{W}\hookrightarrow L^{\infty}(\Omega)\times L^{\infty}(\Gamma)\), the first four components of the solution are uniformly bounded and their \(L^{\infty}\) norms are estimated by a constant with the same dependencies as \(K_{2}\)._
In the case of everywhere defined potentials (e.g., (1.2)), the boundedness of the component \(\varphi\) already given by Theorem 2.4 is satisfactory. If instead potentials of logarithmic type are considered, one essentially deals with smooth potentials only if the values of the component \(\varphi\) are far away from the end-points of the domain. Our last results regarding the properties of the solutions cover this scenario. However, regular potentials are also included in our analysis. Indeed, we require that
\[\text{either}\ \ D:=D(\beta)=D(\beta_{\Gamma})=\mathbb{R}\ \ \text{or}\ \ D:=D(\beta)=D(\beta_{\Gamma})=(-1,1) \tag{2.26}\] \[\beta,\,\beta_{\Gamma}:D\to\mathbb{R}\ \ \text{are single-valued and locally Lipschitz continuous}. \tag{2.27}\]
Due to the last assumption, we can ignore the components \(\xi\) and \(\xi^{\Gamma}\) from the six-tuple \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma},\xi,\xi^{\Gamma})\) and consider just the quadruplet \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) when speaking of a solution, as \(\xi=\beta(\varphi)\) and \(\xi^{\Gamma}=\beta_{\Gamma}(\varphi_{\Gamma})\).
**Theorem 2.5**.: _In addition to assumptions (2.5)-(2.10) on the structure, assume that (2.26)-(2.27) are satisfied. Moreover, assume that the data satisfy (2.11)-(2.13) and (2.22)-(2.23) as well as_
\[\varphi_{0}\in D_{0}\quad\text{a.e.\ in\ }\Omega\quad\text{for some compact subset $D_{0}\subset D$}. \tag{2.28}\]
_Then, there exists a solution \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) to problem (2.17)-(2.20) satisfying_
\[\varphi_{*}\leq\varphi\leq\varphi^{*}\quad\text{a.e.\ in\ }Q\quad\text{and} \quad\varphi_{*}\leq\varphi_{\Gamma}\leq\varphi^{*}\quad\text{a.e.\ on\ }\Sigma \tag{2.29}\]
_for some constants \(\varphi_{*},\varphi^{*}\in D\) that depend only on the structure of the system, \(\Omega\), \(T\), the constants \(M\) and \(M^{\prime}\) and the initial datum \(\varphi_{0}\)._
The next result regards the uniqueness of the solution and its continuous dependence on the pair \((u,u^{\Gamma})\). We are not able to prove uniqueness in the general case, unfortunately. However, the result we state below covers both the case of classical and logarithmic potentials.
**Theorem 2.6**.: _Under the same assumptions of Theorem 2.5 on the structure and the data, the solution to problem (2.17)-(2.20) satisfying the theses of Theorems 2.4 and 2.5 is unique. Moreover, if \((u_{i},u_{\Gamma,i})\), \(i=1,2\), are two choices of \((u,u^{\Gamma})\) and \((\mu_{i},\mu_{\Gamma,i},\varphi_{i},\varphi_{\Gamma,i})\) are the corresponding solutions, the following estimate_
\[\|(\mu_{1},\mu_{\Gamma,1})-(\mu_{2},\mu_{\Gamma,2})\|_{L^{2}(0,T; \mathcal{V})}+\|(\varphi_{1},\varphi_{\Gamma,1})-(\varphi_{2},\varphi_{\Gamma,2})\|_{H^{1}(0,T;\mathcal{V})\cap L^{\infty}(0,T;\mathcal{V})}\] \[\leq K_{3}\,\|(u_{1},u_{\Gamma,1})-(u_{2},u_{\Gamma,2})\|_{L^{2}( 0,T;\mathcal{V})} \tag{2.30}\]
_holds true with a constant \(K_{3}\) that depends only on the structure of the system, \(\Omega\), \(T\), the constants \(M\) and \(M^{\prime}\) and the initial datum \(\varphi_{0}\)._
Once the well-posedness of the problem and some regularity of its solution are established, one can deal with a corresponding control problem. Even though a slightly more general cost functional could be considered (see the forthcoming Remark 5.3), the problem we aim at studying is the following:
Minimize the cost functional
\[\mathcal{J}(u,u^{\Gamma};\varphi,\varphi_{\Gamma}):=\frac{\alpha_{1}}{2}\int_{Q}|\varphi-\phi^{Q}|^{2}+\frac{\alpha_{2}}{2}\int_{\Sigma}|\varphi_{\Gamma}-\phi^{\Sigma}|^{2}+\frac{\alpha_{3}}{2}\int_{\Omega}|\varphi(T)-\phi^{\Omega}|^{2}+\frac{\alpha_{4}}{2}\int_{\Gamma}|\varphi_{\Gamma}(T)-\phi^{\Gamma}|^{2}+\frac{\alpha_{5}}{2}\int_{Q}|u|^{2}+\frac{\alpha_{6}}{2}\int_{\Sigma}|u^{\Gamma}|^{2} \tag{2.31}\]
subject to the constraints
\[i)\quad(u,u^{\Gamma})\ \text{belongs to the set of admissible controls}\] \[\mathcal{U}_{ad}:=\big{\{}(u,u^{\Gamma})\in H^{1}(0,T;\mathcal{H}):\ u_{\min}\leq u\leq u_{\max}\ \text{a.e. in }Q,\ u_{\min}^{\Gamma}\leq u^{\Gamma}\leq u_{\max}^{\Gamma}\ \text{a.e. on }\Sigma,\] \[\qquad\|\partial_{t}u\|_{L^{2}(0,T;H)}\leq M^{\prime}\ \text{and}\ \|\partial_{t}u^{\Gamma}\|_{L^{2}(0,T;H_{\Gamma})}\leq M^{\prime}\big{\}} \tag{2.32}\]
\[ii)\quad(\varphi,\varphi_{\Gamma})\ \text{are the components of the solution to problem (2.17)-(2.20)}\] \[\qquad\text{corresponding to the pair }(u,u^{\Gamma}). \tag{2.33}\]
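For intuition about how (2.31) is evaluated in practice, here is a minimal discrete sketch (the uniform time-major arrays, the plain Riemann-sum quadrature, and the cell measures `dV`, `dS`, `dt` are our own illustrative assumptions, not part of the analysis):

```python
import numpy as np

# Minimal discrete sketch of the tracking cost (2.31).
# phi: shape (nt, n_bulk); phi_G: shape (nt, n_bdry); u, uG likewise.
def cost(phi, phi_G, u, uG, targets, alphas, dV, dS, dt):
    phiQ, phiS, phiO, phiGam = targets   # phi^Q, phi^Sigma, phi^Omega, phi^Gamma
    a1, a2, a3, a4, a5, a6 = alphas
    J  = 0.5 * a1 * np.sum((phi   - phiQ) ** 2) * dV * dt   # tracking in Q
    J += 0.5 * a2 * np.sum((phi_G - phiS) ** 2) * dS * dt   # tracking on Sigma
    J += 0.5 * a3 * np.sum((phi[-1]   - phiO)   ** 2) * dV  # final time, in Omega
    J += 0.5 * a4 * np.sum((phi_G[-1] - phiGam) ** 2) * dS  # final time, on Gamma
    J += 0.5 * a5 * np.sum(u  ** 2) * dV * dt               # distributed control
    J += 0.5 * a6 * np.sum(uG ** 2) * dS * dt               # boundary control
    return J
```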
For the quantities appearing in (2.31) and (2.32), we require that
\[\alpha_{i}\;\;(i=1,\ldots,6)\;\;\text{are nonnegative constants} \tag{2.34}\] \[\phi^{Q}\in L^{2}(Q),\quad\phi^{\Sigma}\in L^{2}(\Sigma),\quad\phi ^{\Omega}\in L^{2}(\Omega)\quad\text{and}\quad\phi^{\Gamma}\in L^{2}(\Gamma) \tag{2.35}\] \[u_{\min},\,u_{\max}\in L^{\infty}(Q)\quad\text{and}\quad u_{ \min}^{\Gamma},\,u_{\max}^{\Gamma}\in L^{\infty}(\Sigma) \tag{2.36}\] \[\text{condition (2.13) is satisfied with}\quad M:=\max\{\|u_{\min}\|_{\infty}\,,\,\|u_{\max}\|_{\infty}\,,\,\|u_{ \min}^{\Gamma}\|_{\infty}\,,\,\|u_{\max}^{\Gamma}\|_{\infty}\} \tag{2.37}\] \[M^{\prime}\ \text{is a positive constant}. \tag{2.38}\]
Finally, we assume that
\[\mathcal{U}_{ad}\mbox{ is nonempty}. \tag{2.39}\]
**Remark 2.7**.: Notice that (2.39) implies that \(u_{\min}\leq u_{\max}\) a.e. in \(Q\) and \(u_{\min}^{\Gamma}\leq u_{\max}^{\Gamma}\) a.e. on \(\Sigma\). On the contrary, these conditions do not imply (2.39), because of the restriction on the time derivatives. We also observe that the special case \(u_{\min}=u_{\max}\) is allowed provided that (2.39) still holds. In this case, the only free variable in the pair \((u,u^{\Gamma})\) is \(u^{\Gamma}\), and problem (2.31)-(2.33) becomes a boundary control problem. Accordingly, the bulk integral in the necessary condition of optimality we describe below disappears and just the boundary integral survives. Similarly, we can consider just the distributed control problem with the control acting only in the bulk.
Thanks to the well-posedness and regularity results stated above, we can assume that problem (2.17)-(2.20) is well-posed and that its solution is smooth and satisfies the proper uniform bounds as well as the separation property. As for the latter, we recall that the main assumptions we have made on the structure were (2.26)-(2.27). However, to develop the complete control theory, we need more regularity on the potentials. Therefore, we reinforce (2.27) by requiring that
\[F,\,F_{\Gamma}:D\to\mathbb{R}\;\;\mbox{are functions of class $C^{3}$}. \tag{2.40}\]
Notice that this condition is still met by the regular and logarithmic potentials (1.2) and (1.3). Here is our first result, in which it is understood that all the assumptions we have listed are satisfied.
**Theorem 2.8**.: _The control problem (2.31)-(2.33) has at least one solution \((u_{*},u_{*}^{\Gamma})\)._
Our last result consists in finding necessary conditions for optimality. We will prove the following: if \((u_{*},u_{*}^{\Gamma})\) is an optimal control and \((\mu^{*},\mu_{\Gamma}^{*},\varphi^{*},\varphi_{\Gamma}^{*})\) is the corresponding state, then, we have that
\[\int_{Q}(\gamma p+\alpha_{5}u_{*})(u-u_{*})+\int_{\Sigma}(\gamma p_{\Gamma}+ \alpha_{6}u_{*}^{\Gamma})(u^{\Gamma}-u_{*}^{\Gamma})\geq 0\quad\mbox{for every $(u,u^{\Gamma})\in \mathcal{U}_{ad}$}\]
where \(p\) and \(p_{\Gamma}\) are the components of the solution \((p,p_{\Gamma},q,q_{\Gamma})\) to a proper adjoint problem which is discussed in detail in the last section. Here, we only anticipate that a strong form of it is the following backward in time problem, \(p_{\Gamma}\) and \(q_{\Gamma}\) being the traces of \(p\) and
\(q\) on the boundary:
\[-\partial_{t}(p+\tau q)-\Delta q+F^{\prime\prime}(\varphi^{*})\,q+ \gamma p=\alpha_{1}(\varphi^{*}-\phi^{Q})\] \[\quad\text{and}\quad-\Delta p=q\quad\text{in }Q\] \[-\partial_{t}(p_{\Gamma}+\tau q_{\Gamma})+\partial_{\nu}q-\Delta_ {\Gamma}q_{\Gamma}+F^{\prime\prime}_{\Gamma}(\varphi^{*}_{\Gamma})\,q_{\Gamma} +\gamma p_{\Gamma}=\alpha_{2}(\varphi^{*}_{\Gamma}-\phi^{\Sigma})\] \[\quad\text{and}\quad\partial_{\nu}p-\Delta_{\Gamma}p_{\Gamma}=q_{ \Gamma}\quad\text{on }\Sigma\] \[(p+\tau q)(T)=\alpha_{3}(\varphi^{*}(T)-\phi^{\Omega})\quad\text{ in }\Omega\] \[\quad\text{and}\quad(p_{\Gamma}+\tau q_{\Gamma})(T)=\alpha_{4}( \varphi^{*}_{\Gamma}(T)-\phi^{\Gamma})\quad\text{on }\Gamma\]
where \(\varphi^{*}\) and \(\varphi^{*}_{\Gamma}\) are the components of the solution \((\mu^{*},\mu^{*}_{\Gamma},\varphi^{*},\varphi^{*}_{\Gamma})\) corresponding to the optimal control \((u_{*},u^{\Gamma}_{*})\). However, this is completely formal without specific assumptions on the ingredients of the cost functional.
The rest of the paper is organized as follows. We continue the present section by fixing some notations and recalling some tools. The next Section 3 is devoted to the proof of our existence and regularity results, whereas continuous dependence and uniqueness are proved in Section 4. Finally, the control problem is discussed in the last section.
Throughout the paper, we will repeatedly use the Young inequality
\[a\,b\leq\delta\,a^{2}+\frac{1}{4\delta}\,b^{2}\quad\text{for all }a,b\in \mathbb{R}\text{ and }\delta>0 \tag{2.41}\]
as well as the Holder and Schwarz inequalities. Moreover, we employ the abbreviations
\[Q_{t}:=\Omega\times(0,t)\quad\text{and}\quad\Sigma_{t}:=\Gamma\times(0,t)\quad \text{for }t\in(0,T]. \tag{2.42}\]
Next, we introduce some tools related to the notion of generalized mean value given by (2.4). We observe that
\[\int_{\Omega}(v-\text{mean}(v,v_{\Gamma}))+\int_{\Gamma}(v_{\Gamma}-\text{ mean}(v,v_{\Gamma}))=0\quad\text{for every }(v,v_{\Gamma})\in\mathcal{V}. \tag{2.43}\]
It then follows that
\[\mathcal{V}=\mathcal{V}_{0}\oplus\text{span}\{(1,1)\} \tag{2.44}\]
where
\[\mathcal{V}_{0}:=\{(v,v_{\Gamma})\in\mathcal{V}:\ \text{mean}(v,v_{\Gamma})=0\}. \tag{2.45}\]
Of course, the components of the pair \((1,1)\) in (2.44) are the constant functions \(1\) on \(\Omega\) and \(\Gamma\), respectively. More generally, if \(y\) is any real number (e.g., some mean value), the same symbol \(y\) also denotes the corresponding constant functions on \(\Omega\) and \(\Gamma\). The same convention is used if \(y\) is time dependent. One also can deduce from (2.44) the Poincare type inequality
\[\|(v,v_{\Gamma})\|_{\mathcal{V}}^{2}\leq C_{\Omega}\big{(}\|\nabla v\|_{H}^{2} +\|\nabla_{\Gamma}v_{\Gamma}\|_{H_{\Gamma}}^{2}+|\,\text{mean}(v,v_{\Gamma})| ^{2}\big{)}\quad\text{for every }(v,v_{\Gamma})\in\mathcal{V} \tag{2.46}\]
where \(C_{\Omega}\) depends only on \(\Omega\). It follows that the map
\[\mathcal{V}\ni(v,v_{\Gamma})\mapsto\big{(}\|\nabla v\|_{H}^{2}+\|\nabla_{ \Gamma}v_{\Gamma}\|_{H_{\Gamma}}^{2}+|\,\text{mean}(v,v_{\Gamma})|^{2}\big{)} ^{1/2}\]
yields a Hilbert norm on \(\mathcal{V}\) that is equivalent to the natural one. Finally, we will take advantage of the following regularity result (that can be found in [14, Lem. 3.1]).
**Lemma 2.9**.: _Let \(\sigma:\mathbb{R}\to\mathbb{R}\) be a monotone and Lipschitz continuous function and assume that \((w,w_{\Gamma})\in\mathcal{V}\) and \((g,g^{\Gamma})\in\mathcal{H}\) satisfy_
\[\int_{\Omega}\nabla w\cdot\nabla v+\int_{\Gamma}\nabla w_{\Gamma}\cdot\nabla v_ {\Gamma}+\int_{\Gamma}\sigma(w_{\Gamma})v_{\Gamma}=\int_{\Omega}gv+\int_{ \Gamma}g^{\Gamma}v_{\Gamma}\quad\text{for every }(v,v_{\Gamma})\in\mathcal{V}. \tag{2.47}\]
_Then, we have_
\[(w,w_{\Gamma})\in\mathcal{W}\quad\text{and}\quad\|(w,w_{\Gamma})\|_{\mathcal{W }}+\|\sigma(w_{\Gamma})\|_{H_{\Gamma}}\leq C_{\Omega}\big{(}\|(w,w_{\Gamma})\| _{\mathcal{V}}+\|(g,g^{\Gamma})\|_{\mathcal{H}}\big{)} \tag{2.48}\]
_where \(C_{\Omega}\) depends only on \(\Omega\) and not on \(\sigma\). Moreover, \((w,w_{\Gamma})\) solves the boundary value problem_
\[-\Delta w=g\quad\text{a.e.\ in }\Omega\quad\text{and}\quad\partial_{\boldsymbol{ \nu}}w-\Delta_{\Gamma}w_{\Gamma}+\sigma(w_{\Gamma})=g^{\Gamma}\quad\text{a.e. on }\Gamma\,. \tag{2.49}\]
We conclude this section by stating a general rule concerning the constants that appear in the estimates we perform in the following. We use the small-case symbol \(\,c\,\) for a generic constant whose actual values may change from line to line and even within the same line and depend only on \(\Omega\), the structure of the system, and the constants and the norms of the functions involved in the assumptions of the statements. In particular, the value of \(\,c\,\) may depend on the constant \(M\) that appears in (2.11), but it is independent of \(u\) and \(u^{\Gamma}\); it is also clear that it does not depend on the parameter \(\,\varepsilon\,\) we introduce in Section 3.1. A small-case symbol with a subscript like \(c_{\delta}\) indicates that the constant may depend on the parameter \(\delta\), in addition. On the contrary, we mark precise constants that we can refer to by using different symbols (e.g., capital or Greek letters like in (2.10) and (2.13)).
## 3 Existence and regularity
In this section, we prove Theorems 2.3, 2.4 and 2.5 whose proofs are inspired by the papers [11] and [14]. In this direction, as a first step, we introduce a regularized problem that is obtained by replacing the functionals \(\widehat{\beta}\) and \(\widehat{\beta}_{\Gamma}\) and the operators \(\beta\) and \(\beta_{\Gamma}\) by their Moreau-Yosida regularizations \(\widehat{\beta}_{\varepsilon}\), \(\widehat{\beta}_{\Gamma,\,\varepsilon}\), \(\beta_{\varepsilon}\) and \(\beta_{\Gamma,\,\varepsilon}\), respectively, with \(\varepsilon\in(0,1)\) (see, e.g., [5, pp. 28 and 39]). Thus, the regularized problem consists in finding a quadruplet \((\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma},\varphi^{\varepsilon},\varphi^{ \varepsilon}_{\Gamma})\) (or a pair of pairs) that satisfies the regularity requirements
\[(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\in L^{2}(0,T; \mathcal{W}) \tag{3.1}\] \[(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\in H^{1}(0,T;\mathcal{H})\cap L^{\infty}(0,T;\mathcal{V})\cap L^{2}(0,T;\mathcal{W}) \tag{3.2}\]
and solves
\[\int_{\Omega}\partial_{t}\varphi^{\varepsilon}\,v+\int_{\Gamma} \partial_{t}\varphi^{\varepsilon}_{\Gamma}\,v_{\Gamma}+\int_{\Omega} \nabla\mu^{\varepsilon}\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma}\mu^{ \varepsilon}_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\gamma\int_{\Omega}(u-\varphi^{\varepsilon})v+\gamma\int_{ \Gamma}(u^{\Gamma}-\varphi^{\varepsilon}_{\Gamma})v_{\Gamma}\] \[\qquad\text{a.e. in }(0,T)\text{ and for every }(v,v_{\Gamma})\in \mathcal{V} \tag{3.3}\]
\[\tau\int_{\Omega}\partial_{t}\varphi^{\varepsilon}\,v+\tau\int_{ \Gamma}\partial_{t}\varphi^{\varepsilon}_{\Gamma}\,v_{\Gamma}+\int_{\Omega} \nabla\varphi^{\varepsilon}\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma}\varphi^{ \varepsilon}_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[\quad+\int_{\Omega}(\beta_{\varepsilon}+\pi)(\varphi^{ \varepsilon})\,v+\int_{\Gamma}(\beta_{\Gamma,\,\varepsilon}+\pi_{\Gamma})( \varphi^{\varepsilon}_{\Gamma})\,v_{\Gamma}=\int_{\Omega}\mu^{\varepsilon}v+ \int_{\Gamma}\mu^{\varepsilon}_{\Gamma}v_{\Gamma}\] \[\quad\text{a.e. in }(0,T)\text{ and for every }(v,v_{\Gamma})\in\mathcal{V}, \tag{3.4}\] \[\varphi^{\varepsilon}(0)=\varphi_{0}\quad\text{a.e. in }\Omega\,. \tag{3.5}\]
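To make the regularization concrete, here is a minimal numerical sketch (the choice of the logarithmic graph, the bracketing interval, and the use of `brentq` root-finding are our own illustrative choices, not part of the paper) of the Yosida approximation \(\beta_{\varepsilon}(r)=\big{(}r-(I+\varepsilon\beta)^{-1}r\big{)}/\varepsilon\):

```python
import numpy as np
from scipy.optimize import brentq

# Yosida regularization of the logarithmic graph beta(r) = ln((1+r)/(1-r))
# with D(beta) = (-1, 1): beta_eps(r) = (r - J_eps(r))/eps, where the
# resolvent J_eps(r) solves x + eps*beta(x) = r (a unique root in (-1, 1),
# by strict monotonicity of the left-hand side).
beta = lambda x: np.log((1.0 + x) / (1.0 - x))

def beta_eps(r, eps):
    f = lambda x: x + eps * beta(x) - r
    x = brentq(f, -1.0 + 1e-12, 1.0 - 1e-12)  # bracket inside D(beta)
    return (r - x) / eps

# For r in D(beta), beta_eps(r) converges to beta(r) as eps -> 0,
# consistently with (3.7); beta_eps is Lipschitz with constant 1/eps on R.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, beta_eps(0.9, eps), beta(0.9))
```

Unlike \(\beta\) itself, \(\beta_{\varepsilon}\) is everywhere defined and Lipschitz continuous, which is exactly what makes the approximating problem tractable.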
Let us claim that this problem has a unique solution for every \(\varepsilon\in(0,1)\). We do not prove this fact and just make two remarks. First, uniqueness is not used in the sequel. Nevertheless, it could be proved as we do for Theorem 2.6 since \(\beta_{\varepsilon}\) and \(\beta_{\Gamma,\,\varepsilon}\) are Lipschitz continuous. As for existence, its proof could be obtained by discretizing the problem by means of a Faedo-Galerkin scheme as done, e.g., in [14]. The estimates we formally establish in the sequel can be performed in a rigorous way on the discrete solution since, e.g., even its time derivative actually is an admissible test function. So, we start from the solution \((\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma},\varphi^{\varepsilon},\varphi^{ \varepsilon}_{\Gamma})\) to (3.3)-(3.5) and perform some formal estimates that allow us to let \(\varepsilon\) tend to zero. To this end, we make some preliminary observations.
We notice that (2.7) and (2.9) hold for the approximation. More precisely, for every \(\varepsilon\in(0,1)\), we have that
\[0\leq\widehat{\beta}_{\varepsilon}(r)\leq\widehat{\beta}(r) \quad\text{and}\quad 0\leq\widehat{\beta}_{\Gamma,\,\varepsilon}(r)\leq \widehat{\beta}_{\Gamma}(r)\quad\text{for every }r\in\mathbb{R} \tag{3.6}\] \[|\beta_{\varepsilon}(r)|\leq|\beta^{\circ}(r)|\quad\text{and} \quad|\beta_{\Gamma,\,\varepsilon}(r)|\leq|\beta^{\circ}_{\Gamma}(r)|\quad \text{for every }r\in D(\beta). \tag{3.7}\]
Furthermore, (2.10) also holds true for \(\beta_{\varepsilon}\) and \(\beta_{\Gamma,\,\varepsilon}\) with a similar constant that does not depend on \(\varepsilon\) (see [6, Lemma 4.4]). We thus write
\[|\beta_{\varepsilon}(r)|\leq C^{*}\big{(}|\beta_{\Gamma,\,\varepsilon}(r)|+1 \big{)}\quad\text{for every }r\in\mathbb{R} \tag{3.8}\]
with the same constant \(C^{*}\), without loss of generality. We also deduce that
\[|\widehat{\beta}_{\varepsilon}(r)|\leq C^{*}\big{(}|\widehat{\beta}_{\Gamma, \,\varepsilon}(r)|+|r|\big{)}\quad\text{for every }r\in\mathbb{R} \tag{3.9}\]
just by integration. Next, since \(\beta_{\varepsilon}\) and \(\beta_{\Gamma,\,\varepsilon}\) have the same sign, we see that (3.8) and the Young inequality imply
\[\beta_{\Gamma,\,\varepsilon}(r)\beta_{\varepsilon}(r)\geq\frac{1}{2C^{*}}\,| \beta_{\varepsilon}(r)|^{2}-C_{*}\quad\text{for every }r\in\mathbb{R} \tag{3.10}\]
with a similar constant \(C_{*}\). We also notice that the domain inclusion \(D(\beta_{\Gamma})\subseteq D(\beta)\) prescribed in (2.10), together with (2.13), implies the existence of positive constants \(\delta_{0}\) and \(C_{0}\) such that
\[\beta_{\varepsilon}(r)(r-r_{0})\geq\delta_{0}|\beta_{\varepsilon} (r)|-C_{0}\quad\text{and}\quad\beta_{\Gamma,\,\varepsilon}(r)(r-r_{0})\geq \delta_{0}|\beta_{\Gamma,\,\varepsilon}(r)|-C_{0}\] \[\quad\text{for every }r\in\mathbb{R},\,r_{0}\in[-m_{0}^{-}-\rho,m_{0} ^{+}+\rho]\text{ and }\varepsilon\in(0,1). \tag{3.11}\]
This is a generalization of [41, Appendix, Prop. A.1]. The detailed proof given in [28, p. 908] with a fixed \(r_{0}\) also works in the present case with minor changes.
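For the reader's convenience, we sketch the elementary computations behind (3.9) and (3.10); this is just a sketch, relying only on (3.8), the fact that \(\beta_{\varepsilon}(r)\), \(\beta_{\Gamma,\,\varepsilon}(r)\) and \(r\) have the same sign, and the Young inequality. For \(r\geq 0\) (the case \(r\leq 0\) being analogous), \(\beta_{\varepsilon}(s)\) and \(\beta_{\Gamma,\,\varepsilon}(s)\) are nonnegative on \([0,r]\), so that (3.8) gives
\[0\leq\widehat{\beta}_{\varepsilon}(r)=\int_{0}^{r}\beta_{\varepsilon}(s)\,ds\leq C^{*}\int_{0}^{r}\bigl(\beta_{\Gamma,\,\varepsilon}(s)+1\bigr)\,ds=C^{*}\bigl(\widehat{\beta}_{\Gamma,\,\varepsilon}(r)+r\bigr),\]
which is (3.9). Moreover, (3.8) also yields \(|\beta_{\Gamma,\,\varepsilon}(r)|\geq|\beta_{\varepsilon}(r)|/C^{*}-1\), whence
\[\beta_{\Gamma,\,\varepsilon}(r)\,\beta_{\varepsilon}(r)=|\beta_{\Gamma,\,\varepsilon}(r)|\,|\beta_{\varepsilon}(r)|\geq\frac{|\beta_{\varepsilon}(r)|^{2}}{C^{*}}-|\beta_{\varepsilon}(r)|\geq\frac{1}{2C^{*}}\,|\beta_{\varepsilon}(r)|^{2}-\frac{C^{*}}{2}\]
by the elementary Young inequality \(ab\leq\frac{a^{2}}{2C^{*}}+\frac{C^{*}}{2}\,b^{2}\) with \(a=|\beta_{\varepsilon}(r)|\) and \(b=1\), so that one may take \(C_{*}=C^{*}/2\) in (3.10).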
### Existence
We now prove Theorem 2.3. We start from the solution \((\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma},\varphi^{\varepsilon},\varphi^{ \varepsilon}_{\Gamma})\) to the approximating problem (3.3)-(3.5) and perform a number of a priori estimates that are uniform with respect to \(\varepsilon\). The first bound we establish regards the mean value of \((\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\).
Estimate of a mean value. We set for brevity
\[m:=\operatorname{mean}(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma}) \quad\text{and}\quad\omega:=\operatorname{mean}(u,u^{\Gamma})\quad\text{a.e. in }(0,T) \tag{3.12}\]
and test (3.3) evaluated at the time \(t\) by \((1,1)/(|\Omega|+|\Gamma|)\) to obtain that
\[m^{\prime}(t)+\gamma\,m(t)=\gamma\,\omega(t)\quad\text{a.e. in }(0,T). \tag{3.13}\]
On the other hand, (3.5) implies that \(m(0)=m_{0}:=\operatorname{mean}(\varphi_{0},\varphi_{0|\Gamma})\). Therefore, \(m\) does not depend on \(\varepsilon\) and we have that
\[m(t)=m_{0}\,e^{-\gamma t}+\int_{0}^{t}e^{-\gamma(t-s)}\omega(s)\,ds\quad \text{for every }t\in[0,T].\]
Now, for every \(t\in[0,T]\), we have that
\[-m_{0}^{-}\leq-m_{0}^{-}\,e^{-\gamma t}\leq m_{0}\,e^{-\gamma t }\leq m_{0}^{+}\,e^{-\gamma t}\leq m_{0}^{+}\] \[\Big{|}\int_{0}^{t}e^{-\gamma(t-s)}\omega(s)\,ds\Big{|}\leq M \Big{|}\!\int_{0}^{t}e^{-\gamma(t-s)}\,ds\Big{|}\leq\frac{M}{\gamma}=\rho\]
where the latter estimate holds since \(|\omega|\leq M\) a.e. in \((0,T)\) by (2.11), and by the definition of \(\rho\) (see (2.13)). Hence, we conclude that
\[-m_{0}^{-}-\rho\leq\operatorname{mean}(\varphi^{\varepsilon}(t),\varphi^{ \varepsilon}_{\Gamma}(t))\leq m_{0}^{+}+\rho\quad\text{for every }t\in[0,T]\text{ and } \varepsilon\in(0,1). \tag{3.14}\]
An auxiliary estimate. We set for convenience
\[\alpha_{\varepsilon}:=\operatorname{mean}(\mu^{\varepsilon},\mu^{\varepsilon }_{\Gamma}) \tag{3.15}\]
and still use the notation \(m\) introduced in (3.12). We then test (3.4) by \((\varphi^{\varepsilon}-m,\varphi^{\varepsilon}_{\Gamma}-m)\) and obtain a.e. in \((0,T)\)
\[\int_{\Omega}\beta_{\varepsilon}(\varphi^{\varepsilon})(\varphi^ {\varepsilon}-m)+\int_{\Gamma}\beta_{\Gamma,\varepsilon}(\varphi^{ \varepsilon}_{\Gamma})(\varphi^{\varepsilon}_{\Gamma}-m)+\int_{\Omega}| \nabla\varphi^{\varepsilon}|^{2}+\int_{\Gamma}|\nabla_{\Gamma}\varphi^{ \varepsilon}_{\Gamma}|^{2}\] \[=-\tau\int_{\Omega}\partial_{t}\varphi^{\varepsilon}(\varphi^{ \varepsilon}-m)-\tau\int_{\Gamma}\partial_{t}\varphi^{\varepsilon}_{\Gamma}( \varphi^{\varepsilon}_{\Gamma}-m)\] \[\quad-\int_{\Omega}\pi(\varphi^{\varepsilon})(\varphi^{ \varepsilon}-m)-\int_{\Gamma}\pi_{\Gamma}(\varphi^{\varepsilon}_{\Gamma})( \varphi^{\varepsilon}_{\Gamma}-m)\] \[\quad+\int_{\Omega}(\mu^{\varepsilon}-\alpha_{\varepsilon})( \varphi^{\varepsilon}-m)+\int_{\Gamma}(\mu^{\varepsilon}_{\Gamma}-\alpha_{ \varepsilon})(\varphi^{\varepsilon}_{\Gamma}-m)\]
where, in the last line, we have subtracted \(\int_{\Omega}\alpha_{\varepsilon}(\varphi^{\varepsilon}-m)+\int_{\Gamma}\alpha_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma}-m)\), which is zero due to (2.43). Now, we account for (3.14) and apply (3.11). The Schwarz inequality in \(\mathcal{H}\) and the Lipschitz continuity of \(\pi\) and \(\pi_{\Gamma}\) then imply that
\[\delta_{0}\int_{\Omega}|\beta_{\varepsilon}(\varphi^{\varepsilon} )|+\delta_{0}\int_{\Gamma}|\beta_{\Gamma,\,\varepsilon}(\varphi^{\varepsilon} _{\Gamma})|-2\,C_{0}\] \[\leq\tau\|(\partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^ {\varepsilon}_{\Gamma})\|_{\mathcal{H}}\,\|(\varphi^{\varepsilon}-m,\varphi^{ \varepsilon}_{\Gamma}-m)\|_{\mathcal{H}}\] \[\quad+c\,\big{(}\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{ \Gamma})\|_{\mathcal{H}}+1\big{)}\|(\varphi^{\varepsilon}-m,\varphi^{ \varepsilon}_{\Gamma}-m)\|_{\mathcal{H}}\] \[\quad+\|(\mu^{\varepsilon}-\alpha_{\varepsilon},\mu^{\varepsilon} _{\Gamma}-\alpha_{\varepsilon})\|_{\mathcal{H}}\,\|(\varphi^{\varepsilon}-m, \varphi^{\varepsilon}_{\Gamma}-m)\|_{\mathcal{H}}\,.\]
On the other hand, by testing (3.4) by \((1,1)\), we obtain a.e. in \((0,T)\) that
\[\left(|\Omega|+|\Gamma|\right)\alpha_{\varepsilon}=\int_{\Omega} \beta_{\varepsilon}(\varphi^{\varepsilon})+\int_{\Gamma}\beta_{\Gamma, \varepsilon}(\varphi^{\varepsilon}_{\Gamma})\] \[\quad+\tau\int_{\Omega}\partial_{t}\varphi^{\varepsilon}+\tau \int_{\Gamma}\partial_{t}\varphi^{\varepsilon}_{\Gamma}+\int_{\Omega}\pi( \varphi^{\varepsilon})+\int_{\Gamma}\pi_{\Gamma}(\varphi^{\varepsilon}_{ \Gamma}).\]
By combining the above equalities and inequalities and accounting once again for (3.14), we conclude that
\[|\alpha_{\varepsilon}|\leq c\,\big\{\|(\partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}\,\big(\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}+1\big)+\|(\mu^{\varepsilon}-\alpha_{\varepsilon},\mu^{\varepsilon}_{\Gamma}-\alpha_{\varepsilon})\|_{\mathcal{H}}\,\big(\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}+1\big)\] \[\quad+\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}^{2}+1\big\}\quad\text{a.e. in }(0,T)\,. \tag{3.16}\]
At this point, we are ready to prove the basic estimates.
First a priori estimate. We test (3.3) by \((\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\). At the same time, we test (3.4) by \((\partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^{\varepsilon}_{\Gamma})+\gamma(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\). Then, we sum up and observe that several cancellations occur. Hence, we have, almost everywhere in \((0,T)\), that
\[\int_{\Omega}|\nabla\mu^{\varepsilon}|^{2}+\int_{\Gamma}|\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}|^{2}+\tau\int_{\Omega}|\partial_{t}\varphi^{\varepsilon}|^{2}+\tau\int_{\Gamma}|\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}\] \[\quad+\frac{1}{2}\,\frac{d}{dt}\int_{\Omega}|\nabla\varphi^{\varepsilon}|^{2}+\frac{1}{2}\,\frac{d}{dt}\int_{\Gamma}|\nabla_{\Gamma}\varphi^{\varepsilon}_{\Gamma}|^{2}+\frac{d}{dt}\int_{\Omega}\widehat{\beta}_{\varepsilon}(\varphi^{\varepsilon})+\frac{d}{dt}\int_{\Gamma}\widehat{\beta}_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\] \[\quad+\frac{\tau\gamma}{2}\,\frac{d}{dt}\int_{\Omega}|\varphi^{\varepsilon}|^{2}+\frac{\tau\gamma}{2}\,\frac{d}{dt}\int_{\Gamma}|\varphi^{\varepsilon}_{\Gamma}|^{2}+\gamma\int_{\Omega}|\nabla\varphi^{\varepsilon}|^{2}+\gamma\int_{\Gamma}|\nabla_{\Gamma}\varphi^{\varepsilon}_{\Gamma}|^{2}\] \[\quad+\gamma\int_{\Omega}\beta_{\varepsilon}(\varphi^{\varepsilon})\varphi^{\varepsilon}+\gamma\int_{\Gamma}\beta_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\varphi^{\varepsilon}_{\Gamma}+\gamma\int_{\Omega}\pi(\varphi^{\varepsilon})\varphi^{\varepsilon}+\gamma\int_{\Gamma}\pi_{\Gamma}(\varphi^{\varepsilon}_{\Gamma})\varphi^{\varepsilon}_{\Gamma}\] \[=\gamma\int_{\Omega}u\mu^{\varepsilon}+\gamma\int_{\Gamma}u^{\Gamma}\mu^{\varepsilon}_{\Gamma}\,.\]
Two integrals are nonnegative since \(\beta_{\varepsilon}(r)\), \(\beta_{\Gamma,\varepsilon}(r)\) and \(r\) have the same sign for every \(r\in\mathbb{R}\). By ignoring them, integrating over \((0,t)\), where \(t\in(0,T)\) is arbitrary, accounting for the initial condition (3.5) and the Lipschitz continuity of \(\pi\) and \(\pi_{\Gamma}\), and applying the
Young inequality with an arbitrary \(\delta>0\), we deduce that (see (2.42) for the notation)
\[\int_{Q_{t}}|\nabla\mu^{\varepsilon}|^{2}+\int_{\Sigma_{t}}|\nabla _{\Gamma}\mu_{\Gamma}^{\varepsilon}|^{2}+\tau\int_{Q_{t}}|\partial_{t}\varphi^ {\varepsilon}|^{2}+\tau\int_{\Sigma_{t}}|\partial_{t}\varphi^{\varepsilon}_{ \Gamma}|^{2}\] \[\quad+\frac{1}{2}\int_{\Omega}|\nabla\varphi^{\varepsilon}(t)|^{2 }+\frac{1}{2}\int_{\Gamma}|\nabla_{\Gamma}\varphi^{\varepsilon}_{\Gamma}(t)|^{ 2}+\int_{\Omega}\widehat{\beta}_{\varepsilon}(\varphi^{\varepsilon}(t))+\int_ {\Gamma}\widehat{\beta}_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma}(t))\] \[\quad+\frac{\tau\gamma}{2}\int_{\Omega}|\varphi^{\varepsilon}(t)| ^{2}+\frac{\tau\gamma}{2}\int_{\Gamma}|\varphi^{\varepsilon}_{\Gamma}(t)|^{2}+ \gamma\int_{Q_{t}}|\nabla\varphi^{\varepsilon}|^{2}+\gamma\int_{\Sigma_{t}}| \nabla_{\Gamma}\varphi^{\varepsilon}_{\Gamma}|^{2}\] \[\leq\frac{1}{2}\int_{\Omega}|\nabla\varphi_{0}|^{2}+\frac{1}{2} \int_{\Gamma}|\nabla_{\Gamma}\varphi_{0|\Gamma}|^{2}+\int_{\Omega}\widehat{ \beta}_{\varepsilon}(\varphi_{0})+\int_{\Gamma}\widehat{\beta}_{\Gamma, \varepsilon}(\varphi_{0|\Gamma})\] \[\quad+\frac{\tau\gamma}{2}\int_{\Omega}|\varphi_{0}|^{2}+\frac{ \tau\gamma}{2}\int_{\Gamma}|\varphi_{0|\Gamma}|^{2}\] \[\quad+\delta\int_{Q_{t}}|\partial_{t}\varphi^{\varepsilon}|^{2}+ \delta\int_{\Sigma_{t}}|\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}+c_{ \delta}\int_{Q_{t}}(|\varphi^{\varepsilon}|^{2}+1)+c_{\delta}\int_{\Sigma_{t}} (|\varphi^{\varepsilon}_{\Gamma}|^{2}+1)\] \[\quad+\gamma\int_{Q_{t}}u\mu^{\varepsilon}+\gamma\int_{\Sigma_{t} }u^{\Gamma}\mu^{\varepsilon}_{\Gamma}\,.\]
By recalling the assumption (2.12) on the initial datum and the properties of the Yosida approximation (see, e.g., (3.6)), it is clear that only the last two integrals need some treatment. With the notation (3.15) already used, by also recalling (2.11) and the Poincaré inequality (2.46), we have that
\[\int_{Q_{t}}u\mu^{\varepsilon}+\int_{\Sigma_{t}}u^{\Gamma}\mu^{ \varepsilon}_{\Gamma}\] \[=\int_{Q_{t}}u(\mu^{\varepsilon}-\alpha_{\varepsilon})+\int_{ \Sigma_{t}}u^{\Gamma}(\mu^{\varepsilon}_{\Gamma}-\alpha_{\varepsilon})+\int_ {0}^{t}\alpha_{\varepsilon}(s)\Bigl{(}\int_{\Omega}u(s)+\int_{\Gamma}u^{ \Gamma}(s)\Bigr{)}\,ds\] \[\leq\delta\int_{Q_{t}}|\nabla\mu^{\varepsilon}|^{2}+\delta\int_ {\Sigma_{t}}|\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}|^{2}+c_{\delta}+c\int_ {0}^{t}|\alpha_{\varepsilon}(s)|\,ds\,.\]
On the other hand, by our preliminary estimate (3.16) and the Young and Poincaré inequalities once more, we have that
\[\int_{0}^{t}|\alpha_{\varepsilon}(s)|\,ds\leq c\int_{0}^{t}\bigl\{\|(\partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}\left(\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}+1\right)\] \[\quad+\|(\mu^{\varepsilon}-\alpha_{\varepsilon},\mu^{\varepsilon}_{\Gamma}-\alpha_{\varepsilon})\|_{\mathcal{H}}\left(\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}+1\right)+\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}^{2}+1\bigr\}\,ds\] \[\leq\delta\int_{Q_{t}}|\partial_{t}\varphi^{\varepsilon}|^{2}+\delta\int_{\Sigma_{t}}|\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}+\delta\int_{Q_{t}}|\nabla\mu^{\varepsilon}|^{2}+\delta\int_{\Sigma_{t}}|\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}|^{2}\] \[\quad+c_{\delta}\int_{Q_{t}}\bigl(|\varphi^{\varepsilon}|^{2}+1\bigr)+c_{\delta}\int_{\Sigma_{t}}\bigl(|\varphi^{\varepsilon}_{\Gamma}|^{2}+1\bigr).\]
Therefore, by choosing \(\delta\) small enough and applying the Gronwall lemma, we conclude that
\[\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{H^{1}(0,T;\mathcal{H})\cap L^{\infty}(0,T;\mathcal{V})}+\|(\nabla\mu^{\varepsilon},\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;\mathcal{H})}\leq c\,. \tag{3.17}\]
Consequence. We come back to (3.16) and account for (3.17). We realize that
\[\int_{0}^{T}|\alpha_{\varepsilon}(s)|^{2}\,ds\leq c\big{\{}\|( \partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^{\varepsilon}_{\Gamma}) \|_{L^{2}(0,T;\mathcal{H})}^{2}\left(\|(\varphi^{\varepsilon},\varphi^{ \varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{H})}^{2}+1\right)\] \[\quad+\|(\mu^{\varepsilon}-\alpha_{\varepsilon},\mu^{\varepsilon} _{\Gamma}-\alpha_{\varepsilon})\|_{L^{2}(0,T;\mathcal{H})}^{2}\left(\|(\varphi ^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{H}) }^{2}+1\right)\] \[\quad+\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_ {L^{\infty}(0,T;\mathcal{H})}^{4}+1\big{\}}\leq c\,.\]
Therefore, using once more the Poincaré inequality, we conclude that
\[\|(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;\mathcal{H})} \leq c\,. \tag{3.18}\]
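In more detail, the chain of inequalities behind (3.18) can be sketched as follows, assuming (as in (2.46)) that the Poincaré inequality applies to pairs with zero mean value: a.e. in \((0,T)\),
\[\|(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}\leq\|(\mu^{\varepsilon}-\alpha_{\varepsilon},\mu^{\varepsilon}_{\Gamma}-\alpha_{\varepsilon})\|_{\mathcal{H}}+c\,|\alpha_{\varepsilon}|\leq c\,\|(\nabla\mu^{\varepsilon},\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma})\|_{\mathcal{H}}+c\,|\alpha_{\varepsilon}|,\]
so that squaring, integrating over \((0,T)\), and invoking (3.17) for the gradient term together with the bound on \(\int_{0}^{T}|\alpha_{\varepsilon}(s)|^{2}\,ds\) just obtained yields (3.18).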
Second a priori estimate. We test (3.4) by \((\beta_{\varepsilon}(\varphi^{\varepsilon}),\beta_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma}))\) and integrate with respect to time. After rearranging, we obtain for every \(t\in[0,T]\) that
\[\tau\int_{\Omega}\widehat{\beta}_{\varepsilon}(\varphi^{ \varepsilon}(t))+\tau\int_{\Gamma}\widehat{\beta}_{\varepsilon}(\varphi^{ \varepsilon}_{\Gamma}(t))+\int_{Q_{t}}\beta^{\prime}_{\varepsilon}(\varphi^{ \varepsilon})|\nabla\varphi^{\varepsilon}|^{2}+\int_{\Sigma_{t}}\beta^{ \prime}_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma})|\nabla_{\Gamma} \varphi^{\varepsilon}_{\Gamma}|^{2}\] \[\quad+\int_{Q_{t}}|\beta_{\varepsilon}(\varphi^{\varepsilon})|^{2 }+\int_{\Sigma_{t}}\beta_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma}) \,\beta_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\] \[=\tau\int_{\Omega}\widehat{\beta}_{\varepsilon}(\varphi_{0})+ \tau\int_{\Gamma}\widehat{\beta}_{\varepsilon}(\varphi_{0|\Gamma})-\int_{Q_{t }}\pi(\varphi^{\varepsilon})\,\beta_{\varepsilon}(\varphi^{\varepsilon})- \int_{\Sigma_{t}}\pi_{\Gamma}(\varphi^{\varepsilon}_{\Gamma})\,\beta_{ \varepsilon}(\varphi^{\varepsilon}_{\Gamma})\] \[\quad+\int_{Q_{t}}\mu^{\varepsilon}\beta_{\varepsilon}(\varphi^{ \varepsilon})+\int_{\Sigma_{t}}\mu^{\varepsilon}_{\Gamma}\beta_{\varepsilon}( \varphi^{\varepsilon}_{\Gamma})\,.\]
All of the terms on the left-hand side are nonnegative. However, as for the last one, we can say something better, namely,
\[\int_{\Sigma_{t}}\beta_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma}) \,\beta_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\geq\frac{1}{2C^{*}}\int _{\Sigma_{t}}|\beta_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma})|^{2}-c\]
due to (3.10). This also makes it easier to bound the right-hand side, on account of (3.6), (3.9), (2.12), the Lipschitz continuity of \(\pi\) and \(\pi_{\Gamma}\), the Young inequality and the estimates (3.17)-(3.18), and allows us to conclude that
\[\|\beta_{\varepsilon}(\varphi^{\varepsilon})\|_{L^{2}(0,T;H)}+\|\beta_{ \varepsilon}(\varphi^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;H_{\Gamma})}\leq c\,. \tag{3.19}\]
Third a priori estimate. We write (3.4) for a.a. \(t\in(0,T)\) in the form
\[\int_{\Omega}\nabla\varphi^{\varepsilon}(t)\cdot\nabla v+\int_{ \Gamma}\nabla_{\Gamma}\varphi^{\varepsilon}_{\Gamma}(t)\cdot\nabla_{\Gamma} v_{\Gamma}+\int_{\Gamma}\beta_{\Gamma,\,\varepsilon}(\varphi^{\varepsilon}_{ \Gamma}(t))\,v_{\Gamma}\] \[=\int_{\Omega}g\,v+\int_{\Gamma}g^{\Gamma}\,v_{\Gamma}\quad\text{ for every }(v,v_{\Gamma})\in\mathcal{V},\quad\text{where}\] \[g:=\mu^{\varepsilon}(t)-\tau\partial_{t}\varphi^{\varepsilon}(t )-(\beta_{\varepsilon}+\pi)(\varphi^{\varepsilon}(t))\quad\text{and}\quad g^{ \Gamma}:=\mu^{\varepsilon}_{\Gamma}(t)-\tau\partial_{t}\varphi^{\varepsilon}_ {\Gamma}(t)-\pi_{\Gamma}(\varphi^{\varepsilon}_{\Gamma}(t))\]
and apply Lemma 2.9 with \(\sigma=\beta_{\Gamma,\,\varepsilon}\). As the constant \(C_{\Omega}\) appearing in (2.48) does not depend on \(\varepsilon\), by squaring (2.48) itself in our case, integrating over \((0,T)\) and accounting for the estimates already obtained, we conclude that
\[\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;\mathcal{ W})}+\|\beta_{\Gamma,\,\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;H_{ \Gamma})}\leq c\,. \tag{3.20}\]
Similarly, by applying the lemma to (3.3), we obtain that
\[\|(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;\mathcal{W})}\leq c\,. \tag{3.21}\]
Conclusion of the existence proof. By collecting the estimates we have established and applying well-known weak and weak star compactness results, we deduce that
\[(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\to(\mu,\mu_{\Gamma })\quad\text{weakly in }L^{2}(0,T;\mathcal{W})\] \[(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\to( \varphi,\varphi_{\Gamma})\quad\text{weakly star in }H^{1}(0,T;\mathcal{H})\cap L^{ \infty}(0,T;\mathcal{V})\cap L^{2}(0,T;\mathcal{W})\] \[(\beta_{\varepsilon}(\varphi^{\varepsilon}),\beta_{\Gamma, \varepsilon}(\varphi^{\varepsilon}_{\Gamma}))\to(\xi,\xi^{\Gamma})\quad\text {weakly in }L^{2}(0,T;\mathcal{H})\]
(at least for a subsequence \(\varepsilon_{k}\searrow 0\)) for some limit six-tuple \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma},\xi,\xi^{\Gamma})\). Then it is clear that the integrated versions of (2.17) and (2.18) with time-dependent test functions \((v,v_{\Gamma})\) in \(L^{2}(0,T;\mathcal{V})\) are satisfied. Moreover, \(\varphi\) clearly verifies the initial condition (2.20). Finally, the conditions (2.19) hold as well, since \((\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\) converges strongly, e.g., in \(L^{2}(0,T;\mathcal{H})\) by the Aubin-Lions lemma (see, e.g., [35, Thm. 5.1, p. 58]), and one can apply well-known results on monotone operators (see, e.g., [2, Lemma 2.3, p. 38]) to infer that \(\xi\in\beta(\varphi)\) and \(\xi^{\Gamma}\in\beta_{\Gamma}(\varphi_{\Gamma})\), as desired. This concludes the proof of Theorem 2.3.
### Regularity and separation
We first prove Theorem 2.4. Due to the existence proof just given, it is clear that it suffices to establish, on the solution to the approximating problem, further a priori estimates corresponding to the regularity (2.24) and the inequality (2.25). Also in this case, we proceed formally, acting directly on the solution \((\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma},\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\). We notice that the time derivative and the initial value of the approximating chemical potential need not exist. However, they do exist for the discrete solution given by a Faedo-Galerkin scheme. Indeed, the discrete phase field solves a system of ODEs and is in fact much smoother, whence the discrete chemical potential is smoother too, as a consequence of the discrete analogue of (3.4). Thus, the estimates we prove just formally would be correct when performed on the discrete solution. Of course, in carrying out this program, we account for the estimates already established in the previous section. As before, \(\delta\) indicates a positive parameter whose value is chosen when it is necessary to do so.
Fourth a priori estimate. We test (3.3) by \((\partial_{t}\mu^{\varepsilon},\partial_{t}\mu^{\varepsilon}_{\Gamma})\). At the same time, we differentiate (3.4) with respect to time and test the equality we obtain by \((\partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^{\varepsilon}_{\Gamma})\). Then, we sum up and observe that some cancellations occur, obtaining that
\[\frac{1}{2}\,\frac{d}{dt}\int_{\Omega}|\nabla\mu^{\varepsilon}|^{2}+\frac{1}{2}\,\frac{d}{dt}\int_{\Gamma}|\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}|^{2}+\frac{\tau}{2}\,\frac{d}{dt}\int_{\Omega}|\partial_{t}\varphi^{\varepsilon}|^{2}+\frac{\tau}{2}\,\frac{d}{dt}\int_{\Gamma}|\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}\] \[\quad+\int_{\Omega}|\nabla\partial_{t}\varphi^{\varepsilon}|^{2}+\int_{\Gamma}|\nabla_{\Gamma}\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}+\int_{\Omega}\beta^{\prime}_{\varepsilon}(\varphi^{\varepsilon})|\partial_{t}\varphi^{\varepsilon}|^{2}+\int_{\Gamma}\beta^{\prime}_{\Gamma,\,\varepsilon}(\varphi^{\varepsilon}_{\Gamma})|\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}\] \[=\gamma\int_{\Omega}(u-\varphi^{\varepsilon})\partial_{t}\mu^{\varepsilon}+\gamma\int_{\Gamma}(u^{\Gamma}-\varphi^{\varepsilon}_{\Gamma})\partial_{t}\mu^{\varepsilon}_{\Gamma}-\int_{\Omega}\pi^{\prime}(\varphi^{\varepsilon})|\partial_{t}\varphi^{\varepsilon}|^{2}-\int_{\Gamma}\pi^{\prime}_{\Gamma}(\varphi^{\varepsilon}_{\Gamma})|\partial_{t}\varphi^{\varepsilon}_{\Gamma}|^{2}\,.\]
At this point, we integrate over \((0,t)\) with an arbitrary \(t\in[0,T]\). The volume integrals involving \(\pi^{\prime}\) and \(\pi^{\prime}_{\Gamma}\) are bounded due to the previous estimates and to (2.8). On the
contrary, the initial values of the chemical potentials and those of the time derivatives of the phase variables that appear have to be estimated. At the same time, we have to control the integrals arising from the terms involving \(\partial_{t}\mu^{\varepsilon}\) and \(\partial_{t}\mu^{\varepsilon}_{\Gamma}\). Let us first consider the latter, which can be handled by integration by parts as done below, where we also subtract and add the same quantities for convenience. With the notation in (3.15), we obtain that
\[\int_{Q_{t}}(u-\varphi^{\varepsilon})\partial_{t}\mu^{\varepsilon}+\int_{\Sigma_{t}}(u^{\Gamma}-\varphi^{\varepsilon}_{\Gamma})\partial_{t}\mu^{\varepsilon}_{\Gamma}\] \[=-\int_{Q_{t}}(\partial_{t}u-\partial_{t}\varphi^{\varepsilon})\mu^{\varepsilon}-\int_{\Sigma_{t}}(\partial_{t}u^{\Gamma}-\partial_{t}\varphi^{\varepsilon}_{\Gamma})\mu^{\varepsilon}_{\Gamma}\] \[\quad+\int_{\Omega}(u(t)-\varphi^{\varepsilon}(t))\big(\mu^{\varepsilon}(t)-\alpha_{\varepsilon}(t)\big)+\int_{\Gamma}\!\big(u^{\Gamma}(t)-\varphi^{\varepsilon}_{\Gamma}(t)\big)\big(\mu^{\varepsilon}_{\Gamma}(t)-\alpha_{\varepsilon}(t)\big)\] \[\quad-\int_{\Omega}\!\big(u(0)-\varphi_{0}\big)\mu^{\varepsilon}(0)-\int_{\Gamma}\!\big(u^{\Gamma}(0)-\varphi_{0|\Gamma}\big)\mu^{\varepsilon}_{\Gamma}(0).\]
The first two integrals on the right-hand side are bounded due to our assumption (2.22) on \((u,u^{\Gamma})\) and to the estimates (3.17) and (3.18) of the previous section. The whole second line of the right-hand side, which we term \(\mathbb{I}\) for brevity, can be treated by the Young and Poincaré inequalities and is bounded from above as follows
\[\mathbb{I}\leq\delta\|(\nabla\mu^{\varepsilon}(t),\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}(t))\|^{2}_{\mathcal{H}}+c_{\delta}\,.\]
As for the next line, it suffices to estimate the mean value. By recalling (3.16) and (3.17), we have that
\[|\alpha_{\varepsilon}(t)|\leq\delta\|(\partial_{t}\varphi^{\varepsilon}(t),\partial_{t}\varphi^{\varepsilon}_{\Gamma}(t))\|^{2}_{\mathcal{H}}+\delta\|(\nabla\mu^{\varepsilon}(t),\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}(t))\|^{2}_{\mathcal{H}}+c_{\delta}\,. \tag{3.22}\]
Finally, the last line is under control once the initial value of \((\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\) is estimated in \(\mathcal{H}\). Now, we proceed to estimate all the initial values we have to consider. We write (3.3) and (3.4) at the time \(t=0\) and have that
\[\int_{\Omega}\partial_{t}\varphi^{\varepsilon}(0)\,v+\int_{\Gamma }\partial_{t}\varphi^{\varepsilon}_{\Gamma}(0)\,v_{\Gamma}+\int_{\Omega} \nabla\mu^{\varepsilon}(0)\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma}\mu^{ \varepsilon}_{\Gamma}(0)\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\int_{\Omega}f_{\varepsilon}v+\int_{\Gamma}f^{\Gamma}_{ \varepsilon}v_{\Gamma} \tag{3.23}\] \[\tau\int_{\Omega}\partial_{t}\varphi^{\varepsilon}(0)\,v+\tau \int_{\Gamma}\partial_{t}\varphi^{\varepsilon}_{\Gamma}(0)\,v_{\Gamma}+\int_ {\Omega}\nabla\varphi_{0}\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma}\varphi_{ 0|\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\int_{\Omega}\mu^{\varepsilon}(0)v+\int_{\Gamma}\mu^{\varepsilon }_{\Gamma}(0)v_{\Gamma}+\int_{\Omega}g_{\varepsilon}v+\int_{\Gamma}g^{\Gamma}_ {\varepsilon}v_{\Gamma} \tag{3.24}\]
both for every \((v,v_{\Gamma})\in\mathcal{V}\), with \(f_{\varepsilon}\) and \(g_{\varepsilon}\) bounded in \(H\) and \(f^{\Gamma}_{\varepsilon}\) and \(g^{\Gamma}_{\varepsilon}\) bounded in \(H_{\Gamma}\) (in particular, thanks to (3.7) and (2.23)). Now, we subtract (3.24) from (3.23) multiplied by \(\tau\) and deduce that
\[\tau\int_{\Omega}\nabla\mu^{\varepsilon}(0)\cdot\nabla v+\tau\int _{\Gamma}\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma}(0)\cdot\nabla_{\Gamma}v_{ \Gamma}+\int_{\Omega}\mu^{\varepsilon}(0)v+\int_{\Gamma}\mu^{\varepsilon}_{ \Gamma}(0)v_{\Gamma}\] \[=\int_{\Omega}(\tau f_{\varepsilon}-g_{\varepsilon})v+\int_{ \Gamma}(\tau f^{\Gamma}_{\varepsilon}-g^{\Gamma}_{\varepsilon})v_{\Gamma}+\int _{\Omega}\nabla\varphi_{0}\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma}\varphi_{ 0|\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\]
still for every \((v,v_{\Gamma})\in\mathcal{V}\). By applying the Lax-Milgram lemma (or the Riesz representation theorem with an equivalent inner product in \(\mathcal{V}\)), we obtain the estimate
\[\|(\mu^{\varepsilon}(0),\mu^{\varepsilon}_{\Gamma}(0))\|_{\mathcal{V}}\leq c\,.\]
Now, by coming back to (3.24) and recalling that \((\varphi_{0},\varphi_{0|\Gamma})\in\mathcal{W}\), we infer that
\[\tau\int_{\Omega}\partial_{t}\varphi^{\varepsilon}(0)\,v+\tau\int _{\Gamma}\partial_{t}\varphi^{\varepsilon}_{\Gamma}(0)\,v_{\Gamma}\] \[=\int_{\Omega}\bigl{(}\mu^{\varepsilon}(0)+g_{\varepsilon}+ \Delta\varphi_{0}\bigr{)}v-\int_{\Gamma}\partial_{\boldsymbol{\nu}}\varphi_{0 }\,v_{\Gamma}+\int_{\Gamma}\bigl{(}\mu^{\varepsilon}_{\Gamma}(0)+g^{\Gamma}_{ \varepsilon}+\Delta_{\Gamma}(\varphi_{0|\Gamma})\bigr{)}v_{\Gamma}\]
and deduce that
\[\|(\partial_{t}\varphi^{\varepsilon}(0),\partial_{t}\varphi^{\varepsilon}_{\Gamma}(0))\|_{\mathcal{H}}\leq c\,.\]
By collecting all this, choosing \(\delta\) small enough and applying the Gronwall lemma, we conclude that
\[\|(\nabla\mu^{\varepsilon},\nabla_{\Gamma}\mu^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{H})}+\|(\partial_{t}\varphi^{\varepsilon},\partial_{t}\varphi^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{H})\cap L^{2}(0,T;\mathcal{V})}\leq c\,. \tag{3.25}\]
On the other hand, we also deduce from this estimate and (3.22) that \(\alpha_{\varepsilon}\) is bounded in \(L^{\infty}(0,T)\). Therefore, we improve the above estimate by obtaining
\[\|(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{V })}\leq c\,. \tag{3.26}\]
Fifth a priori estimate. Next, we argue as done in the derivation of (3.19), but avoid integration in time. We have a.e. in \((0,T)\) that
\[\int_{\Omega}\beta^{\prime}_{\varepsilon}(\varphi^{\varepsilon}) |\nabla\varphi^{\varepsilon}|^{2}+\int_{\Gamma}\beta^{\prime}_{\varepsilon}( \varphi^{\varepsilon}_{\Gamma})|\nabla_{\Gamma}\varphi^{\varepsilon}_{\Gamma }|^{2}+\int_{\Omega}|\beta_{\varepsilon}(\varphi^{\varepsilon})|^{2}+\int_{ \Gamma}\beta_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\,\beta_{ \varepsilon}(\varphi^{\varepsilon}_{\Gamma})\] \[=\int_{\Omega}\bigl{(}\mu^{\varepsilon}-\tau\partial_{t} \varphi^{\varepsilon}-\pi(\varphi^{\varepsilon})\bigr{)}\beta_{\varepsilon}( \varphi^{\varepsilon})+\int_{\Gamma}\bigl{(}\mu^{\varepsilon}_{\Gamma}-\tau \partial_{t}\varphi^{\varepsilon}_{\Gamma}-\pi_{\Gamma}(\varphi^{\varepsilon}_{ \Gamma})\bigr{)}\beta_{\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\]
and accounting for (2.10) and (3.25)-(3.26), we easily deduce that
\[\|\beta_{\varepsilon}(\varphi^{\varepsilon})\|_{L^{\infty}(0,T;H)}+\|\beta_{ \varepsilon}(\varphi^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;H_{\Gamma})} \leq c\,. \tag{3.27}\]
Sixth a priori estimate. At this point, we can read (3.4) written for a.a. \(t\in(0,T)\) as a particular case of (2.47) with \(\sigma=\beta_{\Gamma,\,\varepsilon}\), apply Lemma 2.9 and account for the \(L^{\infty}\) estimates just obtained. We conclude that
\[\|(\varphi^{\varepsilon},\varphi^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T; \mathcal{W})}+\|\beta_{\Gamma,\varepsilon}(\varphi^{\varepsilon}_{\Gamma})\|_ {L^{\infty}(0,T;H_{\Gamma})}\leq c\,. \tag{3.28}\]
Similarly, by applying the lemma to (3.3), we obtain that
\[\|(\mu^{\varepsilon},\mu^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{W} )}\leq c\,. \tag{3.29}\]
Conclusion of the proof of Theorem 2.4. By proceeding as at the end of the proof of Theorem 2.3 given in Section 3.1, we see that the approximating solution converges
(as \(\varepsilon\searrow 0\) at least for a subsequence) to a solution to problem (2.17)-(2.20) also in the weak star topology associated to the regularity requirements (2.24) and that this solution satisfies estimate (2.25) with a constant \(K_{2}\) with the dependence specified in the statement. \(\square\)
Now, we move on to prove Theorem 2.5. Recalling (2.26), it is clear that the assertion in the first case follows trivially from the boundedness of the component \(\varphi\) ensured by Theorem 2.4. Thus, we suppose that \(D\) is a bounded open interval. In fact, we prove that every solution \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) satisfies the separation property whenever the components \(\mu\) and \(\mu_{\Gamma}\) are bounded. In particular, this holds for the solution given by Theorem 2.4. We assume \(D=(-1,1)\), without loss of generality, and notice that, since \(\beta\) and \(\beta_{\Gamma}\) are maximal monotone and \(\pi\) and \(\pi_{\Gamma}\) are smooth in the whole of \(\mathbb{R}\), we have that
\[\lim_{r\searrow-1^{+}}F^{\prime}(r)=\lim_{r\searrow-1^{+}}F^{\prime}_{\Gamma}( r)=-\infty\quad\text{and}\quad\lim_{r\nearrow 1^{-}}F^{\prime}(r)=\lim_{r \nearrow 1^{-}}F^{\prime}_{\Gamma}(r)=+\infty\,. \tag{3.30}\]
Hence, thanks to assumption (2.28) on \(\varphi_{0}\) and the boundedness of \((\mu,\mu_{\Gamma})\) we are assuming, we can fix a real number \(r_{0}\) satisfying
\[\|\varphi_{0}\|_{\infty}\leq r_{0}<1 \tag{3.31}\]
and, with \(N:=\|\mu\|_{L^{\infty}(Q)}\,,\)
\[F^{\prime}(r)\leq-N\quad\text{and}\quad F^{\prime}_{\Gamma}(r) \leq-N\quad\text{if }r\in(-1,-r_{0}] \tag{3.32}\] \[F^{\prime}(r)\geq N\quad\text{and}\quad F^{\prime}_{\Gamma}(r) \geq N\quad\text{if }r\in[r_{0},1). \tag{3.33}\]
Then, we choose a function \(\zeta:\mathbb{R}\to\mathbb{R}\) that is monotone and Lipschitz continuous and satisfies
\[\zeta(r)=0\quad\text{if }|r|\leq r_{0},\quad\zeta(r)<0\quad\text{if }r<-r_{0}\quad\text{and}\quad\zeta(r)>0\quad\text{if }r>r_{0}\]
and test (2.18) by \((\zeta(\varphi),\zeta(\varphi_{\Gamma}))\). By setting \(\widehat{\zeta}(r):=\int_{0}^{r}\zeta(s)\,ds\) for \(r\in\mathbb{R}\), we obtain that
\[\tau\,\frac{d}{dt}\Bigl{(}\int_{\Omega}\widehat{\zeta}(\varphi)+ \int_{\Gamma}\widehat{\zeta}(\varphi_{\Gamma})\Bigr{)}+\int_{\Omega}\zeta^{ \prime}(\varphi)|\nabla\varphi|^{2}+\int_{\Gamma}\zeta^{\prime}(\varphi_{ \Gamma})|\nabla_{\Gamma}\varphi_{\Gamma}|^{2}\] \[=\int_{\Omega}\bigl{(}\mu-F^{\prime}(\varphi)\bigr{)}\zeta( \varphi)+\int_{\Gamma}\bigl{(}\mu_{\Gamma}-F^{\prime}_{\Gamma}(\varphi_{ \Gamma})\bigr{)}\zeta(\varphi_{\Gamma})\leq 0\,.\]
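The last inequality deserves a word of justification. Here is a minimal verification, which uses only (3.32)-(3.33) and the choice of \(\zeta\), and where \(N\) is understood to dominate \(\|\mu_{\Gamma}\|_{L^{\infty}(\Sigma)}\) as well:
\[\bigl(\mu-F^{\prime}(\varphi)\bigr)\zeta(\varphi)\leq 0\quad\text{a.e. in }Q,\quad\text{since}\quad\begin{cases}\zeta(\varphi)=0&\text{where }|\varphi|\leq r_{0},\\ \mu-F^{\prime}(\varphi)\leq N-N=0\ \text{ and }\ \zeta(\varphi)>0&\text{where }\varphi>r_{0},\\ \mu-F^{\prime}(\varphi)\geq-N+N=0\ \text{ and }\ \zeta(\varphi)<0&\text{where }\varphi<-r_{0},\end{cases}\]
and the boundary term is treated in the same way, with \(\mu_{\Gamma}\) and \(F^{\prime}_{\Gamma}\) in place of \(\mu\) and \(F^{\prime}\).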
We then deduce that
\[\int_{\Omega}\widehat{\zeta}(\varphi(t))+\int_{\Gamma}\widehat{\zeta}(\varphi _{\Gamma}(t))\leq\int_{\Omega}\widehat{\zeta}(\varphi_{0})+\int_{\Gamma} \widehat{\zeta}(\varphi_{0|\Gamma})=0\]
for every \(t\in[0,T]\). Since \(\widehat{\zeta}\) is nonnegative, we conclude that \(\zeta(\varphi)\) vanishes identically, i.e., that \(|\varphi|\leq r_{0}\) in the whole of \(Q\), leading to the existence of \(\varphi_{*}\) and \(\varphi^{*}\) as in the statement. \(\square\)
## 4 Continuous dependence and uniqueness
This section is devoted to the proof of Theorem 2.6. More precisely, we prove estimate (2.30) for an arbitrary pair of solutions, so that uniqueness follows as a consequence. Let
us fix two pairs \((u_{i},u_{\Gamma,i})\), \(i=1,2\), of forcing terms and consider arbitrary corresponding solutions \((\mu_{i},\mu_{\Gamma,i},\varphi_{i},\varphi_{\Gamma,i})\). For brevity, we term \((u,u^{\Gamma})\) and \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) the differences. Then, \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) solves the equations
\[\int_{\Omega}\partial_{t}\varphi\,v+\int_{\Gamma}\partial_{t} \varphi_{\Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\mu\cdot\nabla v+\int_{\Gamma} \nabla_{\Gamma}\mu_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\gamma\int_{\Omega}(u-\varphi)v+\gamma\int_{\Gamma}(u^{\Gamma}- \varphi_{\Gamma})v_{\Gamma} \tag{4.1}\] \[\tau\int_{\Omega}\partial_{t}\varphi\,v+\tau\int_{\Gamma} \partial_{t}\varphi_{\Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\varphi\cdot \nabla v+\int_{\Gamma}\nabla_{\Gamma}\varphi_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[\quad\quad+\int_{\Omega}\bigl{(}F^{\prime}(\varphi_{1})-F^{\prime }(\varphi_{2})\bigr{)}v+\int_{\Gamma}\bigl{(}F^{\prime}_{\Gamma}(\varphi_{ \Gamma,1})-F^{\prime}_{\Gamma}(\varphi_{\Gamma,2})\bigr{)}v_{\Gamma}=\int_{ \Omega}\mu v+\int_{\Gamma}\mu_{\Gamma}v_{\Gamma} \tag{4.2}\]
both a.e. in \((0,T)\) and for every \((v,v_{\Gamma})\in\mathcal{V}\), as well as the initial condition \(\varphi(0)=0\). We test the above equations by \((\mu,\mu_{\Gamma})\) and \((\partial_{t}\varphi,\partial_{t}\varphi_{\Gamma})+\gamma(\varphi,\varphi_{\Gamma})\), respectively. Then, we sum up and notice that a number of cancellations occur. Moreover, we recall that the values of \(\varphi_{1}\) and \(\varphi_{2}\) belong to the interval \([\varphi_{*},\varphi^{*}]\) given by Theorem 2.5 and that \(F^{\prime}\) and \(F^{\prime}_{\Gamma}\) are Lipschitz continuous in this interval. Thus, we can use this fact in treating the terms involving \(F^{\prime}\) and \(F^{\prime}_{\Gamma}\) once they are moved to the right-hand side. By also appealing to the Young inequality, we have that
\[\int_{\Omega}|\nabla\mu|^{2}+\int_{\Gamma}|\nabla_{\Gamma}\mu_{ \Gamma}|^{2}\] \[\quad+\tau\int_{\Omega}|\partial_{t}\varphi|^{2}+\tau\int_{ \Gamma}|\partial_{t}\varphi_{\Gamma}|^{2}+\frac{1}{2}\,\frac{d}{dt}\int_{ \Omega}|\nabla\varphi|^{2}+\frac{1}{2}\,\frac{d}{dt}\int_{\Gamma}|\nabla_{ \Gamma}\varphi_{\Gamma}|^{2}\] \[\quad+\frac{\tau\gamma}{2}\,\frac{d}{dt}\int_{\Omega}|\varphi|^{ 2}+\frac{\tau\gamma}{2}\,\frac{d}{dt}\int_{\Gamma}|\varphi_{\Gamma}|^{2}+ \gamma\int_{\Omega}|\nabla\varphi|^{2}+\gamma\int_{\Gamma}|\nabla_{\Gamma} \varphi_{\Gamma}|^{2}\] \[\leq\gamma\int_{\Omega}u\mu+\gamma\int_{\Gamma}u^{\Gamma}\mu_{ \Gamma}+\frac{\tau}{4}\int_{\Omega}|\partial_{t}\varphi|^{2}+\frac{\tau}{4} \int_{\Gamma}|\partial_{t}\varphi_{\Gamma}|^{2}+c\int_{\Omega}|\varphi|^{2}+c \int_{\Gamma}|\varphi_{\Gamma}|^{2}\,. \tag{4.3}\]
Now, we test (4.2) by \((1,1)\). By setting \(\alpha:=\operatorname{mean}(\mu,\mu_{\Gamma})\) and using the Lipschitz continuity of \(F^{\prime}\) and \(F^{\prime}_{\Gamma}\) once more, we obtain
\[(|\Omega|+|\Gamma|)|\alpha|\leq\tau\int_{\Omega}|\partial_{t}\varphi|+\tau\int _{\Gamma}|\partial_{t}\varphi_{\Gamma}|+c\int_{\Omega}|\varphi|+c\int_{\Gamma} |\varphi_{\Gamma}|\]
whence
\[(|\Omega|+|\Gamma|)^{2}|\alpha|^{2}\leq 4\tau^{2}\Bigl{(}\int_{ \Omega}|\partial_{t}\varphi|\Bigr{)}^{2}+4\tau^{2}\Bigl{(}\int_{\Gamma}| \partial_{t}\varphi_{\Gamma}|\Bigr{)}^{2}+c\Bigl{(}\int_{\Omega}|\varphi| \Bigr{)}^{2}+c\Bigl{(}\int_{\Gamma}|\varphi_{\Gamma}|\Bigr{)}^{2}\] \[\leq 4\tau^{2}|\Omega|\int_{\Omega}|\partial_{t}\varphi|^{2}+4\tau^ {2}|\Gamma|\int_{\Gamma}|\partial_{t}\varphi_{\Gamma}|^{2}+c\int_{\Omega}| \varphi|^{2}+c\int_{\Gamma}|\varphi_{\Gamma}|^{2}\,.\]
By dividing by \(16\tau(|\Omega|+|\Gamma|)\), we conclude that
\[\frac{|\Omega|+|\Gamma|}{16\tau}\,|\alpha|^{2}\leq\frac{\tau}{4}\int_{\Omega}| \partial_{t}\varphi|^{2}+\frac{\tau}{4}\int_{\Gamma}|\partial_{t}\varphi_{ \Gamma}|^{2}+c\int_{\Omega}|\varphi|^{2}+c\int_{\Gamma}|\varphi_{\Gamma}|^{2}\,.\]
Next, we add this inequality to (4.3) and rearrange. At this point, we see that we can treat the first two integrals on the right-hand side by accounting for the Young and Poincaré
inequalities (2.41) and (2.46). For every \(\delta>0\), we have indeed that
\[\gamma\int_{\Omega}u\mu+\gamma\int_{\Gamma}u^{\Gamma}\mu_{\Gamma}\leq\delta\int_{\Omega}|\mu|^{2}+\delta\int_{\Gamma}|\mu_{\Gamma}|^{2}+c_{\delta}\int_{\Omega}|u|^{2}+c_{\delta}\int_{\Gamma}|u^{\Gamma}|^{2}\] \[\leq\delta C_{\Omega}\Bigl(\int_{\Omega}|\nabla\mu|^{2}+\int_{\Gamma}|\nabla_{\Gamma}\mu_{\Gamma}|^{2}+|\alpha|^{2}\Bigr)+c_{\delta}\int_{\Omega}|u|^{2}+c_{\delta}\int_{\Gamma}|u^{\Gamma}|^{2}\,.\]
Therefore, by choosing \(\delta\) small enough, integrating in time and applying the Gronwall lemma, we obtain (2.30) with a constant \(K_{3}\) as in the statement.
## 5 The control problem
In this section, we study the control problem (2.31)-(2.33) and prove the results we have stated in Section 2. As already anticipated, the controls \(u\) and \(u^{\Gamma}\) enter the state system (2.17)-(2.20) in the form of distributed and boundary source terms: of course, all the results stated above continue to hold for every admissible pair of controls, i.e., for every \((u,u^{\Gamma})\in\mathcal{U}_{ad}\). Throughout this section, it is understood that all of the assumptions we have listed there are satisfied. However, before starting, it is convenient to introduce a neighborhood of the control box and make an observation. For \(R>0\), we set
\[\mathcal{U}_{R}:=\bigl{\{}(u,u^{\Gamma})\in\mathcal{U}:\] \[\|u\|_{\infty}<M+R\ \ \text{and}\ \ \|u^{\Gamma}\|_{\infty}<M+R,\] \[\|\partial_{t}u\|_{L^{2}(0,T;H)}<M^{\prime}+R\ \ \text{and}\ \ \|\partial_{t}u^{\Gamma}\|_{L^{2}(0,T;H_{\Gamma})}<M^{\prime}+R\bigr{\}} \tag{5.1}\] \[\text{where}\quad\mathcal{U}:=\bigl{(}L^{\infty}(Q)\times L^{ \infty}(\Sigma)\bigr{)}\cap H^{1}(0,T;\mathcal{H}) \tag{5.2}\]
and we fix \(R\) small in order that
\[\text{assumption (2.11) is satisfied with $M$ replaced by $M+R$}. \tag{5.3}\]
Therefore, all our results hold for every \((u,u^{\Gamma})\in\mathcal{U}_{R}\), and the constants of the stability estimates (now depending on \(R\) in addition) can be fixed once and for all, since \(R\) is fixed. In particular, the boundedness of the component \(\varphi\) in the case of regular potentials, or the separation property in the case of logarithmic-type potentials, combined with the regularity condition (2.27), ensures that
\[\|F^{(i)}(\varphi)\|_{\infty}\leq K\quad\text{and}\quad\|F^{(i)}_{\Gamma}( \varphi_{\Gamma})\|_{\infty}\leq K\quad\text{for $0\leq i\leq 3$} \tag{5.4}\]
with a fixed constant \(K\), where \(\varphi\) and \(\varphi_{\Gamma}\) are the components of the solution to problem (2.17)-(2.20) corresponding to any \((u,u^{\Gamma})\in\mathcal{U}_{R}\). For the same reason, we can assume that
\[F^{(i)}\ \text{and}\ F^{(i)}_{\Gamma}\ \text{are Lipschitz continuous for}\ 0\leq i\leq 2 \tag{5.5}\]
without loss of generality.
### Existence of an optimal control
We prove Theorem 2.8 by using the direct method of the calculus of variations. Thus, we pick a minimizing sequence \(\{(u_{n},u_{n}^{\Gamma})\}\) in \(\mathcal{U}_{ad}\) and consider the corresponding sequence of solutions to the state system, labeled by \(n\). Since \(\mathcal{U}_{ad}\) is bounded in \(\mathcal{U}\), we can assume that
\[(u_{n},u_{n}^{\Gamma})\to(u_{*},u_{*}^{\Gamma})\quad\text{weakly star in $\mathcal{U}$}\]
as \(n\nearrow\infty\) for some limit pair \((u_{*},u_{*}^{\Gamma})\), which must belong to \(\mathcal{U}_{ad}\), since the latter is convex and closed. Besides, the corresponding solutions are bounded as well in the topologies associated with our well-posedness and regularity results. Hence, it follows that
\[(\varphi^{n},\varphi_{\Gamma}^{n})\to(\varphi,\varphi_{\Gamma}) \quad\text{weakly star in $W^{1,\infty}(0,T;\mathcal{H})\cap H^{1}(0,T; \mathcal{V})\cap L^{\infty}(0,T;\mathcal{W})$}\] \[(\mu^{n},\mu_{\Gamma}^{n})\to(\mu,\mu_{\Gamma})\quad\text{weakly star in $L^{\infty}(0,T;\mathcal{W})$}\]
for some limit functions. By also accounting for (5.5), it is immediately seen that the quadruplet \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) is the solution to the state system corresponding to \((u_{*},u_{*}^{\Gamma})\) and that
\[\mathcal{J}(u_{n},u_{n}^{\Gamma};\varphi^{n},\varphi_{\Gamma}^{n})\to \mathcal{J}(u_{*},u_{*}^{\Gamma};\varphi,\varphi_{\Gamma})\,.\]
On the other hand, we also have that \(\lim_{n\nearrow\infty}\mathcal{J}(u_{n},u_{n}^{\Gamma};\varphi^{n},\varphi_{\Gamma}^{n})\) coincides with the infimum of \(\mathcal{J}\), since \(\{(u_{n},u_{n}^{\Gamma})\}\) is a minimizing sequence for \(\mathcal{J}\). Therefore, \((u_{*},u_{*}^{\Gamma})\) is an optimal control.
### The control-to-state mapping
In this section, we introduce the _control-to-state_ mapping and prove its Fréchet differentiability. Along with the space \(\mathcal{U}\) defined by (5.2), we introduce the state space
\[\mathcal{Y}:=H^{1}(0,T;\mathcal{H})\cap L^{\infty}(0,T;\mathcal{V}) \tag{5.6}\]
and the solution mapping \(\mathcal{S}:\mathcal{U}_{R}\to\mathcal{Y}\) defined by
\[\text{for $(u,u^{\Gamma})\in\mathcal{U}_{R}$, the equality $(\varphi,\varphi_{\Gamma})=\mathcal{S}(u,u^{\Gamma})$ means that $\varphi$ and $\varphi_{\Gamma}$ are the components of the solution $(\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})$ to (\ref{eq:17})-(\ref{eq:20}) corresponding to $(u,u^{\Gamma})$.} \tag{5.7}\]
Thanks to all the observations we have made before, this map is well-defined. The main result of this section shows its Fréchet differentiability. This is related to the linearized system that we now introduce. Let \((u,u^{\Gamma})\in\mathcal{U}_{R}\) and let \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) be the corresponding solution to problem (2.17)-(2.20). Then, the linearized system corresponding to \((u,u^{\Gamma})\) and to the variation \((h,h^{\Gamma})\in\mathcal{U}\) is the system
\[\int_{\Omega}\partial_{t}\psi\,v+\int_{\Gamma}\partial_{t}\psi_{ \Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\eta\cdot\nabla v+\int_{\Gamma}\nabla _{\Gamma}\eta_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\gamma\int_{\Omega}(h-\psi)v+\gamma\int_{\Gamma}(h^{\Gamma}- \psi_{\Gamma})v_{\Gamma}\quad\text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$} \tag{5.8}\]
\[\tau\int_{\Omega}\partial_{t}\psi\,v+\tau\int_{\Gamma}\partial_{t} \psi_{\Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\psi\cdot\nabla v+\int_{\Gamma} \nabla_{\Gamma}\psi_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[\quad+\int_{\Omega}F^{\prime\prime}(\varphi)\psi v+\int_{\Gamma} F^{\prime\prime}_{\Gamma}(\varphi_{\Gamma})\psi_{\Gamma}v_{\Gamma}\] \[=\int_{\Omega}\eta v+\int_{\Gamma}\eta_{\Gamma}v_{\Gamma}\quad \text{a.e. in }(0,T)\text{ and for every }(v,v_{\Gamma})\in\mathcal{V} \tag{5.9}\] \[\psi(0)=0 \tag{5.10}\]
where the unknown is the \(\mathcal{V}\times\mathcal{V}\)-valued function \((\eta,\eta_{\Gamma},\psi,\psi_{\Gamma})\).
One can show that this system has a unique solution satisfying
\[(\eta,\eta_{\Gamma})\in L^{2}(0,T;\mathcal{V})\quad\text{and}\quad(\psi,\psi_ {\Gamma})\in\mathcal{Y}\,.\]
We do not provide the proof, for brevity. The existence of a solution can be proved by starting from a Faedo-Galerkin scheme. As the system (5.8)-(5.10) is linear, uniqueness is a trivial consequence of the stability estimate we establish below. Indeed, if \((h,h^{\Gamma})=(0,0)\), the estimate implies that \((\eta,\eta_{\Gamma},\psi,\psi_{\Gamma})=(0,0,0,0)\).
**Lemma 5.1**.: _Let \((u,u^{\Gamma})\in\mathcal{U}_{R}\) and \((h,h^{\Gamma})\in\mathcal{U}\), and let \((\eta,\eta_{\Gamma},\psi,\psi_{\Gamma})\) be any solution to the corresponding linearized system. Then the estimate_
\[\|(\eta,\eta_{\Gamma})\|_{L^{2}(0,T;\mathcal{V})}+\|(\psi,\psi_{\Gamma})\|_{ \mathcal{Y}}\leq K\,\|(h,h^{\Gamma})\|_{L^{2}(0,T;\mathcal{V})} \tag{5.11}\]
_holds true with a positive constant \(K\) that does not depend on \((h,h^{\Gamma})\)._
Proof.: We just sketch the proof for brevity. We test equations (5.8) and (5.9) by \((\eta,\eta_{\Gamma})\) and \((\partial_{t}\psi,\partial_{t}\psi_{\Gamma})+\gamma(\psi,\psi_{\Gamma})-(1/\tau)(\eta,\eta_{\Gamma})\), respectively, and sum up. Then, several cancellations occur. Moreover, we move some terms from one side to the other in order to have just nonnegative integrals on the left-hand side, and we repeatedly appeal to the Young inequality on the right-hand side, recalling that \(F^{\prime\prime}(\varphi)\) and \(F^{\prime\prime}_{\Gamma}(\varphi_{\Gamma})\) are bounded as a consequence of Theorem 2.5 (or already of Theorem 2.4 in the case of everywhere defined potentials). Then, the only delicate term is handled as follows
\[\int_{\Omega}\partial_{t}\psi\,\eta\leq\frac{\tau}{2}\int_{\Omega}|\partial_{ t}\psi|^{2}+\frac{1}{2\tau}\int_{\Omega}|\eta|^{2}\]
and a similar treatment is reserved for the analogous boundary contribution. Therefore, by integrating over \((0,t)\) with an arbitrary \(t\in(0,T]\) and applying the Gronwall lemma, we immediately obtain (5.11).
Here is our differentiability result.
**Theorem 5.2**.: _Given any \((u,u^{\Gamma})\in\mathcal{U}_{R}\), the map \(\mathcal{S}\) is Fréchet differentiable at \((u,u^{\Gamma})\) and its Fréchet derivative \(D\mathcal{S}(u,u^{\Gamma})\) is the map belonging to \(\mathcal{L}(\mathcal{U},\mathcal{Y})\) that to every \((h,h^{\Gamma})\in\mathcal{U}\) associates the pair \((\psi,\psi_{\Gamma})\), where \((\eta,\eta_{\Gamma},\psi,\psi_{\Gamma})\) is the solution to the linearized system corresponding to \((u,u^{\Gamma})\) and to the variation \((h,h^{\Gamma})\)._
Proof.: The fact that the linear map described in the statement belongs to \(\mathcal{L}(\mathcal{U},\mathcal{Y})\) is a consequence of Lemma 5.1. For the remaining part of the proof, without loss of generality,
we assume that \(\|(h,h^{\Gamma})\|_{\mathcal{U}}\) is small enough, namely, such that the perturbed control pair \((u+h,u^{\Gamma}+h^{\Gamma})\) also belongs to the open set \(\mathcal{U}_{R}\). We consider the solutions to the state system corresponding to \((u,u^{\Gamma})\) and to \((u+h,u^{\Gamma}+h^{\Gamma})\). We term \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) the former and \((\widehat{\mu},\widehat{\mu}_{\Gamma},\widehat{\varphi},\widehat{\varphi}_{ \Gamma})\) the latter, and define
\[\vartheta:=\widehat{\mu}-\mu-\eta,\quad\vartheta_{\Gamma}:=\widehat{\mu}_{ \Gamma}-\mu_{\Gamma}-\eta_{\Gamma},\quad\rho:=\widehat{\varphi}-\varphi- \psi\quad\text{and}\quad\rho_{\Gamma}:=\widehat{\varphi}_{\Gamma}-\varphi_{ \Gamma}-\psi_{\Gamma}\,.\]
Then, the quadruplet \((\vartheta,\vartheta_{\Gamma},\rho,\rho_{\Gamma})\) solves the system
\[\int_{\Omega}\partial_{t}\rho\,v+\int_{\Gamma}\partial_{t}\rho_{ \Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\vartheta\cdot\nabla v+\int_{\Gamma} \nabla_{\Gamma}\vartheta_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=-\gamma\int_{\Omega}\rho v-\gamma\int_{\Gamma}\rho_{\Gamma}v_{ \Gamma}\quad\text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$} \tag{5.12}\] \[\tau\int_{\Omega}\partial_{t}\rho\,v+\tau\int_{\Gamma}\partial_{ t}\rho_{\Gamma}\,v_{\Gamma}+\int_{\Omega}\nabla\rho\cdot\nabla v+\int_{ \Gamma}\nabla_{\Gamma}\rho_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}+\int_{ \Omega}\chi v+\int_{\Gamma}\chi^{\Gamma}v_{\Gamma}\] \[=\int_{\Omega}\vartheta v+\int_{\Gamma}\vartheta_{\Gamma}v_{ \Gamma}\quad\text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$}\] (5.13) \[\rho(0)=0 \tag{5.14}\]
where we have set
\[\chi:=F^{\prime}(\widehat{\varphi})-F^{\prime}(\varphi)-F^{\prime\prime}( \varphi)\psi\quad\text{and}\quad\chi^{\Gamma}:=F^{\prime}_{\Gamma}(\widehat{ \varphi}_{\Gamma})-F^{\prime}_{\Gamma}(\varphi_{\Gamma})-F^{\prime\prime}_{ \Gamma}(\varphi_{\Gamma})\psi_{\Gamma}\,. \tag{5.15}\]
We notice at once that the Taylor formula with integral remainder yields that
\[\chi=F^{\prime\prime}(\varphi)\rho+(\widehat{\varphi}-\varphi)^{ 2}\,\mathcal{R}\quad\text{where}\] \[|\mathcal{R}|=\Big{|}\int_{0}^{1}(1-s)F^{(3)}(\varphi+s( \widehat{\varphi}-\varphi))\,ds\Big{|}\leq c\quad\text{a.e. in $Q$}\]
and that a similar observation holds for the boundary term \(\chi^{\Gamma}\). Therefore, since \(F^{\prime\prime}(\varphi)\) and \(F^{\prime\prime}_{\Gamma}(\varphi_{\Gamma})\) are bounded, for every \(t\in(0,T]\), we have that
\[\int_{Q_{t}}|\chi|^{2}+\int_{\Sigma_{t}}|\chi^{\Gamma}|^{2}\leq c\int_{Q_{t}}| \rho|^{2}+c\int_{\Sigma_{t}}|\rho_{\Gamma}|^{2}+c\int_{Q_{t}}|\widehat{\varphi }-\varphi|^{4}+c\int_{\Sigma_{t}}|\widehat{\varphi}_{\Gamma}-\varphi_{\Gamma}| ^{4}\,.\]
By accounting for inequality (2.30), the continuous embedding \(L^{\infty}(0,T;V)\subset L^{4}(Q)\) and the boundary analogue, we deduce that
\[\int_{Q_{t}}|\chi|^{2}+\int_{\Sigma_{t}}|\chi^{\Gamma}|^{2}\leq c\int_{Q_{t}}| \rho|^{2}+c\int_{\Sigma_{t}}|\rho_{\Gamma}|^{2}+c\,\|(h,h^{\Gamma})\|_{L^{2}(0, T;\mathcal{V})}^{4}\,. \tag{5.16}\]
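For clarity, we make the treatment of the quartic terms explicit. The following chain is a minimal sketch, whose last step presumes (as used here) that (2.30) controls the \(L^{\infty}(0,T;\mathcal{V})\) norm of the difference of the phase variables:
\[\int_{Q_{t}}|\widehat{\varphi}-\varphi|^{4}\leq\int_{0}^{T}\|(\widehat{\varphi}-\varphi)(s)\|_{L^{4}(\Omega)}^{4}\,ds\leq c\,\|\widehat{\varphi}-\varphi\|_{L^{\infty}(0,T;V)}^{4}\leq c\,\|(h,h^{\Gamma})\|_{L^{2}(0,T;\mathcal{V})}^{4}\,,\]
and the boundary term is handled similarly, by means of the boundary analogue of the embedding recalled above.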
At this point, we test (5.12) by \((\vartheta,\vartheta_{\Gamma})\) and (5.13) by \((\partial_{t}\rho,\partial_{t}\rho_{\Gamma})+\gamma(\rho,\rho_{\Gamma})\). Then, we add the resulting equalities to each other and notice that some cancellations occur. We obtain that
\[\int_{\Omega}|\nabla\vartheta|^{2}+\int_{\Gamma}|\nabla_{\Gamma} \vartheta_{\Gamma}|^{2}\] \[\quad+\tau\int_{\Omega}|\partial_{t}\rho|^{2}+\tau\int_{\Gamma}| \partial_{t}\rho_{\Gamma}|^{2}+\frac{1}{2}\frac{d}{dt}\int_{\Omega}|\nabla\rho| ^{2}+\frac{1}{2}\frac{d}{dt}\int_{\Gamma}|\nabla_{\Gamma}\rho_{\Gamma}|^{2}\] \[\quad+\frac{\tau\gamma}{2}\,\frac{d}{dt}\int_{\Omega}|\rho|^{2}+ \frac{\tau\gamma}{2}\,\frac{d}{dt}\int_{\Gamma}|\rho_{\Gamma}|^{2}+\gamma\int_{ \Omega}|\nabla\rho|^{2}+\gamma\int_{\Gamma}|\nabla_{\Gamma}\rho_{\Gamma}|^{2}\] \[=-\int_{\Omega}\chi\big{(}\partial_{t}\rho+\gamma\rho\big{)}-\int_{ \Gamma}\chi^{\Gamma}\big{(}\partial_{t}\rho_{\Gamma}+\gamma\rho_{\Gamma}\big{)}\,. \tag{5.17}\]
We aim at applying the Gronwall lemma. To this end, we integrate over \((0,t)\) and estimate the terms arising from the right-hand side. They can easily be treated by accounting for the Young inequality and the preliminary estimate (5.16). Namely, we find that
\[-\int_{Q_{t}}\chi\big{(}\partial_{t}\rho+\gamma\rho\big{)}-\int_{ \Sigma_{t}}\chi^{\Gamma}\big{(}\partial_{t}\rho_{\Gamma}+\gamma\rho_{\Gamma} \big{)}\] \[\leq\frac{\tau}{2}\int_{Q_{t}}|\partial_{t}\rho|^{2}+\frac{\tau}{2 }\int_{\Sigma_{t}}|\partial_{t}\rho_{\Gamma}|^{2}+\int_{Q_{t}}|\rho|^{2}+\int_ {\Sigma_{t}}|\rho_{\Gamma}|^{2}+c\,\|(h,h^{\Gamma})\|_{L^{2}(0,T;\mathcal{H})} ^{4}\,.\]
Therefore, we can rearrange (5.17) integrated over \((0,t)\) and apply the Gronwall lemma. This yields
\[\|(\nabla\vartheta,\nabla_{\Gamma}\vartheta_{\Gamma})\|_{L^{2}(0,T;\mathcal{H})}^{2}+\|(\rho,\rho_{\Gamma})\|_{H^{1}(0,T;\mathcal{H})\cap L^{\infty}(0,T;\mathcal{V})}^{2}\leq c\,\|(h,h^{\Gamma})\|_{L^{2}(0,T;\mathcal{H})}^{4} \tag{5.18}\]
for every \((h,h^{\Gamma})\) whose \(\mathcal{U}\)-norm is small enough. As a consequence, we have that
\[\|\mathcal{S}(u+h,u^{\Gamma}+h^{\Gamma})-\mathcal{S}(u,u^{\Gamma})-(\psi,\psi_{\Gamma})\|_{\mathcal{Y}}=\|(\rho,\rho_{\Gamma})\|_{\mathcal{Y}}=o(\|(h,h^{\Gamma})\|_{\mathcal{U}})\quad\text{as }\|(h,h^{\Gamma})\|_{\mathcal{U}}\searrow 0\]
and this implies that \(\mathcal{S}\) is Fréchet differentiable at \((u,u^{\Gamma})\) and that its Fréchet derivative acts as expressed in the statement.
**Remark 5.3**.: We observe that estimating \((\vartheta,\vartheta_{\Gamma})\) from (5.13) with the help of (5.18) yields that
\[\|(\vartheta,\vartheta_{\Gamma})\|_{L^{2}(0,T;\mathcal{H})}^{2}\leq c\,\|(h,h ^{\Gamma})\|_{L^{2}(0,T;\mathcal{H})}^{4}\,.\]
This, combined with (5.18) itself and a corresponding improvement of (5.11), shows that the map that to every \((u,u^{\Gamma})\in\mathcal{U}_{R}\) associates the whole solution \((\mu,\mu_{\Gamma},\varphi,\varphi_{\Gamma})\) to the original problem is Fréchet differentiable as well, and that its Fréchet derivative at the given \((u,u^{\Gamma})\) is the map that to every \((h,h^{\Gamma})\in\mathcal{U}\) associates the whole solution \((\eta,\eta_{\Gamma},\psi,\psi_{\Gamma})\) to the linearized problem. This fact would allow us to consider a more general cost functional. Namely, on the right-hand side of (2.31) we could add two similar integrals on \(Q\) and \(\Sigma\) involving \(\mu\) and \(\mu_{\Gamma}\), respectively.
### First-order optimality conditions
The above Fréchet differentiability result permits us to apply the chain rule to the composite map
\[\mathcal{U}_{R}\ni(u,u^{\Gamma})\mapsto(u,u^{\Gamma};\varphi,\varphi_{\Gamma} )\mapsto\mathcal{J}(u,u^{\Gamma};\varphi,\varphi_{\Gamma})\quad\text{with} \quad(\varphi,\varphi_{\Gamma})=\mathcal{S}(u,u^{\Gamma}).\]
Since \(\mathcal{U}_{ad}\) is convex, one immediately sees that a necessary condition for \((u_{*},u_{*}^{\Gamma})\) to be an optimal control is given by the variational inequality
\[\alpha_{1} \int_{Q}(\varphi^{*}-\phi^{Q})\psi+\alpha_{2}\int_{\Sigma}( \varphi_{\Gamma}^{*}-\phi^{\Sigma})\psi_{\Gamma}\] \[+\alpha_{3}\int_{\Omega}(\varphi^{*}(T)-\phi^{\Omega})\psi(T)+ \alpha_{4}\int_{\Gamma}(\varphi_{\Gamma}^{*}(T)-\phi^{\Gamma})\psi_{\Gamma}(T)\] \[+\alpha_{5}\int_{Q}u_{*}(u-u_{*})+\alpha_{6}\int_{\Sigma}u_{*}^{ \Gamma}(u^{\Gamma}-u_{*}^{\Gamma})\] \[\geq 0\quad\text{for every }(u,u^{\Gamma})\in\mathcal{U}_{ad} \tag{5.19}\]
where \((\varphi^{*},\varphi^{*}_{\Gamma})=\mathcal{S}(u_{*},u^{\Gamma}_{*})\) and \(\psi\) and \(\psi_{\Gamma}\) are the components of the solution \((\eta,\eta_{\Gamma},\psi,\psi_{\Gamma})\) to the linearized problem corresponding to the pair \((u_{*},u^{\Gamma}_{*})\) and to the variation \((h,h^{\Gamma}):=(u-u_{*},u^{\Gamma}-u^{\Gamma}_{*})\). However, the condition just written is rather unpleasant. Indeed, it involves the linearized problem infinitely many times, since \((u,u^{\Gamma})\) is arbitrary in \(\mathcal{U}_{ad}\). As usual, this issue is bypassed by introducing a proper adjoint problem.
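For completeness, we sketch how convexity produces (5.19). For \((u,u^{\Gamma})\in\mathcal{U}_{ad}\) and \(s\in(0,1]\), the pair \((u_{*},u_{*}^{\Gamma})+s(h,h^{\Gamma})\) belongs to \(\mathcal{U}_{ad}\) by convexity, whence the optimality of \((u_{*},u_{*}^{\Gamma})\) gives
\[0\leq\lim_{s\searrow 0}\frac{1}{s}\Bigl(\mathcal{J}\bigl((u_{*},u_{*}^{\Gamma})+s(h,h^{\Gamma});\,\mathcal{S}\bigl((u_{*},u_{*}^{\Gamma})+s(h,h^{\Gamma})\bigr)\bigr)-\mathcal{J}\bigl(u_{*},u_{*}^{\Gamma};\,\varphi^{*},\varphi^{*}_{\Gamma}\bigr)\Bigr),\]
and, by Theorem 5.2 and the chain rule, this limit is precisely the left-hand side of (5.19).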
We give a rather weak formulation of it, since we do not require any time differentiability of the individual components of its solution. Namely, the only time derivative we consider is understood in the sense of the Hilbert triplet \((\mathcal{V},\mathcal{H},\mathcal{V}^{*})\) obtained by identifying \(\mathcal{H}\) with a subspace of \(\mathcal{V}^{*}\) in the usual way, i.e., in order that \(\langle y,z\rangle_{\mathcal{V}}=(y,z)_{\mathcal{H}}\) for every \(y\in\mathcal{H}\) and \(z\in\mathcal{V}\), where \((\,\cdot\,,\,\cdot\,)_{\mathcal{H}}\) denotes the standard inner product of \(\mathcal{H}\). To make the presentation more readable, we introduce some abbreviations, setting
\[\lambda :=F^{\prime\prime}(\varphi^{*}),\quad\lambda^{\Gamma}:=F^{ \prime\prime}_{\Gamma}(\varphi^{*}_{\Gamma})\] \[\zeta_{1} :=\alpha_{1}(\varphi^{*}-\phi^{Q}),\quad\zeta_{2}:=\alpha_{2}( \varphi^{*}_{\Gamma}-\phi^{\Sigma})\] \[\zeta_{3} :=\alpha_{3}(\varphi^{*}(T)-\phi^{\Omega}),\quad\zeta_{4}:= \alpha_{4}(\varphi^{*}_{\Gamma}(T)-\phi^{\Gamma})\]
where \((u_{*},u^{\Gamma}_{*})\) is the optimal control at hand and \((\varphi^{*},\varphi^{*}_{\Gamma}):=\mathcal{S}(u_{*},u^{\Gamma}_{*})\). We recall that \(\lambda\) and \(\lambda^{\Gamma}\) are bounded and that the other functions are \(L^{2}\) functions. Then, the adjoint problem associated to \((u_{*},u^{\Gamma}_{*})\) consists in looking for a quadruplet (or a pair of pairs) \((p,p_{\Gamma},q,q_{\Gamma})\) satisfying
\[(p,p_{\Gamma})\in L^{\infty}(0,T;\mathcal{V})\quad\text{and}\quad( q,q_{\Gamma})\in L^{\infty}(0,T;\mathcal{H})\cap L^{2}(0,T;\mathcal{V}) \tag{5.20}\] \[(p+\tau q,p_{\Gamma}+\tau q_{\Gamma})\in H^{1}(0,T;\mathcal{V}^{ *}) \tag{5.21}\]
and solving the system
\[-\langle\partial_{t}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma}),(v,v_{ \Gamma})\rangle_{\mathcal{V}}\] \[\quad+\int_{\Omega}\nabla q\cdot\nabla v+\int_{\Gamma}\nabla_{ \Gamma}q_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}+\int_{\Omega}(\gamma p+ \lambda q)v+\int_{\Gamma}(\gamma p_{\Gamma}+\lambda^{\Gamma}q_{\Gamma})v_{\Gamma}\] \[=\int_{\Omega}\zeta_{1}v+\int_{\Gamma}\zeta_{2}v_{\Gamma}\quad \text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$} \tag{5.22}\] \[\int_{\Omega}\nabla p\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma} p_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\int_{\Omega}qv+\int_{\Gamma}q_{\Gamma}v_{\Gamma}\quad\text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$}\] (5.23) \[\big{(}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma})(T),(v,v_{\Gamma}) \big{)}_{\mathcal{H}}\] \[=\int_{\Omega}\zeta_{3}v+\int_{\Gamma}\zeta_{4}v_{\Gamma}\quad \text{for every $(v,v_{\Gamma})\in\mathcal{V}$}. \tag{5.24}\]
**Remark 5.4**.: We notice that (5.20)-(5.21) imply that
\[(p+\tau q,p_{\Gamma}+\tau q_{\Gamma})\in H^{1}(0,T;\mathcal{V}^{*})\cap L^{2}( 0,T;\mathcal{V})\subset C^{0}([0,T];\mathcal{H})\]
so that the final value \((p+\tau q,p_{\Gamma}+\tau q_{\Gamma})(T)\) is meaningful. We also remark that the final condition (5.24) is equivalent to
\[(p+\tau q)(T)=\zeta_{3}\quad\text{and}\quad(p_{\Gamma}+\tau q_{\Gamma})(T)= \zeta_{4}\]
and that our assumptions on the cost functional simply yield that \((\zeta_{3},\zeta_{4})\in\mathcal{H}\). In particular, it is not required that \(\zeta_{4}\) be the trace of \(\zeta_{3}\). Such a condition would make sense only under stronger assumptions on the ingredients of the cost functional and would be necessary if we required \((p+\tau q,p_{\Gamma}+\tau q_{\Gamma})\) to be a continuous \(\mathcal{V}\)-valued function.
At this point, we are ready to prove our results regarding the adjoint problem and its connection with the control problem.
**Theorem 5.5**.: _The adjoint problem (5.22)-(5.24) has a unique solution \((p,p_{\Gamma},q,q_{\Gamma})\) satisfying the regularity requirements (5.20)-(5.21)._
Proof.: Our procedure is inspired by the proof of the comparable result [15, Thm. 4.4]. As for existence, we introduce an approximating problem depending on a positive parameter \(\varepsilon\) that we then let tend to zero. This consists in looking for a quadruplet \((p^{\varepsilon},p^{\varepsilon}_{\Gamma},q^{\varepsilon},q^{\varepsilon}_{ \Gamma})\) satisfying
\[(p^{\varepsilon},p^{\varepsilon}_{\Gamma}),\,(q^{\varepsilon},q^{\varepsilon} _{\Gamma})\in H^{1}(0,T;\mathcal{H})\cap L^{\infty}(0,T;\mathcal{V}) \tag{5.25}\]
and solving the system
\[-\int_{\Omega}\partial_{t}(p^{\varepsilon}+\tau q^{\varepsilon}) \,v-\int_{\Gamma}\partial_{t}(p^{\varepsilon}_{\Gamma}+\tau q^{\varepsilon}_{ \Gamma})\,v_{\Gamma}\] \[\quad+\int_{\Omega}\nabla q^{\varepsilon}\cdot\nabla v+\int_{ \Gamma}\nabla_{\Gamma}q^{\varepsilon}_{\Gamma}\cdot\nabla_{\Gamma}v_{ \Gamma}+\int_{\Omega}(\gamma p^{\varepsilon}+\lambda q^{\varepsilon})v+\int_ {\Gamma}(\gamma p^{\varepsilon}_{\Gamma}+\lambda^{\Gamma}q^{\varepsilon}_{ \Gamma})v_{\Gamma}\] \[=\int_{\Omega}\zeta_{1}v+\int_{\Gamma}\zeta_{2}v_{\Gamma}\quad \text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$} \tag{5.26}\] \[-\varepsilon\int_{\Omega}\partial_{t}p^{\varepsilon}\,v- \varepsilon\int_{\Omega}\partial_{t}p^{\varepsilon}_{\Gamma}\,v_{\Gamma}+\int _{\Omega}\nabla p^{\varepsilon}\cdot\nabla v+\int_{\Gamma}\nabla_{\Gamma}p^{ \varepsilon}_{\Gamma}\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\int_{\Omega}q^{\varepsilon}v+\int_{\Gamma}q^{\varepsilon}_{ \Gamma}v_{\Gamma}\quad\text{a.e. in $(0,T)$ and for every $(v,v_{\Gamma})\in\mathcal{V}$}\] (5.27) \[(p^{\varepsilon},p^{\varepsilon}_{\Gamma})(T)=(0,0)\quad\text{ and}\quad(q^{\varepsilon},q^{\varepsilon}_{\Gamma})(T)=(\zeta_{3}^{\varepsilon}/\tau,\zeta_{4}^{ \varepsilon}/\tau) \tag{5.28}\]
where the approximating data \(\zeta_{3}^{\varepsilon}\) and \(\zeta_{4}^{\varepsilon}\) are chosen such that
\[(\zeta_{3}^{\varepsilon},\zeta_{4}^{\varepsilon})\in\mathcal{V}\quad\text{ for $\varepsilon>0$}\quad\text{and}\quad(\zeta_{3}^{\varepsilon},\zeta_{4}^{ \varepsilon})\to(\zeta_{3},\zeta_{4})\text{ in $\mathcal{H}$}\quad\text{as $ \varepsilon\searrow 0$}. \tag{5.29}\]
We claim that this problem has a unique solution. We do not give the proof of this statement, since it can be obtained by closely following the analogous one of [15, Thm. 4.3], where an ad hoc inner product in \(\mathcal{H}\) is designed in order to make the problem parabolic. Instead, we perform in detail the estimates that allow us to let \(\varepsilon\) tend to zero. We assume \(\varepsilon\in(0,1)\) (so that \(\varepsilon^{2}<\varepsilon\)) and employ the abbreviations
\[Q^{t}:=\Omega\times(t,T)\quad\text{and}\quad\Sigma^{t}:=\Gamma\times(t,T)\quad \text{for $t\in[0,T)$.}\]
First a priori estimate.We test (5.26) by \((q^{\varepsilon},q^{\varepsilon}_{\Gamma})\) and rearrange the terms to find that
\[-\int_{\Omega}\partial_{t}p^{\varepsilon}\,q^{\varepsilon}-\int_{ \Gamma}\partial_{t}p^{\varepsilon}_{\Gamma}\,q^{\varepsilon}_{\Gamma}-\frac{ \tau}{2}\,\frac{d}{dt}\int_{\Omega}|q^{\varepsilon}|^{2}-\frac{\tau}{2}\,\frac{ d}{dt}\int_{\Gamma}|q^{\varepsilon}_{\Gamma}|^{2}\] \[\quad+\int_{\Omega}|\nabla q^{\varepsilon}|^{2}+\int_{\Gamma}| \nabla_{\Gamma}q^{\varepsilon}_{\Gamma}|^{2}\] \[=-\int_{\Omega}(\gamma p^{\varepsilon}+\lambda q^{\varepsilon})q^{ \varepsilon}-\int_{\Gamma}(\gamma p^{\varepsilon}_{\Gamma}+\lambda^{\Gamma}q^{ \varepsilon}_{\Gamma})q^{\varepsilon}_{\Gamma}+\int_{\Omega}\zeta_{1}q^{ \varepsilon}+\int_{\Gamma}\zeta_{2}q^{\varepsilon}_{\Gamma}\,.\]
At the same time, by noting that \(-\partial_{t}(p^{\varepsilon},p^{\varepsilon}_{\Gamma})\) belongs to \(L^{2}(0,T;\mathcal{V})\) (since \((q^{\varepsilon},q^{\varepsilon}_{\Gamma})\in L^{2}(0,T;\mathcal{V})\)), we use it as test function in (5.27) and obtain that
\[\varepsilon\int_{\Omega}|\partial_{t}p^{\varepsilon}|^{2}+ \varepsilon\int_{\Gamma}|\partial_{t}p^{\varepsilon}_{\Gamma}|^{2}-\frac{1}{2} \,\frac{d}{dt}\int_{\Omega}|\nabla p^{\varepsilon}|^{2}-\frac{1}{2}\,\frac{d}{ dt}\int_{\Gamma}|\nabla_{\Gamma}p^{\varepsilon}_{\Gamma}|^{2}\] \[=-\int_{\Omega}q^{\varepsilon}\partial_{t}p^{\varepsilon}-\int_{ \Gamma}q^{\varepsilon}_{\Gamma}\partial_{t}p^{\varepsilon}_{\Gamma}\,.\]
Now, we sum up, account for the cancellations that occur, and estimate the right-hand side. Using that \(\lambda\) and \(\lambda^{\Gamma}\) are bounded and that \(\zeta_{1}\) and \(\zeta_{2}\) are \(L^{2}\) functions, we deduce that
\[-\frac{\tau}{2}\,\frac{d}{dt}\int_{\Omega}|q^{\varepsilon}|^{2}- \frac{\tau}{2}\,\frac{d}{dt}\int_{\Gamma}|q^{\varepsilon}_{\Gamma}|^{2}+\int_ {\Omega}|\nabla q^{\varepsilon}|^{2}+\int_{\Gamma}|\nabla_{\Gamma}q^{ \varepsilon}_{\Gamma}|^{2}\] \[\quad+\varepsilon\int_{\Omega}|\partial_{t}p^{\varepsilon}|^{2}+ \varepsilon\int_{\Gamma}|\partial_{t}p^{\varepsilon}_{\Gamma}|^{2}-\frac{1}{ 2}\,\frac{d}{dt}\int_{\Omega}|\nabla p^{\varepsilon}|^{2}-\frac{1}{2}\,\frac{d }{dt}\int_{\Gamma}|\nabla_{\Gamma}p^{\varepsilon}_{\Gamma}|^{2}\] \[\leq\int_{\Omega}|p^{\varepsilon}|^{2}+\int_{\Gamma}|p^{ \varepsilon}_{\Gamma}|^{2}+c\int_{\Omega}|q^{\varepsilon}|^{2}+c\int_{\Gamma }|q^{\varepsilon}_{\Gamma}|^{2}+c\,.\]
At this point, we integrate over \((t,T)\). As for the final values, we account for (5.28) and (5.29). We obtain that
\[\frac{\tau}{2}\int_{\Omega}|q^{\varepsilon}(t)|^{2}+\frac{\tau}{ 2}\int_{\Gamma}|q^{\varepsilon}_{\Gamma}(t)|^{2}+\int_{Q^{t}}|\nabla q^{ \varepsilon}|^{2}+\int_{\Sigma^{t}}|\nabla_{\Gamma}q^{\varepsilon}_{\Gamma}| ^{2}\] \[\quad+\varepsilon\int_{Q^{t}}|\partial_{t}p^{\varepsilon}|^{2}+ \varepsilon\int_{\Sigma^{t}}|\partial_{t}p^{\varepsilon}_{\Gamma}|^{2}+\frac{ 1}{2}\int_{\Omega}|\nabla p^{\varepsilon}(t)|^{2}+\frac{1}{2}\int_{\Gamma}| \nabla_{\Gamma}p^{\varepsilon}_{\Gamma}(t)|^{2}\] \[\leq\int_{Q^{t}}|p^{\varepsilon}|^{2}+\int_{\Sigma^{t}}|p^{ \varepsilon}_{\Gamma}|^{2}+c\int_{Q^{t}}|q^{\varepsilon}|^{2}+c\int_{\Sigma^{ t}}|q^{\varepsilon}_{\Gamma}|^{2}+c\,. \tag{5.30}\]
Now, we recall the Poincaré inequality (2.46) and have that
\[\int_{Q^{t}}|p^{\varepsilon}|^{2}+\int_{\Sigma^{t}}|p^{ \varepsilon}_{\Gamma}|^{2}\] \[\leq c\Bigl{(}\int_{Q^{t}}|\nabla p^{\varepsilon}|^{2}+\int_{ \Sigma^{t}}|\nabla_{\Gamma}p^{\varepsilon}_{\Gamma}|^{2}+\int_{t}^{T}|\, \mathrm{mean}(p^{\varepsilon},p^{\varepsilon}_{\Gamma})(s)|^{2}\,ds\Bigr{)}. \tag{5.31}\]
Next, we write the term \(\gamma p^{\varepsilon}\) in (5.26) as \(\gamma(p^{\varepsilon}+\tau q^{\varepsilon})-\gamma\tau q^{\varepsilon}\) and we do the same with \(\gamma p^{\varepsilon}_{\Gamma}\). Moreover, we set for brevity
\[m:=(|\Omega|+|\Gamma|)\,\mathrm{mean}(p^{\varepsilon}+\tau q^{ \varepsilon},p^{\varepsilon}_{\Gamma}+\tau q^{\varepsilon}_{\Gamma})=\int_{ \Omega}(p^{\varepsilon}+\tau q^{\varepsilon})+\int_{\Gamma}(p^{\varepsilon}_{ \Gamma}+\tau q^{\varepsilon}_{\Gamma})\] \[g:=\zeta_{1}+(\gamma\tau-\lambda)q^{\varepsilon}\quad\text{and} \quad g^{\Gamma}:=\zeta_{2}+(\gamma\tau-\lambda^{\Gamma})q^{\varepsilon}_{ \Gamma}\,.\]
Then, we test both (5.26) with the above modification and (5.28) by \((1,1)\) to obtain that
\[-m^{\prime}+\gamma m=\int_{\Omega}g+\int_{\Gamma}g^{\Gamma}\quad\text{a.e. in }(0,T)\quad\text{and}\quad m(T)=\int_{\Omega}\zeta_{3}^{\varepsilon}+\int_{\Gamma}\zeta_{4}^{\varepsilon}\]
whence
\[m(t)=e^{-\gamma(T-t)}\Bigl{(}\int_{\Omega}\zeta_{3}^{\varepsilon}+\int_{\Gamma }\zeta_{4}^{\varepsilon}\Bigr{)}+\int_{t}^{T}e^{-\gamma(s-t)}\Bigl{(}\int_{ \Omega}g(s)+\int_{\Gamma}g^{\Gamma}(s)\Bigr{)}\,ds\]
for every \(t\in[0,T]\). Therefore, by also recalling (5.29), we easily conclude that
\[|\operatorname{mean}(p^{\varepsilon}+\tau q^{\varepsilon},p^{\varepsilon}_{ \Gamma}+\tau q^{\varepsilon}_{\Gamma})(t)|^{2}\leq c\int_{Q^{t}}|q^{\varepsilon }|^{2}+c\int_{\Sigma^{t}}|q^{\varepsilon}_{\Gamma}|^{2}+c\quad\text{for every $t\in[0,T]$.}\]
From this, we derive that
\[|\operatorname{mean}(p^{\varepsilon},p^{\varepsilon}_{\Gamma}) (t)|^{2}\leq C\left(\int_{\Omega}|q^{\varepsilon}(t)|^{2}+\int_{\Gamma}|q^{ \varepsilon}_{\Gamma}(t)|^{2}\right)\] \[\quad+c\int_{\Sigma^{t}}|q^{\varepsilon}|^{2}+c\int_{\Sigma^{t}} |q^{\varepsilon}_{\Gamma}|^{2}+c\quad\text{for every $t\in[0,T]$} \tag{5.32}\]
with a fixed and computable positive constant \(C\) depending only on \(\Omega\) and \(\tau\). At this point, we add (5.30) and (5.32) multiplied by \(\tau/(4C)\) to each other, account for (5.31), rearrange, and apply the (backward) Gronwall lemma. By using the Poincaré inequality once more, we conclude that
\[\|(p^{\varepsilon},p^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{V})}+ \|(q^{\varepsilon},q^{\varepsilon}_{\Gamma})\|_{L^{\infty}(0,T;\mathcal{V}) \cap L^{2}(0,T;\mathcal{V})}+\varepsilon^{1/2}\,\|\partial_{t}(p^{ \varepsilon},p^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;\mathcal{V})}\leq c\,. \tag{5.33}\]
Second a priori estimate.We take any \((v,v_{\Gamma})\in L^{2}(0,T;\mathcal{V})\), test (5.26) by \((v,v_{\Gamma})\) and integrate over \((0,T)\). Owing to (5.33), we easily obtain that
\[\int_{Q}\partial_{t}(p^{\varepsilon}+\tau q^{\varepsilon})\,v+\int_{\Sigma} \partial_{t}(p^{\varepsilon}_{\Gamma}+\tau q^{\varepsilon}_{\Gamma})\,v_{ \Gamma}\leq c\,\big{(}\|v\|_{L^{2}(0,T;V)}+\|v_{\Gamma}\|_{L^{2}(0,T;V_{ \Gamma})}\big{)}.\]
Equivalently, we have that
\[\int_{0}^{T}\langle\partial_{t}(p^{\varepsilon}+\tau q^{\varepsilon},p^{ \varepsilon}_{\Gamma}+\tau q^{\varepsilon}_{\Gamma})(t),(v,v_{\Gamma})(t) \rangle_{\mathcal{V}}\,dt\leq c\,\|(v,v_{\Gamma})\|_{L^{2}(0,T;\mathcal{V})}\]
whence it follows that
\[\|\partial_{t}(p^{\varepsilon}+\tau q^{\varepsilon},p^{\varepsilon}_{\Gamma}+ \tau q^{\varepsilon}_{\Gamma})\|_{L^{2}(0,T;\mathcal{V}^{*})}\leq c\,. \tag{5.34}\]
Conclusion of the existence proof.Thanks to (5.33)-(5.34) and to standard compactness results, we have that
\[(p^{\varepsilon},p^{\varepsilon}_{\Gamma})\to(p,p_{\Gamma}) \qquad\text{weakly star in $L^{\infty}(0,T;\mathcal{V})$}\] \[(q^{\varepsilon},q^{\varepsilon}_{\Gamma})\to(q,q_{\Gamma}) \qquad\text{weakly star in $L^{\infty}(0,T;\mathcal{V})\cap L^{2}(0,T; \mathcal{V})$}\] \[\partial_{t}(p^{\varepsilon}+\tau q^{\varepsilon},p^{\varepsilon}_ {\Gamma}+\tau q^{\varepsilon}_{\Gamma})\to\partial_{t}(p+\tau q,p_{\Gamma}+ \tau q_{\Gamma})\qquad\text{weakly in $L^{2}(0,T;\mathcal{V}^{*})$}\]
as \(\varepsilon\searrow 0\), for some quadruplet \((p,p_{\Gamma},q,q_{\Gamma})\) satisfying (5.20)-(5.21), at least for a subsequence. Moreover, (5.33) also implies that
\[\varepsilon\,\partial_{t}(p^{\varepsilon},p^{\varepsilon}_{\Gamma})\to(0,0) \qquad\text{strongly in $L^{2}(0,T;\mathcal{H})$}\,.\]
Then, it is straightforward to see that this limit quadruplet solves equalities that are equivalent to (5.22) and (5.23), namely, their integrated versions with time dependent test functions \((v,v_{\Gamma})\in L^{2}(0,T;\mathcal{V})\). On the other hand, the above convergence properties
imply that \((p^{\varepsilon}+\tau q^{\varepsilon},p^{\varepsilon}_{\Gamma}+\tau q^{ \varepsilon}_{\Gamma})\) converges to \((p+\tau q,p_{\Gamma}+\tau q_{\Gamma})\) weakly in \(C^{0}([0,T];\mathcal{H})\). Thus, the terminal condition (5.28) and the convergence in (5.29) imply that (5.24) is fulfilled by \((p+\tau q,p_{\Gamma}+\tau q_{\Gamma})\). This completes the proof of the existence of a solution.
Uniqueness.Since the problem is linear, it is sufficient to prove that the unique solution \((p,p_{\Gamma},q,q_{\Gamma})\) to (5.22)-(5.24) is \((0,0,0,0)\) if all the functions \(\zeta_{i}\) are replaced by zeros. In this direction, we set for \(t\in[0,T]\)
\[P(t):=-\int_{t}^{T}p(s)\,ds,\quad Q(t):=-\int_{t}^{T}q(s)\,ds, \quad\Lambda(t):=-\int_{t}^{T}(\lambda q)(s)\,ds\] \[P_{\Gamma}(t):=-\int_{t}^{T}p_{\Gamma}(s)\,ds,\quad Q_{\Gamma}(t ):=-\int_{t}^{T}q_{\Gamma}(s)\,ds,\quad\Lambda^{\Gamma}(t):=-\int_{t}^{T}( \lambda^{\Gamma}q_{\Gamma})(s)\,ds\]
and integrate (5.22) written at the time \(s\) over \((t,T)\) with respect to \(s\). Due to (5.24), it results that
\[\langle(p+\tau q,p_{\Gamma}+\tau q_{\Gamma})(t),(v,v_{\Gamma}) \rangle_{\mathcal{V}}-\int_{\Omega}\nabla Q(t)\cdot\nabla v-\int_{\Gamma} \nabla_{\Gamma}Q_{\Gamma}(t)\cdot\nabla_{\Gamma}v_{\Gamma}\] \[=\gamma\int_{\Omega}P(t)v+\gamma\int_{\Gamma}P_{\Gamma}(t)v_{ \Gamma}+\int_{\Omega}\Lambda(t)v+\int_{\Gamma}\Lambda^{\Gamma}(t)v_{\Gamma} \tag{5.35}\]
for every \(t\in[0,T]\) and every \((v,v_{\Gamma})\in\mathcal{V}\). Now, we test this equation by \((q,q_{\Gamma})(t)\). By avoiding writing the time \(t\) for brevity, we obtain that
\[\int_{\Omega}pq+\tau\int_{\Omega}|q|^{2}+\int_{\Gamma}p_{\Gamma} q_{\Gamma}+\tau\int_{\Gamma}|q_{\Gamma}|^{2}-\frac{1}{2}\,\frac{d}{dt}\int_{ \Omega}|\nabla Q|^{2}-\frac{1}{2}\,\frac{d}{dt}\int_{\Gamma}|\nabla_{\Gamma}Q _{\Gamma}|^{2}\] \[=\gamma\int_{\Omega}Pq+\gamma\int_{\Gamma}P_{\Gamma}q_{\Gamma}+ \int_{\Omega}\Lambda q+\int_{\Gamma}\Lambda^{\Gamma}q_{\Gamma}\,. \tag{5.36}\]
Next, we test (5.23) by \((p,p_{\Gamma})\) and add the same quantity \(-\frac{1}{2}\,\frac{d}{dt}\int_{\Omega}|P|^{2}=-\int_{\Omega}Pp\) and the boundary analogue to both sides. We get that
\[\int_{\Omega}|\nabla p|^{2}+\int_{\Gamma}|\nabla_{\Gamma}p_{\Gamma }|^{2}-\frac{1}{2}\,\frac{d}{dt}\int_{\Omega}|P|^{2}-\frac{1}{2}\,\frac{d}{dt }\int_{\Gamma}|P_{\Gamma}|^{2}\] \[=\int_{\Omega}qp+\int_{\Gamma}q_{\Gamma}p_{\Gamma}-\int_{\Omega} Pp-\int_{\Gamma}P_{\Gamma}p_{\Gamma}\,. \tag{5.37}\]
Finally, we test (5.35) by \(\frac{1}{2\tau}(p,p_{\Gamma})\) and have that
\[\frac{1}{2\tau}\int_{\Omega}|p|^{2}+\frac{1}{2\tau}\int_{\Gamma}| p_{\Gamma}|^{2}\] \[=-\frac{1}{2}\int_{\Omega}qp-\frac{1}{2}\int_{\Gamma}q_{\Gamma}p_ {\Gamma}+\frac{1}{2\tau}\int_{\Omega}\nabla Q\cdot\nabla p+\frac{1}{2\tau}\int _{\Gamma}\nabla_{\Gamma}Q_{\Gamma}\cdot\nabla p_{\Gamma}\] \[\quad+\frac{\gamma}{2\tau}\int_{\Omega}Pp+\frac{\gamma}{2\tau} \int_{\Gamma}P_{\Gamma}p_{\Gamma}+\frac{1}{2\tau}\int_{\Omega}\Lambda p+\frac{ 1}{2\tau}\int_{\Gamma}\Lambda^{\Gamma}p_{\Gamma}\,. \tag{5.38}\]
At this point, we add (5.36)-(5.38) to each other and notice that some cancellations occur. Moreover, we integrate in time over \((t,T)\). The sum of the bulk terms on the left-hand side is given by
\[S_{l}(t):=\tau\int_{Q^{t}}|q|^{2}+\frac{1}{2}\int_{\Omega}|\nabla Q(t)|^{2}+ \int_{Q^{t}}|\nabla p|^{2}+\frac{1}{2}\int_{\Omega}|P(t)|^{2}+\frac{1}{2\tau} \int_{Q^{t}}|p|^{2} \tag{5.39}\]
while that on the right-hand side is
\[S_{r}(t) :=\gamma\int_{Q^{t}}Pq+\int_{Q^{t}}\Lambda q-\int_{Q^{t}}Pp-\frac{1} {2}\int_{Q^{t}}qp\] \[+\frac{1}{2\tau}\int_{Q^{t}}\nabla Q\cdot\nabla p+\frac{\gamma}{2 \tau}\int_{Q^{t}}Pp+\frac{1}{2\tau}\int_{Q^{t}}\Lambda p. \tag{5.40}\]
The most delicate term is the fourth integral in (5.40), for which sharp coefficients are needed in applying the Young inequality (while for the other ones such precision is not necessary). We thus have for every \(\delta>0\) (where the above terms are estimated in their order)
\[S_{r}(t)\leq\delta\int_{Q^{t}}|q|^{2}+c_{\delta}\int_{Q^{t}}|P|^ {2}+\delta\int_{Q^{t}}|q|^{2}+c_{\delta}\int_{Q^{t}}|\Lambda|^{2}\] \[\quad+\delta\int_{Q^{t}}|p|^{2}+c_{\delta}\int_{Q^{t}}|P|^{2}+ \frac{\tau}{4}\int_{Q^{t}}|q|^{2}+\frac{1}{4\tau}\int_{Q^{t}}|p|^{2}\] \[\quad+\frac{1}{2}\int_{Q^{t}}|\nabla p|^{2}+c\int_{Q^{t}}|\nabla Q |^{2}+\delta\int_{Q^{t}}|p|^{2}+c_{\delta}\int_{Q^{t}}|P|^{2}+\delta\int_{Q^{t }}|p|^{2}+c_{\delta}\int_{Q^{t}}|\Lambda|^{2}\,.\]
On the other hand, we have that
\[|\Lambda(s)|^{2}\leq\Bigl{(}\int_{s}^{T}c\,|q(s^{\prime})|\,ds^{\prime}\Bigr{)} ^{2}\leq c\int_{s}^{T}|q(s^{\prime})|^{2}\,ds^{\prime}\quad\text{for every $s\in[0,T]$}\]
whence also
\[\int_{Q^{t}}|\Lambda|^{2}\leq c\int_{t}^{T}\Bigl{(}\int_{\Omega}\Bigl{(}\int_{ s}^{T}|q(s^{\prime})|^{2}\,ds^{\prime}\Bigr{)}\Bigr{)}\,ds=c\int_{t}^{T} \Bigl{(}\int_{Q^{s}}|q|^{2}\Bigr{)}\,ds\,.\]
Since the boundary terms coming from (5.36)-(5.38) are similar, those of the right-hand side can be estimated in the same way. Therefore, by collecting everything, rearranging the terms, choosing \(\delta\) small enough, and applying the (backward) Gronwall lemma, we conclude that \((p,p_{\Gamma},q,q_{\Gamma})=(0,0,0,0)\).
Finally, we express the necessary condition for optimality (5.19) in terms of the solution to the adjoint problem. As sketched in the Introduction, our result is the following:
**Theorem 5.6**.: _Let \((u_{*},u_{*}^{\Gamma})\) and \((\mu^{*},\mu_{\Gamma}^{*},\varphi^{*},\varphi_{\Gamma}^{*})\) be an optimal control and the corresponding state, respectively, and let \((p,p_{\Gamma},q,q_{\Gamma})\) be the solution to the corresponding adjoint problem. Then, the following variational inequality is satisfied_
\[\int_{Q}(\gamma p+\alpha_{5}u_{*})(u-u_{*})+\int_{\Sigma}(\gamma p_{\Gamma}+ \alpha_{6}u_{*}^{\Gamma})(u^{\Gamma}-u_{*}^{\Gamma})\geq 0 \tag{5.41}\]
_for every \((u,u^{\Gamma})\in\mathcal{U}_{ad}\)._
Proof.: We consider the linearized system (5.8)-(5.10) associated with \((\mu^{*},\mu_{\Gamma}^{*},\varphi^{*},\varphi_{\Gamma}^{*})\) and with the variation \((h,h^{\Gamma}):=(u-u_{*},u^{\Gamma}-u_{*}^{\Gamma})\). We test (5.8), (5.9), (5.22) and (5.23) by \((p,p_{\Gamma})\), \((q,q_{\Gamma})\), \(-(\psi,\psi_{\Gamma})\) and \(-(\eta,\eta_{\Gamma})\), respectively. We obtain the following identities
\[\int_{\Omega}\partial_{t}\psi\,p+\int_{\Gamma}\partial_{t}\psi_{\Gamma}\,p_{ \Gamma}+\int_{\Omega}\nabla\eta\cdot\nabla p+\int_{\Gamma}\nabla_{\Gamma}\eta _{\Gamma}\cdot\nabla_{\Gamma}p_{\Gamma}\]
\[=\gamma\int_{\Omega}(u-u_{*}-\psi)p+\gamma\int_{\Gamma}(u^{\Gamma}-u_{*}^{\Gamma}-\psi_{\Gamma})p_{\Gamma}\] \[\tau\int_{\Omega}\partial_{t}\psi\,q+\tau\int_{\Gamma}\partial_{t}\psi_{\Gamma}\,q_{\Gamma}+\int_{\Omega}\nabla\psi\cdot\nabla q+\int_{\Gamma}\nabla_{\Gamma}\psi_{\Gamma}\cdot\nabla_{\Gamma}q_{\Gamma}\] \[\quad\quad+\int_{\Omega}\lambda\psi q+\int_{\Gamma}\lambda^{\Gamma}\psi_{\Gamma}q_{\Gamma}\] \[=\int_{\Omega}\eta q+\int_{\Gamma}\eta_{\Gamma}q_{\Gamma}\] \[\langle\partial_{t}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma}),(\psi,\psi_{\Gamma})\rangle_{\mathcal{V}}\] \[\quad\quad-\int_{\Omega}\nabla q\cdot\nabla\psi-\int_{\Gamma}\nabla_{\Gamma}q_{\Gamma}\cdot\nabla_{\Gamma}\psi_{\Gamma}-\int_{\Omega}(\gamma p+\lambda q)\psi-\int_{\Gamma}(\gamma p_{\Gamma}+\lambda^{\Gamma}q_{\Gamma})\psi_{\Gamma}\] \[=-\int_{\Omega}\zeta_{1}\psi-\int_{\Gamma}\zeta_{2}\psi_{\Gamma}\] \[-\int_{\Omega}\nabla p\cdot\nabla\eta-\int_{\Gamma}\nabla_{\Gamma}p_{\Gamma}\cdot\nabla_{\Gamma}\eta_{\Gamma}\] \[=-\int_{\Omega}q\eta-\int_{\Gamma}q_{\Gamma}\eta_{\Gamma}\]
which hold a.e. in \((0,T)\). At this point, we sum up. Almost all terms cancel each other and it remains that
\[\langle\partial_{t}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma}),(\psi, \psi_{\Gamma})\rangle_{\mathcal{V}}+\int_{\Omega}(p+\tau q)\partial_{t}\psi+ \int_{\Gamma}(p_{\Gamma}+\tau q_{\Gamma})\partial_{t}\psi_{\Gamma}\] \[=\gamma\int_{\Omega}(u-u_{*})p+\gamma\int_{\Gamma}(u^{\Gamma}-u_{ *}^{\Gamma})p_{\Gamma}-\int_{\Omega}\zeta_{1}\psi-\int_{\Gamma}\zeta_{2}\psi_{ \Gamma}\,. \tag{5.42}\]
On the other hand, we recall that
\[(p+\tau q,p_{\Gamma}+\tau q_{\Gamma})\in H^{1}(0,T;\mathcal{V}^{*})\cap L^{2} (0,T;\mathcal{V})\quad\text{and}\quad(\psi,\psi_{\Gamma})\in H^{1}(0,T; \mathcal{H})\cap L^{2}(0,T;\mathcal{V})\]
whence, we can write
\[\langle\partial_{t}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma}),(\psi, \psi_{\Gamma})\rangle_{\mathcal{V}}+\int_{\Omega}(p+\tau q)\partial_{t}\psi+ \int_{\Gamma}(p_{\Gamma}+\tau q_{\Gamma})\partial_{t}\psi_{\Gamma}\] \[=\langle\partial_{t}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma}),(\psi, \psi_{\Gamma})\rangle_{\mathcal{V}}+\big{(}(p+\tau q,p_{\Gamma}+\tau q_{\Gamma }),\partial_{t}(\psi,\psi_{\Gamma})\big{)}_{\mathcal{H}}\] \[=\frac{d}{dt}\left((p+\tau q,p_{\Gamma}+\tau q_{\Gamma}),(\psi, \psi_{\Gamma})\right)_{\mathcal{H}}\quad\text{a.e. in }(0,T).\]
Hence, by integrating (5.42) over \((0,T)\) and accounting for the Cauchy conditions (5.10) and (5.24), we deduce that
\[\big{(}(\zeta_{3},\zeta_{4}),(\psi,\psi_{\Gamma})(T)\big{)}_{ \mathcal{H}}\] \[=\gamma\int_{Q}(u-u_{*})p+\gamma\int_{\Sigma}(u^{\Gamma}-u_{*}^{ \Gamma})p_{\Gamma}-\int_{Q}\zeta_{1}\psi-\int_{\Sigma}\zeta_{2}\psi_{\Gamma}\,.\]
Equivalently, we have that
\[\int_{Q}\zeta_{1}\psi+\int_{\Sigma}\zeta_{2}\psi_{\Gamma}+\int_{ \Omega}\zeta_{3}\psi(T)+\int_{\Gamma}\zeta_{4}\psi_{\Gamma}(T)\] \[=\gamma\int_{Q}(u-u_{*})p+\gamma\int_{\Sigma}(u^{\Gamma}-u_{*}^{ \Gamma})p_{\Gamma}\]
so that (5.41) coincides with (5.19).
## Acknowledgments
ER and AS gratefully acknowledge support from the MIUR-PRIN Grant 2020F3NCPX "Mathematics for industry 4.0 (Math4I4)" and their affiliation to the GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica).
|
2309.03218 | Remarks on additive representations of natural numbers | For two relatively prime square-free positive integers $a$ and $b$, we study
integers of the form $a p+b P_{2}$ and give a new lower bound for it, where $a
p$ and $b P_{2}$ are both square-free, $p$ denotes a prime, and $P_{2}$ has at
most two prime factors. We also consider some special cases where $p$ is small,
$p$ and $P_2$ are within short intervals, $p$ and $P_2$ are within arithmetical
progressions and a Goldbach-type upper bound result. Our new results generalize
and improve the previous results. | Runbo Li | 2023-09-07T13:31:16Z | http://arxiv.org/abs/2309.03218v6 | # Remarks on additive representations of natural numbers
###### Abstract.
For two relatively prime square-free positive integers \(a\) and \(b\), we study integers of the form \(ap+bP_{2}\) and give a new lower bound for it, where \(ap\) and \(bP_{2}\) are both square-free, \(p\) is a prime, and \(P_{2}\) has at most two prime factors. We also consider some special cases where \(p\) is small, \(p\) and \(P_{2}\) are within short intervals, \(p\) and \(P_{2}\) are within arithmetical progressions and a Goldbach-type upper bound result. Our new results generalize and improve the previous results.
Key words and phrases: Prime, Goldbach-type problems, Sieve, Application of sieve method
###### Contents
* 1 Introduction
* 2 The sets we want to sieve
* 3 Preliminary lemmas
* 4 Mean value theorems
* 5 Weighted sieve method
* 6 Proof of Theorem 1.1
* 6.1 Evaluation of \(S_{1},S_{2},S_{3}\)
* 6.2 Evaluation of \(S_{4},S_{7}\)
* 6.3 Evaluation of \(S_{6}\)
* 6.4 Evaluation of \(S_{5}\)
* 6.5 Proof of Theorem 1.1
* 7 Proof of Theorem 1.2
* 7.1 Evaluation of \(S^{\prime}_{1},S^{\prime}_{2},S^{\prime}_{3}\)
* 7.2 Evaluation of \(S^{\prime}_{4},S^{\prime}_{7}\)
* 7.3 Evaluation of \(S^{\prime}_{6}\)
* 7.4 Evaluation of \(S^{\prime}_{5}\)
* 7.5 Proof of Theorem 1.2
* 8 An outline of the proof of Theorems 1.3-1.8
* Acknowledgements
## 1. Introduction
Let \(N_{e}\) be a sufficiently large even integer, \(p\) be a prime, and let \(P_{r}\) denote an integer with at most \(r\) prime factors counted with multiplicity. For each \(N_{e}\geqslant 4\) and \(r\geqslant 2\), we define
\[D_{1,r}(N_{e}):=\left|\{p:p\leqslant N_{e},N_{e}-p=P_{r}\}\right|. \tag{1}\]
In 1966, Jingrun Chen [8] proved his remarkable theorem, now known as Chen's theorem: let \(N_{e}\) be a sufficiently large even integer; then
\[D_{1,2}(N_{e})\geqslant 0.67\frac{C_{e}(N_{e})N_{e}}{(\log N_{e})^{2}} \tag{2}\]
where
\[C_{e}(N_{e}):=\prod_{\begin{subarray}{c}p\mid N_{e}\\ p>2\end{subarray}}\frac{p-1}{p-2}\prod_{p>2}\left(1-\frac{1}{(p-1)^{2}}\right). \tag{3}\]
and the details were published in [9]. The original proof of Jingrun Chen was simplified by Pan, Ding and Wang [25], Halberstam and Richert [15], Halberstam [14], and Ross [28]. As Halberstam and Richert indicated in [15], it would be interesting to know whether a more elaborate weighting procedure could be adapted to the purpose of (2). This might lead to numerical improvements and could be important. Chen's constant \(0.67\) was improved successively to
\[0.689,0.7544,0.81,0.8285,0.836,0.867,0.899\]
by Halberstam and Richert [15][14], Chen [12][10], Cai and Lu [7], Wu [36], Cai [2] and Wu [37] respectively.
In 1990, Wu [33] proved that
\[D_{1,3}(N_{e})\geqslant 0.67\frac{C_{e}(N_{e})N_{e}}{(\log N_{e})^{2}}\log \log N_{e} \tag{4}\]
and
\[D_{1,r}(N_{e})\geqslant 0.67\frac{C_{e}(N_{e})N_{e}}{(\log N_{e})^{2}}(\log \log N_{e})^{r-2}. \tag{5}\]
Kan [18] also proved the similar result in 1991:
\[D_{1,r}(N_{e})\geqslant\frac{0.77}{(r-2)!}\frac{C_{e}(N_{e})N_{e}}{(\log N_{e })^{2}}(\log\log N_{e})^{r-2}, \tag{6}\]
which is better than Wu's result when \(r=3\). In 2023, Li [24] improved their results and obtained
\[D_{1,3}(N_{e})\geqslant 0.8671\frac{C_{e}(N_{e})N_{e}}{(\log N_{e})^{2}}\log \log N_{e} \tag{7}\]
and
\[D_{1,r}(N_{e})\geqslant 0.8671\frac{C_{e}(N_{e})N_{e}}{(\log N_{e})^{2}}(\log \log N_{e})^{r-2}. \tag{8}\]
Kan [19] proved a more general theorem in 1992:
\[D_{s,r}(N_{e})\geqslant\frac{0.77}{(s-1)!(r-2)!}\frac{C_{e}(N_{e})N_{e}}{( \log N_{e})^{2}}(\log\log N_{e})^{s+r-3}, \tag{9}\]
where \(s\geqslant 1\),
\[D_{s,r}(N_{e}):=\left|\{P_{s}:P_{s}\leqslant N_{e},N_{e}-P_{s}=P_{r}\}\right|. \tag{10}\]
Furthermore, for two relatively prime square-free positive integers \(a\) and \(b\), let \(N\) be a sufficiently large integer that is relatively prime to both \(a\) and \(b\), \(a,b<N^{\varepsilon}\) and let \(N\) be even if \(a\) and \(b\) are both odd. Let \(R_{a,b}(N)\) be the number of primes \(p\) such that \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), and \(\frac{N-ap}{b}=P_{2}\). In 1976, Ross [[29], Chapter 3] proved a similar result without the square-free restrictions on \(ap\) and \(N-ap\). In 2023, Li [23] proved that
\[R_{a,b}(N)\geqslant 0.68\frac{C(N)N}{ab(\log N)^{2}}, \tag{11}\]
where
\[C(N):=\prod_{\begin{subarray}{c}p\mid ab\\ p>2\end{subarray}}\frac{p-1}{p-2}\prod_{\begin{subarray}{c}p\mid N\\ p>2\end{subarray}}\frac{p-1}{p-2}\prod_{p>2}\left(1-\frac{1}{(p-1)^{2}} \right). \tag{12}\]
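To get a feel for the size of \(C(N)\), one can truncate the convergent infinite product in (12). The following Python sketch (our illustration, not part of the original argument; it assumes `sympy` for prime generation, and `prime_bound` is an arbitrary truncation parameter that should exceed the prime factors of \(abN\)) computes such an approximation:

```python
from sympy import primerange

def singular_series(N, a, b, prime_bound=10**5):
    # Truncation of C(N) from (12). Since (ab, N) = 1, the finite
    # products over p | ab and p | N run over disjoint sets of primes,
    # so a single divisibility test against a*b*N suffices.
    val = 1.0
    for p in primerange(3, prime_bound):
        val *= 1.0 - 1.0 / (p - 1) ** 2
        if (a * b * N) % p == 0:
            val *= (p - 1) / (p - 2)
    return val
```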
In this paper, we improve the result by using a delicate sieve process similar to that of [2] and prove that
**Theorem 1.1**.: \[R_{a,b}(N)\geqslant 0.8671\frac{C(N)N}{ab(\log N)^{2}}.\]
It is easy to see that when we take \(a=1\) and \(b=1\), Theorem 1.1 implies Cai's result on Chen's theorem [[2], Theorem 1]; when we take \(a=1\) and \(b=2\), Theorem 1.1 improves Li's result related to Lemoine's conjecture [[22], Theorem 1]. When we take \(a=q_{1}q_{2}\cdots q_{s}\) and \(b=q_{1}^{\prime}q_{2}^{\prime}\cdots q_{r}^{\prime}\), where \(q,q^{\prime}\) denote prime numbers satisfying
\[s,r\geqslant 1,\quad q_{i},q_{j}^{\prime}<N^{\varepsilon},\quad(q_{i},N)=(q_{j} ^{\prime},N)=1\,\text{ for every }\,1\leqslant i\leqslant s,1\leqslant j \leqslant r,\]
Theorem 1.1 generalizes and improves the previous results of Kan [[18], Theorem 2] [[19], Theorem 2], Wu [[33], Theorems 1 and 2], and Li [[24], Theorems 1.1 and 1.2]. Clearly one can modify our proof of Theorem 1.1 to get a similar lower bound on the twin prime version. For this, we refer the interested readers to Ross's PhD thesis [29] and [[13], Sect. 25.6].
Chen's theorem with small primes was first studied by Cai [1]. For \(0<\theta\leqslant 1\), we define
\[D_{1,r}^{\theta}(N_{e}):=\left|\left\{p:p\leqslant N_{e}^{\theta},N_{e}-p=P_{ r}\right\}\right|. \tag{13}\]
Then it is proved in [1] that for \(0.95\leqslant\theta\leqslant 1\), we have
\[D_{1,2}^{\theta}(N_{e})\gg\frac{C_{e}(N_{e})N_{e}^{\theta}}{(\log N_{e})^{2}}. \tag{14}\]
Cai's range \(0.95\leqslant\theta\leqslant 1\) was extended successively to \(0.945\leqslant\theta\leqslant 1\) in [4] and to \(0.941\leqslant\theta\leqslant 1\) in [3].
In this paper, we generalize their results to integers of the form \(ap+bP_{2}\). For two relatively prime square-free positive integers \(a\) and \(b\), let \(N\) be a sufficiently large integer that is relatively prime to both \(a\) and \(b\), \(a,b<N^{\varepsilon}\) and let \(N\) be even if \(a\) and \(b\) are both odd. Let \(R_{a,b}^{\theta}(N)\) be the number of primes \(p\leqslant N^{\theta}\) such that \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), and \(\frac{N-ap}{b}=P_{2}\). In 1976, Ross [[29], Chapter 5] proved a similar result without the square-free restrictions on \(ap\) and \(N-ap\) and showed that \(0.959\leqslant\theta\leqslant 1\) is admissible. Now by using a delicate sieve process similar to that of [3], we prove that
**Theorem 1.2**.: _For \(0.941\leqslant\theta\leqslant 1\) we have_
\[R_{a,b}^{\theta}(N)\gg\frac{C(N)N^{\theta}}{ab(\log N)^{2}}.\]
Chen's theorem in short intervals was first studied by Ross [30]. For \(0<\kappa\leqslant 1\), we define
\[D_{1,r}(N_{e},\kappa):=\left|\left\{p:N_{e}/2-N_{e}^{\kappa}\leqslant p,P_{r} \leqslant N_{e}/2+N_{e}^{\kappa},N_{e}=p+P_{r}\right\}\right|. \tag{15}\]
Then it is proved in [30] that for \(\kappa\geqslant 0.98\), we have
\[D_{1,2}(N_{e},\kappa)\gg\frac{C_{e}(N_{e})N_{e}^{\kappa}}{(\log N_{e})^{2}}. \tag{16}\]
The constant \(0.98\) was improved successively to
\[0.974,0.973,0.9729,0.972,0.971,0.97\]
by Wu [34][35], Salerno and Vitolo [31], Cai and Lu [6], Wu [36] and Cai [2] respectively.
In this paper, we generalize their results to integers of the form \(ap+bP_{2}\). For two relatively prime square-free positive integers \(a\) and \(b\), let \(N\) be a sufficiently large integer that is relatively prime to both \(a\) and \(b\), \(a,b<N^{\varepsilon}\) and let \(N\) be even if \(a\) and \(b\) are both odd. Let \(R_{a,b}(N,\kappa)\) be the number of primes \(N/2-N^{\kappa}\leqslant p\leqslant N/2+N^{\kappa}\) such that \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), \(\frac{N-ap}{b}=P_{2}\), and \(N/2-N^{\kappa}\leqslant\frac{N-ap}{b}\leqslant N/2+N^{\kappa}\). In [30], Ross mentioned that his method can be used to prove similar results of \(R_{a,b}(N,\kappa)\) with \(\kappa\geqslant 0.98\) and a detailed proof was given in [[29], Chapter 5]. Now by using a delicate sieve process similar to that of [2], we prove that
**Theorem 1.3**.: _For \(\kappa\geqslant 0.97\) we have_
\[R_{a,b}(N,\kappa)\gg\frac{C(N)N^{\kappa}}{ab(\log N)^{2}}.\]
From our Theorems 1.1-1.3, it can be seen that the first aim of this paper is to improve the old results on the natural numbers of the form \(ap+bP_{2}\) to be consistent with the results on the even numbers of the form \(p+P_{2}\). Before our work, all results on this topic were weaker than those for the binary Goldbach problem. For Theorem 1.1, the constants \(0.608\) in [29] and \(0.68\) in [23] are smaller than \(0.867\) in [2]. For Theorems 1.2-1.3, Ross's exponents \(0.959\) and \(0.98\) are again weaker than those in [3] and [7].
Chen's theorem in arithmetical progressions was first studied by Kan and Shan [20]. If we define
\[D_{1,r}(N_{e},c,d):=\left|\{p:p\leqslant N_{e},p\equiv d(\mathrm{mod}c),(c,d)=1,(N_{e}-p,c)=1,N_{e}-p=P_{r}\}\right|, \tag{17}\]
then it is proved in [20] that for \(c\leqslant(\log N_{e})^{C_{1}}\) where \(C_{1}\) is a positive constant, we have
\[D_{1,2}(N_{e},c,d)\geqslant 0.77\prod_{\begin{subarray}{c}p\mid c\\ p\nmid N_{e}\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C_{e}(N_{e})N_{e}}{\varphi (c)(\log N_{e})^{2}} \tag{18}\]
and
\[D_{1,r}(N_{e},c,d)\geqslant\frac{0.77}{(r-2)!}\prod_{\begin{subarray}{c}p \mid c\\ p\nmid N_{e}\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C_{e}(N_{e})N_{e}}{\varphi (c)(\log N_{e})^{2}}(\log\log N_{e})^{r-2}. \tag{19}\]
Clearly their results (18) and (19) generalized the previous results (2), (4), (5) and (6). They also got the similar results on the twin prime version and Lewulis [21] considered the similar problem. However, their results are only valid when \(c\) is "small". In 1999, Cai and Lu [5] considered this problem with "big" \(c\) and proved that for \(c\leqslant N_{e}^{\frac{1}{37}}\), except for \(O\left(N_{e}^{\frac{1}{37}}(\log N_{e})^{-A}\right)\) exceptional values, we have
\[D_{1,2}(N_{e},c,d)\gg\prod_{\begin{subarray}{c}p\mid c\\ p\nmid N_{e}\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C_{e}(N_{e})N_{e}}{\varphi (c)(\log N_{e})^{2}} \tag{20}\]
and they mentioned that the exponent \(\frac{1}{37}\) can be improved to \(0.028\). In this paper, we further generalize their results to integers of the form \(ap+bP_{2}\). For two relatively prime square-free positive integers \(a\) and \(b\), let \(N\) be a sufficiently large integer that is relatively prime to both \(a\) and \(b\), \(a,b<N^{\varepsilon}\) and let \(N\) be even if \(a\) and \(b\) are both odd. Let \(R_{a,b}(N,c,d)\) be the number of primes \(p\equiv d(\mathrm{mod}c)\) such that \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), and \(\frac{N-ap}{b}=P_{2}\). Then by using a delicate sieve process similar to that of [2], we prove that
**Theorem 1.4**.: _For \(c\leqslant(\log N)^{C_{1}}\), we have_
\[R_{a,b}(N,c,d)\geqslant 0.8671\prod_{\begin{subarray}{c}p\mid c\\ p\nmid N\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C(N)N}{\varphi(c)ab(\log N )^{2}}.\]
**Theorem 1.5**.: _For \(c\leqslant N^{0.028}\), except for \(O\left(N^{0.028}(\log N)^{-A}\right)\) exceptional values, we have_
\[R_{a,b}(N,c,d)\gg\prod_{\begin{subarray}{c}p\mid c\\ p\nmid N\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C(N)N}{\varphi(c)ab(\log N )^{2}}.\]
Now we combine Theorems 1.4-1.5 with Theorems 1.2-1.3. For two relatively prime square-free positive integers \(a\) and \(b\), let \(N\) be a sufficiently large integer that is relatively prime to both \(a\) and \(b\), \(a,b<N^{\varepsilon}\) and let \(N\) be even if \(a\) and \(b\) are both odd. Let \(R_{a,b}^{\theta}(N,c,d)\) be the number of primes \(p\equiv d(\mathrm{mod}c)\) such that \(p\leqslant N^{\theta}\), \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), and \(\frac{N-ap}{b}=P_{2}\). And let \(R_{a,b}(N,c,d,\kappa)\) be the number of primes \(p\equiv d(\mathrm{mod}c)\) such that \(N/2-N^{\kappa}\leqslant p\leqslant N/2+N^{\kappa}\), \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), \(\frac{N-ap}{b}=P_{2}\), and \(N/2-N^{\kappa}\leqslant\frac{N-ap}{b}\leqslant N/2+N^{\kappa}\). Then by using a delicate sieve process similar to that of [2] and [3], we prove that
**Theorem 1.6**.: _For \(c\leqslant(\log N)^{C_{1}}\) and \(0.941\leqslant\theta\leqslant 1\), we have_
\[R_{a,b}^{\theta}(N,c,d)\gg\prod_{\begin{subarray}{c}p|c\\ p\nmid N\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C(N)N^{\theta}}{\varphi(c) ab(\log N)^{2}}.\]
**Theorem 1.7**.: _For \(c\leqslant(\log N)^{C_{1}}\) and \(\kappa\geqslant 0.97\), we have_
\[R_{a,b}(N,c,d,\kappa)\gg\prod_{\begin{subarray}{c}p|c\\ p\nmid N\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C(N)N^{\kappa}}{\varphi(c) ab(\log N)^{2}}.\]
Clearly our Theorems 1.6-1.7 focus on the case when \(c\) is "small". For "big" \(c\), we need to control the size of both \(\theta\) (or \(\kappa\)) and \(c\), and it seems hard to say what is "optimal". For example, we can show that for some \(0<\delta_{1}<0.028\), \(0.941<\delta_{2}<1\) and \(c\leqslant N^{\delta_{1}}\), except for \(O\left(N^{\delta_{1}}(\log N)^{-A}\right)\) exceptional values, we have
\[R_{a,b}^{\delta_{2}}(N,c,d)\gg\prod_{\begin{subarray}{c}p|c\\ p\nmid N\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{C(N)N^{\delta_{2}}}{ \varphi(c)ab(\log N)^{2}}, \tag{21}\]
but we cannot say which values of \(\delta_{1}\) and \(\delta_{2}\) are optimal.
From our Theorems 1.4-1.7, it can be seen that the second aim of this paper is to construct some new results on the natural numbers of the form \(ap+bP_{2}\) that generalize the results on the even numbers of the form \(p+P_{2}\), \(p+P_{r}\) and \(P_{s}+P_{r}\).
The last theorem in this paper is a Goldbach-type upper bound result. Similar to [[23], Theorem 1 (2)], we also improve the upper bound for the number of primes \(p\) such that \(ap\) and \(N-ap\) are both square-free, \(b\mid(N-ap)\), and \(\frac{N-ap}{b}\) is a prime number. By using a delicate sieve process similar to that of [[26], Chap. 9.2], we prove that
**Theorem 1.8**.: \[\sum_{\begin{subarray}{c}ap_{1}+bp_{2}=N\\ p_{1}\text{ and }p_{2}\text{ are primes}\end{subarray}}1\leqslant 7.928\frac{C(N)N}{ab( \log N)^{2}}.\]
In fact, Lemmas 5.1-5.6 are also valid for the sets \(\mathcal{A}_{3}\)-\(\mathcal{A}_{6}\) in section 2 if we make some suitable modifications. Since the details of the proofs of Theorems 1.3-1.8 are similar to those of [6], [20], [5], [26] and Theorems 1.1-1.2, we omit them in this paper.
In this paper, we do not focus on Chen's double sieve technique. Perhaps it can be used to improve our Theorems 1.1-1.8. For this, we refer the interested readers to [11], [36], [37] and David Quarel's thesis [27].
## 2. The sets we want to sieve
We first list the sets that we will work with later. Let \(\theta=0.941\) in this paper. Put
\[\mathcal{A}_{1}= \left\{\frac{N-ap}{b}:p\text{ is a prime },p\leqslant\frac{N}{a},(p,abN)=1,\right.\] \[\left.p\equiv Na_{b^{2}}^{-1}+kb\left(\operatorname{mod}b^{2} \right),0\leqslant k\leqslant b-1,(k,b)=1\right\},\] \[\mathcal{A}_{2}= \left\{\frac{N-ap}{b}:p\text{ is a prime },p\leqslant\frac{N^{\theta}}{a},(p,abN)=1,\right.\] \[\left.p\equiv Na_{b^{2}}^{-1}+kb\left(\operatorname{mod}b^{2} \right),0\leqslant k\leqslant b-1,(k,b)=1\right\},\] \[\mathcal{A}_{3}= \left\{\frac{N-ap}{b}:p\text{ is a prime },\frac{N/2-N^{0.97}}{a} \leqslant p\leqslant\frac{N/2+N^{0.97}}{a},(p,abN)=1,\right.\] \[\left.p\equiv Na_{b^{2}}^{-1}+kb\left(\operatorname{mod}b^{2} \right),0\leqslant k\leqslant b-1,(k,b)=1\right\},\] \[\mathcal{A}_{4}= \left\{\frac{N-ap}{b}:p\text{ is a prime },p\leqslant\frac{N}{a},(p,abN)=1,p\equiv d( \operatorname{mod}c),(c,d)=1,\right.\]
\[\left(\frac{N-ad}{b},c\right)=1,p\equiv Na_{b^{2}}^{-1}+kb\left( \operatorname{mod}b^{2}\right),0\leqslant k\leqslant b-1,(k,b)=1\right\},\] \[\mathcal{A}_{5}= \left\{\frac{N-ap}{b}:p\text{ is a prime },p\leqslant\frac{N^{ \theta}}{a},(p,abN)=1,p\equiv d(\operatorname{mod}c),(c,d)=1,\right.\] \[\left.\left(\frac{N-ad}{b},c\right)=1,p\equiv Na_{b^{2}}^{-1}+kb \left(\operatorname{mod}b^{2}\right),0\leqslant k\leqslant b-1,(k,b)=1\right\},\] \[\mathcal{A}_{6}= \left\{\frac{N-ap}{b}:p\text{ is a prime },\frac{N/2-N^{0.97}}{a} \leqslant p\leqslant\frac{N/2+N^{0.97}}{a},(p,abN)=1,p\equiv d(\operatorname{ mod}c),\right.\] \[\left.(c,d)=1,\left(\frac{N-ad}{b},c\right)=1,p\equiv Na_{b^{2}}^ {-1}+kb\left(\operatorname{mod}b^{2}\right),0\leqslant k\leqslant b-1,(k,b)= 1\right\},\] \[\mathcal{B}_{1}= \left\{\frac{N-bp_{1}p_{2}p_{3}}{a}:p_{1},p_{2},p_{3}\text{ are primes, }\left(p_{1}p_{2}p_{3},abN\right)=1,\right.\] \[\left.p_{3}\leqslant\frac{N}{bp_{1}p_{2}},\left(\frac{N}{b} \right)^{\frac{1}{13.2}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{ \frac{1}{3}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\right.\] \[\left.p_{3}\equiv N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\left( \operatorname{mod}a^{2}\right),0\leqslant j\leqslant a-1,(j,a)=1\right\},\] \[\mathcal{B}_{2}= \left\{\frac{N-bp_{1}p_{2}p_{3}}{a}:p_{1},p_{2},p_{3}\text{ are primes, }\left(p_{1}p_{2}p_{3},abN\right)=1,\right.\] \[\left.\frac{N-N^{\theta}}{bp_{1}p_{2}}\leqslant p_{3}\leqslant \frac{N}{bp_{1}p_{2}},\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1} \leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.100}}<p_{2}\leqslant\left(\frac{ N}{bp_{1}}\right)^{\frac{1}{2}},\right.\] \[\left.p_{3}\equiv N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\left( \operatorname{mod}a^{2}\right),0\leqslant j\leqslant a-1,(j,a)=1\right\},\] \[\mathcal{C}_{1}= \left\{\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{a}:p_{1},p_{2},p_{3},p_{4} \text{ are primes, }\left(p_{1}p_{2}p_{4},abN\right)=1,\right.\] \[\left.\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\leqslant p_{1}<p_ {4}<p_{2}<\left(\frac{N}{b}\right)^{\frac{1}{3.4}},1\leqslant m\leqslant\frac{ N}{bp_{1}p_{2}^{2}p_{4}},\left(m,p_{1}^{-1}abNP\left(p_{4}\right)\right)=1,\right.\] \[\left.p_{3}\equiv N\left(bmp_{1}p_{2}p_{4}\right)_{a^{2}}^{-1}+ja \left(\operatorname{mod}a^{2}\right),0\leqslant j\leqslant a-1,(j,a)=1,\right.\] \[\left.p_{2}<p_{3}<\min\left(\left(\frac{N}{b}\right)^{\frac{1}{ 14}},\left(\frac{N}{bmp_{1}p_{2}p_{4}}\right)\right)\right\},\] \[\mathcal{C}_{2}= \left\{\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{a}:p_{1},p_{2},p_{3},p_{4} \text{ are primes, }\left(p_{1}p_{2}p_{3}p_{4},abN\right)=1,\right.\] \[\left.\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}<p_{2} <p_{3}<p_{4}<\left(\frac{N}{b}\right)^{\frac{1}{8.8}},\right.\] \[\left.bmp_{1}p_{2}p_{3}p_{4}\equiv N+ja\left(\operatorname{mod}a^ {2}\right),0\leqslant j\leqslant a-1,(j,a)=1,\right.\] \[\left.\frac{N-N^{\theta}}{bp_{1}p_{2}p_{3}p_{4}}\leqslant m \leqslant\frac{N}{bp_{1}p_{2}p_{3}p_{4}},\left(m,p_{1}^{-1}abNP\left(p_{2} \right)\right)=1\right\},\] \[\mathcal{E}_{1}= \left\{p_{1}p_{2}:p_{1},p_{2}\text{ are primes, }\left(p_{1}p_{2},abN\right)=1,\right.\]
\[\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\leqslant p_{1}<\left(\frac{N}{b} \right)^{\frac{1}{3}}\leqslant p_{2}<\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}} \right\},\] \[\mathcal{E}_{2}= \left\{p_{1}p_{2}:p_{1},p_{2}\text{ are primes, }\left(p_{1}p_{2},abN \right)=1,\right.\] \[\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}<\left( \frac{N}{b}\right)^{\frac{1}{3.100}}\leqslant p_{2}<\left(\frac{N}{bp_{1}} \right)^{\frac{1}{2}}\right\},\] \[\mathcal{F}_{1}= \left\{mp_{1}p_{2}p_{4}:p_{1},p_{2},p_{4}\text{ are primes, }\left(\frac{N}{b}\right)^{\frac{1}{13.2}} \leqslant p_{1}<p_{4}<p_{2}<\left(\frac{N}{b}\right)^{\frac{1}{8.4}},\right.\] \[\left.\left(p_{1}p_{2}p_{4},abN\right)=1,1\leqslant m\leqslant \frac{N}{bp_{1}p_{2}^{2}p_{4}},\left(m,p_{1}^{-1}abNP\left(p_{4}\right)\right) =1\right\},\] \[\mathcal{F}_{2}= \left\{bmp_{1}p_{2}p_{3}p_{4}:p_{1},p_{2},p_{3},p_{4}\text{ are primes, }\left(p_{1}p_{2}p_{3}p_{4},abN\right)=1,\right.\] \[\left.\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}<p_{2 }<p_{3}<p_{4}<\left(\frac{N}{b}\right)^{\frac{1}{8.8}},\right.\] \[\left.\frac{N-N^{\theta}}{bp_{1}p_{2}p_{3}p_{4}}\leqslant m \leqslant\frac{N}{bp_{1}p_{2}p_{3}p_{4}},\left(m,p_{1}^{-1}abNP\left(p_{2} \right)\right)=1\right\},\] \[\mathcal{F}_{3}= \left\{bmp_{1}p_{2}p_{3}p_{4}:p_{1},p_{2},p_{3},p_{4}\text{ are primes, }\left(p_{1}p_{2}p_{3}p_{4},abN\right)=1,\right.\] \[\left.\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}<p_{2 }<p_{3}<\left(\frac{N}{b}\right)^{\frac{1}{8.8}}\leqslant p_{4}<\left(\frac{N }{b}\right)^{\frac{\theta}{2}-\frac{3}{14}}p_{3}^{-1},\right.\] \[\left.\frac{N-N^{\theta}}{bp_{1}p_{2}p_{3}p_{4}}\leqslant m \leqslant\frac{N}{bp_{1}p_{2}p_{3}p_{4}},\left(m,p_{1}^{-1}abNP\left(p_{2} \right)\right)=1\right\},\]
where \(a_{b^{2}}^{-1}\) is the multiplicative inverse of \(a\bmod b^{2}\), which exists by our assumption \((a,b)=1\).
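The role of this congruence condition is to force \(b\mid N-ap\) but \(b^{2}\nmid N-ap\): from \(p\equiv Na_{b^{2}}^{-1}+kb\left(\operatorname{mod}b^{2}\right)\) we get \(N-ap\equiv-akb\left(\operatorname{mod}b^{2}\right)\) with \((ak,b)=1\), which is exactly what the square-free requirement on \(N-ap\) needs at the primes dividing \(b\). A minimal Python sketch of these residue classes (our illustration, assuming \(b>1\); `pow(a, -1, m)` needs Python 3.8+):

```python
from math import gcd

def residue_classes(N, a, b):
    # Classes p (mod b^2) with b | N - a*p but b^2 not | N - a*p,
    # i.e. p = N * a^{-1} + k*b (mod b^2) with (k, b) = 1.
    m = b * b
    a_inv = pow(a, -1, m)  # exists since (a, b) = 1
    classes = [(N * a_inv + k * b) % m for k in range(b) if gcd(k, b) == 1]
    for r in classes:      # sanity check of the two divisibility claims
        assert (N - a * r) % b == 0 and (N - a * r) % m != 0
    return classes
```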
## 3. Preliminary lemmas
Let \(\mathcal{A}\) denote a finite set of positive integers, \(\mathcal{P}\) denote an infinite set of primes and \(z\geqslant 2\). Suppose that \(|\mathcal{A}|\sim X_{\mathcal{A}}\) and for square-free \(d\), put
\[\mathcal{P}=\{p:(p,N)=1\},\quad\mathcal{P}(r)=\{p:p\in\mathcal{P},(p,r)=1\},\]
\[P(z)=\prod_{\begin{subarray}{c}p\in\mathcal{P}\\ p<z\end{subarray}}p,\quad\mathcal{A}_{d}=\{a:a\in\mathcal{A},a\equiv 0(\bmod d)\}, \quad S(\mathcal{A};\mathcal{P},z)=\sum_{\begin{subarray}{c}a\in\mathcal{A}\\ (a,P(z))=1\end{subarray}}1.\]
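To fix ideas, \(S(\mathcal{A};\mathcal{P},z)\) simply counts the elements of \(\mathcal{A}\) having no prime factor \(p<z\) with \((p,N)=1\). A brute-force Python sketch for small toy sets (our illustration only; the actual proofs never compute \(S\) directly):

```python
from math import gcd
from sympy import primerange

def sift(A, N, z):
    # S(A; P, z) with P = {p : (p, N) = 1}: count the a in A that are
    # coprime to P(z), the product of the primes p < z coprime to N.
    Pz = 1
    for p in primerange(2, z):
        if N % p != 0:
            Pz *= p
    return sum(1 for a in A if gcd(a, Pz) == 1)
```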
**Lemma 3.1**.: _([[19], Lemma 1]). If_
\[\sum_{z_{1}\leqslant p<z_{2}}\frac{\omega(p)}{p}=\log\frac{\log z_{2}}{\log z _{1}}+O\left(\frac{1}{\log z_{1}}\right),\quad z_{2}>z_{1}\geqslant 2,\]
_where \(\omega(d)\) is a multiplicative function, \(0\leqslant\omega(p)<p\), and \(X_{\mathcal{A}}>1\) is independent of \(d\). Then_
\[S(\mathcal{A};\mathcal{P},z)\geqslant X_{\mathcal{A}}W(z)\left\{f\left(\frac{ \log D}{\log z}\right)+O\left(\frac{1}{\log^{\frac{1}{5}}D}\right)\right\}- \sum_{\begin{subarray}{c}n\leqslant D\\ n|P(z)\end{subarray}}\eta(X_{\mathcal{A}},n)\]
\[S(\mathcal{A};\mathcal{P},z)\leqslant X_{\mathcal{A}}W(z)\left\{F\left(\frac{ \log D}{\log z}\right)+O\left(\frac{1}{\log^{\frac{1}{5}}D}\right)\right\}+ \sum_{\begin{subarray}{c}n\leqslant D\\ n|P(z)\end{subarray}}\eta(X_{\mathcal{A}},n)\]
_where_
\[W(z)=\prod_{\begin{subarray}{c}p<z\\ (p,N)=1\end{subarray}}\left(1-\frac{\omega(p)}{p}\right),\quad\eta(X_{\mathcal{A }},n)=\left|\left|\mathcal{A}_{n}\right|-\frac{\omega(n)}{n}X_{\mathcal{A}} \right|=\left|\sum_{\begin{subarray}{c}a\in\mathcal{A}\\ a\equiv 0\pmod{n}\end{subarray}}1-\frac{\omega(n)}{n}X_{\mathcal{A}}\right|,\]
\(\gamma\) _denotes Euler's constant, and \(f(s)\) and \(F(s)\) are determined by the following differential-difference equation_
\[\begin{cases}F(s)=\frac{2e^{\gamma}}{s},\quad f(s)=0,&0<s\leqslant 2,\\ (sF(s))^{\prime}=f(s-1),\quad(sf(s))^{\prime}=F(s-1),&s\geqslant 2.\end{cases}\]
**Lemma 3.2**.: _([[2], Lemma 2], deduced from [15])._
\[F(s)= \frac{2e^{\gamma}}{s},\quad 0<s\leqslant 3;\] \[F(s)= \frac{2e^{\gamma}}{s}\left(1+\int_{2}^{s-1}\frac{\log(t-1)}{t}dt \right),\quad 3\leqslant s\leqslant 5;\] \[F(s)= \frac{2e^{\gamma}}{s}\left(1+\int_{2}^{s-1}\frac{\log(t-1)}{t}dt +\int_{2}^{s-3}\frac{\log(t-1)}{t}dt\int_{t+2}^{s-1}\frac{1}{u}\log\frac{u-1} {t+1}du\right),\quad 5\leqslant s\leqslant 7;\] \[f(s)= \frac{2e^{\gamma}\log(s-1)}{s},\quad 2\leqslant s\leqslant 4;\] \[f(s)= \frac{2e^{\gamma}}{s}\left(\log(s-1)+\int_{3}^{s-1}\frac{dt}{t} \int_{2}^{t-1}\frac{\log(u-1)}{u}du\right),\quad 4\leqslant s\leqslant 6;\] \[f(s)= \frac{2e^{\gamma}}{s}\left(\log(s-1)+\int_{3}^{s-1}\frac{dt}{t} \int_{2}^{t-1}\frac{\log(u-1)}{u}du\right.\] \[+\int_{2}^{s-4}\frac{\log(t-1)}{t}dt\int_{t+2}^{s-2}\frac{1}{u} \log\frac{u-1}{t+1}\log\frac{s}{u+2}du\right),\quad 6\leqslant s\leqslant 8.\]
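The expressions in Lemma 3.2 are straightforward to evaluate numerically. A sketch (our illustration, assuming `scipy` for quadrature) covering the ranges with at most one nested integral, which already suffices for rough sanity checks of the constants appearing later:

```python
import math
from scipy.integrate import quad

TWO_E_GAMMA = 2 * math.exp(0.5772156649015329)  # 2 e^gamma

def F(s):
    # Upper sieve function, coded here for 0 < s <= 5.
    if s <= 3:
        return TWO_E_GAMMA / s
    I, _ = quad(lambda t: math.log(t - 1) / t, 2, s - 1)
    return TWO_E_GAMMA / s * (1 + I)

def f(s):
    # Lower sieve function, coded here for 0 < s <= 6 (f = 0 on (0, 2]).
    if s <= 2:
        return 0.0
    if s <= 4:
        return TWO_E_GAMMA * math.log(s - 1) / s
    inner = lambda t: quad(lambda u: math.log(u - 1) / u, 2, t - 1)[0]
    I, _ = quad(lambda t: inner(t) / t, 3, s - 1)
    return TWO_E_GAMMA / s * (math.log(s - 1) + I)
```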
**Lemma 3.3**.: _([[2], Lemma 4], deduced from [17], [26]). Let_
\[x>1,\quad z=x^{\frac{1}{u}},\quad Q(z)=\prod_{p<z}p.\]
_Then for \(u\geqslant 1\), we have_
\[\sum_{\begin{subarray}{c}n\leqslant x\\ (n,Q(z))=1\end{subarray}}1=w(u)\frac{x}{\log z}+O\left(\frac{x}{\log^{2}z} \right),\]
_where \(w(u)\) is determined by the following differential-difference equation_
\[\begin{cases}w(u)=\frac{1}{u},&1\leqslant u\leqslant 2,\\ (uw(u))^{\prime}=w(u-1),&u\geqslant 2.\end{cases}\]
_Moreover, we have_
\[\begin{cases}w(u)\leqslant\frac{1}{1.763},&u\geqslant 2,\\ w(u)<0.5644,&u\geqslant 3,\\ w(u)<0.5617,&u\geqslant 4.\end{cases}\]
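The function \(w(u)\) (Buchstab's function) can be computed by stepping the delay differential equation forward from the initial range \(w(u)=1/u\) on \([1,2]\). A rough forward-Euler sketch (our illustration; the step size is an arbitrary choice) that can be used to check the three bounds stated above:

```python
import numpy as np

h = 1e-4
u = np.arange(1.0, 6.0 + h / 2, h)
w = np.empty_like(u)
lag = round(1.0 / h)          # grid offset corresponding to u - 1
init = u <= 2.0 + 1e-12
w[init] = 1.0 / u[init]       # w(u) = 1/u on [1, 2]
for i in range(int(init.sum()), len(u)):
    # (u w(u))' = w(u - 1), so u_i w_i = u_{i-1} w_{i-1} + h * w(u_i - 1)
    w[i] = (u[i - 1] * w[i - 1] + h * w[i - lag]) / u[i]
print(w[u >= 2.0].max(), w[u >= 3.0].max())  # vs 1/1.763 and 0.5644
```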
_Remark 3.4_.: ([[4], Lemma 2.6], [[6], Lemma 4]). Let
\[x>1,\quad x^{\frac{19}{24}+\varepsilon}\leqslant y_{1}\leqslant\frac{x}{\log x},\quad x^{\frac{3}{3}}\leqslant y_{2}<x,\quad z=x^{\frac{1}{u}},\quad Q(z)=\prod_{p<z}p.\]
Then for \(u>1\), we have
\[\sum_{\begin{subarray}{c}x-y_{1}\leqslant n\leqslant x\\ (n,Q(z))=1\end{subarray}}1=w(u)\frac{y_{1}}{\log z}+O\left(\frac{y_{1}}{\log^ {2}z}\right),\]
\[\sum_{\begin{subarray}{c}x\leq n<x+y_{2}\\ (n,Q(z))=1\end{subarray}}1=w(u)\frac{y_{2}}{\log z}+O\left(\frac{y_{2}}{\log^{2} z}\right),\]
where \(w(u)\) is defined in Lemma 3.3.
**Lemma 3.5**.: _If \(N^{\frac{1}{\alpha}-\varepsilon}<z\leqslant N^{\frac{1}{\alpha}}\), then we have_
\[W(z)=\frac{2\alpha e^{-\gamma}C(N)(1+o(1))}{\log N}.\]
Proof.: By [[23], Lemma 2] we have
\[W(z)=\frac{N}{\varphi(N)}\prod_{(p,N)=1}\left(1-\frac{\omega(p)}{p}\right) \left(1-\frac{1}{p}\right)^{-1}\frac{e^{-\gamma}}{\log z}\left(1+O\left(\frac{ 1}{\log z}\right)\right),\]
where \(\varphi\) is Euler's totient function and \(\gamma\) is Euler's constant. Since \(2\mid abN\), we have
\[W(z) =\frac{N}{\varphi(N)}\prod_{(p,N)=1}\left(1-\frac{\omega(p)}{p} \right)\left(1-\frac{1}{p}\right)^{-1}\frac{e^{-\gamma}}{\log z}\left(1+O \left(\frac{1}{\log z}\right)\right)\] \[=\prod_{p\mid N}\frac{p}{p-1}\prod_{(p,N)=1}\left(1-\frac{\omega( p)}{p}\right)\left(1-\frac{1}{p}\right)^{-1}\frac{\alpha e^{-\gamma}(1+o(1))}{ \log N}\] \[=\prod_{p\mid N}\frac{p}{p-1}\prod_{p\mid ab}\left(1-\frac{1}{p} \right)^{-1}\prod_{(p,abN)=1}\left(1-\frac{1}{p-1}\right)\left(1-\frac{1}{p} \right)^{-1}\frac{\alpha e^{-\gamma}(1+o(1))}{\log N}\] \[=\prod_{\begin{subarray}{c}p\mid N\\ p>2\end{subarray}}\frac{p}{p-1}\prod_{\begin{subarray}{c}p\mid ab\\ p>2\end{subarray}}\frac{p}{p-1}\frac{\prod_{p>2}\frac{p(p-2)^{2}}{(p-1)^{2}}}{ \prod_{p\mid abN}\frac{p(p-2)}{(p-1)^{2}}}\frac{2\alpha e^{-\gamma}(1+o(1))}{ \log N}\] \[=\frac{2\alpha e^{-\gamma}C(N)(1+o(1))}{\log N}.\]
**Lemma 3.6**.: _([23], Lemma 1], deduced from [[16], Corollary 5.29]). Let \(q\geqslant 1\) and \(A>0\). Let \(\pi(x;q,l)\) be the number of primes up to \(x\) that are congruent to \(l\) modulo \(q\). For \((l,q)=1\), we have_
\[\pi(x;q,l)=\frac{\mathrm{Li}(x)}{\varphi(q)}+O\left(\frac{x}{(\log x)^{A}} \right),\]
_where_
\[\mathrm{Li}(x)=\int_{2}^{x}\frac{\mathrm{d}t}{\log t}.\]
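A quick numerical illustration of Lemma 3.6 (our sketch, assuming `sympy`; the parameters are arbitrary test values):

```python
from sympy import li, primerange, totient

x, q, l = 10**6, 7, 3
pi_xql = sum(1 for p in primerange(2, x + 1) if p % q == l)
main_term = (li(x) - li(2)) / totient(q)   # Li(x) / phi(q)
print(pi_xql, float(main_term))            # equal up to the error term
```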
**Lemma 3.7**.: _([[23], Lemma 6], deduced from [[32], p. 579]). We have_
\[\sum_{n\leqslant x}\frac{1}{\varphi(n)}=C_{2}\left(\log x+C_{3}\right)+O\left( \frac{\log x}{x}\right)\]
_for some constants \(C_{2}\) and \(C_{3}\)._
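Lemma 3.7 is easy to test numerically with a totient sieve; classically \(C_{2}=\zeta(2)\zeta(3)/\zeta(6)\approx 1.9436\) (a standard value, quoted here for orientation rather than taken from [32]). A short Python sketch:

```python
from math import log

x = 10**5
phi = list(range(x + 1))
for p in range(2, x + 1):
    if phi[p] == p:                    # p is prime: sieve Euler's phi
        for m in range(p, x + 1, p):
            phi[m] -= phi[m] // p
S = sum(1.0 / phi[n] for n in range(1, x + 1))
C2 = 1.9435964368                      # zeta(2) * zeta(3) / zeta(6)
print(S - C2 * log(x))                 # tends to the constant C2 * C3
```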
## 4. Mean value theorems
Now we provide some mean value theorems which will be used in bounding various sieve error terms later.
The first lemma and remark come from Pan and Pan's book [26] and they were first proven by Pan, Ding and Wang.
**Lemma 4.1**.: _([[26], p. 192, Corollary 8.2]). Let_
\[\pi(x;k,d,l)=\sum_{\begin{subarray}{c}kp\leq x\\ kp\equiv l\ (\bmod\ d)\end{subarray}}1\]
_and let \(g(k)\) be a real function, \(g(k)\ll 1\). Then, for any given constant \(A>0\), there exists a constant \(B=B(A)>0\) such that_
\[\sum_{d\leqslant x^{1/2}(\log x)^{-B}}\max_{y\leqslant x}\max_{(l,d)=1}\left| \sum_{\begin{subarray}{c}k\leqslant E(x)\\ (k,d)=1\end{subarray}}g(k)H(y;k,d,l)\right|\ll\frac{x}{\log^{A}x},\]
_where_
\[H(y;k,d,l)=\pi(y;k,d,l)-\frac{1}{\varphi(d)}\pi(y;k,1,1)=\sum_{ \begin{subarray}{c}kp\leqslant y\\ kp\equiv l\ (\bmod\ d)\end{subarray}}1-\frac{1}{\varphi(d)}\sum_{kp\leqslant y}1,\] \[\frac{1}{2}\leqslant E(x)\ll x^{1-\alpha},\quad 0<\alpha\leqslant 1, \quad B(A)=\frac{3}{2}A+17.\]
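For orientation, \(H(y;k,d,l)\) measures the discrepancy between the count of primes \(p\) with \(kp\leqslant y\), \(kp\equiv l\ (\bmod\ d)\) and the expected proportion \(1/\varphi(d)\) of all such primes. A brute-force Python sketch (our illustration only; Lemma 4.1 concerns averages of these quantities over \(d\), which no direct computation replaces):

```python
from sympy import primerange, totient

def H(y, k, d, l):
    # H(y; k, d, l) from Lemma 4.1, computed by direct enumeration.
    total = in_class = 0
    for p in primerange(2, y // k + 1):   # ensures k*p <= y
        total += 1
        if (k * p) % d == l:
            in_class += 1
    return in_class - total / int(totient(d))
```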
_Remark 4.2_.: ([[26], p. 195-196, Corollary 8.3 and 8.4]). Let \(r_{1}(y)\) be a positive function depending on \(x\) and satisfying \(r_{1}(y)\ll x^{\alpha}\) for \(y\leqslant x\). Then under the conditions in Lemma 4.1, we have
\[\sum_{d\leqslant x^{1/2}(\log x)^{-B}}\max_{y\leqslant x}\max_{(l,d)=1}\left| \sum_{\begin{subarray}{c}k\leqslant E(x)\\ (k,d)=1\end{subarray}}g(k)H\left(kr_{1}(y);k,d,l\right)\right|\ll\frac{x}{\log ^{A}x}.\]
Let \(r_{2}(k)\) be a positive function depending on \(x\) and \(y\) such that \(kr_{2}(k)\ll x\) for \(k\leqslant E(x)\), \(y\leqslant x\). Then under the conditions in Lemma 4.1, we have
\[\sum_{d\leqslant x^{1/2}(\log x)^{-B}}\max_{y\leqslant x}\max_{(l,d)=1}\left| \sum_{\begin{subarray}{c}k\leqslant E(x)\\ (k,d)=1\end{subarray}}g(k)H\left(kr_{2}(k);k,d,l\right)\right|\ll\frac{x}{ \log^{A}x}.\]
The second lemma and remark were first proven by Wu [34], and they are the "short interval" version of Lemma 4.1 and Remark 4.2. These will help us deal with the sieve error terms involved in the evaluation of \(S^{\prime}_{4}\) and \(S^{\prime}_{7}\).
**Lemma 4.3**.: _([[34], Theorem 2]). Let \(g(k)\) be a real function such that_
\[\sum_{k\leqslant x}\frac{g^{2}(k)}{k}\ll\log^{C}x\]
_for some \(C>0\). Then, for any given constant \(A>0\), there exists a constant \(B=B(A,C)>0\) such that_
\[\sum_{d\leqslant x^{t-1/2}(\log x)^{-B}}\max_{x/2\leqslant y\leqslant x}\max_ {(l,d)=1}\max_{h\leqslant x^{t}}\left|\sum_{\begin{subarray}{c}k\leqslant x^{ \beta}\\ (k,d)=1\end{subarray}}g(k)\bar{H}(y,h,k,d,l)\right|\ll\frac{x^{t}}{\log^{A}x},\]
_where_
\[\bar{H}(y,h,k,d,l)= \left(\pi(y+h;k,d,l)-\pi(y;k,d,l)\right)\] \[-\frac{1}{\varphi(d)}\left(\pi(y+h;k,1,1)-\pi(y;k,1,1)\right)\] \[= \sum_{\begin{subarray}{c}y<kp\leqslant y+h\\ kp\equiv l\ (\bmod\ d)\end{subarray}}1-\frac{1}{\varphi(d)}\sum_{y<kp\leqslant y+h}1,\]
\[\frac{3}{5}<t\leqslant 1,\quad 0\leqslant\beta<\frac{5t-3}{2},\quad B(A,C)=3A+C+34.\]
_Remark 4.4_.: ([[3], Lemma 7], [[6], Remark]). Let \(g(k)\) be a real function such that
\[\sum_{k\leqslant x}\frac{g^{2}(k)}{k}\ll\log^{C}x\]
for some \(C>0\). Let \(r_{1}(k,h)\) and \(r_{2}(k,h)\) be positive function such that
\[y\leqslant kr_{1}(k,h),kr_{2}(k,h)\leqslant y+h.\]
Then, for any given constant \(A>0\), there exists a constant \(B=B(A,C)>0\) such that
\[\sum_{d\leqslant x^{t-1/2}(\log x)^{-B}}\max_{x/2\leqslant y\leqslant x}\max_ {(l,d)=1}\max_{h\leqslant x^{t}}\left|\sum_{\begin{subarray}{c}k\leqslant x^{ \beta}\\ (k,d)=1\end{subarray}}g(k)\bar{H}^{\prime}(y,h,k,d,l)\right|\ll\frac{x^{t}}{ \log^{A}x},\]
where
\[\bar{H}^{\prime}(y,h,k,d,l)= \left(\pi(kr_{2}(k,h);k,d,l)-\pi(kr_{1}(k,h);k,d,l)\right)\] \[-\frac{1}{\varphi(d)}\left(\pi(kr_{2}(k,h);k,1,1)-\pi(kr_{1}(k,h);k,1,1)\right)\] \[= \sum_{\begin{subarray}{c}kr_{1}(k,h)<kp\leqslant kr_{2}(k,h)\\ kp\equiv l(\text{ mod }d)\end{subarray}}1-\frac{1}{\varphi(d)}\sum_{kr_{1}(k,h)<kp\leqslant kr_{2}(k,h)}1,\]
\[\frac{3}{5}<t\leqslant 1,\quad 0\leqslant\beta<\frac{5t-3}{2},\quad B(A,C)=3A+C+34.\]
In [3], Cai pointed out an obstacle that cannot be surmounted directly: Lemma 4.3 and Remark 4.4 are not sufficient to deal with some of the sieve error terms involved. In fact, the function \(g(k)\) cannot be well-defined so as to control the sieve error terms involved in the evaluation of \(S_{6}^{\prime}\) (since \(\frac{5\theta-3}{2}<\frac{13}{14}\)). So we need a new mean value theorem to overcome this. The next lemma is a new mean value theorem for products of large primes over short intervals and it was first proven by Cai [3]. This lemma will help us deal with the sieve error terms involved in the evaluation of \(S_{6}^{\prime}\).
**Lemma 4.5**.: _For \(j=2,3\) and any given constant \(A>0\), there exists a constant \(B=B(A)>0\) such that_
\[\sum_{d\leqslant x^{\theta-1/2}(\log x)^{-B}}\max_{(l,d)=1}\left|\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}_{j}\\ bmp_{1}p_{2}p_{3}p_{4}\equiv l(\text{ mod }d)\end{subarray}}1-\frac{1}{\varphi(d)}\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}_{j}\\ (bmp_{1}p_{2}p_{3}p_{4},d)=1\end{subarray}}1\right|\ll\frac{x^{\theta}}{\log^{A}x}.\]
Proof.: This result can be proved in the same way as [[3], Lemma 8] by showing that for \(j=2,3\) and \(r\geqslant 5\), the bounds
\[\sum_{d\leqslant x^{\theta-1/2}(\log x)^{-B}}\max_{(l,d)=1}\left|\sum_{ \begin{subarray}{c}p_{1}p_{2}\cdots p_{r}\in\mathcal{F}_{j}\\ p_{1}p_{2}\cdots p_{r}\equiv l(\text{ mod }d)\end{subarray}}1-\frac{1}{ \varphi(d)}\sum_{\begin{subarray}{c}p_{1}p_{2}\cdots p_{r}\in\mathcal{F}_{j} \\ (p_{1}p_{2}\cdots p_{r},d)=1\end{subarray}}1\right|\ll\frac{x^{\theta}}{\log^{A }x}\]
hold.
The following lemmas are the "arithmetical progressions with almost all big \(c\)" versions of Lemmas 4.1-4.5, and they will help us prove Theorem 1.5. We can also get variants of Theorems 1.6-1.7 with "big" \(c\) by using the following lemmas.
**Lemma 4.6**.: _([[5], Lemma 4]). For any given constant \(A>0\), under the conditions in Lemma 4.1 and Remark 4.2, there exists a constant \(B=B(A)>0\) such that for \(c\leqslant x^{0.028}\), except for \(O\left(x^{0.028}(\log x)^{-A}\right)\) exceptional values, we have_
\[R_{1}=\sum_{d\leqslant\left(x^{1/2}(\log x)^{-B}\right)/c}\max_{y\leqslant x} \max_{(l,dc)=1}\left|\sum_{k\leqslant E(x)\atop(k,d)=1}g(k)H(y;k,dc,l)\right| \ll\frac{x^{1-0.028}}{\log^{A}x},\]
\[R_{2}=\sum_{d\leqslant\left(x^{1/2}(\log x)^{-B}\right)/c}\max_{y\leqslant x} \max_{(l,dc)=1}\left|\sum_{k\leqslant E(x)\atop(k,d)=1}g(k)H(kr_{1}(y);k,dc,l )\right|\ll\frac{x^{1-0.028}}{\log^{A}x},\]
\[R_{3}=\sum_{d\leqslant\left(x^{1/2}(\log x)^{-B}\right)/c}\max_{y\leqslant x} \max_{(l,dc)=1}\left|\sum_{k\leqslant E(x)\atop(k,d)=1}g(k)H(kr_{2}(k);k,dc,l )\right|\ll\frac{x^{1-0.028}}{\log^{A}x}.\]
Proof.: We prove Lemma 4.6 in the case \(R_{1}\) only; the same argument can be applied to the cases \(R_{2}\) and \(R_{3}\). Let \(\tau(d)\) be the divisor function. By Lemma 4.1 we have
\[\sum_{c\leqslant N^{0.028}}R_{1}=\sum_{c\leqslant N^{0.028}}\sum_{d\leqslant\left(x^{1/2}(\log x)^{-B}\right)/c}\max_{y\leqslant x}\max_{(l,dc)=1}\left|\sum_{\begin{subarray}{c}k\leqslant E(x)\\ (k,d)=1\end{subarray}}g(k)H(y;k,dc,l)\right|\]

\[\leqslant\sum_{d\leqslant x^{1/2}(\log x)^{-B}}\tau(d)\max_{y\leqslant x}\max_{(l,d)=1}\left|\sum_{\begin{subarray}{c}k\leqslant E(x)\\ (k,d)=1\end{subarray}}g(k)H(y;k,d,l)\right|\ll\frac{x}{\log^{2A}x},\]
\[\sum_{\begin{subarray}{c}c\leqslant N^{0.028}\\ R_{1}>\frac{x^{1-0.028}}{\log^{A}x}\end{subarray}}1\ll\!\frac{\log^{A}x}{x^{1 -0.028}}\sum_{c\leqslant N^{0.028}}R_{1}\ll\frac{x^{0.028}}{\log^{A}x}.\]
Now the proof of Lemma 4.6 is completed.
**Lemma 4.7**.: _For any given constant \(A>0\), under the conditions in Lemma 4.3 and Remark 4.4, there exists a constant \(B=B(A,C)>0\) such that for \(c\leqslant x^{0.028}\), except for \(O\left(x^{0.028}(\log x)^{-A}\right)\) exceptional values, we have_
\[R_{4}=\sum_{d\leqslant\left(x^{t-1/2}(\log x)^{-B}\right)/c}\max_{x/2\leqslant y\leqslant x}\max_{(l,dc)=1}\max_{h\leqslant x^{t}}\left|\sum_{\begin{subarray}{c}k\leqslant x^{\beta}\\ (k,d)=1\end{subarray}}g(k)\bar{H}(y,h,k,dc,l)\right|\ll\frac{x^{t-0.028}}{\log^{A}x},\]

\[R_{5}=\sum_{d\leqslant\left(x^{t-1/2}(\log x)^{-B}\right)/c}\max_{x/2\leqslant y\leqslant x}\max_{(l,dc)=1}\max_{h\leqslant x^{t}}\left|\sum_{\begin{subarray}{c}k\leqslant x^{\beta}\\ (k,d)=1\end{subarray}}g(k)\bar{H}^{\prime}(y,h,k,dc,l)\right|\ll\frac{x^{t-0.028}}{\log^{A}x}.\]
Proof.: We prove Lemma 4.7 in the case \(R_{4}\) only; the same argument can be applied to the case \(R_{5}\). Let \(\tau(d)\) be the divisor function. By Lemma 4.3 we have
\[\sum_{c\leqslant N^{0.028}}R_{4}=\sum_{c\leqslant N^{0.028}}\sum_{d\leqslant \left(x^{t-1/2}(\log x)^{-B}\right)/c}\max_{x/2\leqslant y\leqslant x}\max_{( l,dc)=1}\max_{h\leqslant x^{4}}\left|\sum_{\begin{subarray}{c}k\leqslant x^{ \beta}\\ (k,d)=1\end{subarray}}g(k)\bar{H}(y,h,k,dc,l)\right|\]
\[\leqslant\sum_{d\leqslant x^{t-1/2}(\log x)^{-B}}\tau(d)\max_{x/2\leqslant y\leqslant x}\max_{(l,d)=1}\max_{h\leqslant x^{t}}\left|\sum_{\begin{subarray}{c}k\leqslant x^{\beta}\\ (k,d)=1\end{subarray}}g(k)\bar{H}(y,h,k,d,l)\right|\ll\frac{x^{t}}{\log^{2A}x},\]
\[\sum_{\begin{subarray}{c}c\leqslant N^{0.028}\\ R_{4}>\frac{x^{t-0.028}}{\log^{A}x}\end{subarray}}1\ll\frac{\log^{A}x}{x^{t-0.028}}\sum_{c\leqslant N^{0.028}}R_{4}\ll\frac{x^{0.028}}{\log^{A}x}.\]
Now the proof of Lemma 4.7 is completed.
**Lemma 4.8**.: _For \(j=2,3\), let_
\[\mathcal{F}^{\prime}_{j}=\left\{bmp_{1}p_{2}p_{3}p_{4}:bmp_{1}p_{2}p_{3}p_{4} \in\mathcal{F}_{j},(p_{1}p_{2}p_{3}p_{4},c)=1\right\},\]
_then for any given constant \(A>0\), there exists a constant \(B=B(A)>0\) such that for \(c\leqslant x^{0.028}\), except for \(O\left(x^{0.028}(\log x)^{-A}\right)\) exceptional values of \(c\), we have_
\[R^{\prime}_{j}=\sum_{d\leqslant\left(x^{\theta-1/2}(\log x)^{-B}\right)/c}\max _{(l,dc)=1}\left|\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F} ^{\prime}_{j}\\ bmp_{1}p_{2}p_{3}p_{4}\equiv l(\text{ mod }dc)\end{subarray}}1-\frac{1}{\varphi(dc)} \sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}^{\prime}_{j}\\ (bmp_{1}p_{2}p_{3}p_{4},dc)=1\end{subarray}}1\right|\ll\frac{x^{\theta-0.028}}{ \log^{A}x}.\]
Proof.: We prove Lemma 4.8 in the case of \(R^{\prime}_{2}\) only; the same argument can be applied to the case \(R^{\prime}_{3}\). Let \(\tau(d)\) be the divisor function. Since every modulus \(m=dc\) arises from at most \(\tau(m)\) pairs \((c,d)\), Lemma 4.5 gives
\[\sum_{c\leqslant N^{0.028}}R^{\prime}_{2}=\sum_{c\leqslant N^{0.028}}\sum_{d\leqslant\left(x^{\theta-1/2}(\log x)^{-B}\right)/c}\max_{(l,dc)=1}\left|\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}^{\prime}_{2}\\ bmp_{1}p_{2}p_{3}p_{4}\equiv l(\text{ mod }dc)\end{subarray}}1-\frac{1}{\varphi(dc)}\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}^{\prime}_{2}\\ (bmp_{1}p_{2}p_{3}p_{4},dc)=1\end{subarray}}1\right|\]
\[\leqslant\sum_{d\leqslant x^{\theta-1/2}(\log x)^{-B}}\tau(d)\max_{(l,d)=1}\left|\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}_{2}\\ bmp_{1}p_{2}p_{3}p_{4}\equiv l(\text{ mod }d)\end{subarray}}1-\frac{1}{\varphi(d)}\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}_{2}\\ (bmp_{1}p_{2}p_{3}p_{4},d)=1\end{subarray}}1\right|\ll\frac{x^{\theta}}{\log^{2A}x},\]
\[\sum_{\begin{subarray}{c}c\leqslant N^{0.028}\\ R^{\prime}_{2}>\frac{x^{\theta-0.028}}{\log^{A}x}\end{subarray}}1\ll\frac{\log^{A}x}{x^{\theta-0.028}}\sum_{c\leqslant N^{0.028}}R^{\prime}_{2}\ll\frac{x^{0.028}}{\log^{A}x}.\]
Now the proof of Lemma 4.8 is completed.
## 5. Weighted sieve method
Now we follow Cai directly to obtain delicate weighted sieves, which are used to prove Theorems 1.1-1.8. The parameters \(\alpha\) and \(\beta\) in Lemmas 5.3-5.4 come from Cai's papers [2] and [3]; they were shown to be optimal by Cai and Guang-Liang Zhou by solving the corresponding equations.
**Lemma 5.1**.: _Let \(\mathcal{A}=\mathcal{A}_{1}\) in section 2 and \(0<\alpha<\beta\leqslant\frac{1}{3}\). Then we have_
\[2R_{a,b}(N)\geqslant 2S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{\alpha}\right)-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p<(\frac{N}{b})^{\beta}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\alpha}\right)\]
\[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<(\frac{N}{b})^{\beta}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)-2\sum_{\begin{subarray}{c}(\frac{N}{b})^{\beta}\leqslant p_{1}<p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)\]
\[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{\beta}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}};\mathcal{P}(p_{1}),p_{2}\right)+O\left(N^{1-\alpha}\right).\]
\[\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p_{2}<(\frac{N}{b}) ^{\beta}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p _{2}<(\frac{N}{b})^{\beta}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\beta}\leqslant p_{1}<p _{2}<(\frac{N}{b})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right). \tag{23}\]
Now by Buchstab's identity we have
\[\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p _{2}<(\frac{N}{b})^{\beta}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P},p_{ 1}\right)-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p_{2} <(\frac{N}{b})^{\beta}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right)\] \[=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p _{2}<p_{3}<(\frac{N}{b})^{\beta}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}};\mathcal{P} (p_{1}),p_{2}\right)+O\left(N^{1-\alpha}\right), \tag{24}\]
where the trivial bound
\[\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p_{2}<(\frac{ N}{b})^{\beta}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}^{2}p_{2}};\mathcal{P},p _{1}\right)\ll N^{1-\alpha} \tag{25}\]
is used.
Now we add (22) and (23) and by (24) Lemma 5.1 follows.
**Lemma 5.2**.: _Let \(\mathcal{A}=\mathcal{A}_{2}\) in section 2 and \(0<\alpha<\beta\leqslant\frac{1}{3}\). Then we have_
\[2R_{a,b}^{\theta}(N)\geqslant 2S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{\alpha} \right)-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p<(\frac{N}{b })^{\beta}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\alpha}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<( \frac{N}{b})^{\beta}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right)-2\sum_{\begin{subarray}{c}(\frac{N}{b})^{\beta}\leqslant p_{1} <p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right)\] \[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\alpha}\leqslant p_{1}<p _{2}<p_{3}<(\frac{N}{b})^{\beta}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}}; \mathcal{P}(p_{1}),p_{2}\right)+O\left(N^{1-\alpha}\right).\]
Proof.: It is similar to that of Lemma 5.1 so we omit it here.
**Lemma 5.3**.: _Let \(\mathcal{A}=\mathcal{A}_{1}\) in section 2, then we have_
\[4R_{a,b}(N)\geqslant 3S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{ 1}{13.2}}\right)+S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{ \frac{1}{8.4}}\right)\] \[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<p_{2}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\] \[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p <(\frac{N}{b})^{\frac{1}{13.2}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p <(\frac{N}{b})^{\frac{1}{13.2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N }{b}\right)^{\frac{1}{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<(\frac{N}{b})^{\frac{1}{13.2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),p_{2}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p _{1}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2},N)=1\end{subarray}}\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{ 1}{8.4}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{ 1}),\left(\frac{N}{bp_{1}p_{2}}\right)^{\frac{1}{2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}}\leqslant p <(\frac{N}{b})^{\frac{1}{33.2}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left( \frac{N}{b}\right)^{\frac{1}{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{3}{13.2}}\leqslant p <(\frac{N}{b})^{\frac{1}{3}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left( \frac{N}{b}\right)^{\frac{1}{8.4}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<p_{2}<p_{3}<p_{4}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2}p_{3}p_{4},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}p_{4} };\mathcal{P}(p_{1}),p_{2}\right)\]
\[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{4}<(\frac{N}{b})^{\frac{4.6}{13.2}}p_{3}^{-1}\\ (p_{1}p_{2}p_{3}p_{4},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}p_{4}};\mathcal{P}(p_{1}),p_{2}\right)\]
\[-2\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{2}<(\frac{N}{b})^{\frac{4.6}{13.2}}p_{1}^{-1}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\]
\[=\left(3S_{11}+S_{12}\right)+\left(S_{21}+S_{22}\right)-\left(S_{31}+S_{32}\right)-\left(S_{41}+S_{42}\right)-\left(S_{51}+S_{52}\right)-\left(S_{61}+S_{62}\right)-2S_{7}+O\left(N^{\frac{12.2}{13.2}}\right)\]
\[=S_{1}+S_{2}-S_{3}-S_{4}-S_{5}-S_{6}-2S_{7}+O\left(N^{\frac{12.2}{13.2}}\right).\]
Proof.: It is similar to that of [[2], Lemma 6]. By Buchstab's identity, we have
\[S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1 }{8.4}}\right)= S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1 }{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p <(\frac{N}{b})^{\frac{1}{8.4}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{13.2}}\right)\] \[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<p_{2}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}}; \mathcal{P},p_{1}\right), \tag{26}\]
\[\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p <(\frac{N}{b})^{\frac{3}{8.2}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{13.2}}\right)\] \[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{2}<(\frac{N}{b})^{\frac{4.6}{13.2 }}p_{1}^{-1}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}, \left(\frac{N}{b}\right)^{\frac{1}{13.2}}\right)\] \[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}<p_{2}<(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{3}<(\frac{N}{b})^{\frac{4.6 }{13.2}}p_{2}^{-1}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}};\mathcal{P}, p_{1}\right), \tag{27}\]
\[\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b})^{\frac{1}{3.604}}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b})^{\frac{1}{3.604}}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{3}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)\]
\[+\sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b})^{ \frac{1}{3.604}},\;(\frac{N}{bp_{1}})^{\frac{1}{3}}\leqslant p_{2}<(\frac{N}{bp_ {1}})^{\frac{1}{2}}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2} \right). \tag{28}\]
If \(p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{3}}\), then \(p_{2}\leqslant\left(\frac{N}{bp_{1}p_{2}}\right)^{\frac{1}{2}}\) and by Buchstab's identity we have
\[\sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{3}}\atop(p_{1}p_ {2},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right) \tag{29}\] \[= \sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{3}}\atop(p_{1}p _{2},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),\left(\frac{N}{bp _{1}p_{2}}\right)^{\frac{1}{2}}\right)\] \[+\sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}}\leqslant p_{2}\leqslant p_{3}<(\frac{N}{bp_{1}p_{2}})^{ \frac{1}{2}}\atop(p_{1}p_{2}p_{3},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}}; \mathcal{P}(p_{1}p_{2}),p_{3}\right).\]
On the other hand, if \(p_{2}\geqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{3}}\), then \(p_{2}\geqslant\left(\frac{N}{bp_{1}p_{2}}\right)^{\frac{1}{2}}\) and we have
\[\sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}},\;(\frac{N}{bp_{1}})^{\frac{1}{3}}\leqslant p_{2}<(\frac{N} {bp_{1}})^{\frac{1}{2}}\atop(p_{1}p_{2},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}}; \mathcal{P}(p_{1}),p_{2}\right) \tag{30}\] \[\leqslant \sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}},\;(\frac{N}{bp_{1}})^{\frac{1}{3}}\leqslant p_{2}<(\frac{N} {bp_{1}})^{\frac{1}{2}}\atop(p_{1}p_{2},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}}; \mathcal{P}(p_{1}),\left(\frac{N}{bp_{1}p_{2}}\right)^{\frac{1}{2}}\right).\]
By (28)-(30) we get
\[\sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\atop(p_{1}p _{2},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right) \tag{31}\] \[\leqslant \sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}}\leqslant p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\atop(p_{1} p_{2},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),\left(\frac{N}{bp _{1}p_{2}}\right)^{\frac{1}{2}}\right)\] \[+\sum_{(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{1}<(\frac{N}{b}) ^{\frac{1}{3.604}}\leqslant p_{2}\leqslant p_{3}<(\frac{N}{bp_{1}p_{2}})^{ \frac{1}{2}}\atop(p_{1}p_{2}p_{3},N)=1}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}}; \mathcal{P}(p_{1}p_{2}),p_{3}\right).\]
By Buchstab's identity we have
\[\sum_{(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{2}<p_{3}<( \frac{N}{b})^{\frac{1}{3}}\atop(p_{1}p_{2}p_{3},N)=1}S\left(\mathcal{A}_{p_{1} p_{2}p_{3}};\mathcal{P}(p_{1}),p_{2}\right) \tag{32}\] \[-\sum_{(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{2}<p_{3}<( \frac{N}{b})^{\frac{1}{8.4}}\atop(p_{1}p_{2}p_{3},N)=1}S\left(\mathcal{A}_{p_{1} p_{2}p_{3}};\mathcal{P},p_{1}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{2}<(\frac{N }{b})^{\frac{1}{8.4}}\leqslant p_{3}<(\frac{N}{b})^{\frac{4.6}{13.2}}p_{2}^{-1}}S \left(\mathcal{A}_{p_{1}p_{2}p_{3}};\mathcal{P},p_{1}\right)\]
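Combining (26)-(32), Lemma 5.3 follows.

**Lemma 5.4**.: _Let \(\mathcal{A}=\mathcal{A}_{2}\) in section 2, then we have_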
\[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^{ \frac{1}{8.8}}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{14}}\right)\] \[+\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{8.8}}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{14}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{8.8}}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{ \frac{1}{14}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{8.8}}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{14}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{8.8}}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{ \frac{1}{14}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{8.8}}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{ \frac{1}{14}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{3.106}}}\leqslant p_{2}<(\frac{N}{b_{1}p_{2}})^{\frac{1}{2}}\] \[-\sum_{(\frac{N}{b})^{\frac{1}{5.8}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{5.8}}}\leqslant p_{2}<(\frac{N}{b_{1}p_{2}})^{\frac{1}{2}}\] \[-\sum_{(\frac{N}{b})^{\frac{1}{5.8}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{5.8}}}\leqslant p_{2}<(\frac{N}{b_{1}p_{2}})^{\frac{1}{2}}\] \[-\sum_{(\frac{N}{b})^{\frac{1}{5.8}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{5.8}}}\leqslant p_{2}<(\frac{N}{b_{1}p_{2}})^{\frac{1}{2}}\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{3.106}}}\leqslant p_{2}<(\frac{N}{b_{1}p_{2}})^{\frac{1}{2}}\] \[-\sum_{(\frac{N}{b})^{\frac{1}{5.8}}\leqslant p_{1}<(\frac{N}{b})^ {\frac{1}{5.8}}}\leqslant p_{2}<(\frac{N}{b_{1}p_{2}})^{\frac{1}{2}}\] \[-\sum_{(\frac{N}{b})^{\frac{4.0871}{14}}\leqslant p_{1}<(\frac{N}{b })^{\frac{1}{3.106}}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{14}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{\theta}{2}}-\frac{3}{14}\leqslant p _{1}<(\frac{N}{b})^{\frac{1}{3.73}}}S\left(\mathcal{A}_{p};\mathcal{P},\left( \frac{N}{b}\right)^{\frac{1}{8.8}}\right)\] \[-\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<p_{2}<p_{3}<p_ {4}<(\frac{N}{b})^{\frac{1}{8.8}}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}p_{4}}; \mathcal{P}(p_{1}),p_{2}\right)\]
\[-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.8}}\leqslant p_{4}<(\frac{N}{b})^{\frac{\theta}{2}-\frac{3}{14}}p_{3}^{-1}\\ (p_{1}p_{2}p_{3}p_{4},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}p_{4}};\mathcal{P}(p_{1}),p_{2}\right)\]
\[-2\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{3.106}}\leqslant p_{1}<p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)-2\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{3.106}}\leqslant p_{1}<p_{2}<(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)+O\left(N^{\frac{13}{14}}\right)\]
\[=(3S^{\prime}_{11}+S^{\prime}_{12})+(S^{\prime}_{21}+S^{\prime}_{22})-(S^{\prime}_{31}+S^{\prime}_{32})-(S^{\prime}_{41}+S^{\prime}_{42})-(S^{\prime}_{51}+S^{\prime}_{52})-(S^{\prime}_{61}+S^{\prime}_{62})-2\left(S^{\prime}_{71}+S^{\prime}_{72}\right)+O\left(N^{\frac{13}{14}}\right)\]
\[=S^{\prime}_{1}+S^{\prime}_{2}-S^{\prime}_{3}-S^{\prime}_{4}-S^{\prime}_{5}-S^{\prime}_{6}-2S^{\prime}_{7}+O\left(N^{\frac{13}{14}}\right).\]
Proof.: It is similar to that of Lemma 5.3 and [[3], Lemma 9] so we omit it here.
**Lemma 5.5**.: _See [2]. Let \(\mathcal{A}=\mathcal{A}_{1}\) in section 2, \(D_{\mathcal{A}_{1}}=N^{1/2}(\log N)^{-B}\) with \(B=B(A)>0\) in Lemma 4.1, and \(\underline{p}=\frac{D_{\mathcal{A}_{1}}}{bp}\). Then for \(\left(\frac{N}{b}\right)^{\frac{1}{4.5}}<D_{1}<D_{2}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3}}\) we have_
\[\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{2.5}}\right)\leqslant\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)-\frac{1}{2}\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)\]
\[+\frac{1}{2}\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<p_{3}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}p_{3}};\mathcal{P}(p_{1}),p_{2}\right)+O\left(N^{\frac{19}{20}}\right).\]
Proof.: It is similar to that of [[2], Lemma 7]. By Buchstab's identity, we have
\[S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{2.5}}\right)=S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)-\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)+\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}};\mathcal{P},p_{1}\right), \tag{33}\]
\[S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{2.5}}\right)=S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)-\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}};\mathcal{P},\underline{p}^{\frac{1}{2.5}}\right)-\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right), \tag{34}\]
\[\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}};\mathcal{P},p_{1}\right)-\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}};\mathcal{P}(p_{1}),p_{2}\right)\]
\[=\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<p_{3}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}p_{3}};\mathcal{P}(p_{1}),p_{2}\right)+\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<p_{2}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}^{2}p_{2}};\mathcal{P},p_{1}\right). \tag{35}\]
Now we add (33) and (34), sum over \(p\) in the interval \([D_{1},D_{2})\) and by (35), we get Lemma 5.5, where the trivial inequality
\[\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}p^{\frac{1}{3\cdot 675}} \leqslant p_{1}<p_{2}<p^{\frac{1}{2\cdot 5}}\\ (p_{1}p_{2},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}^{2}p_{2}};\mathcal{P},p_{1}\right)\ll N^{\frac{19}{20}}\]
is used.
**Lemma 5.6**.: _See [3]. Let \(\mathcal{A}=\mathcal{A}_{2}\) in section 2, \(D_{\mathcal{A}_{2}}=N^{\theta-1/2}(\log N)^{-B}\) with \(B=B(A)>0\) in Lemma 4.1, and \(\underline{p}^{\prime}=\frac{D_{\mathcal{A}_{2}}}{bp}\). Then for \(\left(\frac{N}{b}\right)^{\frac{1}{4.5}}<D_{1}<D_{2}\leqslant\left(\frac{N}{b }\right)^{\frac{1}{3}}\) we have_
\[\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{ \prime\frac{1}{2.5}}\right)\] \[\leqslant \sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{ \prime\frac{1}{3\cdot 675}}\right)\] \[-\frac{1}{2}\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}p^{\prime\frac{1}{3\cdot 675}} \leqslant p_{1}<p^{\prime\frac{1}{2\cdot 5}}\\ (p_{1},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}};\mathcal{P},\underline{p}^ {\prime\frac{1}{3\cdot 675}}\right)\] \[+\frac{1}{2}\sum_{\begin{subarray}{c}D_{1}\leqslant p<D_{2}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}p^{\prime\frac{1}{3\cdot 675}} \leqslant p_{1}<p_{2}<p_{3}<p^{\prime\frac{1}{2\cdot 5}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}p_{3}}; \mathcal{P}(p_{1}),p_{2}\right)+O\left(N^{\theta-\frac{1}{20}}\right).\]
Proof.: It is similar to that of Lemma 5.5 and [[3], Lemma 10] so we omit it here.
**Lemma 5.7**.: _See [26]. Let \(\mathcal{A}=\mathcal{A}_{1}\) in section 2, then we have_
\[\sum_{\begin{subarray}{c}ap_{1}+bp_{2}=N\\ p_{1},\,p_{2}\text{ primes}\end{subarray}}1\leqslant S\left(\mathcal{A};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{7}}\right)-\frac{1}{2}\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{7}}\leqslant p<(\frac{N}{b})^{\frac{1}{5}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1}{7}}\right)\]
\[+\frac{1}{2}\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{7}}\leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{5}}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}};\mathcal{P}(p_{1}),p_{2}\right)+O\left(N^{\frac{6}{7}}\right)=\Upsilon_{1}-\frac{1}{2}\Upsilon_{2}+\frac{1}{2}\Upsilon_{3}+O\left(N^{\frac{6}{7}}\right).\]
Proof.: It is similar to that of Lemma 5.5 and [[26], p. 211, Lemma 5] so we omit it here.
## 6. Proof of Theorem 1.1
In this section, the sets \(\mathcal{A}_{1}\), \(\mathcal{B}_{1}\), \(\mathcal{C}_{1}\), \(\mathcal{E}_{1}\) and \(\mathcal{F}_{1}\) are those defined in section 2. We define the function \(\omega\) as \(\omega(p)=0\) for primes \(p\mid abN\) and \(\omega(p)=\frac{p}{p-1}\) for other primes.
### Evaluation of \(S_{1},S_{2},S_{3}\)
Let \(D_{\mathcal{A}_{1}}=\left(\frac{N}{b}\right)^{1/2}\left(\log\left(\frac{N}{b} \right)\right)^{-B}\) and \(D_{\mathcal{A}_{1_{p}}}=\frac{D_{\mathcal{A}_{1}}}{p}=\frac{\left(\frac{N}{b} \right)^{1/2}\left(\log\left(\frac{N}{b}\right)\right)^{-B}}{p}\) for some positive constant \(B\). By Lemma 3.6, we can take
\[X_{\mathcal{A}_{1}}=\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\pi\left(\frac{N}{a};b^{2},Na_{b^{2}}^{-1}+kb\right) \sim\frac{\varphi(b)N}{a\varphi\left(b^{2}\right)\log N}\sim\frac{N}{ab\log N} \tag{36}\]
so that \(\left|\mathcal{A}_{1}\right|\sim X_{\mathcal{A}_{1}}\). By Lemma 3.5 for \(z_{\mathcal{A}_{1}}=N^{\frac{1}{\alpha}}\) we have
\[W(z_{\mathcal{A}_{1}})=\frac{2\alpha e^{-\gamma}C(N)(1+o(1))}{\log N}. \tag{37}\]
To deal with the error terms, we have
\[\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{A}_{1}}\\ n\mid P(z_{\mathcal{A}_{1}})\end{subarray}}\eta\left(X_{\mathcal{A}_{1}},n \right)\ll\sum_{n\leqslant D_{\mathcal{A}_{1}}}\mu^{2}(n)\eta\left(X_{ \mathcal{A}_{1}},n\right), \tag{38}\]
and
\[\sum_{p}\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{A}_{1}}\\ n\mid P(z_{\mathcal{A}_{1}})\end{subarray}}\eta\left(X_{\mathcal{A}_{1}},pn \right)\ll\sum_{n\leqslant D_{\mathcal{A}_{1}}}\mu^{2}(n)\eta\left(X_{ \mathcal{A}_{1}},n\right). \tag{39}\]
By our previous discussion, any \(\frac{N-ap}{b}\) in \(\mathcal{A}_{1}\) is relatively prime to \(b\), so \(\eta\left(X_{\mathcal{A}_{1}},n\right)=0\) for any integer \(n\) that shares a common prime divisor with \(b\). If \(n\) and \(a\) share a common prime divisor \(r\), say \(n=rn^{\prime}\) and \(a=ra^{\prime}\), then \(\frac{N-ap}{bn}=\frac{N-ra^{\prime}p}{brn^{\prime}}\in\mathbb{Z}\) implies \(r\mid N\), which is a contradiction to \((a,N)=1\). Similarly, we have \(\eta\left(X_{\mathcal{A}_{1}},n\right)=0\) if \((n,N)>1\). We conclude that \(\eta\left(X_{\mathcal{A}_{1}},n\right)=0\) if \((n,abN)>1\).
For a square-free integer \(n\leqslant D_{\mathcal{A}_{1}}\) such that \((n,abN)=1\), to make \(n\mid\frac{N-ap}{b}\) for some \(\frac{N-ap}{b}\in\mathcal{A}_{1}\), we need \(ap\equiv N(\operatorname{mod}bn)\), which implies \(ap\equiv N+kbn\)\(\left(\operatorname{mod}b^{2}n\right)\) for some \(0\leqslant k\leqslant b-1\). Since \(\left(\frac{N-ap}{bn},b\right)=1\), we can further require \((k,b)=1\). When \(k\) runs through the reduced residues modulo \(b\), we know \(a_{b^{2}n}^{-1}k\) also runs through the reduced residues modulo \(b\). Therefore, we have \(p\equiv a_{b^{2}n}^{-1}N+kbn\left(\operatorname{mod}b^{2}n\right)\) for some \(0\leqslant k\leqslant b-1\) such that \((k,b)=1\). Conversely, if \(p=a_{b^{2}n}^{-1}N+kbn+mb^{2}n\) for some integer \(m\) and some \(0\leqslant k\leqslant b-1\) such that \((k,b)=1\), then \(\left(\frac{N-ap}{bn},b\right)=\left(\frac{N-aa_{b^{2}n}^{-1}N-akbn-amb^{2}n}{ bn},b\right)=(-ak,b)=1\). Therefore, for square-free integers \(n\) such that \((n,abN)=1\), we have
\[\eta\left(X_{\mathcal{A}_{1}},n\right)=\left|\sum_{\begin{subarray}{c}a\in\mathcal{A}_{1}\\ a\equiv 0\,(\text{mod }n)\end{subarray}}1-\frac{\omega(n)}{n}X_{\mathcal{A}_{1}}\right|=\left|\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\pi\left(\frac{N}{a};b^{2}n,a_{b^{2}n}^{-1}N+kbn\right)-\frac{X_{\mathcal{A}_{1}}}{\varphi(n)}\right|\]
\[\ll\left|\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\pi\left(\frac{N}{a};b^{2}n,a_{b^{2}n}^{-1}N+kbn\right)-\frac{\varphi(b)\pi\left(\frac{N}{a};1,1\right)}{\varphi\left(b^{2}n\right)}\right|+\left|\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\frac{\pi\left(\frac{N}{a};b^{2},a_{b^{2}}^{-1}N+kb\right)}{\varphi(n)}-\frac{\varphi(b)\pi\left(\frac{N}{a};1,1\right)}{\varphi\left(b^{2}n\right)}\right|\]
\[\ll\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\left|\pi\left(\frac{N}{a};b^{2}n,a_{b^{2}n}^{-1}N+kbn\right)-\frac{\pi\left(\frac{N}{a};1,1\right)}{\varphi\left(b^{2}n\right)}\right|+\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\left|\pi\left(\frac{N}{a};b^{2},a_{b^{2}}^{-1}N+kb\right)-\frac{\pi\left(\frac{N}{a};1,1\right)}{\varphi\left(b^{2}\right)}\right|.\]
\[S_{21} \geqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{1}{13. 2}}^{\frac{1}{8.4}}\int_{t_{1}}^{\frac{4.6}{13.2}-t_{1}}\frac{\log\left(5.6-13.2 \left(t_{1}+t_{2}\right)\right)}{t_{1}t_{2}\left(1-2\left(t_{1}+t_{2}\right) \right)}dt_{1}dt_{2}\right),\] \[S_{2} =S_{21}+S_{22}\] \[\geqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{1} {13.2}}^{\frac{1}{8.4}}\int_{t_{1}}^{\frac{4.6}{13.2}-t_{1}}\frac{\log\left(5.6 -13.2\left(t_{1}+t_{2}\right)\right)}{t_{1}t_{2}\left(1-2\left(t_{1}+t_{2} \right)\right)}dt_{1}dt_{2}\right)\] \[\geqslant 5.201296\frac{C(N)N}{ab(\log N)^{2}}, \tag{43}\]
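The numerical constant in (43) comes from the displayed double integral. The following is a minimal sketch of a numerical check (it assumes Python with `numpy` and `scipy`, which are of course not part of the paper; the constant 5.201296 is copied from (43), not re-derived):

```python
import numpy as np
from scipy.integrate import dblquad

# Integrand of the double integral in (43); t1 is the outer variable.
f = lambda t2, t1: np.log(5.6 - 13.2 * (t1 + t2)) / (t1 * t2 * (1.0 - 2.0 * (t1 + t2)))

# t1 in [1/13.2, 1/8.4], t2 in [t1, 4.6/13.2 - t1].
I, err = dblquad(f, 1 / 13.2, 1 / 8.4, lambda t1: t1, lambda t1: 4.6 / 13.2 - t1)
print(8 * I)  # compare with the constant 5.201296 in (43)
```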
\[S_{31} \leqslant (1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\log\frac{4.1001(13.2- 2)}{13.2-8.2002}+\int_{2}^{4.6}\frac{\log(s-1)}{s}\log\frac{5.6(5.6-s)}{s+1}ds\right.\] \[+\int_{2}^{2.6}\frac{\log(s-1)}{s}ds\int_{s+2}^{4.6}\frac{1}{t} \log\frac{t-1}{s+1}\log\frac{5.6(5.6-t)}{t+1}dt\right)\leqslant 21.9016\frac{C(N)N}{ ab(\log N)^{2}},\]
\[S_{32}\leqslant (1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\log\frac{3.6(13.2-2)}{13.2 -7.2}+\int_{2}^{4.6}\frac{\log(s-1)}{s}\log\frac{5.6(5.6-s)}{s+1}ds\right.\] \[\left.+\int_{2}^{2.6}\frac{\log(s-1)}{s}ds\int_{s+2}^{4.6}\frac{1 }{t}\log\frac{t-1}{s+1}\log\frac{5.6(5.6-t)}{t+1}dt\right)\leqslant 19.40136 \frac{C(N)N}{ab(\log N)^{2}},\] \[S_{3}= S_{31}+S_{32}\leqslant 41.30296\frac{C(N)N}{ab(\log N)^{2}}. \tag{44}\]
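The bracketed quantities bounding \(S_{31}\) and \(S_{32}\) admit the same kind of numerical check. A sketch under the same assumptions as before; all limits and constants are copied from the displays above, and the printed values should not exceed the stated 21.9016 and 19.40136:

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Common single integral over s in [2, 4.6].
single, _ = quad(lambda s: np.log(s - 1) / s * np.log(5.6 * (5.6 - s) / (s + 1)), 2, 4.6)

# Common double integral: s in [2, 2.6], t in [s + 2, 4.6].
g = lambda t, s: (np.log(s - 1) / s) / t * np.log((t - 1) / (s + 1)) * np.log(5.6 * (5.6 - t) / (t + 1))
double, _ = dblquad(g, 2, 2.6, lambda s: s + 2, lambda s: 4.6)

s31 = 8 * (np.log(4.1001 * (13.2 - 2) / (13.2 - 8.2002)) + single + double)
s32 = 8 * (np.log(3.6 * (13.2 - 2) / (13.2 - 7.2)) + single + double)
print(s31, s32, s31 + s32)  # compare with 21.9016, 19.40136 and 41.30296 in (44)
```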
### Evaluation of \(S_{4},S_{7}\)
Let \(D_{\mathcal{B}_{1}}=N^{1/2}(\log N)^{-B}\). By Chen's role-reversal trick and similar arguments in [7], we know that
\[|\mathcal{E}_{1}|<\left(\frac{N}{b}\right)^{\frac{2}{3}},\quad\left(\frac{N}{ b}\right)^{\frac{1}{3}}<e\leqslant\left(\frac{N}{b}\right)^{\frac{2}{3}}\text{ for }\,e\in\mathcal{E}_{1},\quad S_{41}\leqslant S\left(\mathcal{B}_{1};\mathcal{P},D_{ \mathcal{B}_{1}}^{\frac{1}{2}}\right)+O\left(N^{\frac{2}{3}}\right). \tag{45}\]
Then we can take
\[X_{\mathcal{B}_{1}}=\sum_{(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1} \leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp_{1}})^{\frac{ 1}{2}}\atop 0\leqslant j\leqslant a-1,(j,a)=1}\pi\left(\frac{N}{bp_{1}p_{2}};a ^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right) \tag{46}\]
so that \(|\mathcal{B}_{1}|\sim X_{\mathcal{B}_{1}}\). By Lemma 3.5 for \(z_{\mathcal{B}_{1}}=D_{\mathcal{B}_{1}}^{\frac{1}{2}}=N^{\frac{1}{4}}(\log N) ^{-B/2}\) we have
\[W(z_{\mathcal{B}_{1}})=\frac{8e^{-\gamma}C(N)(1+o(1))}{\log N},\quad F(2)=e^{ \gamma}. \tag{47}\]
By Lemma 3.6 and integration by parts we get that
\[X_{\mathcal{B}_{1}}=(1+o(1))\sum_{(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp_{1}})^{\frac{1}{2}}}\frac{\varphi(a)\frac{N}{bp_{1}p_{2}}}{\varphi\left(a^{2}\right)\log\left(\frac{N}{bp_{1}p_{2}}\right)}\]
\[=(1+o(1))\frac{N}{ab}\sum_{(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp_{1}})^{\frac{1}{2}}}\frac{1}{p_{1}p_{2}\log\left(\frac{N}{bp_{1}p_{2}}\right)}\]
\[=(1+o(1))\frac{N}{ab}\int_{(\frac{N}{b})^{\frac{1}{13.2}}}^{(\frac{N}{b})^{\frac{1}{3}}}\frac{dt}{t\log t}\int_{(\frac{N}{b})^{\frac{1}{3}}}^{(\frac{N}{bt})^{\frac{1}{2}}}\frac{du}{u\log u\log\left(\frac{N}{but}\right)}\]
\[=(1+o(1))\frac{N}{ab\log N}\int_{2}^{12.2}\frac{\log\left(2-\frac{3}{s+1}\right)}{s}ds. \tag{48}\]
To deal with the error terms, we have
\[\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{B}_{1}}\\ n\mid P(z_{\mathcal{B}_{1}})\end{subarray}}\eta\left(X_{\mathcal{B}_{1}},n \right)\ll\sum_{n\leqslant D_{\mathcal{B}_{1}}}\mu^{2}(n)\eta\left(X_{ \mathcal{B}_{1}},n\right). \tag{49}\]
For an integer \(n\) such that \((n,abN)>1\), similarly to the discussion for \(\eta\left(X_{\mathcal{A}_{1}},n\right)\), we have \(\eta\left(X_{\mathcal{B}_{1}},n\right)=0\).
For a square-free integer \(n\) such that \((n,abN)=1\), if \(n\mid\frac{N-bp_{1}p_{2}p_{3}}{a}\), then \((p_{1},n)=1\) and \((p_{2},n)=1\). Moreover, if \(\left(\frac{N-bp_{1}p_{2}p_{3}}{an},a\right)=1\), then we have \(bp_{1}p_{2}p_{3}\equiv N+jan\left(\text{mod }a^{2}n\right)\) for some \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\). Conversely, if \(bp_{1}p_{2}p_{3}=N+jan+sa^{2}n\) for some integer \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\), some integer \(n\) relatively prime to \(p_{1}p_{2}\) such that \(an\mid(N-bp_{1}p_{2}p_{3})\), and some integer \(s\), then \(\left(\frac{N-bp_{1}p_{2}p_{3}}{an},a\right)=\left(-j,a\right)=1\). Since \(jbp_{1}p_{2}\) runs through the reduced residues modulo \(a\) when \(j\) runs through the reduced residues modulo \(a\) and \(\pi\left(x;k,1,1\right)=\pi\left(\frac{x}{k};1,1\right)\), for square-free integers \(n\) such that
\((n,abN)=1\), we have
\[\eta\left(X_{\mathcal{B}_{1}},n\right)= \left|\sum_{\begin{subarray}{c}a\in\mathcal{B}_{1}\\ a\equiv 0\;(\mathrm{mod}\;n)\end{subarray}}1-\frac{\omega(n)}{n}X_{\mathcal{B}_{1}} \right|=\left|\sum_{\begin{subarray}{c}a\in\mathcal{B}_{1}\\ a\equiv 0\;(\mathrm{mod}\;n)\end{subarray}}1-\frac{X_{\mathcal{B}_{1}}}{ \varphi(n)}\right|\] \[= \left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p _{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp_{1}})^{ \frac{1}{2}},(p_{1}p_{2},N)=1\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\pi\left(N;bp_{ 1}p_{2},a^{2}n,N+jan\right)\right.\] \[\left.-\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp _{1}})^{\frac{1}{2}}\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\frac{\pi \left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja \right)}{\varphi(n)}\right|\] \[\ll \left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp _{1}})^{\frac{1}{2}},(p_{1}p_{2},N)=1\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\left(\pi \left(N;bp_{1}p_{2},a^{2}n,N+jan\right)\right.\] \[\left.-\frac{\pi\left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_ {2}\right)_{a^{2}}^{-1}+ja\right)}{\varphi(n)}\right)\right|\] \[+\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp _{1}})^{\frac{1}{2}},(p_{1}p_{2},N)=1\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\frac{\pi \left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja \right)}{\varphi(n)}\] \[\ll \left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp _{1}})^{\frac{1}{2}},(p_{1}p_{2},N)=1\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\left(\pi \left(N;bp_{1}p_{2},a^{2}n,N+jan\right)-\frac{\pi\left(N;bp_{1}p_{2},1,1 \right)}{\varphi\left(a^{2}n\right)}\right)\right|\] \[+N^{\frac{12.2}{15.2}}(\log N)^{2}\] \[\ll \left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2}\leqslant(\frac{N}{bp _{1}})^{\frac{1}{2}},(p_{1}p_{2},N)=1\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\left(\pi \left(N;bp_{1}p_{2},a^{2}n,N+jan\right)-\frac{\pi\left(N;bp_{1}p_{2},1,1 \right)}{\varphi\left(a^{2}n\right)}\right)\right|\] \[+\frac{1}{\varphi(n)}\left|\sum_{\begin{subarray}{c}(\frac{N}{b})^ {\frac{1}{13.2}}\leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3}}<p_{2} \leqslant(\frac{N}{bp_{1}})^{\frac{1}{2}},(p_{1}p_{2},N)=1\\ (p_{1}p_{2},n)=1,0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\left(\pi \left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja \right)-\frac{\pi\left(\frac{N}{bp_{1}p_{2}};1,1\right)}{\varphi\left(a^{2} \right)}\right)\right|\]
\[+N^{\frac{12.2}{13.2}}(\log N)^{2}. \tag{50}\]
By Lemma 4.1 with
\[g(k)=\begin{cases}1,&\text{ if }k\in\mathcal{E}_{1}\\ 0,&\text{ otherwise}\end{cases}\]
and Lemma 3.7, we have
\[\sum_{n\leqslant D_{\mathcal{B}_{1}}}\mu^{2}(n)\eta\left(X_{\mathcal{B}_{1}},n\right)=\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{B}_{1}}\\ (n,abN)=1\end{subarray}}\mu^{2}(n)\eta\left(X_{\mathcal{B}_{1}},n\right)\ll N(\log N)^{-3}. \tag{51}\]
Then by (45)-(51) and some routine arguments we have
\[S_{41}\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\int_{2}^{12.2}\frac{\log \left(2-\frac{3}{s+1}\right)}{s}ds.\]
Similarly, we have
\[S_{42}\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\int_{2.604}^{7.4}\frac{ \log\left(2.604-\frac{3.604}{s+1}\right)}{s}ds,\]
\[S_{4}=S_{41}+S_{42}\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{ 2}^{12.2}\frac{\log\left(2-\frac{3}{s+1}\right)}{s}ds+\int_{2.604}^{7.4}\frac {\log\left(2.604-\frac{3.604}{s+1}\right)}{s}ds\right)\]
\[\leqslant 10.69152\frac{C(N)N}{ab(\log N)^{2}}, \tag{52}\]
\[S_{7}\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\int_{2}^{2.604}\frac{\log (s-1)}{s}ds\leqslant 0.5160672\frac{C(N)N}{ab(\log N)^{2}}. \tag{53}\]
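A sketch of the corresponding numerical check for (52) and (53), under the same assumptions as before (the factor 8 and the integration limits are exactly those displayed above):

```python
import numpy as np
from scipy.integrate import quad

i41, _ = quad(lambda s: np.log(2 - 3 / (s + 1)) / s, 2, 12.2)
i42, _ = quad(lambda s: np.log(2.604 - 3.604 / (s + 1)) / s, 2.604, 7.4)
i7, _ = quad(lambda s: np.log(s - 1) / s, 2, 2.604)

print(8 * (i41 + i42))  # compare with 10.69152 in (52)
print(8 * i7)           # compare with 0.5160672 in (53)
```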
### Evaluation of \(S_{6}\)
Let \(D_{\mathcal{C}_{1}}=N^{1/2}(\log N)^{-B}\). By Chen's role-reversal trick and similar arguments in [2], we know that
\[|\mathcal{F}_{1}|<\left(\frac{N}{b}\right)^{\frac{12.2}{13.2}},\quad\left( \frac{N}{b}\right)^{\frac{1}{4}}<e\leqslant\left(\frac{N}{b}\right)^{\frac{1 2.2}{13.2}}\text{ for }\,e\in\mathcal{F}_{1},\quad S_{61}\leqslant S\left( \mathcal{C}_{1};\mathcal{P},D_{\mathcal{C}_{1}}^{\frac{1}{2}}\right)+O\left( N^{\frac{12.2}{13.2}}\right). \tag{54}\]
By Lemma 3.5 for \(z_{\mathcal{C}_{1}}=D_{\mathcal{C}_{1}}^{\frac{1}{2}}=N^{\frac{1}{4}}(\log N) ^{-B/2}\) we have
\[W(z_{\mathcal{C}_{1}})=\frac{8e^{-\gamma}C(N)(1+o(1))}{\log N},\quad F(2)=e^{ \gamma}. \tag{55}\]
By Lemma 3.3 we have
\[|\mathcal{C}_{1}|=\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{4}\in\mathcal{F}_{1}\\ p_{3}\equiv N(bmp_{1}p_{2}p_{4})_{a^{2}}^{-1}+ja\,(\text{mod }a^{2})\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1\]
\[=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{4}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2}p_{3}p_{4},N)=1\end{subarray}}\sum_{\begin{subarray}{c}1\leqslant m\leqslant\frac{N}{bp_{1}p_{2}p_{3}p_{4}}\\ (m,p_{1}^{-1}abNP(p_{4}))=1\end{subarray}}\frac{\varphi(a)}{\varphi(a^{2})}+O\left(N^{\frac{12.2}{13.2}}\right)\]
\[<(1+o(1))\frac{N}{ab}\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{4}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (p_{1}p_{2}p_{3}p_{4},N)=1\end{subarray}}\frac{0.5617}{p_{1}p_{2}p_{3}p_{4}\log p_{4}}+O\left(N^{\frac{12.2}{13.2}}\right)\]
\[=(1+o(1))\frac{0.5617N}{ab\log N}\int_{\frac{1}{13.2}}^{\frac{1}{8.4}}\frac{dt_{1}}{t_{1}}\int_{t_{1}}^{\frac{1}{8.4}}\frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\log\frac{1}{8.4t_{2}}dt_{2}. \tag{56}\]
To deal with the error terms, we have
\[\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{C}_{1}}\\ n|P\left(z_{\mathcal{C}_{1}}\right)\end{subarray}}\eta\left(|\mathcal{C}_{1}|,n \right)\ll\sum_{n\leqslant D_{\mathcal{C}_{1}}}\mu^{2}(n)\eta\left(|\mathcal{C} _{1}|,n\right). \tag{57}\]
Since \(\omega(p)=0\) for primes \(p\mid abN\) and \(\omega(p)=\frac{p}{p-1}\) for other primes, for an integer \(n\) such that \((n,abN)>1\), similarly to the discussion for \(\eta\left(X_{\mathcal{B}_{1}},n\right)\), we have \(\eta\left(|\mathcal{C}_{1}|,n\right)=0\).
For a square-free integer \(n\) that is relatively prime to \(abN\), if \(n\mid\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{a}\), then \((p_{1},n)=1,(p_{2},n)=1\) and \((p_{4},n)=1\). Moreover, if \(\left(\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{an},a\right)=1\), then we have \(bmp_{1}p_{2}p_{3}p_{4}\equiv N+jan\left(\text{mod }a^{2}n\right)\) for some \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\). Conversely, if \(bmp_{1}p_{2}p_{3}p_{4}=N+jan+sa^{2}n\) for some integer \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\), some integer \(n\) relatively prime to \(p_{1}p_{2}p_{4}\) such that \(an\mid(N-bmp_{1}p_{2}p_{3}p_{4})\), and some integer \(s\), then \(\left(\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{an},a\right)=(-j,a)=1\). Since \(jbmp_{1}p_{2}p_{4}\) runs through the reduced residues modulo \(a\) when \(j\) runs through the reduced residues modulo \(a\) and \(\pi\left(x;k,1,1\right)=\pi\left(\frac{x}{k};1,1\right)\), for a square-free integer \(n\) relatively prime to \(abN\), we have
\[\eta\left(|\mathcal{C}_{1}|,n\right)=\left|\sum_{\begin{subarray}{c}a\in\mathcal{C}_{1}\\ a\equiv 0\,(\text{mod }n)\end{subarray}}1-\frac{\omega(n)}{n}|\mathcal{C}_{1}|\right|=\left|\sum_{\begin{subarray}{c}a\in\mathcal{C}_{1}\\ a\equiv 0\,(\text{mod }n)\end{subarray}}1-\frac{|\mathcal{C}_{1}|}{\varphi(n)}\right| \tag{58}\]
\[\leqslant\left|\sum_{\begin{subarray}{c}e\in\mathcal{F}_{1}\\ (e,n)=1\end{subarray}}\left(\sum_{\begin{subarray}{c}p_{2}<p_{3}<\min\left((\frac{N}{b})^{\frac{1}{8.4}},\frac{N}{be}\right)\\ bep_{3}\equiv N+jan\,(\text{mod }a^{2}n)\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1-\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}p_{2}<p_{3}<\min\left((\frac{N}{b})^{\frac{1}{8.4}},\frac{N}{be}\right)\\ p_{3}\equiv N(bmp_{1}p_{2}p_{4})_{a^{2}}^{-1}+ja\,(\text{mod }a^{2})\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1\right)\right|\]
\[+\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}e\in\mathcal{F}_{1}\\ (e,n)>1\end{subarray}}\sum_{\begin{subarray}{c}p_{2}<p_{3}<\min\left((\frac{N}{b})^{\frac{1}{8.4}},\frac{N}{be}\right)\\ p_{3}\equiv N(bmp_{1}p_{2}p_{4})_{a^{2}}^{-1}+ja\,(\text{mod }a^{2})\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1.\]
Let
\[g(k)=\sum_{\begin{subarray}{c}e\in\mathcal{F}_{1}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1,\]
then
\[\eta\left(|\mathcal{C}_{1}|,n\right)\ll \left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{4}, \epsilon}<k<(\frac{N}{b})^{\frac{12.2}{15.2}}\\ (k,n)=1\end{subarray}}g(k)\left(\sum_{\begin{subarray}{c}p_{2}<p_{3}<\min\left(( \frac{N}{b}\right)^{\frac{1}{8.4}},(\frac{N}{bc})\right)\\ bkp_{3}\equiv N+jan\left(\text{mod }a^{2}n\right)\end{subarray}}1-\frac{1}{\varphi(n)}\sum_{ \begin{subarray}{c}p_{2}<p_{3}<\min\left((\frac{N}{b}\right)^{\frac{1}{8.4}},( \frac{N}{bc})\right)\\ p_{3}\equiv N\left(bmp_{1}p_{2}p_{4}\right)_{a}^{-1}ja\left(\text{mod }a^{2}\right) \end{subarray}}1\right)\right|\] \[+N^{\frac{12.2}{15.2}}(\log N)^{2}\] \[= \left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{4}, \epsilon}<k<(\frac{N}{b})^{\frac{7.4}{8.4}}\\ (k,n)=1\end{subarray}}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^{\frac{1}{8.4}};bk,a^{2}n,N+jan\right)-\frac{\pi\left(\left(\frac{N}{b}\right)^{\frac{1} {8.4}},a^{2},N(bmp_{1}p_{2}p_{4}\right)_{a^{2}}^{-1}+ja\right)}{\varphi(n)} \right)\right|\]
\[+\left|\sum_{(\frac{N}{b})^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac{12 }{b+2}}\atop(k,n)=1}g(k)\left(\pi\left(N;bk,a^{2}n,N+jan\right)-\frac{\pi\left( bk;a^{2},N(bmp_{1}p_{2}p_{4})_{a^{2}}^{-1}+ja\right)}{\varphi(n)}\right)\right|\] \[+\left|\sum_{(\frac{N}{b})^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac {12}{b+2}}\atop(k,n)=1}g(k)\left(\pi\left(bkp_{2};bk,a^{2}n,N+jan\right)-\frac {\pi\left(p_{2};a^{2},N(bmp_{1}p_{2}p_{4})_{a^{2}}^{-1}+ja\right)}{\varphi(n)} \right)\right|\] \[+N^{\frac{12}{b+2}}(\log N)^{2}\] \[\ll\] \[+\left|\sum_{(\frac{N}{b})^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac {7}{b+4}}\atop(k,n)=1}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^{\frac{1}{ b+4}};bk,a^{2}n,N+jan\right)-\frac{\pi\left(bk\left(\frac{N}{b}\right)^{\frac{1}{ b+4}};bk,1,1\right)}{\varphi(a^{2}n)}\right)\right|\] \[+\left|\sum_{(\frac{N}{b})^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac {7}{b+4}}\atop(k,n)=1}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^{\frac{1}{ b+4}};bk,a^{2}n,N+jan\right)-\frac{\pi\left(bk\left(\frac{N}{b}\right)^{\frac{1}{ b+4}};bk,1,1\right)}{\varphi(a^{2}n)}\right)\right|\] \[+\left|\sum_{(\frac{N}{b})^{\frac{7}{b+4}}<k<(\frac{N}{b})^{\frac {12}{b+2}}\atop(k,n)=1}g(k)\left(\pi\left(N;bk,a^{2}n,N+jan\right)-\frac{\pi \left(N;bk,1,1\right)}{\varphi(a^{2}n)}\right)\right|\] \[+\left|\sum_{(\frac{N}{b})^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac {12}{b+2}}\atop(k,n)=1}g(k)\left(\pi\left(bk\left(\frac{N}{b};a^{2}n,N+jan \right)-\frac{\pi\left(bk\left(\frac{N}{b};bk,1,1\right)\right)}{\varphi(a^{2 }n)}\right)\right|\] \[+\left|\sum_{(\frac{N}{b})^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac {12}{b+2}}\atop(k,n)=1}g(k)\left(\pi\left(bk\left(\frac{N}{b}\right)^{\frac{1} {b+4}};bk,a^{2}n,N+jan\right)-\frac{\pi\left(bk\left(\frac{N}{b}\right)^{ \frac{1}{b+4}};bk,1,1\right)}{\varphi(a^{2}n)}\right)\right|\] \[+N^{\frac{12}{b+2}}(\log N)^{2}\] \[\ll\]
\[+\frac{1}{\varphi(n)}\left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{ \frac{1}{b+4}}<k<(\frac{N}{b})^{\frac{12.2}{b+2}}\\ (k,n)=1\end{subarray}}g(k)\left(\pi\left(\left(\frac{N}{b}\right)^{\frac{1}{b +4}};a^{2},N(bmp_{1}p_{2}p_{4})_{a^{2}}^{-1}+ja\right)-\frac{\pi\left(\left( \frac{N}{b}\right)^{\frac{1}{b+4}};1,1\right)}{\varphi(a^{2})}\right)\right|\] \[+\left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{7}{b+4}}<k<( \frac{N}{b})^{\frac{12.2}{b+2}}\\ (k,n)=1\end{subarray}}g(k)\left(\pi\left(N;bk,a^{2}n,N+jan\right)-\frac{\pi \left(N;bk,1,1\right)}{\varphi(a^{2}n)}\right)\right|\] \[+\frac{1}{\varphi(n)}\left|\sum_{\begin{subarray}{c}(\frac{N}{b} )^{\frac{7}{b+4}}<k<(\frac{N}{b})^{\frac{12.2}{b+2}}\\ (k,n)=1\end{subarray}}g(k)\left(\pi\left(\frac{N}{bk};a^{2},N(bmp_{1}p_{2}p_{4 })_{a^{2}}^{-1}+ja\right)-\frac{\pi\left(\frac{N}{bk};1,1\right)}{\varphi(a^{ 2})}\right)\right|\] \[+\left|\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{b+4}}<k<( \frac{N}{b})^{\frac{12.2}{b+2}}\\ (k,n)=1\end{subarray}}g(k)\left(\pi\left(bkp_{2};bk,a^{2}n,N+jan\right)-\frac{ \pi\left(bkp_{2};bk,1,1\right)}{\varphi(a^{2}n)}\right)\right|\] \[+\frac{1}{\varphi(n)}\left|\sum_{\begin{subarray}{c}(\frac{N}{b} )^{\frac{1}{b+4}}<k<(\frac{N}{b})^{\frac{12.2}{b+2}}\\ (k,n)=1\end{subarray}}g(k)\left(\pi\left(p_{2};a^{2},N(bmp_{1}p_{2}p_{4})_{a^{ 2}}^{-1}+ja\right)-\frac{\pi\left(p_{2};1,1\right)}{\varphi(a^{2})}\right)\right|\] \[+N^{\frac{12.2}{13.2}}(\log N)^{2}. \tag{59}\]
By Lemma 4.1, Remark 4.2 and Lemma 3.7, we have
\[\sum_{n\leqslant D_{\mathcal{C}_{1}}}\mu^{2}(n)\eta\left(|\mathcal{C}_{1}|,n \right)=\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{C}_{1}}\\ (n,abN)=1\end{subarray}}\mu^{2}(n)\eta\left(|\mathcal{C}_{1}|,n\right)\ll N( \log N)^{-3}. \tag{60}\]
By (54)-(60) we have
\[S_{61} \leqslant(1+o(1))\frac{0.5617\times 8C(N)N}{ab(\log N)^{2}} \int_{\frac{1}{13.2}}^{\frac{1}{8.4}}\frac{dt_{1}}{t_{1}}\int_{t_{1}}^{\frac{ 1}{8.4}}\frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\log\frac{ 1}{8.4t_{2}}dt_{2}\] \[\leqslant 0.0864362\frac{C(N)N}{ab(\log N)^{2}}. \tag{61}\]
Similarly, we have
\[S_{62}=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}} \leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}\leqslant p_{4}<( \frac{N}{b})^{\frac{1}{8.4}}\\ +\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{2} <p_{3}<(\frac{N}{b})^{\frac{1}{8.4}}<(\frac{N}{b})^{\frac{1}{8.4}}\\ (\frac{N}{b})^{\frac{1}{13.2}}\leqslant p_{1}<p_{2}<p_{3}<(\frac{N}{b})^{\frac{1 }{8.4}}<(\frac{N}{b})^{\frac{1}{8.4}}<p_{4}<(\frac{N}{b})^{\frac{4.6}{13.2}}p_{ 3}^{-1}\end{subarray}}S\left(\mathcal{A}_{p_{1}p_{2}p_{3}p_{4}};\mathcal{P} \left(p_{1}\right),p_{2}\right)\] \[\leqslant(1+o(1))\frac{0.5617\times 8C(N)N}{ab(\log N)^{2}}\left(21. 6\log\frac{13.2}{8.4}-9.6\right)\log 1.4\] \[+(1+o(1))\frac{0.5644\times 8C(N)N}{ab(\log N)^{2}}\int_{\frac{1}{13.2}}^{ \frac{1}{8.4}}\frac{dt_{1}}{t_{1}}\int_{t_{1}}^{\frac{1}{8.4}}\frac{1}{t_{2}} \left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\log\left(\frac{8.4}{1.4}\left( \frac{4.6}{13.2}-t_{2}\right)\right)dt_{2}\] \[\leqslant 0.5208761\frac{C(N)N}{ab(\log N)^{2}}. \tag{62}\]
By (61) and (62) we have
\[S_{6}=S_{61}+S_{62} \leqslant 0.0864362\frac{C(N)N}{ab(\log N)^{2}}+0.5208761\frac{C(N)N}{ ab(\log N)^{2}}\] \[\leqslant 0.6073123\frac{C(N)N}{ab(\log N)^{2}}. \tag{63}\]
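The double integral in (61), inherited from the bound (56) on \(|\mathcal{C}_{1}|\), can be checked in the same way (a minimal sketch assuming `scipy`; the constant 0.5617 and the limits are those of the text):

```python
import numpy as np
from scipy.integrate import dblquad

# Integrand of the double integral in (56)/(61); the 1/t1 factor from the
# outer integral is folded into the integrand.
f = lambda t2, t1: (1 / t1) * (1 / t2) * (1 / t1 - 1 / t2) * np.log(1 / (8.4 * t2))

I, _ = dblquad(f, 1 / 13.2, 1 / 8.4, lambda t1: t1, lambda t1: 1 / 8.4)
print(8 * 0.5617 * I)  # compare with 0.0864362 in (61)
```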
### Evaluation of \(S_{5}\)
For \(p\geqslant\left(\frac{N}{b}\right)^{\frac{4}{13.2}}\) we have
\[\underline{p}^{\frac{1}{2.5}}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{13.2 }},\quad S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{\frac{1} {13.2}}\right)\leqslant S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{ \frac{1}{2.5}}\right).\]
By Lemma 5.5 we have
\[S_{51} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}}\leqslant p <(\frac{N}{b})^{\frac{1}{3}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{13.2}}\right)\] \[\leqslant\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{ \frac{1}{2.5}}\right)\leqslant\Gamma_{1}-\frac{1}{2}\Gamma_{2}+\frac{1}{2} \Gamma_{3}+O\left(N^{\frac{10}{20}}\right). \tag{64}\]
By Lemmas 3.1, 3.2, 4.1 and some routine arguments we get
\[\Gamma_{1}=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}}\leqslant p<(\frac{N}{b})^{\frac{1}{3}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{4}{13.2}}^{\frac{1}{3}}\frac{dt}{t(1-2t)}\right)\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt\right), \tag{65}\]
\[\Gamma_{2}=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}}\leqslant p<(\frac{N}{b})^{\frac{1}{3}}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}\underline{p}^{\frac{1}{3.675}}\leqslant p_{1}<\underline{p}^{\frac{1}{2.5}}\\ (p_{1},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}};\mathcal{P},\underline{p}^{\frac{1}{3.675}}\right)\geqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{4}{13.2}}^{\frac{1}{3}}\frac{dt}{t(1-2t)}\right)\left(\int_{1.5}^{2.675}\frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt\right). \tag{66}\]
By an argument similar to the evaluation of \(S_{61}\) we get that
\[\Gamma_{3} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}p\frac{1}{3.675}\leqslant p_{1 }<p_{2}<p_{3}<p\frac{1}{2.5}\\ (p_{1}p_{2}p_{3},N)=1\end{subarray}}S\left(\mathcal{A}_{pp_{1}p_{2}p_{3}}; \mathcal{P}(p_{1}),p_{2}\right)\] \[\leqslant(1+o(1))\frac{16C(N)N}{1.763ab(\log N)^{2}}\left(\int_{ \frac{4}{13.2}}^{\frac{1}{3}}\frac{dt}{t(1-2t)}\right)\left(6.175\log\frac{3.6 75}{2.5}-2.35\right). \tag{67}\]
By (64)-(67) we have
\[S_{51} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4}{13.2}} \leqslant p<(\frac{N}{b})^{\frac{1}{3}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{13.2}}\right)\] \[\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{4 }{13.2}}^{\frac{1}{3}}\frac{dt}{t(1-2t)}\right)\times\] \[\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt-\frac{1}{2}\int_{1. 5}^{2.675}\frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt+\frac{1}{1.763} \left(6.175\log\frac{3.675}{2.5}-2.35\right)\right).\]
Similarly, we have
\[S_{52}=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{3.64}}\leq p<(\frac{N}{b })^{\frac{1}{3.604}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{8.4}}\right)\]
\[\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{4}{13.64}}^{ \frac{1}{3.604}}\frac{dt}{t(1-2t)}\right)\times\]
\[\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt-\frac{1}{2}\int_{1.5}^{2.675} \frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt+\frac{1}{1.763}\left(6.1 75\log\frac{3.675}{2.5}-2.35\right)\right)\]
\[S_{5}=S_{51}+S_{52}\]
\[\leqslant(1+o(1))\frac{8C(N)N}{ab(\log N)^{2}}\left(\int_{\frac{4}{13.2}}^{ \frac{1}{3}}\frac{dt}{t(1-2t)}+\int_{\frac{3.64}{13.2}}^{\frac{1}{3.604}}\frac {dt}{t(1-2t)}\right)\times\]
\[\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt-\frac{1}{2}\int_{1.5}^{2.675} \frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt+\frac{1}{1.763}\left(6.1 75\log\frac{3.675}{2.5}-2.35\right)\right)\]
\[\leqslant 1.87206\frac{C(N)N}{ab(\log N)^{2}}. \tag{68}\]
### Proof of Theorem 1.1
By (42)-(44), (52)-(53), (63) and (68) we get
\[S_{1}+S_{2}\geqslant 58.974416\frac{C(N)N}{ab(\log N)^{2}},\]
\[S_{3}+S_{4}+S_{5}+S_{6}+2S_{7}\leqslant 55.505987\frac{C(N)N}{ab(\log N)^{2}},\]
\[4R_{a,b}(N)\geqslant(S_{1}+S_{2})-(S_{3}+S_{4}+S_{5}+S_{6}+2S_{7})\geqslant 3.468429\frac{C(N)N}{ab(\log N)^{2}},\]
\[R_{a,b}(N)\geqslant 0.8671\frac{C(N)N}{ab(\log N)^{2}}.\]
Theorem 1.1 is proved.
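The final constants are plain arithmetic on the bounds collected in (43), (44), (52), (53), (63) and (68); a quick check (values copied from those displays, with \(S_{1}+S_{2}\geqslant 58.974416\) taken from the text):

```python
s3 = 41.30296    # (44)
s4 = 10.69152    # (52)
s5 = 1.87206     # (68)
s6 = 0.6073123   # (63)
s7 = 0.5160672   # (53)

upper = s3 + s4 + s5 + s6 + 2 * s7
lower = 58.974416 - upper

print(upper)      # 55.5059867, stated as 55.505987 after rounding
print(lower)      # 3.4684293
print(lower / 4)  # 0.8671072..., hence the constant 0.8671
```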
## 7. Proof of Theorem 1.2
In this section, the sets \(\mathcal{A}_{2}\), \(\mathcal{B}_{2}\), \(\mathcal{C}_{2}\), \(\mathcal{C}_{3}\), \(\mathcal{E}_{2}\), \(\mathcal{F}_{2}\) and \(\mathcal{F}_{3}\) are those defined in section 2. We define the function \(\omega\) as \(\omega(p)=0\) for primes \(p\mid abN\) and \(\omega(p)=\frac{p}{p-1}\) for other primes.
### Evaluation of \(S_{1}^{\prime},S_{2}^{\prime},S_{3}^{\prime}\)
Let \(D_{\mathcal{A}_{2}}=\left(\frac{N}{b}\right)^{\theta/2}\left(\log\left(\frac{ N}{b}\right)\right)^{-B}\) and \(D_{\mathcal{A}_{2_{p}}}=\frac{D_{\mathcal{A}_{2}}}{p}=\frac{\left(\frac{N}{b} \right)^{\theta/2}\left(\log\left(\frac{N}{b}\right)\right)^{-B}}{p}\) for some positive constant \(B\). By Lemma 3.6, we can take
\[X_{\mathcal{A}_{2}}=\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\pi\left(\frac{N^{\theta}}{a};b^{2},Na_{b^{2}}^{-1}+kb \right)\sim\frac{\varphi(b)N^{\theta}}{a\varphi\left(b^{2}\right)\log N^{\theta }}\sim\frac{N^{\theta}}{ab\theta\log N}. \tag{69}\]
so that \(|\mathcal{A}_{2}|\sim X_{\mathcal{A}_{2}}\). By Lemma 3.5 for \(z_{\mathcal{A}_{2}}=N^{\frac{1}{\alpha}}\) we have
\[W(z_{\mathcal{A}_{2}})=\frac{2\alpha e^{-\gamma}C(N)(1+o(1))}{\log N}. \tag{70}\]
To deal with the error terms, we have
\[\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{A}_{2}}\\ n|P(z_{\mathcal{A}_{2}})\end{subarray}}\eta\left(X_{\mathcal{A}_{2}},n\right) \ll\sum_{n\leqslant D_{\mathcal{A}_{2}}}\mu^{2}(n)\eta\left(X_{\mathcal{A}_{2}},n\right), \tag{71}\]
\[\sum_{p}\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{A}_{2}}\\ n|P(z_{\mathcal{A}_{2}})\end{subarray}}\eta\left(X_{\mathcal{A}_{2}},pn\right) \ll\sum_{n\leqslant D_{\mathcal{A}_{2}}}\mu^{2}(n)\eta\left(X_{\mathcal{A}_{2}}, n\right). \tag{72}\]
By our previous discussion, any \(\frac{N-ap}{b}\) in \(\mathcal{A}_{2}\) is relatively prime to \(b\), so \(\eta\left(X_{\mathcal{A}_{2}},n\right)=0\) for any integer \(n\) that shares a common prime divisor with \(b\). If \(n\) and \(a\) share a common prime divisor \(r\), say \(n=rn^{\prime}\) and \(a=ra^{\prime}\), then \(\frac{N-ap}{bn}=\frac{N-ra^{\prime}p}{brn^{\prime}}\in\mathbb{Z}\) implies \(r\mid N\), which is a contradiction to \((a,N)=1\). Similarly, we have \(\eta\left(X_{\mathcal{A}_{2}},n\right)=0\) if \((n,N)>1\). We conclude that \(\eta\left(X_{\mathcal{A}_{2}},n\right)=0\) if \((n,abN)>1\).
For a square-free integer \(n\leqslant D_{\mathcal{A}_{2}}\) such that \((n,abN)=1\), to make \(n\mid\frac{N-ap}{b}\) for some \(\frac{N-ap}{b}\in\mathcal{A}_{2}\), we need \(ap\equiv N(\operatorname{mod}bn)\), which implies \(ap\equiv N+kbn\)\(\left(\operatorname{mod}b^{2}n\right)\) for some \(0\leqslant k\leqslant b-1\). Since \(\left(\frac{N-ap}{bn},b\right)=1\), we can further require \((k,b)=1\). When \(k\) runs through the reduced residues modulo \(b\), we know \(a_{b^{2}n}^{-1}k\) also runs through the reduced residues modulo \(b\). Therefore, we have \(p\equiv a_{b^{2}n}^{-1}N+kbn\left(\operatorname{mod}b^{2}n\right)\) for some \(0\leqslant k\leqslant b-1\) such that \((k,b)=1\). Conversely, if \(p=a_{b^{2}n}^{-1}N+kbn+mb^{2}n\) for some integer \(m\) and some \(0\leqslant k\leqslant b-1\) such that \((k,b)=1\), then \(\left(\frac{N-ap}{bn},b\right)=\left(\frac{N-aa_{b^{2}n}^{-1}N-akbn-amb^{2}n}{ bn},b\right)=(-ak,b)=1\). Therefore, for square-free integers \(n\) such that \((n,abN)=1\), we have
\[\eta\left(X_{\mathcal{A}_{2}},n\right)= \left|\sum_{\begin{subarray}{c}a\in\mathcal{A}_{2}\\ a\equiv 0(\operatorname{mod}\ n)\end{subarray}}1-\frac{\omega(n)}{n}X_{ \mathcal{A}_{2}}\right|\] \[= \left|\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\pi\left(\frac{N^{\theta}}{a};b^{2}n,a_{b^{2}n}^{-1}N+ kbn\right)-\frac{X_{\mathcal{A}_{2}}}{\varphi(n)}\right|\] \[\ll \left|\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\pi\left(\frac{N^{\theta}}{a};b^{2}n,a_{b^{2}n}^{-1}N+ kbn\right)-\frac{\varphi(b)\pi\left(\frac{N^{\theta}}{a};1,1\right)}{\varphi \left(b^{2}n\right)}\right|\] \[+\left|\sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\frac{\pi\left(\frac{N^{\theta}}{a};b^{2},a_{b^{2}}^{-1}N +kb\right)}{\varphi(n)}-\frac{\varphi(b)\pi\left(\frac{N^{\theta}}{a};1,1 \right)}{\varphi\left(b^{2}n\right)}\right|\] \[\ll \sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\left|\pi\left(\frac{N^{\theta}}{a};b^{2}n,a_{b^{2}n}^{-1} N+kbn\right)-\frac{\pi\left(\frac{N^{\theta}}{a};1,1\right)}{\varphi\left(b^{2}n \right)}\right|\] \[+\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}0\leqslant k \leqslant b-1\\ (k,b)=1\end{subarray}}\left|\pi\left(\frac{N^{\theta}}{a};b^{2},a_{b^{2}}^{-1} N+kb\right)-\frac{\pi\left(\frac{N^{\theta}}{a};1,1\right)}{\varphi\left(b^{2} \right)}\right|. \tag{73}\]
By Lemmas 3.7 and 4.1 with \(g(k)=1\) for \(k=1\) and \(g(k)=0\) for \(k>1\), we know
\[\sum_{n\leqslant D_{\mathcal{A}_{2}}}\mu^{2}(n)\eta\left(X_{\mathcal{A}_{2}},n \right)\ll N^{\theta}(\log N)^{-3}. \tag{74}\]
Then by (69)-(74), Lemma 3.1, Lemma 3.2 and some routine arguments we have
\[S_{11}^{\prime}\geqslant X_{\mathcal{A}_{2}}W\left(z_{\mathcal{A}_{2}}\right) \left\{f\left(\frac{\theta/2}{1/14}\right)+O\left(\frac{1}{\log^{\frac{1}{3}} D}\right)\right\}-\sum_{\begin{subarray}{c}n<D_{\mathcal{A}_{2}}\\ n|P(z_{\mathcal{A}_{2}})\end{subarray}}3^{\nu(n)}\eta(X_{\mathcal{A}_{2}},n)\]
\[S^{\prime}_{31}\leqslant (1+o(1))\frac{8C(N)N^{\theta}}{ab\theta^{2}(\log N)^{2}}\left(\log \frac{4.0871(14\theta-2)}{14\theta-8.1742}\right.\] \[+\int_{2}^{7\theta-2}\frac{\log(s-1)}{s}\log\frac{(7\theta-1)(7 \theta-1-s)}{s+1}ds\] \[+\int_{2}^{7\theta-4}\frac{\log(s-1)}{s}ds\int_{s+2}^{7\theta-2} \frac{1}{t}\log\frac{t-1}{s+1}\log\frac{(7\theta-1)(7\theta-1-t)}{t+1}dt\right)\] \[\leqslant 24.636554\frac{C(N)N^{\theta}}{ab(\log N)^{2}},\] \[S^{\prime}_{32}\leqslant (1+o(1))\frac{8C(N)N^{\theta}}{ab\theta^{2}(\log N)^{2}}\left( \log\frac{(7\theta-3)(7\theta-1)}{3}\right.\]
\[+\int_{2}^{7\theta-2}\frac{\log(s-1)}{s}\log\frac{(7\theta-1)(7 \theta-1-s)}{s+1}ds\] \[+\int_{2}^{7\theta-4}\frac{\log(s-1)}{s}ds\int_{s+2}^{7\theta-2} \frac{1}{t}\log\frac{t-1}{s+1}\log\frac{(7\theta-1)(7\theta-1-t)}{t+1}dt\Bigg{)}\] \[\leqslant 21.808803\frac{C(N)N^{\theta}}{ab(\log N)^{2}},\] \[S_{3}^{\prime}= S_{31}^{\prime}+S_{32}^{\prime}\leqslant 46.445357\frac{C(N)N^{ \theta}}{ab(\log N)^{2}}. \tag{77}\]
### Evaluation of \(S_{4}^{\prime},S_{7}^{\prime}\)
Let \(D_{\mathcal{B}_{2}}=N^{\theta-1/2}(\log N)^{-B}\). By Chen's role-reversal trick and similar arguments in [1], we know that
\[|\mathcal{E}_{2}|<\left(\frac{N}{b}\right)^{\frac{2}{3}},\quad\left(\frac{N}{ b}\right)^{\frac{1}{3}}<e\leqslant\left(\frac{N}{b}\right)^{\frac{2}{3}}\text{ for }\,e\in\mathcal{E}_{2},\quad S_{41}^{\prime}\leqslant S\left(\mathcal{B}_{2}; \mathcal{P},D_{\mathcal{B}_{2}}^{\frac{1}{2}}\right)+O\left(N^{\frac{2}{3}} \right). \tag{78}\]
Then we can take
\[X_{\mathcal{B}_{2}}=\sum_{\begin{subarray}{c}(\frac{N}{b})^{ \frac{1}{4}}\leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2} \leqslant(\frac{N}{bp_{1}})^{\frac{1}{2}}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}\left(\pi\left(\frac{N}{bp_{1} p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)\right.\] \[\left.-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};a^{2},N\left(bp_ {1}p_{2}\right)_{a^{2}}^{-1}+ja\right)\right) \tag{79}\]
so that \(|\mathcal{B}_{2}|\sim X_{\mathcal{B}_{2}}\). By Lemma 3.5 for \(z_{\mathcal{B}_{2}}=D_{\mathcal{B}_{2}}^{\frac{1}{2}}=N^{\frac{2\theta-1}{4}}( \log N)^{-B/2}\) we have
\[W(z_{\mathcal{B}_{2}})=\frac{8e^{-\gamma}C(N)(1+o(1))}{(2\theta-1)\log N}, \quad F(2)=e^{\gamma}. \tag{80}\]
By Huxley's prime number theorem in short intervals and integration by parts we get that
\[X_{\mathcal{B}_{2}} =(1+o(1))\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2}\leqslant(\frac{N}{bp_{1}})^{\frac{1}{2}}}\frac{\varphi(a)\frac{N^{\theta}}{bp_{1}p_{2}}}{\varphi\left(a^{2}\right)\log\left(\frac{N}{bp_{1}p_{2}}\right)}\] \[=(1+o(1))\frac{N^{\theta}}{ab}\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}\leqslant(\frac{N}{b})^{\frac{1}{3.106}}<p_{2}\leqslant(\frac{N}{bp_{1}})^{\frac{1}{2}}}\frac{1}{p_{1}p_{2}\log\left(\frac{N}{bp_{1}p_{2}}\right)}\] \[=(1+o(1))\frac{N^{\theta}}{ab}\int_{(\frac{N}{b})^{\frac{1}{14}}}^{(\frac{N}{b})^{\frac{1}{3.106}}}\frac{dt}{t\log t}\int_{(\frac{N}{b})^{\frac{1}{3.106}}}^{(\frac{N}{bt})^{\frac{1}{2}}}\frac{du}{u\log u\log\left(\frac{N}{but}\right)}\] \[=(1+o(1))\frac{N^{\theta}}{ab\log N}\int_{2.106}^{13}\frac{\log\left(2.106-\frac{3.106}{s+1}\right)}{s}ds. \tag{81}\]
To deal with the error terms, we have
\[\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{B}_{2}}\\ n\mid P(z_{\mathcal{B}_{2}})\end{subarray}}\eta\left(X_{\mathcal{B}_{2}},n \right)\ll\sum_{n\leqslant D_{\mathcal{B}_{2}}}\mu^{2}(n)\eta\left(X_{ \mathcal{B}_{2}},n\right). \tag{82}\]
For an integer \(n\) such that \((n,abN)>1\), similarly to the discussion for \(\eta\left(X_{\mathcal{A}_{2}},n\right)\), we have \(\eta\left(X_{\mathcal{B}_{2}},n\right)=0\).
For a square-free integer \(n\) such that \((n,abN)=1\), if \(n\mid\frac{N-bp_{1}p_{2}p_{3}}{a}\), then \((p_{1},n)=1\) and \((p_{2},n)=1\). Moreover, if \(\left(\frac{N-bp_{1}p_{2}p_{3}}{an},a\right)=1\), then we have \(bp_{1}p_{2}p_{3}\equiv N+jan\left(\text{mod}\,a^{2}n\right)\) for some \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\). Conversely, if \(bp_{1}p_{2}p_{3}=N+jan+sa^{2}n\) for some integer \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\), some integer \(n\) relatively prime to \(p_{1}p_{2}\) such that \(an\mid(N-bp_{1}p_{2}p_{3})\), and some integer \(s\), then \(\left(\frac{N-bp_{1}p_{2}p_{3}}{an},a\right)=(-j,a)=1\). Since \(jbp_{1}p_{2}\) runs through the reduced residues modulo \(a\) when \(j\) runs
through the reduced residues modulo \(a\) and \(\pi\left(x;k,1,1\right)=\pi\left(\frac{x}{k};1,1\right)\), for square-free integers \(n\) such that \(\left(n,abN\right)=1\), we have
\[\eta\left(X_{\mathcal{B}_{2}},n\right)=\left|\sum_{\begin{subarray}{c}a\in\mathcal{B}_{2}\\ a\equiv 0\left(\bmod n\right)\end{subarray}}1-\frac{\omega(n)}{n}X_{\mathcal{B}_{2}}\right|=\left|\sum_{\begin{subarray}{c}a\in\mathcal{B}_{2}\\ a\equiv 0\left(\bmod n\right)\end{subarray}}1-\frac{X_{\mathcal{B}_{2}}}{\varphi(n)}\right|\]
\[=\left|\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)=1\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\left(\pi\left(N;bp_{1}p_{2},a^{2}n,N+jan\right)-\pi\left(N-N^{\theta};bp_{1}p_{2},a^{2}n,N+jan\right)\right)\right.\]
\[\left.-\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}}\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\frac{\pi\left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)}{\varphi(n)}\right|\]
\[\ll\left|\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)=1\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\left(\pi\left(N;bp_{1}p_{2},a^{2}n,N+jan\right)-\pi\left(N-N^{\theta};bp_{1}p_{2},a^{2}n,N+jan\right)\right.\right.\]
\[\left.\left.-\frac{\pi\left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)}{\varphi(n)}\right)\right|\]
\[+\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)>1\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\frac{\pi\left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)}{\varphi(n)}\]
\[\ll\left|\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)=1\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\left(\pi\left(N;bp_{1}p_{2},a^{2}n,N+jan\right)-\pi\left(N-N^{\theta};bp_{1}p_{2},a^{2}n,N+jan\right)\right.\right.\]
\[\left.\left.-\frac{\pi\left(N;bp_{1}p_{2},1,1\right)-\pi\left(N-N^{\theta};bp_{1}p_{2},1,1\right)}{\varphi\left(a^{2}n\right)}\right)\right|\]
\[+\left|\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)=1\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\left(\frac{\pi\left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)}{\varphi(n)}\right.\right.\]
\[\left.\left.-\frac{\pi\left(\frac{N}{bp_{1}p_{2}};1,1\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};1,1\right)}{\varphi\left(a^{2}n\right)}\right)\right|+N^{\frac{13}{14}}(\log N)^{2}\]
\[\ll\left|\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)=1\\ \left(p_{1}p_{2},n\right)=1,0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\left(\pi\left(N;bp_{1}p_{2},a^{2}n,N+jan\right)-\pi\left(N-N^{\theta};bp_{1}p_{2},a^{2}n,N+jan\right)\right.\right.\]
\[\left.\left.-\frac{\pi\left(N;bp_{1}p_{2},1,1\right)-\pi\left(N-N^{\theta};bp_{1}p_{2},1,1\right)}{\varphi\left(a^{2}n\right)}\right)\right|\]
\[+\frac{1}{\varphi(n)}\left|\sum_{\begin{subarray}{c}\left(\frac{N}{b}\right)^{\frac{1}{14}}\leqslant p_{1}\leqslant\left(\frac{N}{b}\right)^{\frac{1}{3.106}}<p_{2}\leqslant\left(\frac{N}{bp_{1}}\right)^{\frac{1}{2}},\left(p_{1}p_{2},N\right)=1\\ 0\leqslant j\leqslant a-1,\left(j,a\right)=1\end{subarray}}\left(\pi\left(\frac{N}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};a^{2},N\left(bp_{1}p_{2}\right)_{a^{2}}^{-1}+ja\right)\right)\right.\]
\[\left.-\frac{\pi\left(\frac{N}{bp_{1}p_{2}};1,1\right)-\pi\left(\frac{N-N^{\theta}}{bp_{1}p_{2}};1,1\right)}{\varphi\left(a^{2}\right)}\right|+N^{\frac{13}{14}}(\log N)^{2}. \tag{83}\]
By Lemma 4.3 with
\[g(k)=\begin{cases}1,&\text{ if }k\in\mathcal{E}_{2}\\ 0,&\text{ otherwise}\end{cases}\]
and Lemma 3.7, we have
\[\sum_{n\leqslant D_{\mathcal{B}_{2}}}\mu^{2}(n)\eta\left(X_{\mathcal{B}_{2}}, n\right)=\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{B}_{2}}\\ \left(n,abN\right)=1\end{subarray}}\mu^{2}(n)\eta\left(X_{\mathcal{B}_{2}}, n\right)\ll N^{\theta}(\log N)^{-3}. \tag{84}\]
Then by (78)-(84) and some routine arguments we have
\[S_{41}^{\prime}\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab(2\theta-1)(\log N) ^{2}}\int_{2.106}^{13}\frac{\log\left(2.106-\frac{3.106}{s+1}\right)}{s}ds.\]
Similarly, we have
\[S_{42}^{\prime}\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab(2\theta-1)(\log N) ^{2}}\int_{2.73}^{7.8}\frac{\log\left(2.73-\frac{3.73}{s+1}\right)}{s}ds,\]
\[S_{4}^{\prime}=S_{41}^{\prime}+S_{42}^{\prime}\leqslant(1+o(1))\frac{8C(N)N^ {\theta}}{ab(2\theta-1)(\log N)^{2}}\left(\int_{2.106}^{13}\frac{\log\left(2.1 06-\frac{3.106}{s+1}\right)}{s}ds+\int_{2.73}^{7.8}\frac{\log\left(2.73-\frac {3.73}{s+1}\right)}{s}ds\right)\]
\[\leqslant 14.062223\frac{C(N)N^{\theta}}{ab(\log N)^{2}}, \tag{85}\]
\[S_{71}^{\prime}\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab(2\theta-1)(\log N) ^{2}}\int_{2}^{2.106}\frac{\log(s-1)}{s}ds\]
\[S_{72}^{\prime}\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab(2\theta-1)(\log N) ^{2}}\int_{2}^{2.73}\frac{\log(s-1)}{s}ds,\]
\[S_{7}^{\prime}=S_{71}^{\prime}+S_{72}^{\prime}\leqslant(1+o(1))\frac{8C(N)N^{ \theta}}{ab(2\theta-1)(\log N)^{2}}\left(\int_{2}^{2.106}\frac{\log(s-1)}{s} ds+\int_{2}^{2.73}\frac{\log(s-1)}{s}ds\right)\]
\[\leqslant 0.827696\frac{C(N)N^{\theta}}{ab(\log N)^{2}}. \tag{86}\]
### Evaluation of \(S_{6}^{\prime}\)
Let \(D_{\mathcal{C}_{2}}=N^{\theta-1/2}(\log N)^{-B}\). By Chen's role-reversal trick and similar arguments in [3], we know that
\[S_{61}^{\prime}\leqslant S\left(\mathcal{C}_{2};\mathcal{P},D_{ \mathcal{C}_{2}}^{\frac{1}{2}}\right)+O\left(D_{\mathcal{C}_{2}}^{\frac{1}{2} }\right). \tag{87}\]
By Lemma 3.5 for \(z_{\mathcal{C}_{2}}=D_{\mathcal{C}_{2}}^{\frac{1}{2}}=N^{\frac{2\theta-1}{4}} (\log N)^{-B/2}\) we have
\[W(z_{\mathcal{C}_{2}})=\frac{8e^{-\gamma}C(N)(1+o(1))}{(2\theta-1 )\log N},\quad F(2)=e^{\gamma}. \tag{88}\]
By Remark 3.4 we have
\[|\mathcal{C}_{2}| =\sum_{\begin{subarray}{c}bmp_{1}p_{2}p_{3}p_{4}\in\mathcal{F}_{2}\\ bmp_{1}p_{2}p_{3}p_{4}\equiv N+ja\pmod{a^{2}}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1\] \[=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<p_{2}<p_{3}<p_{4}<(\frac{N}{b})^{\frac{1}{8.8}}\\ (p_{1}p_{2}p_{3}p_{4},N)=1\end{subarray}}\sum_{\begin{subarray}{c}\frac{N-N^{\theta}}{bp_{1}p_{2}p_{3}p_{4}}\leqslant m\leqslant\frac{N}{bp_{1}p_{2}p_{3}p_{4}}\\ (m,p_{1}^{-1}abNP(p_{2}))=1\end{subarray}}\frac{\varphi(a)}{\varphi(a^{2})}\] \[<(1+o(1))\frac{N^{\theta}}{ab}\sum_{(\frac{N}{b})^{\frac{1}{14}}\leqslant p_{1}<p_{2}<p_{3}<p_{4}<(\frac{N}{b})^{\frac{1}{8.8}}}\frac{0.5617}{p_{1}p_{2}p_{3}p_{4}\log p_{2}}\] \[=(1+o(1))\frac{0.5617N^{\theta}}{ab\log N}\int_{\frac{1}{14}}^{\frac{1}{8.8}}\frac{dt_{1}}{t_{1}}\int_{t_{1}}^{\frac{1}{8.8}}\frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\log\frac{1}{8.8t_{2}}dt_{2}. \tag{89}\]
To deal with the error terms, we have
\[\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{C}_{2}}\\ n|P\left(z_{\mathcal{C}_{2}}\right)\end{subarray}}\eta\left(|\mathcal{C}_{2}|,n\right)\ll\sum_{n\leqslant D_{\mathcal{C}_{2}}}\mu^{2}(n)\eta\left(| \mathcal{C}_{2}|,n\right). \tag{90}\]
Since \(\omega(p)=0\) for primes \(p\mid abN\) and \(\omega(p)=\frac{p}{p-1}\) for other primes, for an integer \(n\) such that \((n,abN)>1\), similarly to the discussion for \(\eta\left(X_{\mathcal{B}_{2}},n\right)\), we have \(\eta\left(|\mathcal{C}_{2}|,n\right)=0\).
For a square-free integer \(n\) that is relatively prime to \(abN\), if \(n\mid\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{a}\), then \((p_{1},n)=1,(p_{2},n)=1,(p_{3},n)=1\) and \((p_{4},n)=1\). Moreover, if \(\left(\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{an},a\right)=1\), then we have \(bmp_{1}p_{2}p_{3}p_{4}\equiv N+jan\left(\bmod a^{2}n\right)\) for some \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\). Conversely, if \(bmp_{1}p_{2}p_{3}p_{4}=N+jan+sa^{2}n\) for some integer \(j\) such that \(0\leqslant j\leqslant a-1\) and \((j,a)=1\), some integer \(n\) relatively prime to \(p_{1}p_{2}p_{3}p_{4}\) such that \(an\mid(N-bmp_{1}p_{2}p_{3}p_{4})\), and some integer \(s\), then \(\left(\frac{N-bmp_{1}p_{2}p_{3}p_{4}}{an},a\right)=(-j,a)=1\). Since \(jbmp_{1}p_{2}p_{3}p_{4}\) runs through the reduced residues modulo \(a\) when \(j\) runs through the reduced residues modulo \(a\), for a square-free integer \(n\) relatively prime to \(abN\), we have
\[\eta\left(|\mathcal{C}_{2}|,n\right) =\left|\sum_{\begin{subarray}{c}a\in\mathcal{C}_{2}\\ a\equiv 0\pmod{n}\end{subarray}}1-\frac{\omega(n)}{n}|\mathcal{C}_{2}|\right|=\left|\sum_{\begin{subarray}{c}a\in\mathcal{C}_{2}\\ a\equiv 0\pmod{n}\end{subarray}}1-\frac{|\mathcal{C}_{2}|}{\varphi(n)}\right|\] \[\ll\left|\sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\\ e\equiv N+jan(\bmod a^{2}n)\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1-\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\\ e\equiv N+ja(\bmod a^{2})\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1\right|\]
\[+\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)>1\\ e\equiv N+ja\pmod{a^{2}}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1.\] \[\ll\left|\sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\\ e\equiv N+ja\pmod{a^{2}}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1-\frac{1}{\varphi(a^{2}n)} \sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\end{subarray}}1\right|\] \[+\left|\frac{1}{\varphi(n)}\sum_{\begin{subarray}{c}e\in \mathcal{F}_{2}\\ (e,n)=1\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1-\frac{1}{\varphi(a^{2}n)} \sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\end{subarray}}1\right|\] \[+N^{\theta-\frac{1}{14}}(\log N)^{2}\] \[\ll\left|\sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\\ e\equiv N+ja\pmod{a^{2}}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1-\frac{1}{\varphi(a^{2}n)} \sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\end{subarray}}1\right|\] \[+\frac{1}{\varphi(n)}\left|\sum_{\begin{subarray}{c}e\in \mathcal{F}_{2}\\ (e,n)=1\\ e\equiv N+ja\pmod{a^{2}}\\ 0\leqslant j\leqslant a-1,(j,a)=1\end{subarray}}1-\frac{1}{\varphi(a^{2})} \sum_{\begin{subarray}{c}e\in\mathcal{F}_{2}\\ (e,n)=1\end{subarray}}1\right|\] \[+N^{\theta-\frac{1}{14}}(\log N)^{2}. \tag{91}\]
By Lemma 4.5 and Lemma 3.7, we have
\[\sum_{n\leqslant D_{\mathcal{C}_{2}}}\mu^{2}(n)\eta\left(|\mathcal{C}_{2}|,n \right)=\sum_{\begin{subarray}{c}n\leqslant D_{\mathcal{C}_{2}}\\ (n,abN)=1\end{subarray}}\mu^{2}(n)\eta\left(|\mathcal{C}_{2}|,n\right)\ll N^{ \theta}(\log N)^{-3}. \tag{92}\]
Then by (87)-(92) and some routine arguments we have
\[S_{61}^{\prime}\leqslant(1+o(1))\frac{0.5617\times 8C(N)N^{\theta}}{ab(2\theta-1) (\log N)^{2}}\int_{\frac{1}{14}}^{\frac{1}{8.8}}\frac{dt_{1}}{t_{1}}\int_{t_{ 1}}^{\frac{1}{8.8}}\frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right) \log\frac{1}{8.8t_{2}}dt_{2}. \tag{93}\]
Similarly, we have
\[S_{62}^{\prime}\leqslant(1+o(1))\frac{0.5617\times 8C(N)N^{\theta}}{ab(2 \theta-1)(\log N)^{2}}\int_{\frac{1}{14}}^{\frac{1}{8.8}}\frac{dt_{1}}{t_{1}} \int_{t_{1}}^{\frac{1}{8.8}}\frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2 }}\right)\log\left(8.8\left(\frac{\theta}{2}-\frac{2}{14}-t_{2}\right) \right)dt_{2}. \tag{94}\]
By (93) and (94) we have
\[S_{6}^{\prime}=S_{61}^{\prime}+S_{62}^{\prime}\leqslant(1+o(1))\frac{0.5617 \times 8C(N)N^{\theta}}{ab(2\theta-1)(\log N)^{2}}\left(\int_{\frac{1}{14}}^{ \frac{1}{8.8}}\frac{dt_{1}}{t_{1}}\int_{t_{1}}^{\frac{1}{8.8}}\frac{1}{t_{2}} \left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\log\frac{1}{8.8t_{2}}dt_{2}+\right.\]
\[\int_{\frac{1}{14}}^{\frac{1}{8.8}}\frac{dt_{1}}{t_{1}}\int_{t_{1}}^{\frac{1}{8.8}}\frac{1}{t_{2}}\left(\frac{1}{t_{1}}-\frac{1}{t_{2}}\right)\log\left(8.8\left(\frac{\theta}{2}-\frac{2}{14}-t_{2}\right)\right)dt_{2}\right)\leqslant 0.769041\frac{C(N)N^{\theta}}{ab(\log N)^{2}}. \tag{95}\]
### Evaluation of \(S_{5}^{\prime}\)
For \(p\geqslant\left(\frac{N}{b}\right)^{\frac{4.0871}{14}}\) we have
\[\underline{p}^{\prime\frac{1}{2.5}}\leqslant\left(\frac{N}{b}\right)^{\frac{1 }{14}},\quad S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b}\right)^{ \frac{1}{14}}\right)\leqslant S\left(\mathcal{A}_{p};\mathcal{P},\underline {p}^{\prime\frac{1}{2.5}}\right).\]
By Lemma 5.6 we have
\[S_{51}^{\prime} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{14}}\right)\] \[\leqslant\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4.0871}{14 }}\leqslant p<(\frac{N}{b})^{\frac{1}{3.106}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\prime \frac{1}{2.5}}\right)\leqslant\Gamma_{1}^{\prime}-\frac{1}{2}\Gamma_{2}^{ \prime}+\frac{1}{2}\Gamma_{3}^{\prime}+O\left(N^{\theta-\frac{1}{2b}}\right). \tag{96}\]
By Lemmas 3.1, 3.2, 4.1 and some routine arguments we get
\[\Gamma_{1}^{\prime} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\underline{p}^{\prime \frac{1}{3.675}}\right)\] \[\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab\theta(\log N)^{2}} \left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}}\frac{dt}{t(\theta-2t)}\right) \left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt\right), \tag{97}\] \[\Gamma_{2}^{\prime} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}(p_{1},N)=1\end{subarray}}S \left(\mathcal{A}_{pp_{1}};\mathcal{P},\underline{p}^{\prime\frac{1}{3.675}}\right)\] \[\geqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab\theta(\log N)^{2}} \left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}}\frac{dt}{t(\theta-2t)}\right) \left(\int_{1.5}^{2.675}\frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt \right). \tag{98}\]
By an argument similar to the evaluation of \(S_{8}\) in [4] we get that
\[\Gamma_{3}^{\prime} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}}\\ (p,N)=1\end{subarray}}\sum_{\begin{subarray}{c}(p_{1},p_{2}p_{3},N)=1\end{subarray}}S \left(\mathcal{A}_{pp_{1}p_{2}p_{3}};\mathcal{P}(p_{1}),p_{2}\right)\] \[\leqslant(1+o(1))\frac{16C(N)N^{\theta}}{1.763ab(2\theta-1)(\log N )^{2}}\left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}}\frac{dt}{t(\theta-2t)} \right)\left(6.175\log\frac{3.675}{2.5}-2.35\right). \tag{99}\]
By (96)-(99) we have
\[S_{51}^{\prime} =\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{4.0871}{14}} \leqslant p<(\frac{N}{b})^{\frac{1}{3.106}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{14}}\right)\] \[\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab\theta(\log N)^{2}} \left(\int_{\frac{4.0871}{14}}^{\frac{1}{3.106}}\frac{dt}{t(\theta-2t)}\right)\times\] \[\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt-\frac{1}{2}\int_{1.5 }^{2.675}\frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt+\frac{\theta}{1. 763(2\theta-1)}\left(6.175\log\frac{3.675}{2.5}-2.35\right)\right).\]
Similarly, we have
\[S_{52}^{\prime}=\sum_{\begin{subarray}{c}(\frac{N}{b})^{\frac{\theta}{2}-\frac{3} {4}}\leqslant p<(\frac{N}{b})^{\frac{1}{3.73}}\\ (p,N)=1\end{subarray}}S\left(\mathcal{A}_{p};\mathcal{P},\left(\frac{N}{b} \right)^{\frac{1}{8.8}}\right)\]
\[\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab\theta(\log N)^{2}}\left(\int_{ \frac{\theta}{2}-\frac{3}{4}}^{\frac{1}{3.73}}\frac{dt}{t(\theta-2t)}\right)\times\]
\[\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt-\frac{1}{2}\int_{1.5}^{2.675} \frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt+\frac{\theta}{1.763(2 \theta-1)}\left(6.175\log\frac{3.675}{2.5}-2.35\right)\right)\]
\[S_{5}^{\prime}=S_{51}^{\prime}+S_{52}^{\prime}\]
\[\leqslant(1+o(1))\frac{8C(N)N^{\theta}}{ab\theta(\log N)^{2}}\left(\int_{ \frac{4.0871}{14}}^{\frac{1}{3.166}}\frac{dt}{t(\theta-2t)}+\int_{\frac{\theta }{2}-\frac{3}{14}}^{\frac{1}{3.73}}\frac{dt}{t(\theta-2t)}\right)\times\]
\[\left(1+\int_{2}^{2.675}\frac{\log(t-1)}{t}dt-\frac{1}{2}\int_{1.5}^{2.675} \frac{\log\left(2.675-\frac{3.675}{t+1}\right)}{t}dt+\frac{\theta}{1.763(2 \theta-1)}\left(6.175\log\frac{3.675}{2.5}-2.35\right)\right)\]
\[\leqslant 3.43654\frac{C(N)N^{\theta}}{ab(\log N)^{2}}. \tag{100}\]
### Proof of Theorem 1.2
By (75)-(77), (85)-(86), (95) and (100) we get
\[S_{1}^{\prime}+S_{2}^{\prime}\geqslant 66.37372\frac{C(N)N^{\theta}}{ab(\log N )^{2}},\]
\[S_{3}^{\prime}+S_{4}^{\prime}+S_{5}^{\prime}+S_{6}^{\prime}+2S_{7}^{\prime} \leqslant 66.368553\frac{C(N)N^{\theta}}{ab(\log N)^{2}},\]
\[4R_{a,b}^{\theta}(N)\geqslant(S_{1}^{\prime}+S_{2}^{\prime})-(S_{3}^{\prime}+ S_{4}^{\prime}+S_{5}^{\prime}+S_{6}^{\prime}+2S_{7}^{\prime})\geqslant 0.005167 \frac{C(N)N^{\theta}}{ab(\log N)^{2}},\]
\[R_{a,b}^{\theta}(N)\geqslant 0.00129\frac{C(N)N^{\theta}}{ab(\log N)^{2}}.\]
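Here the final constant follows from the preceding bound by plain division:
\[\frac{0.005167}{4}=0.00129175>0.00129.\]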
Theorem 1.2 is proved. While calculating, we also tried to extend the range to \(0.9409\leqslant\theta\leqslant 1\) but failed. The range \(0.941\leqslant\theta\leqslant 1\) is rather close to the limit obtainable by this method.
## 8. An outline of the proof of Theorems 1.3-1.8
The proofs of Theorems 1.3-1.8 are similar to, and even simpler than, the proofs of Theorems 1.1-1.2.
For Theorem 1.3, we only need Lemma 4.3 and Remark 4.4 to deal with the sieve error terms involved instead of Lemma 4.5 (i.e. \(\frac{5\times 0.97-3}{2}=0.925>\frac{12.2}{13.2}\)). For example, let \(D_{\mathcal{A}_{3}}=\left(\frac{N}{b}\right)^{0.97-1/2}\left(\log\left(\frac{N}{b}\right)\right)^{-B}\) and by Huxley's prime number theorem in short intervals, we can take
\[X_{\mathcal{A}_{3}}= \sum_{\begin{subarray}{c}0\leqslant k\leqslant b-1\\ (k,b)=1\end{subarray}}\left(\pi\left(\frac{N/2+N^{0.97}}{a};b^{2},Na_{b^{2}}^{ -1}+kb\right)-\pi\left(\frac{N/2-N^{0.97}}{a};b^{2},Na_{b^{2}}^{-1}+kb\right)\right)\] \[\sim \frac{\varphi(b)\left(\pi\left(\frac{N/2+N^{0.97}}{a}\right)-\pi \left(\frac{N/2-N^{0.97}}{a}\right)\right)}{\varphi\left(b^{2}\right)}\sim \frac{\varphi(b)\left(\frac{2N^{0.97}}{a}\right)}{\varphi\left(b^{2}\right) \log\left(N/2\right)}\sim\frac{2N^{0.97}}{ab\log N} \tag{101}\]
and we can construct the sets \(\mathcal{B}\), \(\mathcal{C}\), \(\mathcal{E}\) and \(\mathcal{F}\) for Theorem 1.3 similarly to those of Theorem 1.1 and [6].
The proof of Theorems 1.4-1.5 is very similar to that of Theorem 1.1. For example, let \(D_{\mathcal{A}_{4}}=\left(\frac{N}{b}\right)^{1/2}\left(\log\left(\frac{N}{b} \right)\right)^{-B}\) and by Lemma 3.6, we can take
\[X_{\mathcal{A}_{4}}\sim\frac{1}{\varphi(c)}X_{\mathcal{A}_{1}}\sim\frac{N}{ \varphi(c)ab\log N}. \tag{102}\]
We can construct the sets \(\mathcal{B}\), \(\mathcal{C}\), \(\mathcal{E}\) and \(\mathcal{F}\) for Theorems 1.4-1.5 similarly to those of Theorem 1.1. The infinite set of primes used in the proof of Theorems 1.4-1.7 is \(\mathcal{P}^{\prime}=\{p:(p,Nc)=1\}\), so by using arguments similar to those of Lemma 3.6, for \(j=4,5,6\) we have
\[W^{\prime}(z_{\mathcal{A}_{j}})=\prod_{\begin{subarray}{c}p<z\\ (p,Nc)=1\end{subarray}}\left(1-\frac{\omega(p)}{p}\right)=\prod_{\begin{subarray}{c}p\mid c\\ p\nmid N\\ p>2\end{subarray}}\left(\frac{p-1}{p-2}\right)\frac{2\alpha e^{-\gamma}C(N)(1+o(1))}{\log N}. \tag{103}\]
To deal with the error terms involved, we need to modify our Lemma 4.1 and Remark 4.2. We can do that by using arguments similar to those of Kan and Shan's paper [20], and we refer the interested readers there for the details. For Theorem 1.5, we need Lemma 4.6 to control the sieve error terms with "big" \(c\).
The proof of Theorems 1.6-1.7 is like a combination of the proof of Theorems 1.2-1.3 and Theorem 1.4. For example, let \(D_{\mathcal{A}_{5}}=\left(\frac{N}{b}\right)^{\theta/2}\left(\log\left(\frac{ N}{b}\right)\right)^{-B}\) and \(D_{\mathcal{A}_{6}}=\left(\frac{N}{b}\right)^{0.97-1/2}\left(\log\left(\frac{ N}{b}\right)\right)^{-B}\), by Lemma 3.6, we can take
\[X_{\mathcal{A}_{5}}\sim\frac{1}{\varphi(c)}X_{\mathcal{A}_{2}}\sim\frac{N^{ \theta}}{\varphi(c)ab\theta\log N}\quad\text{and}\quad X_{\mathcal{A}_{6}} \sim\frac{1}{\varphi(c)}X_{\mathcal{A}_{3}}\sim\frac{2N^{0.97}}{\varphi(c)ab \log N}. \tag{104}\]
We can construct the sets \(\mathcal{B}\), \(\mathcal{C}\), \(\mathcal{E}\) and \(\mathcal{F}\) for Theorems 1.6-1.7 similarly to those of Theorem 1.2 and [6]. To deal with the sieve error terms involved, we also need to modify our Lemmas 4.3-4.5 by using arguments similar to those of [20]. Our Lemmas 4.6-4.8 will help us if we want to combine Theorems 1.2-1.3 with Theorem 1.5 and obtain results similar to Theorems 1.6-1.7 with "big" \(c\).
Finally, in order to prove Theorem 1.8, we need Lemma 5.7 to give an upper bound. Then we can treat \(\Upsilon_{1}\) and \(\Upsilon_{2}\) by arguments similar to those involved in the evaluation of \(S_{1},S_{2},S_{3}\), and \(\Upsilon_{3}\) by arguments similar to those involved in the evaluation of \(S_{6}\).
## Acknowledgements
The author would like to thank Huixi Li and Guang-Liang Zhou for providing information about the papers [22][23][29] and for some helpful discussions.
|
2309.05669 | Implications of Edge Computing for Static Site Generation | Static site generation (SSG) is a common technique in the web development
space to create performant websites that are easy to host. Numerous SSG tools
exist, and the approach has been complemented by newer approaches, such as
Jamstack, that extend its usability. Edge computing represents a new option to
extend the usefulness of SSG further by allowing the creation of dynamic sites
on top of a static backdrop, providing dynamic resources close to the user. In
this paper, we explore the impact of the recent developments in the edge
computing space and consider its implications for SSG. | Juho Vepsäläinen, Arto Hellas, Petri Vuorimaa | 2023-09-08T07:49:23Z | http://arxiv.org/abs/2309.05669v1 | # Implications of Edge Computing for Static Site Generation
###### Abstract
Static site generation (SSG) is a common technique in the web development space to create performant websites that are easy to host. Numerous SSG tools exist, and the approach has been complemented by newer approaches, such as Jamstack, that extend its usability. Edge computing represents a new option to extend the usefulness of SSG further by allowing the creation of dynamic sites on top of a static backdrop, providing dynamic resources close to the user. In this paper, we explore the impact of the recent developments in the edge computing space and consider its implications for SSG.
**The paper was accepted for WEBIST 2023 and the final version will be available in the conference proceedings. Note that this version has been edited to compile on arXiv and the final one is shorter due to a two-column layout.**
## 1 Introduction
Historically, websites have been hosted on servers serving static content (Berners-Lee et al., 1992). The advent of content management systems (CMSs) brought about a dynamic approach that allowed editing the served contents with an online editor, leading to additional requirements and complexity from the server, including server-side rendering (SSR) (Boiko, 2005; W3Techs, 2022). **Static site generators** (SSGs) that build static files out of dynamically edited contents were developed to mitigate the need for server-side rendering, yielding the possibility to serve the content with static file servers with little need for dynamic functionality. This possibility of serving static content - coupled with an increased demand in throughput - in part led to the emergence of **content delivery networks** (CDNs), which leverage a global network of servers for faster content delivery through geographical distribution.
With the emergence of commercial server providers and the decline of self-hosting, server farms were developed. On top of server farms, new methods for trading computational resources emerged, including the cloud computing market. Contemporary offerings allow paying based on the execution of individual function calls, potentially accounting for CPU and memory usage. This shift contrasts significantly with the traditional trading of computational power, as the payment unit can be measured through individual computations rather than through pieces of hosted hardware (Lynn et al., 2017). From the point of view of a computation resource vendor, this has enabled new economies of scale while encouraging custom hardware development.
The combination of these advancements - CDNs and the more fine-grained control and billing of computation power - has led to the emergence of **edge computing** as a viable option for web developers. While CDNs have benefits on their own, edge computing adds programmability and the selling of function executions on top of them. Edge computing is attractive to software developers as it allows them to shape client requests and server responses at scale, close to the client, enabling faster response times (Carvalho et al., 2021). The shift to the edge has resulted in new technical solutions, such as edge-friendly databases, and the problem of cold starts familiar from cloud computing is being solved (Partovi, 2022).
In the present study, we explore the impact of edge computing for static website hosting to evaluate how the ideas from static and dynamic realms may be mixed, answering the question _What are the technical opportunities and challenges of edge computing for static website hosting?_ A version of the question was previously posed in (Vepsalainen and Vuorimaa, 2022), where the authors discussed the challenges of SSG when adjusting site contents and proposed an intermediate JSON representation format for site data. The work expands on a recent overview of edge computing research by (Cao et al., 2020) in the specific case of SSG.
The approach to answering the research question is two-fold. We first explore the advances of website rendering and hosting in Section 2 to create a view of the recent movements in the space. Then, to illustrate the benefits of edge computing in practice, we explore the efficacy of rendering a blog platform using three rendering mechanisms and two popular edge providers, outlined in Sections 3 and 4. The results of our experiment and the potential of edge computing for SSG are discussed in Section 5. Finally, Section 6 provides a conclusion and outlines directions for future study.
## 2 Background
In this section, we outline the main movements leading to the present state of creating websites and delivering websites. We also consider how these developments align with the emerging trend of edge computing in the web space.
### Evolution of Website Rendering Techniques
Website rendering techniques have evolved since the beginning of the web to address the new requirements set for websites. The evolution has been supported by the growing market and the shift in use cases for the web as it grew from a site platform into an application platform, with web applications becoming popular through the rise of social networking and related trends. The growth of the web platform motivated the development of multiple website rendering techniques that each address specific pain points related to developing websites and web applications.
#### 2.1.1 Server Side Rendering
Early websites developed in the 1990s were mainly static and served using static file servers. A static site consists of HTML pages, documents, and media, which can be read by the server from persistent storage and served to the client (usually a web browser) without further customization (Petersen, 2016). Due to a need to provide a degree of interactivity and to allow changing the served data, dynamic functionality was added to the servers. Dynamic websites are typically stored in a format not directly renderable by the browser (Petersen, 2016). In the dynamic
case, the server takes an incoming request, performs some actions in between, and generates a response that is then sent to the client. The process is commonly known as server-side rendering (SSR).
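To make the mechanism concrete, the following minimal TypeScript sketch illustrates SSR on Node.js; it is illustrative only, with an in-memory post list standing in for the persistent storage discussed above.

```typescript
// A minimal SSR sketch (Node.js); the in-memory data source is a placeholder.
import { createServer } from "node:http";

const posts = [{ slug: "hello", title: "Hello", body: "First post" }];

createServer((req, res) => {
  // Each request is rendered on the server before the response is sent,
  // so content edits show up immediately on the next request.
  const post = posts.find((p) => `/${p.slug}` === req.url);
  const html = post
    ? `<h1>${post.title}</h1><p>${post.body}</p>`
    : "<h1>Not found</h1>";
  res.writeHead(post ? 200 : 404, { "Content-Type": "text/html" });
  res.end(`<!doctype html><html><body>${html}</body></html>`);
}).listen(8080);
```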
#### 2.1.2 Client Side Rendering and Single Page Applications
SSR was the prevalent technology for building content for the web for over a decade until its slow decline in favor of client-side rendering (CSR) in the late 2000s and early 2010s. The move towards CSR stemmed from a potential for increased perceived usability: while SSR required the whole page to be reloaded per request, CSR allowed changing only the parts needed on a page using technologies such as JavaScript without forcing a refresh (Flanagan and Novak, 1998). A culmination point of this development was the emergence of single-page applications (SPA), in which it became possible to dynamically adjust the shown content based on user interactions (Mikowski and Powell, 2013; Carnato, 2021).
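As a minimal illustration of the difference, the following browser-side TypeScript sketch updates part of a page without a reload; the /api/posts endpoint and #posts element are hypothetical.

```typescript
// A minimal CSR sketch; the endpoint and element are placeholders.
async function showPosts(): Promise<void> {
  const res = await fetch("/api/posts");
  const posts: { title: string }[] = await res.json();
  const list = document.querySelector("#posts");
  if (list) {
    // Only this list is replaced; the rest of the page stays untouched.
    list.innerHTML = posts.map((p) => `<li>${p.title}</li>`).join("");
  }
}
showPosts();
```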
#### 2.1.3 Static Site Generation
Both SSR and CSR are complemented by static site generation (SSG). In SSG assets are compiled together to a format that can be hosted using a static file server (Newson, 2017) while coming with benefits related to security (Petersen, 2016; Camden and Rinaldi, 2017), fast page load times (Petersen, 2016; Camden and Rinaldi, 2017), scaling (Petersen, 2016), compatibility with versioning systems (Camden and Rinaldi, 2017), and efficient resource usage (Petersen, 2016).
Traditionally, SSGs have been a great fit for small content sites, as in the worst case, with the most naive implementation, an SSG must recompile the entire site whenever the content changes. However, techniques such as incremental compilation enable an SSG to reuse previous results, recompiling only the parts affected by a given change.
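The essence of the naive approach can be sketched in a few lines of TypeScript: every page is rendered once at build time. The sketch below is illustrative and not tied to any particular SSG; the content and paths are placeholders.

```typescript
// A minimal build-time SSG sketch (Node.js); content and paths are illustrative.
import { mkdirSync, writeFileSync } from "node:fs";

const posts = [
  { slug: "hello", title: "Hello", body: "First post" },
  { slug: "world", title: "World", body: "Second post" },
];

mkdirSync("dist", { recursive: true });
for (const post of posts) {
  // Each content item becomes a plain HTML file servable by any file server.
  writeFileSync(
    `dist/${post.slug}.html`,
    `<!doctype html><html><body><h1>${post.title}</h1><p>${post.body}</p></body></html>`
  );
}
writeFileSync(
  "dist/index.html",
  `<!doctype html><html><body><ul>${posts
    .map((p) => `<li><a href="/${p.slug}.html">${p.title}</a></li>`)
    .join("")}</ul></body></html>`
);
```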
There exists a wide variety of SSGs. For example, [https://jamstack.org/](https://jamstack.org/) enumerates over 350 SSGs (August 2023) in its listing (Jamstack, 2022), while [https://staticsitgenerators.net/](https://staticsitgenerators.net/) has over 460 SSGs (August 2023) (SSG, 2022).
#### 2.1.4 Jamstack
Jamstack was introduced by Matt Biilmann at Smashing Conf in 2016 as a response to the weaknesses of the SSG model. It represents a change in thinking compared to the traditional web (Kumar, 2019) and shifts the perspective on how websites should be composed. The idea is to decouple content from the layout and then collect them together. The approach goes well with headless CMSs that expose their data through an API for third parties to consume (Barker, 2017). Standard webhooks allow refreshing a website when the data changes (Hoang, 2020).
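A webhook-triggered refresh can be sketched as follows in TypeScript; the endpoint path, secret header, and build command are hypothetical stand-ins for what a headless CMS and an SSG would actually use.

```typescript
// A minimal webhook sketch (Node.js); names and commands are placeholders.
import { createServer } from "node:http";
import { execFile } from "node:child_process";

const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET ?? "";

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhooks/content-changed") {
    if (req.headers["x-webhook-secret"] !== WEBHOOK_SECRET) {
      res.writeHead(401).end();
      return;
    }
    // Rebuild the static site when the CMS reports a content change.
    execFile("npm", ["run", "build"], (err) => {
      if (err) console.error("rebuild failed", err);
    });
    res.writeHead(202).end("rebuild scheduled");
    return;
  }
  res.writeHead(404).end();
}).listen(8080);
```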
From a deployment point of view, Jamstack sites are still static and can be hosted through a static file server, therefore inheriting the SSG approach's benefits (Markovic et al., 2022). Jamstack relies on external services for dynamic functionality, such as authentication (Peltonen et al., 2021). Due to their static nature, Jamstack sites can be hosted on CDNs and gain their benefits in terms of security and scalability, as with SSGs earlier. Figure 1 shows how the dynamic and static portions of Jamstack go together and how a Jamstack site is deployed on a CDN.
According to (Markovic et al., 2022), the hype around Jamstack is currently at its peak. Their findings indicate that although Jamstack is a promising approach, it may not become the de facto model for web development due to concerns related to handling dynamic use cases; resolving these is one of the main challenges the advocates of the Jamstack approach face in the coming years.
Several early pain points of Jamstack have already been resolved through improved service offerings that cover features such as authentication or payment. The problem of previewing the impact of data changes has been alleviated to some extent through techniques such as incremental static regeneration (Markovic et al., 2022).
#### 2.1.5 Incremental Static Regeneration and Distributed Persistent Rendering
Recent frameworks, such as Next.js, offer the possibility for SSR, CSR, and SSG, leading to hybrid functionality. Hybrid approaches enable developers to use the rendering technology that makes the most sense at a given time. On top of this, Next.js introduced a rendering method called incremental static regeneration (ISR), mixing SSG and SSR, which allows the use of SSG without rebuilding the entire site by shifting some of the work to request time (Nguyen, 2022). In the on-demand case where ISR is leveraged, pages are cached, and subsequent requests rely on the cache.
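The idea can be sketched framework-agnostically as follows; render() and the cache are illustrative placeholders rather than Next.js internals.

```typescript
// A minimal sketch of the idea behind ISR; not Next.js's actual implementation.
const cache = new Map<string, string>();

async function render(path: string): Promise<string> {
  // Placeholder for the real page rendering logic.
  return `<html><body>Rendered ${path} at ${new Date().toISOString()}</body></html>`;
}

export async function handle(path: string): Promise<string> {
  const cached = cache.get(path);
  if (cached) return cached; // subsequent requests rely on the cache
  const html = await render(path); // the first request renders on demand
  cache.set(path, html);
  return html;
}
```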
In 2021, Netlify introduced distributed persistent rendering (DPR). The idea of DPR is to address the shortcomings of ISR by providing atomic and immutable deploys consistent with the notion of Jamstack. In ISR, the users may see stale content on the first render by design, and this perceived shortcoming has been removed in DPR (Williams, 2021).
To understand how different rendering techniques relate to the client and the developer, Figure 2 summarizes them in a graphical form.
#### 2.1.6 Islands Architecture
As discussed by (Vepsalainen et al., 2023), islands architecture is a way to include dynamic portions in a static page and to define strategies for loading them. Deferring loading allows pushing work performed by JavaScript into the future, and some of the work may never occur depending on usage. As the architecture was formalized only in 2019 (Miller, 2020), there is not much experience in using it yet; at the same time, the solid adoption of the Astro framework, which leverages the approach, shows increasing developer interest.
Figure 1: In Jamstack approach, data from a content management system is combined with templates (HTML, JS, CSS, etc.) that are then compiled using a SSG. The resulting static website is then deployed on a CDN hosting service. (Utomo et al., 2020)
### Evolution of Website Hosting
Similar to website rendering techniques, the ways to host websites have evolved. The evolution of website hosting comes together with rendering techniques as they form a pair in the sense that hosting enables rendering at different levels of technical infrastructure, allowing new models for developing websites.
#### 2.2.1 Rented Servers and Virtual Machines
At the beginning of the web, companies and individuals had to maintain their own servers. A whole hosting market emerged to make it easier for people to host their websites and applications. The early providers offered space for static websites and dedicated servers to rent. Later, virtual machines (VMs) emerged as an abstraction, decoupling hosting from hardware and enabling the sharing of resources across multiple users. A key enabler here was HTTP/1.1, which provided the means to indicate the host to which a request was directed, in addition to the IP.
#### 2.2.2 Content Delivery Networks
With increasing demand and an acknowledgment that parts of the served contents were static and rarely changed, CDNs, such as Akamai, emerged (Nygren et al., 2010). CDNs provided both the possibility of distributing requests over a broader range of servers to decrease individual server load and to respond to requests from servers close to the client, thereby reducing the latency experienced by the user (Triukose et al., 2011).
#### 2.2.3 Cloud and Serverless Computing
Cloud computing was a movement in offering computing resources that abstracted away physical hardware. One could still buy a virtual machine when buying resources from cloud computing providers. Still, the location of the virtual machine might have been unclear, and it was also possible that the physical machine running the virtual machine could change dynamically. The infrastructure built to support cloud computing slowly led to the emergence of the serverless computing paradigm, where the notion of starting a server was abstracted away, and developers instead defined entry points to applications. In serverless computing, functions are triggered on demand while having access to databases (Jonas et al., 2019).
Figure 2: Workflow from a client to a developer. The workflow applies to traditional web and edge computing; the number of web servers can be scaled.
#### 2.2.4 Edge Computing
Edge computing represents the next step in how and where computation occurs. Edge computing is a natural evolution of the CDN approach: instead of only serving resources, it enables computation close to the client on demand (Shi et al., 2016). The distributed approach leads to new technical challenges, as traditional ways of thinking about aspects such as databases must be reconsidered to be compatible with a global infrastructure. In general, edge computing shows promise in improving web page and content rendering performance (Zhu et al., 2013; Vittanen et al., 2018), reinvigorating discussions on making informed decisions on what content to serve to account for network quality (Zhu et al., 2013).
#### 2.2.5 Discussion
The latest developments in rendering techniques and edge computing allow us to address the traditional limitations of SSG and Jamstack while gaining their benefits. Most importantly, edge computing provides a way to intercept user requests before they reach the file server. Alternatively, the edge network can work as a server and return suitable payloads to the client directly. Perhaps more interestingly, edge computing enables the development of hybrid websites where some portions are static and others are dynamic. The islands architecture is a good example of an approach ready to leverage edge computing.
There are some concerns related to the lock-in potential of edge platforms. At the same time, initiatives such as WinterCG provide hope of collaboration to make JavaScript-based edge runtimes compatible with each other. In the ideal case, developers should be able to move edge workers from one platform to another with minimal effort.
## 3 Methodology
To illustrate the implications of edge computing for SSG, we benchmark a statically hosted site against one served from an edge platform. We hypothesize that their performance is close to each other, although we expect the latter solution to come with a slight performance cost depending on the use of caching. To provide a third point of view, we examine the impact of ISR as it is a technique between SSG and SSR.
### Platform and implementation
For the present study, we explored the efficacy of a blog platform with the following constraints1:
Footnote 1: For replication and analysis of the implementation, the source code for the project has been made available at [https://github.com/bebra/ssg-benchmark](https://github.com/bebra/ssg-benchmark)
1. There are three variants to compare: static site generation (SSG), pure edge server-side rendering (SSR), and edge server-side rendering with ISR, which leverages Cloudflare KV for caching
2. All variants are implemented using TypeScript.
3. The static variant is generated using an ad hoc implementation based on ES2015 templates for templating. The edge variants use the same logic.
4. The static variant is hosted on both Cloudflare Pages and Netlify so it will be measured twice to see the impact of the platform.
5. The edge variants are implemented using Cloudflare workers.
6. The site to test mimics a blog with a blog index and individual pages.
7. Styling and images are kept out of scope to keep the test case simple and to avoid loading costs.
8. All variants fetch content from a small server, returning pseudorandom data for repeatability.
9. Each implementation had a fixed 100ms delay to simulate the cost of server-side logic; a sketch of the resulting worker setup follows this list.
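The following TypeScript sketch conveys the shape of the edge variants under these constraints. It is not the study's exact code: the KV binding name PAGE_CACHE is hypothetical, and the KVNamespace type is assumed to come from @cloudflare/workers-types.

```typescript
// A sketch of an edge worker combining SSR with KV-backed ISR-style caching.
interface Env {
  PAGE_CACHE: KVNamespace; // hypothetical binding name
}

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function renderPage(path: string): Promise<string> {
  await delay(100); // fixed delay simulating server-side logic
  return `<html><body>Blog page for ${path}</body></html>`;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { pathname } = new URL(request.url);
    // ISR variant: consult the cache first; the pure SSR variant skips this.
    const cached = await env.PAGE_CACHE.get(pathname);
    if (cached) {
      return new Response(cached, { headers: { "Content-Type": "text/html" } });
    }
    const html = await renderPage(pathname);
    await env.PAGE_CACHE.put(pathname, html);
    return new Response(html, { headers: { "Content-Type": "text/html" } });
  },
};
```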
Cloudflare and Netlify platforms were chosen; both offer edge computing facilities. Cloudflare is a company that started as a CDN provider and has since expanded to hosting and edge computing, which are natural extensions to the CDN business. Cloudflare has developed solutions to cloud computing problems, including approaches for eliminating cold starts related to starting edge workers (Partovi, 2022). Netlify, similar to Cloudflare, provides edge computing capabilities and a Git-connected way to deploy applications on their platform (Netlify, 2022). For the scope of the present work, Netlify is used only as a static host. These platforms were chosen for their relative popularity in the developer community, and an expanded study should include more options to benchmark.
### Measurement of performance
For performance measurements, we used Playwright with Google Lighthouse. We created a test suite that is run against the blog site variants, intended to capture any differences in performance. For the present study, each blog site variant hosted a hundred blog posts, and when measuring performance, we focused on First Contentful Paint and Server Response Time. In addition, we used Autocannon to capture rough throughput as responses per second and latency for each variant. Lighthouse and Autocannon are commonly used tools for assessing website performance, and blogs are a common archetype in web applications.
Following (Hericko et al., 2021), who noted that performing Lighthouse performance audits five times reduces variability in results significantly in a reasonable time, we executed the tests five times. This is in line with the Lighthouse documentation that suggests that measuring the median of five runs is twice as stable as measuring a single run (Google, 2022).
For the Lighthouse tests, we measured the rendering performance of the blog index page (listing 100 blog links) and the performance of a blog page (showing a blog entry), and we throttled the network using mobile (1.6 Mbps down / 750 Kbps up) with 150 ms latency. For the Autocannon tests, we measured the performance of the blog index page. We wrote the test to run for 30 seconds per variant to decrease the impact of variability in connection quality. Before every ISR variant-related test, the cache was emptied manually to avoid skewing results.
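For illustration, a 30-second run like the ones described can be scripted roughly as follows with the autocannon npm package; the target URL is a placeholder.

```typescript
// A sketch of scripting a throughput test; autocannon also offers a CLI.
import autocannon from "autocannon";

async function run(): Promise<void> {
  const result = await autocannon({
    url: "https://example.com/", // blog index page of the variant under test
    duration: 30, // seconds, matching the test length used here
  });
  // Percentile latencies and request counts are available on the result.
  console.log(result.latency, result.requests);
}

run();
```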
### Threats to validity
The tests we perform are black-box by their nature. In other words, we do not control and know anything about the underlying infrastructure. There may be significant differences at the infrastructure level and technical implementations we are
unaware of. However, the platforms we benchmark claim to implement the edge paradigm and expose related APIs.
Another threat to validity has to do with the scope of testing. Given we test from a single location, we do not test the scalability of the approach from a global perspective. Global scaling is considered one of the selling points of the CDN approach, but it is out of the scope of the study.
Our test project is synthetic and reflects only a simple static use case. In practice, web applications can be far more complex and dynamic by nature. The test project provides a baseline for more dynamic tests that build on top of static functionality.
## 4 Results
In the following subsections, we show Lighthouse and Autocannon results separately.
### Lighthouse results
Lighthouse scores pages from zero to a hundred based on the categories: performance, accessibility, best practices, SEO, and PWA. While we focused on First Contentful Paint and Server Response Time, we also briefly studied the other Lighthouse metrics. For each page tested, the performance, accessibility, and best practices metrics received a full score of a hundred. SEO varied between 82 and 91, suggesting that the implementation was missing a meta description and that the blog page implementation had tap targets that were too small on mobile.
For each variant, the First Contentful Paint (FCP) and Server Response Time (SRT) values have been listed in Table 1. Time to Interactive (TTI) followed FCP closely in this scenario. The values have been rounded to the closest value and are provided in milliseconds (ms). The first test run and the subsequent four test runs are reported separately in the table.
### Autocannon results
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{FCP} & \multicolumn{3}{c}{SRT} \\ \hline Run & 1 & 2-5 (med.) & 2-5 (avg.) & 1 & 2-5 (med.) & 2-5 (avg.) \\ \hline CF SSR index & 1053 & 1028 & 1039 & 283 & 282 & 276 \\ CF SSR post & 1030 & 991 & 1053 & 280 & 263 & 322 \\ CF ISR index & 895 & 879 & 889 & 145 & 131 & 135 \\ CF ISR post & 879 & 880 & 876 & 166 & 159 & 148 \\ CF SSG index & 919 & 1026 & 987 & 160 & 272 & 227 \\ CF SSG post & 873 & 860 & 862 & 145 & 128 & 133 \\ Netlify SSG index & 963 & 880 & 924 & 241 & 140 & 186 \\ Netlify SSG post & 955 & 861 & 872 & 241 & 149 & 170 \\ \hline \end{tabular}
\end{table}
Table 1: Summarized measurement results (each result is given in ms). CF = Cloudflare, FCP = First Contentful Paint, SRT = Server Response Time. The suffix index indicates the performance of the index page with 100 blog post links, while the suffix post indicates the performance of an individual blog page with the blog post contents.
For measuring the application's throughput, we utilized Autocannon, studying how the latency behaves over the requests within the 30-second window, focusing on the blog index page for each variant. Figure 3 outlines the latency per percentile, which shows sub-100 millisecond latencies for most requests. In the Figure, the 100 ms latency embedded in the blog code to highlight additional server-side logic is visible in the SSR option, as the option does not benefit from caching. The differences would be negligible if we omitted the additional 100 ms latency.
In general, the Autocannon results are somewhat consistent with the Lighthouse server response times, although the Lighthouse server response times show more variance, perhaps due to the smaller number of test runs. In the Autocannon tests, we consider that the 100% percentile could be safely dropped as it represents individual outliers - on average, over the thirty seconds, the Autocannon tests yielded between 18,000 and 30,000 responses, which our single-computer test setup may partially limit.
## 5 Discussion
Given the measurements, we can see that the latencies of edge platforms are low, especially for the SSG and ISR cases. SSR is expected to come at a cost as there is more processing. The difference became apparent due to the artificial delay added to SSR, and in practice, the delay could be even more visible due to database requests and further work to perform per request. The benefit of ISR is that it allows us to avoid build time work and shift it to runtime at the cost of potentially stale cache for the client.
Figure 3: Autocannon latency per percentile over each variant over a thirty-second interval shown using a logarithmic scale. Note the peak at the end. Also, note that ISR and SSG follow each other as the cost of ISR is visible only on the first render, and due to the number of runs it vanishes.
It would be possible to discard the entire ISR cache during deployment to address the staleness issue. Doing this would shift the implementation closer to Netlify's Distributed Persistent Rendering (DPR) (Bilmann, 2021), which seeks to address the shortcomings of ISR by providing atomic and immutable deploys consistent with the idea of Jamstack. Furthermore, assuming the ISR cache has a staleness factor, such as time, related to it, the Stale While Revalidate (SWR) technique could be applied to return stale results while generating a new page in the background. In this case, the next request would yield fresh results (Rigon, 2021).
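A minimal TypeScript sketch of the SWR idea, assuming a time-based staleness factor; render() and the cache are illustrative.

```typescript
// Stale While Revalidate: serve the stale page now, refresh in the background.
const cache = new Map<string, { html: string; renderedAt: number }>();
const MAX_AGE_MS = 60_000; // hypothetical staleness window

async function render(path: string): Promise<string> {
  return `<html><body>Rendered ${path} at ${new Date().toISOString()}</body></html>`;
}

export async function handle(path: string): Promise<string> {
  const hit = cache.get(path);
  if (hit) {
    if (Date.now() - hit.renderedAt > MAX_AGE_MS) {
      // Regenerate in the background; the next request sees fresh results.
      void render(path).then((html) =>
        cache.set(path, { html, renderedAt: Date.now() })
      );
    }
    return hit.html; // stale (or fresh) content is returned immediately
  }
  const html = await render(path);
  cache.set(path, { html, renderedAt: Date.now() });
  return html;
}
```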
### When to apply ISR?
Given there's a cost related to SSG and especially to building sites on content change, the question is when it becomes beneficial to apply techniques such as ISR. There is added complexity for small sites due to having to use a framework or program on the edge. For highly dynamic use cases where the content changes often, the added complexity may be worth it, as otherwise, you would have to build the site constantly. For example, for social media platforms with rapidly changing content, static site generation might not be a feasible option - in such a case, one could rely on hybrid rendering approaches, which were scoped out from the present work. It could be argued that techniques, such as incremental compilation, can significantly decrease the cost of doing this.
### No cold start cost at Cloudflare
Interestingly, Cloudflare does not seem to have a cost associated with a cold start, while Netlify has a cold start penalty, as evidenced in the server response time measurements. The lack of penalty is a good sign, which means response times are more predictable for developers. At the same time, we could observe, however, that an individual response might occasionally take up to a second at the extreme outliers while, generally, response speeds were stable.
Our measured server response times (SRT) generally seem low and are within the 300 ms range. The rest of the cost occurs on the browser side (FCP), implying that development practices matter, as developers can optimize this cost. It is also good news for framework authors, given they can use the findings to optimize asset delivery.
### Shift of JavaScript frameworks towards the edge
The latest generation of JavaScript frameworks, such as Astro or Qwik, are compatible with the edge out of the box and support the most popular edge platforms as deployment targets while coming with static functionality as well. They support hybrid rendering and allow developers to choose what technique to use and where. The results of the study support this movement as there are clear benefits to SSG combined with edge computing.
### Potential of edge-powered islands
Since the edge provides simple ways to encapsulate logic within workers, developers can leverage islands architecture on top of their static sites. Using an appropriate strategy, the idea is to encapsulate dynamic functionality behind an edge worker and call that within an island. To simplify the task, 11ty/is-land implements multiple strategies while allowing any framework to be used for rendering the islands, making it a good companion for the edge. The idea would be to leverage the _template_ element of HTML while pointing to the edge endpoint that implements the island contents.
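As a rough sketch of the pattern, consider the endpoint below; the path and markup are hypothetical, and a real deployment would run inside a worker runtime rather than Python's standard library server, so this only illustrates the shape of the architecture.

```python
import http.server

class IslandHandler(http.server.BaseHTTPRequestHandler):
    """Edge-style endpoint returning an HTML fragment for a single island.
    The static page would contain a placeholder such as
    <template data-island="/islands/greeting"></template> that client-side
    code replaces with this response."""

    def do_GET(self):
        if self.path == "/islands/greeting":
            fragment = b"<p>Hello from the edge!</p>"  # dynamic content goes here
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(fragment)
        else:
            self.send_error(404)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8080), IslandHandler).serve_forever()
```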
Eleventy, a popular SSG, implements edge support natively through shortcodes included in templates (Eleventy, 2023). The feature is experimental and works only with the Netlify Edge platform (Eleventy, 2023). First-class support for the edge in an SSG indicates the direction of the field and the fact that tool authors have recognized the potential of the edge. The same is visible in solutions like Astro that allow hosting and processing on the edge while supporting pure SSG.
The Cloudflare research team devised the fragments architecture, the target of which was to allow building micro-frontends using Cloudflare Workers (Darwin et al., 2022). The idea is consistent with edge-powered islands and approaches it from a vendor point of view while considering legacy and mixed systems enabled by micro-frontends, where teams can develop using technologies they prefer. The Cloudflare researchers' work implies a crossing point between micro-frontends, islands, and edge computing, which alone may be a direction worth exploring in further study as a technological intersection.
Edge-powered islands come with challenges related to a state shared by multiple islands. It is also likely more suitable for cases with limited interactivity than experiences where the whole page has to be dynamic by definition. In other words, edge-powered islands expand the types of applications that can be developed on top of SSG but encounter limits in highly dynamic use cases.
## 6 Conclusion
We started this paper by asking the question _What are the technical opportunities and challenges of edge computing for static website hosting?_ and found that the intersection expands the usefulness of SSG by allowing more dynamic use cases to be covered on top of it. There are clear opportunities in leveraging architectures like the islands architecture on top of a static site. The performance of edge platforms seems reasonable enough in terms of latency, and techniques like ISR address problems related to SSG build speed.
Our empirical evaluation demonstrated how SSG and edge computing can work together to enable performant websites and applications to be developed, in part yielding evidence on the efficacy of mixing web technologies as asked for in (Vepsalainen and Vuorimaa, 2022). That said, there are still open questions related to the techniques, their applicability in other environments, and their limitations. Furthermore, there are questions related to the costs of the platforms in comparison to the cloud and self-hosting. It is undeniable that developing a comparable infrastructure yourself would be cost-prohibitive for many, but at the same time, not all applications require the same capabilities.
On top of build and server infrastructure, there are layers of techniques related to leveraging caching, prefetching, and pushing work to the client. These techniques are often orthogonal and may be used to complement server-side optimizations. In terms of research, it would be valuable to understand which optimizations can be done at each level, how much they can contribute towards the overall performance of a web service, and at what cost.
There are also questions related to reproducing the study results globally. Given edge infrastructure operates on top of CDN, the assumption is that the results should be fairly consistent across the globe depending on CDN density. That starkly contrasts traditional architecture where the server is in a specific location. Measuring the difference and reproducing the study with a global scope would be worthwhile. To help with this goal, our implementation and evaluation code are available on GitHub at [https://github.com/bebraw/ssg-benchmark](https://github.com/bebraw/ssg-benchmark). |
2303.18105 | The novel XYU-GEM to resolve ambiguities | Removing ambiguities within a single stage becomes crucial when one can not
use multiple detectors behind each other to resolve them which naturally is the
case for neutral radiation. An example would be RICH detectors. Commonly
pixelated readout is chosen for this purpose. However, this causes a remarkable increase in the number of channels and does not scale up well.
Therefore, the XYU-GEM was proposed as a three coordinate strip-readout which
is combined with a triple GEM detector. The readout complements a common XY
readout with an additional projection which is tilted by 45{\deg}. The
overdetermination due to three projections can be used to resolve ambiguities.
In the following, the detector design will be explained, first measurements are discussed to understand the response of the detector, and a way is presented to change the charge
sharing without changing the manufacturing parameters of the readout. | K. J. Flöthner, F. Brunbauer, S. Ferry, F. Garcia, D. Janssens, B. Ketzer, M. Lisowska, H. Muller, R. de Oliveira, E. Oliveri, G. Orlandini, D. Pfeiffer, L. Ropelewski, J. Samarati, F. Sauli, L. Scharenberg, M. van Stenis, A. Utrobicic, R. Veenhof | 2023-03-31T14:50:09Z | http://arxiv.org/abs/2303.18105v1 | # The novel XYU-GEM to resolve ambiguities
###### Abstract
Removing ambiguities within a single stage becomes crucial when one cannot use multiple detectors behind each other to resolve them, which naturally is the case for neutral radiation. An example would be RICH detectors. Commonly, a pixelated readout is chosen for this purpose. However, this causes a remarkable increase in the number of channels and does not scale up well. Therefore, the XYU-GEM was proposed as a three-coordinate strip readout which is combined with a triple GEM detector. The readout complements a common XY readout with an additional projection which is tilted by 45\({}^{\circ}\). The overdetermination due to three projections can be used to resolve ambiguities. In the following, the detector design will be explained, first measurements are discussed to understand the response of the detector, and a way is presented to change the charge sharing without changing the manufacturing parameters of the readout.
Micropattern gaseous detectors, Gaseous imaging and tracking detectors, X-ray detectors, Detector design and construction technologies and materials
## 1 Introduction
Within limits, ambiguities can be resolved by using electronics which give access to better time resolution or additional information about signal amplitude. However, solutions can also be found at the detector level through specific designs [1][2]. The proposed XYU readout would be such a solution, one which does not depend on the electronics used. A cross-section and a microscope picture of the proposed XYU readout can be seen in fig. 1.
How the three coordinates can be used to resolve ambiguities is shown in fig. 2. As depicted on the left, it is not possible to resolve ambiguities with a binary readout using only two projections, but it becomes possible with the third coordinate. On the right, resolving most of the ambiguities by simple correlation is already possible. Remaining ghosts might be resolved by a more complex combinatorial algorithm, e.g. looping over the multiple possibilities and requiring all signals to be used up. A multi-hit event like the one illustrated in fig. 2 would be impossible to reconstruct with two projections, while one would be able to resolve it using the third projection. This has to be verified with actual multi-hit events, caused by showers, from test-beam data.
Figure 1: Left: Schematic cross section. Right: Microscope picture of the three strip layers.
## 2 Detector Design and Setup
From simulations, a smaller charge collection for U is predicted for the current parameters, which are limited by manufacturing constraints [3]. In principle, equal sharing would be desirable to avoid the need to operate the detector at higher gains to achieve full efficiency with all projections. These simulations were based on the Ramo-Shockley theorem [4][5] and can provide the total induced charge on the different strip layers. The picture in fig. 1 shows all three conductive layers of the readout, which are exposed to the gas volume. Visible on top are the X strips, tilted by 90\({}^{\circ}\) the Y strips, and by another 45\({}^{\circ}\) the U strips. The manufacturing process was developed by the CERN Micro Pattern Technology (MPT) group [6]. The process involves the gluing of two copper-clad polyimide foils hosting the different strip layers. To reveal the projections, three dielectric etchings need to be performed. The result can be seen in fig. 1, as stated above. The detector itself is based on the standard COMPASS-like triple-GEM detector [7]. The gas mixture Ar/CO\({}_{2}\) 70/30 was used together with a passive voltage divider (R in M\(\Omega\): 1/0.55/1/0.5/1/0.44/1). For the readout electronics, the VMM3a ASIC with the RD51 Scalable Readout System (SRS) was used [8].
## 3 General response of the Detector
The response of the XYU detector was investigated with \({}^{55}\)Fe in the laboratory and with Minimum Ionizing Particles (MIPs) during the RD51 October test beam at the SPS H4 beam line. Due to the lower energy deposition from MIPs compared to \({}^{55}\)Fe, the gain for the test beam is increased by a factor of three. As shown by the ADC counts in fig. 3, the distribution looks similar for all three coordinates, although one can observe a shift to lower ADC channels for the U coordinate, which indicates the lower charge collection. Extracting the position of the photopeak and the MPV, the charge sharing can be determined. This leads to 41/38/21 (X/Y/U) for the \({}^{55}\)Fe and 39/36/25 for the beam measurements, which matches the simulations [3].
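Although the text does not spell out the formula, the quoted sharing fractions are consistent with normalizing each layer's signal amplitude by the sum over the three layers; in our notation (an assumption, not the authors'),

\[f_{i}=\frac{A_{i}}{A_{X}+A_{Y}+A_{U}},\qquad i\in\{X,Y,U\},\]

where \(A_{i}\) is the photopeak position (for \({}^{55}\)Fe) or the MPV (for muons) extracted from the spectrum of layer \(i\).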
Figure 2: Left: Schematic of ambiguities. Right: Illustrated Multi-hit example.
## 4 Biasing Strips and Charge Sharing Variation
Applying a voltage on the strips can be used to compensate for the unequal charge collection. As shown in fig. 4, one can see a clear shift of the spectrum collected from the X strips while applying different voltages on the U strips. Therefore, equal sharing could be achieved without changing the manufacturing parameters.
## 5 Summary and Outlook
The first prototype of the XYU-GEM could be operated, and it was possible to collect reasonable data using an \({}^{55}\)Fe source and 150 GeV/c muons. The observed characteristics are as expected. It is shown that one can influence the sharing of the signal by applying a bias voltage on the strips. However, further investigations are needed to see the effect simultaneously on all projections. Using the beam data, the detector response with actual multi-hit events, caused by showers, is under investigation, as is the possible influence of the third coordinate on the resolution. Another aspect worth investigating would be a different pitch of the U projection. This can be done by merging channels, thereby changing the signal-to-noise ratio on individual strips, which could be compared to simulations.
Figure 4: Spectra from the X strips for different bias voltages on U strips.
Figure 3: Left: ADC spectra for \({}^{55}\)Fe. Right: ADC spectra for 150 GeV/c muons
## Acknowledgments
This work has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no. 13E18CHA). The work has been performed in the context of the CERN Strategic Programme on Technologies for Future Experiments. [https://ep-rnd.web.cern.ch/](https://ep-rnd.web.cern.ch/)
|
2309.07718 | On the non-integrability of three dimensional Ising model | It is well known that the partition function of two-dimensional Ising model
can be expressed as a Grassmann integral over the action bilinear in Grassmann
variables. The key aspect of the proof of this equivalence is to show that all
polygons, appearing in Grassmann integration, enter with fixed sign. For
three-dimensional model, the partition function can also be expressed by
Grassmann integral. However, the action resulting from low-temperature
expansion contains quartic terms, which does not allow explicit computation of
the integral. We wanted to check - apparently not explored - the possibility
that using the high-temperature expansion would result in action with only
bilinear terms. (in two dimensions, low-T and high-T expansions are equivalent,
but in three dimensions, they differ.) It turned out, however, that polygons obtained by Grassmann integration are not of fixed sign for any ordering of Grassmann variables on sites. This way, it is not possible to express the partition function of the three-dimensional Ising model as a Grassmann integral over a
bilinear action. | Wojciech Niedziółka, Jacek Wojtkiewicz | 2023-09-14T13:51:21Z | http://arxiv.org/abs/2309.07718v1 | # On the non-integrability of three dimensional Ising models
###### Abstract
It is well known that the partition function of the two-dimensional Ising model can be expressed as a Grassmann integral over an action bilinear in Grassmann variables. The key aspect of the proof of this equivalence is to show that all polygons, appearing in Grassmann integration, enter with a fixed sign. For the three-dimensional model, the partition function can also be expressed as a Grassmann integral. However, the action resulting from the low-temperature expansion contains quartic terms, which does not allow explicit computation of the integral. We wanted to check - apparently not explored - the possibility that using the high-temperature expansion would result in an action with only bilinear terms. (In two dimensions, low-T and high-T expansions are equivalent, but in three dimensions, they differ.) It turned out, however, that polygons obtained by Grassmann integration are not of fixed sign for any ordering of Grassmann variables on sites. This way, it is not possible to express the partition function of the three-dimensional Ising model as a Grassmann integral over a bilinear action.
_Keywords_: Ising model, Grassmann integration, integrability
## 1 Motivation
The first solution for the two-dimensional _Ising model_[1], [2] without a magnetic field was obtained by Onsager (1944) [3]. After him, many others provided alternative derivations for the free energy, including [4], [5], [6], [7], [8]. The solution for the three-dimensional model remains an open problem, despite numerous attempts and studies that delve into the problem and provide new perspectives [9]. There are also works by individuals claiming they have solved the problem; however, these solutions have not been recognized by the scientific community as correct. An example of such an attempt is the work by D. Zhang [12], whose result was not acknowledged due to factors such as the dependence of the free energy on boundary conditions in the thermodynamic limit [13], [14], [15].
In our paper, we wanted to answer the question of why the method of Grassmann integrals, which allows one to easily obtain an expression for the free energy in 2D, fails in 3D. More concretely: in the 2D case one can write the partition function with the use of the H-T expansion [16] as a sum over closed polygons. Then one can prove that the H-T expansion can be obtained as a Grassmann integral with the use of a very natural 'action', being a bilinear form over Grassmann variables. Passing to 3D, it is also possible to obtain the partition function with the use of the H-T expansion as a sum over closed polygons. It is tempting to take the action in a form analogous to 2D. But the resulting Grassmann integral gives an apparently wrong expression for the partition function. We wanted to take a closer look at why it is so. The
short summary of the answer is that in 3D, the Grassmann integral produces certain polygons with a _negative sign_.
Let us take a closer look at the utilization of Grassmann variables to solve the Ising model(s) [10]. Samuel used Grassmann variables to write down the _low-temperature expansion_ of the Ising Model. His work was inspired by the research of C. A. Hurst and H. S. Green, who expressed the partition function as a Pfaffian, utilizing a polygonal interpretation of the model [17]. Hurst and Green referred to their work as a bridge between the algebraic approach of Onsager and the combinatorial approach of M. Kac and J. C. Ward, which turned out to be more straightforward than both previous works. Samuel's contribution involved rewriting the expression as an integral over Grassmann variables. The Grassmann integral used the _quadratic_ action, and it was possible to reproduce Onsager expression for the free energy. In another work, he repeated the reasoning in three dimensions [11], but the action was _quartic_ one and he wasn't able to obtain an explicit expression for the free energy. The modified proof by Samuel was presented in A. Giuliani's doctoral thesis [16], who reproduced Samuel's proof in the language of high-temperature expansion. Giuliani's work also proved to be much more transparent (although somewhat lengthy and involved) than the original proof by Samuel, which relied on properties of oriented polygons. Giuliani performed an induction proof in which, using elementary methods, he proved what Samuel quickly passed over by referring to the theorem on oriented polygons.
An important observation is the equivalence between the high-temperature (H-T) and low-temperature (L-T) expansions in two dimensions. Both expansions are described as a sum over polygons on the square lattice. But in the H-T expansion, polygons are drawn on the original lattice, whereas in the L-T case, they are drawn on a _dual_ one. In 2D, the square lattice is self-dual. However, this is not the case in 3D. The high-temperature expansion is still related to polygons on the lattice, while the low-temperature expansion is linked to polyhedrons. Samuel used the low-temperature expansion in his works [10], [11]; in the two-dimensional model this choice does not play a significant role. In the three-dimensional model, since both expansions are significantly different, both formulations need to be examined. Our paper uses the high-temperature expansion, whereas Samuel studied the three-dimensional model using the low-temperature expansion.
The structure of the paper is as follows. In Sec. 2, we present a sketch of the solution of the 2D Ising model with the use of Grassmann integrals. This sketch is based on Giulani paper [16], but we also introduce notation that will be used later to simplify formulas. In Sec. 3, we present an attempt to repeat analogous considerations for 3D model and present points where such an approach fails. The Sec. 4 is devoted to summary and conclusions.
## 2 Two-dimensional model: solution using Grassmann variables
We consider the classical Ising model with nearest-neighbour interaction, with coupling constant \(J_{ij}\) equal to \(J_{1}\) and \(J_{2}\) for horizontal and vertical neighbours respectively, and the following Hamiltonian
\[H(\{s_{i}\})=-\sum_{ij}J_{ij}s_{i}s_{j}-\sum_{i}B_{i}s_{i}\]
We rewrite the partition function using the high-temperature expansion
\[Z_{N}=\cosh^{M_{1}}K_{1}\cosh^{M_{2}}K_{2}2^{N}\sum_{P}t_{1}^{r_{1}}t_{2}^{r_{ 2}}\]
where \(N\) is the number of sites, \(M_{i}\) is the number of bonds in each respective direction, \(K_{i}=J_{i}/kT\), \(t_{i}=\tanh K_{i}\). The summation is over all multipolygons, which are geometrical figures composed of bonds on the lattice with an even number of bonds connected to each site. \(r_{i}\) represents the number of bonds in each respective direction.
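For completeness, the expansion rests on the identity, valid for \(s_{i}s_{j}=\pm 1\),

\[e^{K\,s_{i}s_{j}}=\cosh K\left(1+s_{i}s_{j}\tanh K\right).\]

Taking the product of such factors over all bonds and summing over spin configurations, only those terms survive in which every site carries an even power of \(s_{i}\); these are exactly the multipolygons described above.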
Samuel noticed [10] that it is possible to relate a statistical sum to an integral over Grassmann variables. His reasoning reproduced Onsager's result and led to a new method for solving the two-dimensional Ising model. Below, we present a sketch of the reasoning, which serves as a starting point for considering the three-dimensional case. However, it should be noted that the conventions used in this work are taken from Giuliani's paper [16], where he reproduced Samuel's proof in two dimensions using the high-temperature expansion.
Let us consider \(S\), which is a bilinear action in Grassmann variables:
\[S=\sum\nolimits_{i}{}^{\prime}\left(t_{1}\overline{H}_{i}H_{i+\hat{e}_{1}}+t_{2}\overline{V}_{i}V_{i+\hat{e}_{2}}\right)+\sum\limits_{i}\left(\overline{H}_{i}H_{i}+\overline{V}_{i}V_{i}+V_{i}\overline{H}_{i}+\overline{V}_{i}\overline{H}_{i}+V_{i}H_{i}+H_{i}\overline{V}_{i}\right) \tag{1}\]
where \(\hat{e}_{1}\) and \(\hat{e}_{2}\) are the coordinate versors in the horizontal and vertical directions, respectively and the primed sum runs over those sites for which \(i\), \(i+\hat{e}_{1}\), and \(i+\hat{e}_{2}\) belong to the lattice. This form of \(S\) corresponds to open boundary conditions. Then the sum over polygons for open boundary conditions turns out to be equal to:
\[\sum\limits_{P}t_{1}^{r_{1}}t_{2}^{r_{2}}=(-1)^{N}\int\prod\limits_{i}d \overline{H}_{i}dH_{i}d\overline{V}_{i}dV_{i}e^{S} \tag{2}\]
By modifying the action, one can also obtain a formula for periodic boundary conditions. Diagonalizing the functional and then using the properties of Grassmann integrals, one can derive the formula for the free energy obtained by Onsager.
Figure 1: An example of a multipolygon with 10 horizontal edges with constant \(t_{1}\) and 8 vertical ones with constant \(t_{2}\)

Let us examine the method of calculating this integral in the two-dimensional case. Due to the properties of Grassmann variables, non-zero contributions come only from terms containing all 4N variables exactly once. Each of these terms can be represented as a so-called dimer on a new lattice obtained by replacing the sites of the original lattice with four sites on the new lattice, as shown in Figure 2. While for the square the correspondence is one-to-one, for a general multipolygon we might have many different dimers corresponding to it, as we have 3 different ways of representing an empty site. The key to proving the relationship between the integral and the statistical sum is to show that for each multipolygon, all dimers corresponding to it together give a contribution of \((-1)^{n_{\gamma}}t_{1}^{r_{1}}t_{2}^{r_{2}}\), not including empty sites. This weight is composed of the constants \(t_{i}\) appearing in the appropriate powers related to the number of bonds in both directions, and an additional factor called the sign of the polygon. Here, \(n_{\gamma}\) denotes the number of sites belonging to the multipolygon.
Additionally, the integral for a so-called empty site, i.e., a site through which no polygon passes, which is the sum of three possible configurations of variables at that site, gives -1, which can be easily verified:
\[\int d\overline{H}dHd\overline{V}dV(\overline{H}H\overline{V}V+\overline{V} \overline{H}VH+V\overline{H}H\overline{V})=1-1-1=-1\]
Thus, the contribution from the entire lattice for a single multipolygon, including empty sites, is \((-1)^{n_{\gamma}^{\prime}}(-1)^{n_{\gamma}}t_{1}^{r_{1}}t_{2}^{r_{2}}=(-1)^{ N}t_{1}^{r_{1}}t_{2}^{r_{2}}\), \(n_{\gamma}^{\prime}\) is the number of empty sites, here we used the fact that the total number of sites is \(N\). By summing over all configurations of polygons, we obtain the above formula.
Here, we would like to introduce a notation for pairs of variables \(\overline{H}_{i}\), \(H_{i}\) and \(\overline{V}_{i}\), \(V_{i}\). They are of the same type (either H or V), while the pairs \(V_{i}\), \(H_{i}\) and \(\overline{V}_{i}\), \(\overline{H}_{i}\) are from the same group (group 1 and 2, respectively). These variables anti-commute, which means that their pairs commute. Therefore, the pairs of variables can be rearranged without changing the result under the integral. According to the following rule, we choose a direction for the polygon and one of the edges connecting the vertices, and write down its term. Then, we proceed to the next points in the chosen order, writing down the term associated with connecting two variables at each point, followed by the term associated with connecting that point with the next one. For the terms connecting the vertices, if the order of points corresponding to variables in the functional is different from the one resulting from the direction of the contour, we change the order of variables, obtaining either \(-H_{i+\hat{e}_{1}}\overline{H}_{i}\) or \(-V_{i+\hat{e}_{1}}\overline{V}_{i}\). The minus sign is "attached" to the variable from group 2. Finally, we only need to move the first variable from the beginning to the end in order to connect it with the remaining variables belonging to the same point. This introduces an additional minus sign since, apart from the first variable, there is an odd number of variables. The procedure described above, for an example polygon in the form of a square, looks as follows:
Figure 2: An example of a polygon and the corresponding dimer
\[\int\prod_{i}d\overline{H}_{i}dH_{i}d\overline{V}_{i}dV_{i}\,\overline{V}_{A}V_{D}\cdot H_{A}V_{A}\cdot\overline{H}_{A}H_{B}\cdot V_{B}\overline{H}_{B}\cdot\overline{V}_{B}V_{C}\cdot\overline{V}_{C}\overline{H}_{C}\cdot\overline{H}_{D}H_{C}\cdot H_{D}\overline{V}_{D}=\]

\[\int\prod_{i}d\overline{H}_{i}dH_{i}d\overline{V}_{i}dV_{i}\,V_{D}(-\overline{V}_{A})\cdot H_{A}V_{A}\cdot\overline{H}_{A}H_{B}\cdot V_{B}\overline{H}_{B}\cdot\overline{V}_{B}V_{C}\cdot\overline{V}_{C}\overline{H}_{C}\cdot H_{C}(-\overline{H}_{D})\cdot H_{D}\overline{V}_{D}=\]

\[-\int\prod_{i}d\overline{H}_{i}dH_{i}d\overline{V}_{i}dV_{i}\,(-\overline{V}_{A})H_{A}V_{A}\overline{H}_{A}\cdot H_{B}V_{B}\overline{H}_{B}\overline{V}_{B}\cdot V_{C}\overline{V}_{C}\overline{H}_{C}H_{C}\cdot(-\overline{H}_{D})H_{D}\overline{V}_{D}V_{D}\]
Performing the integration for each point, it can be seen that the sign of the square is correct and equal to +1. As shown in the example, this procedure allows us to express the sign of a polygon as (-1) times the product of integrals for individual points. Additionally, the integral for a given node depends solely on the variable through which we enter the site and the one through which we leave it while traversing the polygon in a predetermined direction. Let us define an auxiliary function:
\[F_{2}(X,Y)=\int d\overline{H}dHd\overline{V}dV(\pm X\dots Y) \tag{3}\]
where X and Y are two distinct Grassmann variables, and the dots represent the remaining two variables in the order determined by the action. The sign is + if X belongs to group 1 and - if it belongs to group 2. Thus, the sign of the square can be written as \(-F_{2}(\overline{V},\overline{H})F_{2}(H,\overline{V})F_{2}(V,H)F_{2}( \overline{H},V)\).
Furthermore, we will need a simple property of \(F_{2}\), namely \(F_{2}(X,Y)=\pm F_{2}(Y,X)\), where + corresponds to X and Y being from different groups, and - when they are from the same group. This follows from the necessity of performing an odd number of variable exchanges to swap X and Y. Thus, after such an exchange, the sign under the integral changes, meaning that the integral does not depend on the order of X and Y if they are from different groups.
## 3 Three-dimensional model
Now, we pass to three dimensions. Naturally, in order to demonstrate the connection between the Grassmann integral and the high-temperature representation, we want the integral for any given multipolygon to yield the corresponding contribution \(\alpha t_{1}^{r_{1}}t_{2}^{r_{2}}t_{3}^{r_{3}}\), again not counting empty sites. Analogously to the two-dimensional case, we may want \(\alpha=(-1)^{n_{\gamma}}\) and require the integral for a single point to be -1. However, we will consider a more general case. We desire the sum over multipolygons to be equal to the Grassmann variable integral multiplied by a constant \(A\). Upon expanding \(e^{S}\), we obtain terms for different configurations that, after integration, result in the sum of terms \(Aw^{N-n_{\gamma}}\alpha t_{1}^{r_{1}}t_{2}^{r_{2}}t_{3}^{r_{3}}\), where \(w\) represents the weight of an empty site. We aim to make the weights of the polygons, together with the empty sites and the global constant \(A\), i.e., \(Aw^{N-n_{\gamma}}\alpha\), equal to 1. This gives us a condition that \(\alpha\) is the same for any two polygon configurations with the same \(r_{1}\), \(r_{2}\), and \(r_{3}\), as well as the same number of points belonging to the polygon. In the following part, we will show that this condition cannot be satisfied.
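Written out (our paraphrase of the requirement above), for every multipolygon \(P\) one needs

\[A\,w^{N-n_{\gamma}}\,\alpha(P)\,t_{1}^{r_{1}}t_{2}^{r_{2}}t_{3}^{r_{3}}=t_{1}^{r_{1}}t_{2}^{r_{2}}t_{3}^{r_{3}},\qquad\text{i.e.}\qquad\alpha(P)=\frac{w^{\,n_{\gamma}-N}}{A},\]

so \(\alpha\) may depend on \(P\) only through \(n_{\gamma}\); in particular, it must coincide for any two configurations sharing the same \(r_{1}\), \(r_{2}\), \(r_{3}\) and \(n_{\gamma}\).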
To each node, we add additional Grassmann variables \(U_{i}\) and \(\overline{U}_{i}\). We extend the notions of type and group to incorporate the new variables, where the new variables form a new type U, with the first one joining group 1 and the second one joining group 2. However, we need to specify how to extend the action with the new variables. Naturally, we want all 15 terms connecting the variables at a given point to appear, as well as 3 types of connections between points. Hence, the new action takes the form:
\[S_{3D}=\sum_{i}{}^{\prime}\left(t_{1}\overline{H}_{i}H_{i+\hat{e}_{1}}+t_{2}\overline{V}_{i}V_{i+\hat{e}_{2}}+t_{3}\overline{U}_{i}U_{i+\hat{e}_{3}}\right)+\sum_{i}\left(a_{1}\overline{H}_{i}H_{i}+a_{2}\overline{V}_{i}V_{i}+a_{3}\overline{U}_{i}U_{i}+\right. \tag{4}\]

\[\left.+a_{4}H_{i}V_{i}+a_{5}H_{i}U_{i}+a_{6}H_{i}\overline{V}_{i}+a_{7}H_{i}\overline{U}_{i}+a_{8}V_{i}U_{i}+a_{9}V_{i}\overline{H}_{i}+\right.\]

\[\left.+a_{10}V_{i}\overline{U}_{i}+a_{11}U_{i}\overline{H}_{i}+a_{12}U_{i}\overline{V}_{i}+a_{13}\overline{H}_{i}\overline{V}_{i}+a_{14}\overline{H}_{i}\overline{U}_{i}+a_{15}\overline{V}_{i}\overline{U}_{i}\right)\]
where \(a_{i}\) represent certain constants. The natural choice would be \(a_{i}=\pm 1\), but the final proof holds true even if we allow \(a_{i}\in\mathbb{C}\). Analogous to the two-dimensional case, we consider open boundary conditions. The summation with a prime runs over those points for which \(i\), \(i+\hat{e}_{1}\), \(i+\hat{e}_{2}\), \(i+\hat{e}_{3}\) belong to the lattice. It turns out that regardless of how we choose the action, certain polygons will always yield opposite signs; thus it is impossible to select an action that expresses the statistical sum through the Grassmann variable integral.
Now, let's define the function \(F_{3}(X,Y)\) analogously to \(F_{2}(X,Y)\). The new function differs in that the omitted variables (the dots in the definition) now contain 4 additional variables, and thus the integrand is the sum of 3 possible products of those variables with the new action. Each of these products yields \(\pm 1\), so the possible values of \(F_{3}(X,Y)\) are \(\pm 1\), \(\pm 3\). The polygon weights \(\alpha\) are the product of the \(F_{3}\) functions. In the two-dimensional case, since \(F_{2}(X,Y)=\pm 1\), we had \(\alpha=\pm 1\). However, in three dimensions, \(\alpha\) generally equals \(\pm 3^{k}\) for some natural number \(k\), which means there are more possible weights for polygons. For example, \(F_{3}(H,U)\) will consist of the following 3 terms corresponding to different dimer configurations at the vertex:
\[F_{3}(H,U)=\int d\overline{H}dHd\overline{V}dVd\overline{U}dU\,\big(H\cdot a_{2}\overline{V}V\cdot a_{14}\overline{H}\,\overline{U}\cdot U+H\cdot a_{10}V\overline{U}\cdot a_{13}\overline{H}\,\overline{V}\cdot U+H\cdot a_{9}V\overline{H}\cdot a_{15}\overline{V}\,\overline{U}\cdot U\big)\]

\[=\int d\overline{H}dHd\overline{V}dVd\overline{U}dU\,H\big(a_{2}a_{14}\cdot\overline{V}V\cdot\overline{H}\,\overline{U}+a_{10}a_{13}\cdot V\overline{U}\cdot\overline{H}\,\overline{V}+a_{9}a_{15}\cdot V\overline{H}\cdot\overline{V}\,\overline{U}\big)U\]
where \(a_{i}\) are constants associated with the definition of the action, specifically the order in which we selected the terms for these expressions.
The aim of this study was to demonstrate that no choice of \(\alpha\) and action can guarantee the condition of \(\alpha\) depending solely on \(r_{i}\) and \(n_{\gamma}\). The research commenced with considering polygons on a 2x2x2 lattice, which is the simplest three-dimensional lattice. The analysis focused on the simplest non-planar polygons with two edges in each of the three directions.
Figure 3: Various configurations of connections between four variables \(V\), \(\overline{V}\), \(\overline{H}\), \(\overline{U}\) corresponding to successive expressions under the integral.
Two exemplary polygons are visible in the figure below. The Grassmann variables are denoted as \(X\), \(X^{\prime}\), \(Y\), \(Y^{\prime}\), \(Z\), \(Z^{\prime}\), which are, of course, the six variables defined earlier, but after a certain rotation.
[Figure: two exemplary polygons on the \(2\times 2\times 2\) lattice]
the same type. Consequently, they belong to different groups and because \(X^{\prime}\) has a fixed group, either \(Z\) or \(Z^{\prime}\) should belong to the same group as \(X^{\prime}\). We know that swapping the arguments of a function changes the sign only if the arguments belong to the same group. Thus, in one of these products, we have two terms with the same sign, and in the other, we have terms with opposite signs. We don't know which pair has which sign, but we can certainly state that these two pairs have opposite signs, that is, \(sgn(F_{3}(X^{\prime},Z)F_{3}(Z,X^{\prime}))=-sgn(F_{3}(X^{\prime},Z^{\prime})F _{3}(Z^{\prime},X^{\prime}))\). Let's analyze the last pairs. Notice that their product satisfies \(F_{3}(Z^{\prime},Y^{\prime})F_{3}(Y,Z^{\prime})\cdot F_{3}(Z,Y^{\prime})F_{3}( Y,Z)=F_{3}(Z^{\prime},Y^{\prime})F_{3}(Y,Z^{\prime})\cdot(-F_{3}(Y^{\prime},Z)F _{3}(Z,Y))\). We obtain this from similar reasoning as before. The terms \(Z,Y^{\prime}\) or \(Z,Y\) belong to the same group, while the remaining pair of terms belongs to different groups. Swapping the arguments in one of the terms, either \(F_{3}(Y^{\prime},Z)\) or \(F_{3}(Z,Y)\), changes the sign, while it remains the same in the other term. We don't know the behavior of each term, but their product certainly changes the sign. If we change the order of the terms, we get \(F_{3}(Z^{\prime},Y^{\prime})F_{3}(Y,Z^{\prime})\cdot F_{3}(Z,Y^{\prime})F_{3}( Y,Z)=-F_{3}(Z^{\prime},Y^{\prime})F_{3}(Y,Z^{\prime})\cdot F_{3}(Z,Y)F_{3}(Y^{ \prime},Z)\), which is the exact form of the expression for \(\alpha\) for the square. Here, we use our assumption \(sgn(\alpha)=+1\). Therefore, \(sgn(F_{3}(Z^{\prime},Y^{\prime})F_{3}(Y,Z^{\prime})\cdot F_{3}(Z,Y^{\prime})F _{3}(Y,Z))=+1\), which implies that \(sgn(F_{3}(Z^{\prime},Y^{\prime})F_{3}(Y,Z^{\prime}))=sgn(F_{3}(Z,Y^{\prime})F _{3}(Y,Z))\). Comparing the terms for \(\alpha_{1}\) and \(\alpha_{2}\) gives us the relationship \(sgn(\alpha_{1})=-sgn(\alpha_{2})\). Of course, since the range of values for \(F_{3}\) does not include 0, \(\alpha\) is different from 0. Therefore, the condition \(sgn(\alpha_{1})=-sgn(\alpha_{2})\) contradicts the assumption that the constant alpha depends only on \(r_{1}\), \(r_{2}\), \(r_{3}\).
Case 2: \(sgn(\alpha(r_{1},r_{2},r_{3}))=-1\) for \(r_{1}\), \(r_{2}\), \(r_{3}\) corresponding to the simplest polygon, namely a square with a side length of 1, located in a chosen plane.
Let's consider two configurations on a piece of the distinguished plane.
[Figure: the two considered configurations]
Both configurations have 4 edges in each direction and 8 points belonging to polygons through which the edges pass, consequently they have the same number of empty sites. The constants \(\alpha\) for both configurations can be calculated using the function \(F_{3}\).
\[\alpha_{1} =-F_{3}(Y^{\prime},X^{\prime})F_{3}(X,X^{\prime})F_{3}(X,Y^{\prime})F_{3}(Y,Y^{\prime})F_{3}(Y,X)F_{3}(X^{\prime},X)F_{3}(X,Y)F_{3}(Y^{\prime},Y)\] \[=-F_{3}(Y^{\prime},X^{\prime})F_{3}(X,Y^{\prime})F_{3}(Y,X)F_{3}(X,Y)\cdot F_{3}(X,X^{\prime})F_{3}(X^{\prime},X)\cdot F_{3}(Y,Y^{\prime})F_{3}(Y^{\prime},Y)\]

\[\alpha_{2} =\left(-F_{3}(Y^{\prime},X^{\prime})F_{3}(X,Y^{\prime})F_{3}(Y,X)F_{3}(X,Y)\right)\cdot\left(-F_{3}(Y^{\prime},X^{\prime})F_{3}(X,Y^{\prime})F_{3}(Y,X)F_{3}(X,Y)\right)\]
Let us examine the signs of both constants. In \(\alpha_{2}\), we have the product of two terms, each of which is an expression for the unit square with a sign of -1. The product of two such expressions gives a sign of +1, that is, \(sgn(\alpha_{2})=+1\). In the first term, \(\alpha_{1}\), we again recognize an expression for the constant of the unit square, which has a sign of -1. The remaining two
expressions are products of two \(F_{3}\) with reversed arguments. Knowing that both pairs X, X' and Y, Y' have elements from different groups, we conclude that reversing the arguments does not change the sign. Therefore, the signs of both \(F_{3}(X,X^{\prime})F_{3}(X^{\prime},X)\) and \(F_{3}(Y,Y^{\prime})F_{3}(Y^{\prime},Y)\) are positive, which gives us \(\mathit{sgn}(\alpha_{1})=-1\), once again contradicting the thesis.
This proof demonstrates the failure of Samuel's method even on the smallest lattices. However, it turns out that this can be shown much more simply using a slightly larger lattice. The idea is based on creating two polygons that not only have the same number of edges in each direction but are also composed of the same corners. The only difference between the polygons is the directions in which certain corners are traversed, as changing the direction can introduce an additional sign of '-'. Clever construction of the polygons allows for a much simpler proof. Additionally, the previous proof required the constants \(a_{i}\) to be real numbers. However, this proof does not rely on this assumption and holds true even for complex numbers.
Consider the following two polygons:
[Figure: the two polygons considered in the simplified proof]
## 4 Summary and conclusions
In our paper, we examined the possibility of computing the partition function for the 3D Ising model as a Grassmann integral over a bilinear action. This possibility is well known in 2D, but mentions of the non-extensibility of such an approach to 3D were rare and loose. We wanted to take a closer look at this possibility and understand the reason for this presumed non-extensibility. We have examined the most general expression for a bilinear action in Grassmann variables, the integral of which would give the polygonal form (high-temperature expansion) of the partition function. However, it turned out that such an expression _does not_ replicate the partition function: the number of polygons is correct, but some of them enter with a _negative sign_. In other words, it has been shown that it is not possible to derive the expression for the partition function of the 3D Ising model using a translationally invariant bilinear action. This can be interpreted as a manifestation of the fact that the mathematics behind the 2D and 3D Ising models is essentially different: the 2D Ising model can be viewed as a system of non-interacting fermions, whereas in 3D, the fermionic picture inevitably leads to interacting fermions [11], [25].
The technique of Grassmann integrals is intimately connected with _dimers_. For dimers on 2D lattices, there are many theorems that allow for effective computation of dimer coverings [17], [19], [23], [24]. As far as the authors know, there are no analogous theorems in 3D. Some attempts in this direction were mentioned in a few papers, for instance [20], [21], [26], but they did not lead to a general 3D case. It would be very interesting to relate Grassmann integrals in 3D to dimers on 3D lattices.
|
2309.10264 | Revisiting and Improving Retrieval-Augmented Deep Assertion Generation | Unit testing validates the correctness of the unit under test and has become
an essential activity in software development process. A unit test consists of
a test prefix that drives the unit under test into a particular state, and a
test oracle (e.g., assertion), which specifies the behavior in that state. To
reduce manual efforts in conducting unit testing, Yu et al. proposed an
integrated approach (integration for short), combining information retrieval
(IR) with a deep learning-based approach, to generate assertions for a unit
test. Despite being promising, there is still a knowledge gap as to why or where
integration works or does not work. In this paper, we describe an in-depth
analysis of the effectiveness of integration. Our analysis shows that: 1) The
overall performance of integration is mainly due to its success in retrieving
assertions. 2) integration struggles to understand the semantic differences
between the retrieved focal-test (focal-test includes a test prefix and a unit
under test) and the input focal-test; 3) integration is limited to specific
types of edit operations and cannot handle token addition or deletion. To
improve the effectiveness of assertion generation, this paper proposes a novel
retrieve-and-edit approach named EditAS. Specifically, EditAS first retrieves a
similar focal-test from a pre-defined corpus and treats its assertion as a
prototype. Then, EditAS reuses the information in the prototype and edits the
prototype automatically. EditAS is more generalizable than integration. We
conduct experiments on two large-scale datasets and experimental results
demonstrate that EditAS outperforms the state-of-the-art approaches, with an
average improvement of 10.00%-87.48% and 3.30%-42.65% in accuracy and BLEU
score, respectively. | Weifeng Sun, Hongyan Li, Meng Yan, Yan Lei, Hongyu Zhang | 2023-09-19T02:39:02Z | http://arxiv.org/abs/2309.10264v1 | # Revisiting and Improving Retrieval-Augmented Deep Assertion Generation
###### Abstract
Unit testing validates the correctness of the unit under test and has become an essential activity in the software development process. A unit test consists of a test prefix that drives the unit under test into a particular state, and a test oracle (e.g., assertion), which specifies the behavior in that state. To reduce manual efforts in conducting unit testing, Yu et al. proposed an integrated approach (_integration_ for short), combining information retrieval with a deep learning-based approach, to generate assertions for a unit test. Despite being promising, there is still a knowledge gap as to why or where _integration_ works or does not work. In this paper, we describe an in-depth analysis of the effectiveness of _integration_. Our analysis shows that: (1) the overall performance of _integration_ is mainly due to its success in retrieving assertions; (2) _integration_ struggles to understand the semantic differences between the retrieved focal-test (_focal-test_ includes a test prefix and a unit under test) and the input focal-test, resulting in many tokens being incorrectly modified; (3) _integration_ is limited to specific types of edit operations (i.e., replacement) and cannot handle token addition or deletion. To improve the effectiveness of assertion generation, this paper proposes a novel retrieve-and-edit approach named EditAS. Specifically, EditAS first retrieves a similar focal-test from a pre-defined corpus and treats its assertion as a prototype. Then, EditAS reuses the information in the prototype and edits the prototype automatically. EditAS is more generalizable than _integration_ because it can (1) comprehensively understand the semantic differences between input and similar focal-tests; (2) apply appropriate assertion edit patterns with greater flexibility; and (3) generate more diverse edit actions than just replacement operations. We conduct experiments on two large-scale datasets and the experimental results demonstrate that EditAS outperforms the state-of-the-art approaches, with an average improvement of 10.00%-87.48% and 3.30%-42.65% in accuracy and BLEU score, respectively.
Unit Testing, Assertion Generation, Test Assertion, Deep Learning
## I Introduction
Unit testing is a crucial activity of software development, which involves testing an individual unit of a software application, such as a method, a class, or a module. While integration and system testing assess the overall performance of a system, unit testing focuses on validating that each unit of code works as intended and conceived by the developer, thereby detecting and diagnosing failures before they propagate throughout the system and preventing regressions. Effective unit tests can improve the quality of software, reduce the incidence and cost of software failures [1, 2], and enhance the entire software development process.
A unit test comprises a _test prefix_, which is a series of statements that manipulate a unit under test to attain a specific state, and a _test oracle_, which typically includes an assertion that specifies the expected behavior under that state [3]. For instance, in the unit test illustrated in Figure 1, lines 3-6 constitute the test prefix, which creates two CategoryAxis3D objects, with categoryAxis3D0 set to invisible; the test utilizes a boolean variable to check whether categoryAxis3D0 and categoryAxis3D1 are equal. On lines 7-8, the assertion specifies the expected outcome: after executing the test prefix, the visibility property of categoryAxis3D0 should be False, and the two objects should not be equivalent.
Despite the significant benefits of testing, creating effective unit tests is a non-trivial and time-consuming task. Previous studies have indicated that developers can spend more than 15% of their time on test generation [4]. To streamline unit test generation, various automated testing tools have been proposed, such as Randoop [5] and EvoSuite [6]. However, these test-generation tools prioritize generating high-coverage tests over meaningful assertions and face challenges in comprehending the intended program behavior. For example, an industrial evaluation [7] of assertions generated by EvoSuite has shown that "in manually written tests, the assertions are meaningful and useful unlike the generated ones." As a result, a lot of manual effort is still required in conducting unit testing.
Fig. 1: Example of a unit test case.

To reduce the effort for oracle generation, Watson et al. introduced ATLAS [8], a deep learning (DL)-based technique that trains a neural generative model on an extensive corpus of existing unit tests. ATLAS operates by taking a pair consisting of a test prefix and its _focal method_ (i.e., the method under test) that encompasses both the method names and the method bodies. For consistency with prior research [9], we refer to such a pair as _focal-test_. ATLAS then generates an assertion for the focal-test from scratch. Neural models, unlike specification-mining-based approaches, are more flexible, particularly when documentation is imprecise or incomplete. Thus, ATLAS offers a more adaptable solution to the test assertion problem. However, the effectiveness of ATLAS is limited by several issues: 1) ATLAS generally prefers high-frequency words in the corpus and may have trouble with low-frequency words, such as project-specific identifiers. 2) ATLAS has poor performance when generating long sequences of tokens as assertions.
Recently, Yu et al. [9] proposed an information retrieval (IR)-based assertion generation method, including IR-based assertion retrieval (\(IR_{ar}\)) and retrieved-assertion adaptation (\(RA_{adapt}\)) techniques. \(IR_{ar}\) takes the same input as ATLAS and retrieves the most similar assertion to the given focal-test based on the Jaccard similarity coefficient [10]. Then, \(RA_{adapt}\) further adjusts the tokens in the retrieved assertion based on the context. Furthermore, an integrated approach [9] (abbreviated as _Integration_) that combines the IR-based approach with a DL-based approach has been proposed to improve assertion generation capabilities. The integrated method verifies the compatibility between the retrieved assertion and the current focal-test. If the compatibility exceeds a threshold, the retrieved assertion is returned as the final result. Otherwise, the DL-based method generates the assertion. Experimental results show that the integrated approach achieves higher accuracy and BLEU score than ATLAS. While the performance is a notable achievement in assertion generation research, knowledge gaps still exist regarding why or where the proposed technique is effective or ineffective.
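As a minimal sketch of the \(IR_{ar}\) retrieval step in Python (tokenization and tie-breaking are simplified, and the names are ours, not the original implementation's):

```python
def jaccard(a_tokens, b_tokens):
    """Jaccard similarity coefficient between two token sequences."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_assertion(query_focal_test, corpus):
    """Return the assertion whose focal-test in `corpus` is most similar to
    the query; `corpus` is a list of (focal_test_tokens, assertion) pairs."""
    focal_test, assertion = max(
        corpus, key=lambda pair: jaccard(query_focal_test, pair[0]))
    return assertion
```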
To fill this gap in the literature, we first conduct a comprehensive evaluation of _Integration_ to gain a better understanding of its application scenarios. We conduct the empirical assessment on two public datasets adopted by Yu et al. [9], namely \(Data_{old}\) and \(Data_{new}\). The main findings are:
* _Integration_ relies mainly on the IR-based method for generating assertions. For instance, _Integration_ utilizes the IR-based approach to generate 80.06% of the assertions for \(Data_{new}\). Additionally, for \(Data_{old}\) and \(Data_{new}\), 83.38% and 92.47% of correct assertions generated by the IR-based approach are identical to the retrieved assertions.
* _Integration_ only replaces tokens during its adaptation operation, making it challenging to generalize to complex assertion generation scenarios.
* _Integration_ incorrectly modifies assertions frequently, even when the retrieved assertion matches the ground truth exactly. When one token needs to be modified to get the correct result, the average accuracy of _Integration_ is just 20.14% and drops to only 1.91% for more than five tokens.
* _Integration_ fails to comprehend the semantic differences between focal-tests, resulting in many cases where retrieved assertions require modifications, but it is returned directly as expected assertion.
Overall, our empirical study reaffirms previous findings that, based on the similarity of focal-tests, we can always find "almost correct" assertions that are very similar to the correct ones. However, we also identify several limitations that restrict the effectiveness and generalizability of _Integration_. On the one hand, the single-token replacement operation struggles with complex assertion edit scenarios, resulting in low accuracy (1.91%) of _Integration_ when modifying more than five tokens. On the other hand, _Integration_'s inability to comprehend semantic differences between input and similar focal-tests causes it to make incorrect editing actions or fail to edit retrieved assertions when necessary.
To alleviate the above-mentioned limitations and achieve higher levels of performance, this paper proposes EditAS, a novel retrieve-and-edit approach for assertion generation. The effectiveness of IR implies that similar focal-tests' assertions can be reused. In other words, certain tokens in the expected assertion are also highly probable to appear in the retrieved assertion. The improvements by the specification-mining-based and retrieved-assertion adaptation approaches have revealed the significance of assertion patterns. Inspired by this, EditAS views the assertion from a similar focal-test as a prototype and leverages a neural sequence-to-sequence model to learn the assertion edit patterns used to modify the prototype. Our motivation is that the retrieved assertion guides the neural model on "how to assert" and the assertion edit pattern highlights to the neural model "what to assert".
EditAS consists of two major components: a Retrieval component and an Edit component. Given an input focal-test, the Retrieval component obtains its similar focal-test from a corpus and utilizes the retrieved focal-test's corresponding assertion as a prototype. In the Edit component, a sequence-to-sequence neural network is trained to edit the prototype, based on edit sequences representing the semantic differences between the input and the similar focal-test. On the one hand, EditAS effectively mitigates the long assertion generation problem, which is the performance bottleneck of ATLAS, by editing assertions instead of generating them from scratch. Additionally, EditAS is more generalizable than _Integration_ because it can 1) comprehensively understand the semantic differences between focal-tests; 2) apply appropriate assertion edit patterns with greater flexibility; and 3) generate more diverse edit actions than just replacement operations.
The IR-based assertion retrieval method (\(IR_{ar}\)), the retrieved-assertion adaptation methods including \(RA_{adapt}^{H}\) and \(RA_{adapt}^{NN}\), the integrated approach _Integration_, and ATLAS are used as baselines. We evaluate EditAS and the baselines on both \(Data_{old}\) and \(Data_{new}\) datasets in terms of accuracy and BLEU. The evaluation results show that EditAS outperforms all baselines across all metrics in assertion generation. Specifically, EditAS achieves the highest accuracy, outperforming the baselines by 14.87%-70.15% and 5.12%-104.80% on the two datasets, respectively.
In summary, the contributions of this paper include:
1. We conduct an in-depth analysis of the state-of-the-art assertion generation method that combines DL and IR techniques. Our analysis results provide valuable insights for future research in this direction.
2. We propose a novel retrieve-and-edit approach, namely EditAS, for assertion generation. EditAS utilizes assertions from similar focal-tests as prototypes and uses a neural sequence-to-sequence model to learn the assertion edit patterns.
3. We conduct extensive experiments to evaluate our approach. The experimental results show that EditAS significantly outperforms all baselines.
4. We open-source our replication package1, including the dataset, the source code of EditAS, the trained model, and assertion generation results, for follow-up studies.
Footnote 1: [https://github.com/swf-cqu/EditAS](https://github.com/swf-cqu/EditAS)
## II Background and related work
### _DL-based Assertion Generation_
With the rise of deep learning (DL), an increasing number of software engineering tasks can be effectively tackled using advanced DL techniques, such as code search [11, 12], automated program repair [13, 14, 15, 16, 17], fault diagnosis [18], code summarization [19, 20, 21, 22], and code clone detection [23, 24, 25, 26, 27]. Watson et al. [8] recently proposed ATLAS, the first DL-based assertion generation method. ATLAS employs Neural Machine Translation (NMT) to generate assertions for a given _focal-test_, which consists of a test method without any assertion (i.e., test prefix) and its focal method (i.e., the method under test), including both method names and method bodies. To develop ATLAS, Watson et al. first extracted Test-Assert Pairs (TAPs) from GitHub projects that use the JUnit testing framework. Each pair consists of a focal-test and its corresponding assertion. The initial TAP set is then preprocessed into two datasets: 1) raw source code, where TAPs are simply tokenized (see the top part of Figure 2), and 2) abstract code, where uncommon tokens are further represented by their respective types and sequential IDs from the raw source code (see the bottom part of Figure 2). Ultimately, ATLAS generates a meaningful assertion to verify the correctness of the focal method.
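For illustration, the abstraction step can be sketched as below; ATLAS's actual pipeline derives token types from lexer/parser information, so the `infer_type` callback here is a placeholder for that machinery.

```python
def abstract_tokens(tokens, vocabulary, infer_type):
    """Replace tokens outside the common vocabulary with typed, sequentially
    numbered placeholders (e.g., METHOD_0, VAR_1), mirroring Figure 2."""
    mapping, counters, out = {}, {}, []
    for tok in tokens:
        if tok in vocabulary:
            out.append(tok)
            continue
        if tok not in mapping:
            t = infer_type(tok)  # e.g., 'METHOD' or 'VAR'
            counters[t] = counters.get(t, 0) + 1
            mapping[tok] = f"{t}_{counters[t] - 1}"
        out.append(mapping[tok])
    return out, mapping  # mapping allows translating predictions back to raw code
```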
Recent research [28, 29, 30] has explored the potential of pre-trained models, such as T5 and BART, for supporting the assertion generation task through pre-training and fine-tuning. Specifically, these approaches involve pre-training a transformer model using a large corpus of either source code or English language, followed by fine-tuning it for assertion generation. Dinella [3] proposed TOGA to address the assertion generation problem by using a ranking architecture over a set of candidate test assertions. TOGA employs grammar along with type-based constraints to limit the generation space of candidates and uses a transformer-based neural approach to obtain the most probable assertions. We find that the formal implementation of TOGA [31, 32] requires a seed/approximate assertion to determine the variables for constructing and optimizing the candidate assertion set. Also, the seed/approximate assertion needs to be created manually or via the EvoSuite tool [6]. Unlike TOGA, this paper focuses on the problem of assertion generation using only the focal-test.
### _Combining Information Retrieval and Deep Learning for Assertion Generation_
The community has been working to boost DL techniques in software engineering tasks by leveraging Information Retrieval (IR) techniques and achieved promising results. Drawing on the idea of "combining IR and DL", Yu et al. [9] proposed a novel integration approach to tackle the assertion generation problem. Their contributions include two IR-based approaches for assertion generation, namely IR-based assertion retrieval (\(IR_{ar}\)) and retrieved-assertion adaptation (\(RA_{adapt}\)).
The basic idea of \(IR_{ar}\) is to retrieve the assertion whose corresponding focal-test has the highest similarity (e.g., Jaccard [10] similarity) with the given focal-test, and the retrieved assertion is returned as an expected assertion. As \(IR_{ar}\) does not always retrieve completely accurate assertions, \(RA_{adapt}\) has been proposed to automatically adapt retrieved assertions to the right forms utilizing contextual information. For a retrieved assertion, \(RA_{adapt}\) performs the following adaptation process: 1) deciding whether the assertion should be modified; 2) deciding which token (i.e., invoked method, variable, or constant) should be modified; 3) deciding what value a candidate token should be replaced with. Yu et al. have proposed two replacement strategies for determining the replacement value: one based on heuristics (\(RA_{adapt}^{H}\)) and the other on neural networks (\(RA_{adapt}^{NN}\)). \(RA_{adapt}^{H}\) utilizes lexical similarity for code replacement. In contrast to \(RA_{adapt}^{H}\), \(RA_{adapt}^{NN}\) further enhances lexical similarity by incorporating semantic information using a neural network architecture and computes replacement values for code adaptation. Finally, Yu et al. [9] combine IR and DL techniques and propose an integrated approach, referred to as _Integration_ in this paper. _Integration_ first retrieves assertions using Jaccard similarity and adjusts the retrieved assertion if necessary. Then, _Integration_ uses a semantic compatibility inference model to compute the "compatibility" of the adjusted assertion and the current
Fig. 2: Abstraction process.
focal-test. If the compatibility is below a specified threshold (denoted as \(t\)), _Integration_ switches to ATLAS to generate an assertion from scratch. The value of \(t\) is determined based on the validation set. Given that \(RA_{adapt}^{NN}\) outperforms other IR-based approaches, including \(IR_{ar}\) and \(RA_{adapt}^{H}\), we adopt the combination of \(RA_{adapt}^{NN}\) and ATLAS to explore the optimal performance of _Integration_.
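The switching mechanism of _Integration_ can be summarized in a few lines; the sketch below is ours, and `retrieve`, `ra_adapt_nn`, `compatibility`, and `atlas` are assumed callables standing in for the respective components, not the authors' actual interfaces.

```python
def integration(focal_test, retrieve, ra_adapt_nn, compatibility, atlas, t):
    """Adapt a retrieved assertion; fall back to ATLAS below threshold t."""
    retrieved = retrieve(focal_test)              # IR step (Jaccard retrieval)
    adapted = ra_adapt_nn(focal_test, retrieved)  # adapt the retrieved assertion
    if compatibility(adapted, focal_test) >= t:   # semantic compatibility model
        return adapted
    return atlas(focal_test)                      # generate from scratch
```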
## III The empirical exploration of _Integration_
The empirical study aims to investigate the application scenarios of _Integration_[9] and gain insight into its mechanisms, i.e., where and why _Integration_ works and where it does not. To this end, we design three research questions.
* **RQ1**: _What are the characteristics of the dataset?_ We first perform an in-depth analysis of the dataset. In particular, we analyze the distribution of assertion lengths on the entire dataset. Next, for each Test-Assert Pair \(\mathrm{TAP}_{i}\) in the test set, we retrieve the assertion from the training set whose corresponding focal-test has the highest similarity to the focal-test of \(\mathrm{TAP}_{i}\) and compute the edit distance between the assertion of \(\mathrm{TAP}_{i}\) and the retrieved assertion. Such investigation not only validates the dataset's characteristics but also helps us comprehend the intrinsic association between TAPs and their similar instances, which can guide our method design (see Section IV).
* **RQ2**: _Where and why does Integration work?_ We further explore the advantages of _Integration_ for assertion generation. We first classify the results generated by _Integration_ according to the assertion generation method used. Next, we analyze the assertions that are generated correctly.
* **RQ3**: _Where and why does Integration fail?_ We investigate the weaknesses of _Integration_, which is critical because understanding the application scenarios of an approach helps us apply it better in practice [33].
### _Dataset_
We use the two publicly available datasets [34] from Yu et al. for our experiments, namely \(Data_{old}\) and \(Data_{new}\). To make our paper self-contained, we briefly describe \(Data_{old}\) and \(Data_{new}\) in the following paragraphs.
#### III-A1 \(Data_{old}\)
\(Data_{old}\) is derived from the original dataset used by ATLAS. Initially, \(Data_{old}\) is extracted from a pool of 2.5 million test methods in GitHub, which include test prefixes and their corresponding assertion statements. For each test method, \(Data_{old}\) includes the focal method, i.e., the production code under test. \(Data_{old}\) is then preprocessed to exclude test methods with token lengths exceeding 1K and to filter out assertions containing _unknown_ tokens, i.e., tokens present neither in the focal-test nor in the vocabulary, following established practice in natural language processing [35, 36]. As an example, the bottom part of Figure 3 highlights the unknown tokens childCompoundWrite, apply, and EmptyNode. After removing duplicates, \(Data_{old}\) contains 156,760 data items, which are further divided into training, validation, and test sets at an 8:1:1 ratio.
#### III-A2 \(Data_{new}\)
The exclusion of assertions with unknown tokens can oversimplify the assertion generation problem, making \(Data_{old}\) unsuitable for representing the realistic data distribution. This, in turn, poses a significant threat to the validity of experimental conclusions. Therefore, Yu et al. [9] constructed an expanded dataset, denoted as \(Data_{new}\), by adding back the cases with unknown tokens that were excluded from \(Data_{old}\). Besides the existing data items of \(Data_{old}\), \(Data_{new}\) contains an extra 108,660 samples with unknown tokens, for a total of 265,420 data items, which are also divided into training, validation, and test sets in an 8:1:1 ratio.
### _Study Results_
#### III-B1 RQ1: What are the characteristics of the dataset
Figure 4 shows the distribution of assertion lengths, where the X axis represents assertion length and the Y axis the number of corresponding assertions. Both datasets reveal a similar trend: assertions are typically shorter than 30 tokens, and the number of assertions decreases as their length increases. The average length of assertions in \(Data_{new}\) is 13 tokens, while in \(Data_{old}\) it is 12 tokens. This finding suggests that, in practice, testers may prefer brief assertions, since long ones can cause maintenance difficulties.

Fig. 3: Assertion with known _vs._ unknown tokens.

Fig. 4: Length distribution of assertions of each dataset.
Table I further provides the edit distances between test samples' ground truth (i.e., expected assertions) and the retrieved assertions. Interestingly, we observe that a significant proportion of assertions match the retrieved assertions exactly, accounting for \(37.90\%=10,059/26,542\) and \(36.26\%=5,684/15,676\) in \(Data_{new}\) and \(Data_{old}\), respectively. In addition, for \(Data_{new}\) and \(Data_{old}\), 65.15% and 67.70% of the test samples need the retrieved assertion to be modified by at most five tokens to produce a correct assertion. Such findings provide evidence of the effectiveness of the information retrieval (IR) approach: by identifying similar focal-tests, we can find nearly or exactly matching assertions.
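For concreteness, the token-level edit distance underlying this analysis can be computed with the standard Levenshtein recurrence; the sketch below is ours and assumes whitespace-tokenized assertions.

```python
def edit_distance(a, b):
    """Token-level Levenshtein distance between two tokenized assertions."""
    prev_row = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        row = [i]
        for j, y in enumerate(b, 1):
            row.append(min(row[j - 1] + 1,                 # insertion
                           prev_row[j] + 1,                # deletion
                           prev_row[j - 1] + (x != y)))    # substitution
        prev_row = row
    return prev_row[-1]

gt  = "assertEquals ( 0 , write . childCount ( ) )".split()
ret = "assertEquals ( 1 , write . childCount ( ) )".split()
print(edit_distance(gt, ret))  # -> 1: a single-token substitution suffices
```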
#### III-B2 RQ2: Where and why does Integration work
To gain insight into where and why _Integration_ works, we analyze its prediction results. Given that _Integration_ combines ATLAS and an IR-based assertion generation method (\(RA^{NN}_{adapt}\) in this paper), we categorize _Integration_'s generated results based on the assertion generation method used. Our analysis reveals that \(RA^{NN}_{adapt}\) generates 21,250 (\(80.06\%=21,250/26,542\)) assertions for \(Data_{new}\) and 10,215 (\(65.16\%=10,215/15,676\)) assertions for \(Data_{old}\). The results indicate that _Integration_ prefers assertions generated by the IR-based approach.
As the IR-based approach contributes most to _Integration_, we conduct further analysis on the correct assertions generated by \(RA^{NN}_{adapt}\). Table II shows the edit distance between the retrieved assertions and the correct ones generated by \(RA^{NN}_{adapt}\) for \(Data_{new}\) and \(Data_{old}\). From the table, it can be seen that the success of \(RA^{NN}_{adapt}\) rests mainly on retrieving assertions that are identical to the ground truth; e.g., samples with \(E=0\) account for \(92.47\%=9,839/10,640\) of the correct assertions in \(Data_{new}\). \(RA^{NN}_{adapt}\) enhances the assertion generation of _Integration_ through adaptation operations, but is more effective for single- or two-token modifications than for larger edits. For example, on the \(Data_{new}\) dataset, \(RA^{NN}_{adapt}\) achieves an accuracy of \(15.21\%=437/(437+2,436)\) when only one token is altered and \(7.75\%=67/(67+798)\) when three tokens are modified.
The majority of the successful assertions produced by \(RA^{NN}_{adapt}\) are exactly the same as the retrieved ones. In terms of adaptation operations, \(RA^{NN}_{adapt}\) performs better on single-token modifications than on other modification cases.
#### III-B3 RQ3: Where and why does Integration fail
Previous work [9] has demonstrated that one of the bottlenecks of ATLAS's performance is generating assertions with long sequences. Hence, in RQ3, we focus on exploring where and why \(RA^{NN}_{adapt}\) fails. We first collect the incorrect assertions generated by \(RA^{NN}_{adapt}\) and calculate the edit distance between the retrieved assertions and their ground truth. As shown in Table II, for \(Data_{new}\), there are 130 test samples whose retrieved assertions match the ground truth but are nevertheless modified by \(RA^{NN}_{adapt}\); a similar phenomenon occurs in \(Data_{old}\). The performance of \(RA^{NN}_{adapt}\)'s adaptation strategy is limited: for instance, when \(E=1\), \(RA^{NN}_{adapt}\) incorrectly generates \(84.79\%=2,436/(437+2,436)\) of assertions for \(Data_{new}\) and \(74.94\%=1,453/(1,453+486)\) for \(Data_{old}\). The performance of \(RA^{NN}_{adapt}\) is even worse when \(E>5\), with an average accuracy of only 1.91%.
_Integration_ frequently modifies assertions incorrectly, even when the retrieved assertion matches the ground truth exactly. When a single-token modification is required to obtain the correct assertion, the average accuracy is only 20.14%.
We further count the number of edits made by \(RA^{NN}_{adapt}\) for those incorrect assertions. As reported in Table III, a significant proportion of samples have retrieved assertions that \(RA^{NN}_{adapt}\) leaves entirely unchanged; e.g., in \(Data_{new}\) there are 9,436 such samples, representing 88.93% of the total number of incorrect assertions. Our analysis suggests that this may be due to the difficulty \(RA_{adapt}^{NN}\) has in understanding the semantic differences between focal-tests (an example is shown in Figure 7). Yu et al. [9] assume that an assertion needs to be edited only if it contains at least one token absent from the input focal-test. However, this setting ignores many instances where changes are necessary to adapt to the context of the input focal-test even though every token of the retrieved assertion occurs in it.
_Integration_ fails to comprehend the semantic differences between focal-tests, resulting in many cases where retrieved assertions require modifications but are not edited appropriately.
## IV Retrieve-and-Edit Assertion Generation
Drawing on the insights garnered from our empirical study, we propose EditAS, a retrieve-and-edit approach for generating assertions. Unlike _Integration_, which only considers replacement operations to adjust retrieved assertions, EditAS can learn diverse assertion edit patterns from similar TAPs. Given a focal-test as input, EditAS retrieves from a corpus the most similar TAP based on focal-test similarity and generates a new assertion from the retrieved TAP's assertion and the corresponding edit patterns. The overall framework of EditAS is illustrated in Figure 5. EditAS consists of a Retrieval component and an Edit component.
### _Retrieval component_
In our approach, the Retrieval component retrieves a similar Test-Assert Pair (TAP) from a corpus for a given input focal-test. Specifically, to facilitate efficient retrieval, EditAS first tokenizes each focal-test in the training and test sets using javalang [37] and removes duplicate tokens. Then, the Retrieval component retrieves the TAP whose focal-test has the highest Jaccard similarity coefficient with the input focal-test. The Jaccard similarity is a text similarity measure based on the overlap of words between two texts, calculated using the following formula:
\[J(A,B)=\frac{|A\cap B|}{|A\cup B|} \tag{1}\]
where \(A\) and \(B\) are two bags of words and \(|\cdot|\) denotes the number of elements in a collection.
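A minimal sketch of this retrieval step (ours): focal-tests are assumed to be already tokenized and de-duplicated into sets, as the paper does with javalang, and the corpus layout below is our own choice.

```python
def jaccard(a, b):
    """Eq. (1) on two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def retrieve(ft_tokens, corpus):
    """Return the corpus TAP whose focal-test token set maximizes Jaccard similarity."""
    return max(corpus, key=lambda tap: jaccard(ft_tokens, tap["focal_test"]))

corpus = [
    {"focal_test": {"toURLs", "files", "new", "File", "[", "]"},
     "assertion": "assertEquals ( 0 , urls . length )"},
    {"focal_test": {"toFiles", "urls", "new", "URL", "[", "]"},
     "assertion": "assertEquals ( 0 , files . length )"},
]
query = {"toURLs", "files", "new", "File", "[", "]", "empty"}
print(retrieve(query, corpus)["assertion"])  # retrieves the toURLs TAP's assertion
```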
### _Edit component_
We employ a neural edit model to learn diverse assertion edit patterns from similar TAPs. The edit model is designed to understand how to modify one assertion to another based on semantic differences between focal-tests. Specifically, for a given focal-test \(ft\) and its similar focal-test instance \(ft^{\prime}\), along with their corresponding assertions \(x\) and \(y\), the neural edit model aims to find a function \(f\) such that \(f\left(ft,ft^{\prime},x\right)=y\). In this section, we elaborate on the focal-test semantic difference representation and the neural edit model training.
#### IV-B1 Focal-test semantic difference representation
We extract and compare semantic information and modification details between focal-tests using edit sequences, following the finding of prior work [38] that differing words between two methods can reflect their semantic differences. We follow a similar approach as in [39, 40], aligning the two tokenized focal-test sequences using a diff tool and creating an edit sequence based on the resulting alignment. As shown in Figure 6, each element (called an _edit_) in an edit sequence is represented as a triple \(\left\langle ft_{i},ft_{i}{{}^{\prime}},a_{i}\right\rangle\), where \(ft_{i}\) is a token in one focal-test, \(ft_{i}{{}^{\prime}}\) is a token in the similar focal-test, and \(a_{i}\) is the edit action that transforms \(ft_{i}\) to \(ft_{i}{{}^{\prime}}\). There are four types of edit actions: _insert_, _delete_, _equal_, and _replace_. When \(a_{i}\) is an _insert_ (_delete_) operation, \(ft_{i}\) (\(ft_{i}{{}^{\prime}}\)) is the empty token \(\emptyset\). Constructing such an edit sequence not only preserves the information of the focal-tests (i.e., \(ft_{i}\) and \(ft_{i}{{}^{\prime}}\)) but also highlights their fine-grained differences through \(a_{i}\).
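One way to realize this alignment is with Python's standard difflib, whose opcodes map directly onto the four edit actions; this concrete tool choice is our assumption, as the paper only specifies "a diff tool".

```python
import difflib

EMPTY = "<null>"  # plays the role of the empty token in the triples

def edit_sequence(ft, ft_sim):
    """Align two tokenized focal-tests and emit <ft_i, ft'_i, a_i> triples."""
    triples = []
    sm = difflib.SequenceMatcher(a=ft, b=ft_sim, autojunk=False)
    for action, i1, i2, j1, j2 in sm.get_opcodes():
        if action == "equal":
            triples += [(x, y, "equal") for x, y in zip(ft[i1:i2], ft_sim[j1:j2])]
        elif action == "replace":
            a, b = ft[i1:i2], ft_sim[j1:j2]
            k = max(len(a), len(b))          # pad the shorter side with <null>
            a += [EMPTY] * (k - len(a)); b += [EMPTY] * (k - len(b))
            triples += [(x, y, "replace") for x, y in zip(a, b)]
        elif action == "delete":
            triples += [(x, EMPTY, "delete") for x in ft[i1:i2]]
        else:  # insert
            triples += [(EMPTY, y, "insert") for y in ft_sim[j1:j2]]
    return triples

ft     = "urls = files . toURLs ( )".split()
ft_sim = "files = urls . toFiles ( )".split()
for triple in edit_sequence(ft, ft_sim):
    print(triple)  # each triple pairs aligned tokens with one of the four actions
```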
#### IV-B2 Neural edit model training
Our neural edit model is fundamentally a sequence-to-sequence neural architecture. It accepts a retrieved assertion \(x=\left[x_{1},x_{2},\cdots,x_{|x|}\right]\) and an edit sequence \(E=\left[\left\langle ft_{1},ft_{1}^{{}^{\prime}},a_{1}\right\rangle,\ldots \left\langle ft_{n},ft_{n}^{{}^{\prime}},a_{n}\right\rangle\right]\) as input and is designed to generate a new assertion \(y=\left[y_{1},y_{2},\cdots,y_{|y|}\right]\). Specifically, EditAS incorporates two encoders: the Edit Sequence Encoder and the Assertion Encoder, to respectively encode the edit sequence and the retrieved assertion. An attention mechanism is then applied on the encoder side to learn the relationship between the edit sequence and the retrieved assertion. The decoder consists of two pointer generators that enable the model to copy tokens from both the input focal-test and the retrieved assertion concurrently during the generation process. This approach can effectively preserve the original meaning of the retrieved assertion while integrating the assertion edit patterns reflected in the edit sequence.
**Encoders**. The structures of the two encoders, i.e., the Edit Sequence Encoder and the Assertion Encoder, are nearly identical. They consist of a contextual embedding layer, an attention layer, and a modeling layer.
_The Contextual Embedding Layer_. We first map the focal-test tokens, assertion tokens, and edit action tokens to embeddings. Considering there are only four edit actions, we randomly initialize an embedding matrix and update it during training. To capture both syntactic and semantic information, we employ a pre-trained model, such as fastText [41], to acquire word
embeddings for each focal-test and assertion token. We then use a bidirectional long short-term memory (Bi-LSTM) [42] to process the sequence of word embeddings to access contextual information. For the Edit Sequence Encoder and each edit \(E_{i}\), the three embeddings, i.e., \(e_{ft_{i}}\), \(e_{ft^{\prime}_{i}}\), \(e_{a_{i}}\), are first concatenated horizontally and then input to the Bi-LSTM, as follows:
\[\begin{gathered} e_{i}^{{}^{\prime}}=[e_{ft_{i}}\oplus e_{ft^{ \prime}_{i}}\oplus e_{a_{i}}]\\ \overrightarrow{h}_{i}^{{}^{\prime}}=\mathrm{LSTM}(\overrightarrow{h}_{i -1}^{{}^{\prime}},e_{i}^{{}^{\prime}});\overleftarrow{h}_{i}^{{}^{\prime}}= \mathrm{LSTM}(\overleftarrow{h}_{i+1}^{{}^{\prime}},e_{i}^{{}^{\prime}})\\ h_{i}^{{}^{\prime}}=[\overrightarrow{h}_{i}^{{}^{\prime}}\oplus \overleftarrow{h}_{i}^{{}^{\prime}}]\end{gathered}\]
where \(h_{i}^{{}^{\prime}}\) is the contextual vector of this edit and \(\oplus\) is a concatenation operation. Similarly, the Assertion Encoder obtains the contextual vector \(h_{i}\) of each assertion token \(x_{i}\) with \(x_{i}\)'s embedding \(e_{x_{i}}\) as input.
_The Attention Layer_. This layer links and fuses the information of the focal-test difference with that of the retrieved assertion, capturing their relationship. We place it on top of the two contextual embedding layers. The attention layer takes as input the contextual vectors, i.e., \(H\) and \(H^{\prime}\), and outputs an assertion-aware (edit-aware) feature vector for each edit (assertion token), as well as the original contextual vector for this edit (assertion token). Formally, the edit-aware feature vector \(g_{i}\) of assertion token \(x_{i}\) is calculated using the dot-product attention mechanism [43] as follows:
\[g_{i}=H^{{}^{\prime}}\alpha_{i};\ \ \ \alpha_{i}=\mathrm{softmax}(H^{{}^{ \prime}\top}W_{\alpha}h_{i}) \tag{2}\]
The attention weight \(\alpha_{i}\) measures the importance of each edit with respect to \(x_{i}\). The computation of the assertion-aware feature vector \(g_{i}^{{}^{\prime}}\) of edit \(E_{i}\) is almost the same and can be expressed as follows:
\[g_{i}^{{}^{\prime}}=H\alpha_{i}^{{}^{\prime}};\ \ \ \alpha_{i}^{{}^{\prime}}= \mathrm{softmax}(H^{\top}W_{\alpha}^{{}^{\top}}h_{i}^{{}^{\prime}})\]
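Equation (2) is a standard dot-product attention; below is a small NumPy sketch (ours), storing contextual vectors as matrix columns.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def edit_aware(H_edit, h_i, W):
    """Eq. (2): g_i = H' softmax(H'^T W_alpha h_i) for one assertion token."""
    alpha = softmax(H_edit.T @ W @ h_i)  # attention weights over the edit sequence
    return H_edit @ alpha                # edit-aware feature vector g_i

rng = np.random.default_rng(0)
d, n_edits = 4, 6
H_edit = rng.normal(size=(d, n_edits))  # contextual vectors of the edits (columns)
h_i = rng.normal(size=d)                # contextual vector of one assertion token
W = rng.normal(size=(d, d))             # trainable bilinear weight W_alpha
print(edit_aware(H_edit, h_i, W).shape) # (4,)
```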
Fig. 5: The overall framework of our approach.

Fig. 6: Converting a difference between focal-tests to an edit sequence.

_The Modeling Layer_. This layer uses two distinct Bi-LSTMs to generate the final feature representation from the contextual vector of each edit (assertion token) and the assertion-aware (edit-aware) feature vector, respectively. For example, given an assertion token \(x_{i}\), its final representation \(z_{i}\) is computed as follows:
\[f_{i}=[g_{i}\oplus h_{i}]\] \[\overrightarrow{z}_{i}=\mathrm{LSTM}(\overrightarrow{z}_{i-1},f_{ i});\overleftarrow{z}_{i}=\mathrm{LSTM}(\overleftarrow{z}_{i+1},f_{i})\] \[z_{i}=[\overrightarrow{z}_{i}\oplus\overleftarrow{z}_{i}]\]
**Decoder**. EditAS uses an LSTM-based decoder to generate a new assertion from the outputs of two encoders, \(Z\) and \(Z^{{}^{\prime}}\). The final hidden states of both modeling layers are concatenated and used as the initial state for the decoder's LSTM. During decoding step \(j\), the decoder computes the hidden state \(s_{j}\) based on the \(j\)-th word embedding of ground truth assertion \(e_{\hat{g}_{j}}\), the previous hidden state \(s_{j-1}\), and the previous output vector \(o_{j-1}\), as follows:
\[s_{j}=\mathrm{LSTM}(s_{j-1},[e_{\hat{g}_{j}}\oplus o_{j-1}]) \tag{3}\]
We compute a context vector at each time step as the representation of the encoder's input by the dot-product attention mechanism, following Equation 2. Given that there are two encoders, the decoder obtains two context vectors, i.e., \(c_{j}\) from the retrieved assertion and \(c^{{}^{\prime}}_{j}\) from the focal-test difference. An output vector \(o_{j}\) is computed using \(c_{j}\), \(c^{{}^{\prime}}_{j}\) and \(s_{j}\), and the corresponding vocabulary distribution \(P^{vocab}_{j}\) is obtained using a softmax layer.
\[o_{j}=\tanh(V_{c}[c_{j}\oplus c^{{}^{\prime}}_{j}\oplus s_{j}]);\quad P^{vocab}_{j}=\mathrm{softmax}(V^{{}^{\prime}}_{c}o_{j})\]
where \(V_{c}\) and \(V^{{}^{\prime}}_{c}\) are trainable parameters. \(P^{vocab}_{j}\) records the probability of generating each vocabulary token; the element with the highest probability is emitted at decoding step \(j\). Due to the similarity of the focal-tests, it is reasonable to assume that certain tokens of the new assertion should also appear in the retrieved assertion, while others, not present in the retrieved assertion, should appear in the input focal-test. Therefore, we adopt the pointer generator to copy tokens from the retrieved assertion and the input focal-test:
\[P^{ass}_{j}(y_{j})=\sum_{k:x_{k}=y_{j}}\beta_{jk};\;\;P^{ft}_{j}(y_{j})=\sum_{ k:t^{{}^{\prime}}_{k}=y_{j}}\beta^{{}^{\prime}}_{jk}\]
\(P^{ass}_{j}(y_{j})\) and \(P^{ft}_{j}(y_{j})\) are the probabilities of copying \(y_{j}\) from the retrieved assertion and the input focal-test, respectively. \(\beta_{jk}\) and \(\beta^{{}^{\prime}}_{jk}\) are the attention weights of \(x_{k}\) and \(E_{k}\) at time step \(j\), computed based on the context vectors \(c_{j}\) and \(c^{{}^{\prime}}_{j}\).
The conditional probability of \(y_{j}\) at time step \(j\) is then the combination of \(P^{vocab}_{j}\), \(P^{ass}_{j}\), and \(P^{ft}_{j}\), i.e.,
\[p(y_{j}|y_{<j},x,E) =\gamma_{j}P^{vocab}_{j}(y_{j})+(1-\gamma_{j})(\theta_{j}P^{ass} _{j}(y_{j})+\] \[(1-\theta_{j})P^{ft}_{j}(y_{j}))\]
\(\gamma_{j}\) and \(\theta_{j}\) represent the probabilities of generating \(y_{j}\) by selecting from the vocabulary and copying from the retrieved assertion, respectively. Both probabilities are modeled by a single-layer feed-forward neural network, which is trained jointly with the decoder.
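The mixture of generation and copy probabilities can be sketched as follows (ours); for simplicity every source token is assumed to have a vocabulary index, whereas practical pointer generators use an extended vocabulary for out-of-vocabulary tokens.

```python
import numpy as np

def final_distribution(p_vocab, beta_ass, beta_ft, ass_ids, ft_ids, gamma, theta):
    """p(y_j) = gamma*P_vocab + (1-gamma)*(theta*P_ass + (1-theta)*P_ft);
    ass_ids/ft_ids map source positions to vocabulary indices."""
    p = gamma * p_vocab.copy()
    np.add.at(p, ass_ids, (1 - gamma) * theta * beta_ass)      # copy from assertion
    np.add.at(p, ft_ids, (1 - gamma) * (1 - theta) * beta_ft)  # copy from focal-test
    return p

p_vocab = np.full(10, 0.1)                      # toy vocabulary distribution
beta_ass = np.array([0.7, 0.3]); ass_ids = np.array([2, 5])
beta_ft  = np.array([0.4, 0.6]); ft_ids  = np.array([5, 9])
p = final_distribution(p_vocab, beta_ass, beta_ft, ass_ids, ft_ids, 0.5, 0.5)
print(p.sum())  # ~1.0: the mixture remains a probability distribution
```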
### _Generation_
Given an input focal-test \(ft\), EditAS generates an assertion through three steps:
_Step 1: Selecting a similar TAP._ EditAS leverages the large-scale training dataset as the retrieval corpus. The Retrieval component then retrieves from the corpus the TAP whose focal-test most closely matches \(ft\). Further information on the retrieval process is given in Section IV-A.
_Step 2: Capturing fine-grained semantic differences between focal-tests._ Through Step 1, EditAS obtains a similar focal-test to \(ft\) and its corresponding assertion statements. EditAS calculates the edit sequences to capture semantic differences between \(ft\) and the retrieved focal-test. The details of this part are described in Section IV-B.
_Step 3: Combining retrieved assertion with semantic differences between focal-tests._ Finally, the trained neural edit model adjusts the retrieved assertion based on the edit sequences, thereby generating one assertion corresponding to \(ft\). Further details on this model can be found in Section IV-B.
## V Experimental Setup
In this section, we describe the dataset and evaluation metrics used in the experiment. We conduct experiments to answer the following research questions:
* **RQ4:** How does the proposed EditAS perform compared to state-of-the-art assertion generation baselines?
* **RQ5:** Do different similarity coefficients affect the performance of EditAS?
### _Baselines_
To answer the above-mentioned research questions, we compare EditAS with five baselines. We first choose ATLAS, the first and a classic neural-network-based method for assertion generation, which utilizes a sequence-to-sequence model to generate assertions from scratch. Given that EditAS aims to revisit and improve retrieval-augmented deep assertion generation methods, we further adopt three state-of-the-art (SOTA) retrieval-based methods: \(IR_{ar}\), \(RA^{H}_{adapt}\), and \(RA^{NN}_{adapt}\). Finally, we compare EditAS with _Integration_, a SOTA retrieval-augmented deep assertion generation method. For more details, please refer to Section II.
### _Datasets_
We utilize \(Data_{old}\) and \(Data_{new}\) to evaluate the effectiveness of EditAS and the baseline methods, following Yu et al. [9]. Compared to \(Data_{old}\), \(Data_{new}\) adds back the excluded cases with unknown tokens. For more details, please refer to Section III-A. Table IV provides detailed statistics of the test sets of the two datasets, including their distribution across different assertion types.
### _Metrics_
Consistent with prior work [8, 9], the following metrics are utilized in our experiment.
#### V-C1 Accuracy
We use accuracy to evaluate the effectiveness of assertion generation techniques. Specifically, a generated assertion is considered accurate if and only if it exactly matches the ground truth. Accuracy determines the percentage of samples in which the generated output matches the expected output in terms of syntax.
#### V-C2 BLEU
In line with previous studies [8, 9], we use multi-BLEU to measure the similarity between the generated assertion and the ground truth. BLEU calculates the modified \(n\)-gram precision of a candidate sequence (i.e., the generated assertion) against the reference sequence (i.e., the ground truth), where \(n\) ranges from 1 to 4. The modified \(n\)-gram precision values are then combined via a geometric mean, and a brevity penalty is applied to overly short sequences.
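Both metrics can be sketched compactly; the BLEU below is our sentence-level simplification (geometric mean of modified \(n\)-gram precisions times a brevity penalty), whereas the experiments use the multi-BLEU script.

```python
from collections import Counter
import math

def exact_match(preds, refs):
    """Fraction of generated assertions identical to the ground truth."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def bleu(candidate, reference, max_n=4):
    """Geometric mean of modified n-gram precisions times a brevity penalty."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        log_prec += math.log(max(overlap, 1e-9) / max(sum(cand.values()), 1)) / max_n
    brevity = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return brevity * math.exp(log_prec)

cand = "assertEquals ( 0 , list . size ( ) )".split()
print(round(bleu(cand, cand), 2), exact_match([cand], [cand]))  # 1.0 1.0
```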
### _Implementation Details_
In EditAS, the hyper-parameter settings are determined based on the performance on our validation set. For the focal-test tokens, assertion tokens, and edit actions, we use 300-dimensional word embeddings. These embeddings are obtained from a fastText model pre-trained on Common Crawl and Wikipedia [44] and are frozen during training. The hidden states of the Bi-LSTMs and the LSTM in our model have dimensions of 256 and 512, respectively. The Edit Sequence Encoder, Assertion Encoder, and Decoder are jointly trained to minimize the cross-entropy loss. We use the Adam optimizer [45] with a learning rate of 0.001 and clip the gradient norm at 5. Training uses a batch size of 8 and a dropout [46] rate of 0.2 for all LSTM layers. We truncate overlong inputs, capping the length of the edit sequence and the focal-test at 512. We stop training after 5 trials, and the model with the best (lowest) validation perplexity is selected for evaluation. All of our approaches are built on the PyTorch framework [47]. We conduct all experiments on an Ubuntu 20.04 machine with four NVIDIA GeForce RTX 3090 GPUs, one 32-core processor, and 256GB of memory.
## VI Experimental results
### _RQ4: EditAS vs. Baselines_
_Overall effectiveness of EditAS_. We calculate the accuracy and BLEU scores between the assertions generated by different approaches and human-written assertions. The experimental results are presented in Table V. We notice that
ATLAS performs the worst among all approaches. This is mainly attributed to two reasons: 1) ATLAS, as a typical sequence-to-sequence DL model, suffers from exposure bias and vanishing gradients, leading to poor effectiveness when generating a long sequence of tokens as an assertion. As demonstrated in previous work [9], ATLAS generates assertions with fewer than 15 tokens at an accuracy of 46.3%, but reaches only 17.9% accuracy for assertions with more than 15 tokens. 2) ATLAS is weaker at generating statements that contain unknown tokens, which significantly degrades its overall performance. \(IR_{ar}\) retrieves assertions from the corpus and uses them directly as output, achieving better performance than ATLAS. This indicates that the assertion of a similar focal-test contains valuable, reusable information, which also shows that it is reasonable to use the assertion of a similar focal-test as the prototype. \(RA^{H}_{adapt}\) and \(RA^{NN}_{adapt}\) further adjust the retrieved assertions to enhance the capability of the IR-based approach to generating assertions. However, as shown in Table V, the benefit of the adaptation operations of both \(RA^{H}_{adapt}\) and \(RA^{NN}_{adapt}\) is limited, especially on the more complex dataset: \(RA^{NN}_{adapt}\) improves accuracy over \(IR_{ar}\) by 20.33% on \(Data_{old}\), while on \(Data_{new}\) the improvement is only 6.94%. A similar observation holds for \(RA^{H}_{adapt}\). _Integration_ combines IR and DL techniques and achieves better accuracy and BLEU scores than ATLAS and the IR-based assertion generation methods.
From Table V, EditAS achieves a significant performance improvement over ATLAS, with an average accuracy improvement of 87.48% and a BLEU score improvement of 42.65% across the two datasets. This is attributed to EditAS exploiting the rich semantic information of the retrieved assertions rather than generating assertions from scratch. EditAS also outperforms the IR-based baseline methods and _Integration_ across all evaluation metrics. Specifically, compared to \(IR_{ar}\), \(RA^{H}_{adapt}\), \(RA^{NN}_{adapt}\), and _Integration_, EditAS achieves an average accuracy improvement of 32.24%, 21.19%, 15.99%, and 10.00%, respectively, which demonstrates the effectiveness of our edit module. Compared with the IR-based baselines, EditAS adopts the retrieved assertions as prototypes and modifies them by considering the semantic differences between the input and similar focal-tests. By combining the advantages of neural networks and IR-based methods, EditAS achieves the best performance.
_Effectiveness on different assertion types_. We further compare the effectiveness of EditAS and the baseline methods for different types of assertions. Each column in Table VI represents an assertion type, and each cell shows the number of correctly generated assertions and the corresponding ratio in brackets. The results show that EditAS performs better than the baseline methods for almost all assertion types, especially for the standard JUnit assertion types. For the other assertion type, EditAS performs marginally worse than _Integration_. This may be attributed to the fact that the training set has a large number of samples from the JUnit testing framework, leading EditAS to learn more of the syntactic features of JUnit. Overall, the experimental results indicate the generality of EditAS in generating different types of assertions.
To better understand why EditAS outperforms \(RA^{H}_{adapt}\) and \(RA^{NN}_{adapt}\), we manually inspect the assertion generation results. Our analysis reveals that EditAS has the following advantages over the other methods: 1) EditAS is capable of learning and applying diverse assertion editing patterns, whereas \(RA^{H}_{adapt}\) and \(RA^{NN}_{adapt}\) cannot handle token addition or deletion operations. 2) \(RA^{H}_{adapt}\) and \(RA^{NN}_{adapt}\) only modify the retrieved assertion when it contains at least one token not present in the input focal-test. However, even if all tokens in the retrieved assertion appear in the input focal-test, the assertion may still require modification due to the semantic differences between the focal-tests. In contrast, EditAS leverages a probabilistic model to learn common patterns of assertion edits from the semantic differences between existing focal-tests. Overall, the edit patterns learned by EditAS are more diverse and cover a wider range of samples. For example, Figure 7 presents a test sample. In this sample, the focal method of the input focal-test transforms a file array into a URL array, while the retrieved focal-test performs the opposite operation. The retrieved test prefix therefore constructs an empty URL array to verify that the transformed file array is also empty. Similarly, the test prefix of the input focal-test constructs an empty file array, and its corresponding assertion aims to verify that the transformed URL array is empty. Methods including \(IR_{ar}\), \(RA^{H}_{adapt}\), \(RA^{NN}_{adapt}\), and _Integration_ have difficulty understanding the semantic difference between the focal-tests, as they fail to recognize the opposite operation being performed. In addition, all tokens of the retrieved assertion appear in the input focal-test, which does not trigger the condition under which such methods modify assertions. Consequently, they directly take the retrieved assertion as the expected result, leading to inaccuracies, while EditAS succeeds.
Fig. 7: An example of generated assertions by our approach and baselines.
* EditAS significantly outperforms all baseline methods in accuracy and BLEU, with average performance improvements of 10.00%-87.48% and 3.30%-42.65% on the two datasets, respectively.
### _RQ5: The effectiveness of different similarity coefficients_
EditAS utilizes Jaccard as its default similarity coefficient. In this RQ, we examine whether the choice of similarity coefficient affects the effectiveness of EditAS. Specifically, we develop two additional versions of EditAS using two other commonly used similarity coefficients, Dice [48] and Overlap [49]. Given two sets \(X\) and \(Y\), Dice and Overlap calculate the similarity as follows.
\[DSC(X,Y) =|X\cap Y|/(|X|+|Y|)\] \[Overlap(X,Y) =|X\cap Y|/min(|X|,|Y|)\]
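All three coefficients are one-liners on token sets; note that this Dice variant, as written above, omits the conventional factor of 2, which rescales scores but not the induced retrieval ranking.

```python
def jaccard(x, y): return len(x & y) / len(x | y)
def dice(x, y):    return len(x & y) / (len(x) + len(y))   # the DSC variant above
def overlap(x, y): return len(x & y) / min(len(x), len(y))

x, y = {"assertEquals", "(", "0", ")"}, {"assertEquals", "(", "1", ")"}
print(jaccard(x, y), dice(x, y), overlap(x, y))  # 0.6 0.375 0.75
```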
Table VII displays the accuracy and BLEU of EditAS on both datasets under different similarity coefficients. The results illustrate that the similarity coefficient does affect the effectiveness of EditAS: Jaccard and DICE yield nearly identical results, whereas Overlap yields the poorest accuracy and BLEU scores. We attribute this to Overlap accounting only for the degree of overlap between the two focal-tests while ignoring their differences.
## VII Threats to validity
There are three main threats to the validity of our approach. First, following previous works [8, 9], we only conducted experiments on two Java datasets. Although Java may not be representative of all programming languages, the datasets used in our experiments are large enough to demonstrate the effectiveness of EditAS. Furthermore, EditAS employs exclusively language-agnostic features, i.e., edit sequences, and can be applied to other programming languages. Second, the Retrieval component retrieves a similar focal-test based on lexical similarity. This may result in the retrieved focal-test and the input focal-test being similar only at the lexical level while exhibiting different assertions. To address this threat, we use a large-scale Java dataset (247M) to increase the scale and diversity of the retrieval corpus. We also propose the Edit component, which modifies the prototype by considering the semantic differences between the input focal-test and the retrieved focal-test. The final major threat comes from the lack of a comparison between EditAS and existing assertion generation methods in terms of generating compilable assertions. While compilability can serve as another indicator of assertion quality, automatically building projects is a challenging task that is highly dependent on external/internal settings and resources [50, 51]. Hence, automatically compiling large-scale sets of assertions remains an obstacle. To ensure scalability, we do not use compilability and bug detection as metrics.
## VIII Conclusion and Future Works
In this paper, we reaffirm the previous finding that Information Retrieval (IR) can often find an "almost correct" assertion that is very similar to the expected one, and we highlight the shortcomings of previous approaches in modifying retrieved assertions. To alleviate these problems, we propose a novel retrieve-and-edit approach named EditAS for assertion generation. EditAS contains two components: a Retrieval component, which retrieves the similar focal-test consisting of a test method without any assertion and its focal method (i.e., the method under test), and an Edit component, which treats the assertion of the similar focal-test as a prototype and combines the prototype with the assertion edit patterns reflected by the semantic differences between the input focal-test and the similar focal-test to generate the target assertion. We conducted extensive experiments on two large-scale Java datasets. The experimental results show that EditAS substantially outperforms the state-of-the-art baselines. Our work shows that, for the assertion generation task, retrieving similar assertions and learning to modify them by applying a set of editing operations can achieve satisfactory performance. In the future, we plan to explore additional techniques to enhance EditAS further. One avenue is integrating contextual information or leveraging code language models to handle code changes more effectively. Additionally, we aim to extend our approach to other programming languages, such as Python, to enhance the generalizability and applicability of our method. Our source code and experimental data are available at [https://github.com/swf-cqu/EditAS](https://github.com/swf-cqu/EditAS).
## IX Acknowledgements
We appreciate the insightful comments provided by the anonymous reviewers, which helped improve the quality of the paper. This work was supported in part by the National Key Research and Development Project (No. 2021YFB1714200), the Fundamental Research Funds for the Central Universities (No. 2022CDJDX-005), the Chongqing Technology Innovation and Application Development Project (No. CSTB2022TIAD-STX0007 and No. CSTB2022TIAD-KPX0067), the National Natural Science Foundation of China (No. 62002034), and the Natural Science Foundation of Chongqing (No. cstc2021jcyj-msxmX0538).
TABLE VII: Performance of different similarity coefficients.

| Dataset | Metric | Jaccard | Overlap | DICE |
| --- | --- | --- | --- | --- |
| \(Data_{old}\) | Accuracy | 53.46 | 46.13 | 53.46 |
| \(Data_{old}\) | BLEU | 80.77 | 75.42 | 80.77 |
| \(Data_{new}\) | Accuracy | 44.36 | 38.22 | 44.24 |
| \(Data_{new}\) | BLEU | 63.46 | 56.37 | 63.59 |

Summary of RQ5: The similarity coefficient does have an impact on the effectiveness of EditAS. Improving the Retrieval component's capability can lead to better assertion generation performance of EditAS.
2309.09735 | Hopf superalgebra structure for Drinfeld super Yangian of Lie
superalgebra $B(m,n)$ | We construct a Hopf superalgebra structure for a Drinfeld super Yangian of
Lie superalgebra $B(m,n)$ relative to all possible choices of Borel
subalgebras. In order to do this we introduce a simplified definition of
Drinfeld super Yangians. | Alexander Mazurenko | 2023-09-18T12:59:36Z | http://arxiv.org/abs/2309.09735v1 | # Hopf superalgebra structure for Drinfeld super Yangian of Lie superalgebra B(m,n)
###### Abstract
We construct a Hopf superalgebra structure for a Drinfeld super Yangian of Lie superalgebra \(B(m,n)\) relative to all possible choices of Borel subalgebras. In order to do this we introduce a simplified definition of Drinfeld super Yangians.
keywords: Lie superalgebra, Basic Lie superalgebra, Drinfeld Super Yangian, Hopf Superalgebra, Minimalistic Presentation
MSC Primary 16W35, Secondary 16W55, 17B37, 81R50, 16W30
## Acknowledgements
This work is funded by the Russian Science Foundation, grant 21-11-00283. With gratitude to Stukopin V.A., who introduced the author to the world of superalgebras.
## 1 Introduction
The Drinfeld super Yangian, a concept rooted in the rich theory of superalgebras and Lie superalgebras, has gained substantial attention in recent mathematical research. For more details on the structure of Drinfeld super Yangians, see [13], [10], [1], [19], [16]. In particular, the exploration of its Hopf superalgebra structure has been an area of significant interest. This paper delves into precisely this area of study, focusing on the Drinfeld super Yangian associated with the Lie superalgebra \(B(m,n)\).
The concept of the Drinfeld super Yangian for the Lie superalgebra \(A(m,n)\), which serves as a precursor to our exploration, has been introduced relative to specific subclasses of Borel subalgebras, as detailed in [11]. Building on this groundwork, we extend our focus to the Lie superalgebra \(B(m,n)\). The formal definition of the Drinfeld super Yangian for \(B(m,n)\) relative to the distinguished Borel subalgebra is presented in [9], establishing a foundation for our current study.
One of the central objectives of this paper is to provide insights into how to generalize our results across a broader spectrum of basic Lie superalgebras, encompassing all possible choices of Borel subalgebras. This extension necessitates the introduction of additional relations in the definitions of both Lie superalgebras and Drinfeld super Yangians, along with supplementary lemmas. However, the fundamental approach to our proofs remains consistent.
Our paper is structured as follows:
Section 2 - Preliminaries: This section provides essential background information, including fundamental algebraic concepts (Subsection 2.1) and general definitions and results pertaining to Lie superalgebras \(B(m,n)\) (Subsection 2.2).
Section 3 - Drinfeld Super Yangians: Here, we introduce the fundamental definitions and concepts associated with Drinfeld super Yangians. Subsection 3.1 presents the definition of the Drinfeld super Yangian of Lie superalgebra \(B(m,n)\) relative to all possible choices of Borel subalgebras. In Subsection 3.2, we provide a minimalistic representation of the Drinfeld super Yangian, a crucial prerequisite for constructing the Hopf superalgebra structure discussed in Section 4.
Section 4 - Hopf Superalgebra Structure: This section presents the core result of our paper. In Theorem 4.1, we establish the Hopf superalgebra structure for the Drinfeld super Yangian of Lie superalgebra \(B(m,n)\), extending our findings from the minimalistic representation presented in Section 3.2.
Section 5 - Proof of Theorem 3.1: The final section is dedicated to proving Theorem 3.1, which is foundational for our exploration of the Hopf superalgebra structure. Subsection 5.1 introduces special elements necessary for constructing the minimalistic presentation, while Subsection 5.2 contains auxiliary lemmas crucial for establishing the theorem.
This structured approach allows us to build a comprehensive understanding of the Hopf superalgebra structure of the Drinfeld super Yangian for Lie superalgebra \(B(m,n)\), and it serves as a stepping stone for broader generalizations and applications in the field.
## 2 Preliminaries
We introduce general notations, definitions and state important results about Lie superalgebras.
### Notations
By \(\mathbb{N}\), \(\mathbb{N}_{0}\), \(\mathbb{Z}\) we denote the sets of natural numbers, natural numbers with zero, and integers, respectively. Recall that \(\mathbb{Z}_{2}=\{\bar{0},\bar{1}\}\). Let \(\Bbbk\) be an algebraically closed field of characteristic zero. Denote by \(\mathcal{M}_{n,m}(A)\) the ring of \(n\times m\) (\(n,m\in\mathbb{N}\)) matrices over a ring \(A\). We also use the Iverson bracket defined by \([P]=\begin{cases}1\text{ if }P\text{ is true;}\\ 0\text{ otherwise,}\end{cases}\) where \(P\) is a statement that can be true or false. \([1,n]\) denotes \(\{1,2,...,n\}\) for any \(n\in\mathbb{N}\).
Note that all equations in this subsection can be extended by linearity from homogeneous elements to all elements of the corresponding algebraic structure.
Let \(A\) be an associative superalgebra. Denote for all homogeneous \(x\in A\) the \(\mathbb{Z}_{2}\)-grading of \(x\) by \(|x|\in\mathbb{Z}_{2}\). One defines
1. the Lie superbracket by \[[x,y]:=xy-(-1)^{|x||y|}yx\] (2.1)
2. the anticommutator by \[\{x,y\}:=xy+(-1)^{|x||y|}yx;\]
3. the graded Lie superbracket by \[[x,y]_{v}:=xy-(-1)^{|x||y|}vyx\]
for all homogeneous elements \(x,y\in A\) and all \(v\in\Bbbk\).
Consider super vector spaces \(V\) and \(W\). Define the linear function \(\tau_{V,W}:V\otimes W\to W\otimes V\) by
\[\tau_{V,W}(v\otimes w)=(-1)^{|v||w|}w\otimes v \tag{2.2}\]
for all homogeneous \(v\in V\) and \(w\in W\).
A Lie superalgebra is a super vector space \(\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}\) together with a bilinear map (the Lie superbracket) \([\cdot,\cdot]:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}\) which satisfies the following axioms:
\[[g_{\bar{\alpha}},g_{\bar{\beta}}]\subseteq g_{\bar{\alpha}+\bar{\beta}}\text { for }\bar{\alpha},\bar{\beta}\in\mathbb{Z}_{2}, \tag{2.3}\]
\[[x,y]=-(-1)^{|x||y|}[y,x], \tag{2.4}\]
\[(-1)^{|x||z|}[x,[y,z]]+(-1)^{|x||y|}[y,[z,x]]+(-1)^{|z||y|}[z,[x,y]]=0 \tag{2.5}\]
for all homogeneous \(x,y,z\in\mathfrak{g}\).
The linear mapping \(\mathrm{ad}_{x}:\mathfrak{g}\rightarrow\mathfrak{g}\) is called the adjoint action and is defined by \(\mathrm{ad}_{x}(y)=[x,y]\) for all \(x,y\in\mathfrak{g}\). Denote by \(\mathrm{id}_{\mathfrak{g}}\) the identity map on \(\mathfrak{g}\).
We use results about formal power series proved in [8]. Let \(A\) be a ring. Consider the ring of formal power series \(A[[X]]\). Define for each \(f\in 1+XA[[X]]\)
\[\log(f):=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}(f-1)^{n}\in XA[[X]].\]
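As a quick sanity check of this definition, \(\log(f)\) can be computed for truncated series over \(\mathbb{Q}\); the sketch below (ours) represents a series by its list of coefficients.

```python
from fractions import Fraction

def truncated_log(f, order):
    """log(f) for f in 1 + X*A[[X]], truncated below X^order; a series is a
    coefficient list [f_0, f_1, ...] with f_0 = 1, over exact rationals."""
    def mul(a, b):
        c = [Fraction(0)] * order
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j < order:
                    c[i + j] += ai * bj
        return c
    g = [fi - (i == 0) for i, fi in enumerate(f)]       # g = f - 1, so g_0 = 0
    term = [Fraction(1)] + [Fraction(0)] * (order - 1)  # running power (f-1)^n
    out = [Fraction(0)] * order
    for n in range(1, order):                           # g^n starts at X^n
        term = mul(term, g)
        for k in range(order):
            out[k] += Fraction((-1) ** (n + 1), n) * term[k]
    return out

print([str(c) for c in truncated_log([Fraction(1), Fraction(1)], 5)])
# ['0', '1', '-1/2', '1/3', '-1/4'] -- the familiar series of log(1+X)
```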
### Lie Superalgebras
Let \(\mathfrak{g}\) be the Lie superalgebra \(B(m,n)=\mathfrak{osp}(2m+1|2n)\) with \(m\geq 0\), \(n\geq 1\) relative to a Borel subalgebra \(\mathfrak{b}\). Recall that \(\mathfrak{g}\) has rank \(m+n\) and set \(I=\{1,2,..,m+n\}\). Suppose that \(\mathfrak{g}\) has a simple root system \(\Delta^{0}=\{\alpha_{i}\}_{i\in I}\) and a Cartan subalgebra \(\mathcal{H}=\langle h_{i}\rangle_{i\in I}\). Denote the corresponding simple root generators by \(e_{i}\) and \(f_{i}\) for all \(i\in I\). For \(\tau\subseteq I\), we define a \(\mathbb{Z}_{2}\)-gradation by setting \(|e_{i}|=\bar{0}\) (\(|f_{i}|=\bar{0}\)) if \(i\notin\tau\) else \(|e_{i}|=\bar{1}\) (\(|f_{i}|=\bar{1}\)) for all \(i\in I\). Moreover, set a \(\mathbb{Z}_{2}\)-gradation on \(I\) by setting \(|i|=\bar{0}\) if \(i\notin\tau\) else \(|i|=\bar{1}\) for all \(i\in I\).
Decompose \(\mathfrak{g}\) using its canonical root decomposition as follows:
\[\mathfrak{g}=\mathcal{H}\oplus\bigoplus_{\alpha\in\Delta}\mathfrak{g}^{\alpha}\]
where \(\Delta\) is a root system of \(\mathfrak{g}\) and
\[\mathfrak{g}^{\alpha}=\{x\in\mathfrak{g}\mid[h,x]=\alpha(h)x,h\in\mathcal{H}\}\]
is the root space corresponding to a root \(\alpha\in\Delta\). It is well known that \(\dim(\mathfrak{g}^{\alpha})=1\) as a super vector space for all \(\alpha\in\Delta\). Thus we can fix basis elements \(\{e_{\alpha},f_{\alpha}\}_{\alpha\in\Delta^{+}}\) such that \(\mathfrak{g}^{\alpha}=\langle e_{\alpha}\rangle\) and \(\mathfrak{g}^{-\alpha}=\langle f_{\alpha}\rangle\) for all \(\alpha\in\Delta^{+}\). Define \(\mathrm{ht}(\sum_{i\in I}n_{i}\alpha_{i})=\sum_{i\in I}n_{i}\) for all \((n_{i})_{i\in I}\in\mathbb{Z}^{|I|}\) where \(\alpha_{i}\in\Delta^{0}\) for all \(i\in I\).
By definition of basic Lie superalgebra (see [6]) there exists a unique (up to a constant factor) even nondegenerate \(\mathfrak{g}\)-invariant supersymmetric bilinear form \(\langle.,.\rangle:\mathfrak{g}\times\mathfrak{g}\rightarrow\Bbbk\), i. e.
1. \(\langle\mathfrak{g}_{\bar{\alpha}},\mathfrak{g}_{\bar{\beta}}\rangle=0\) unless \(\bar{\alpha}+\bar{\beta}=\bar{0}\) for \(\bar{\alpha},\bar{\beta}\in\mathbb{Z}_{2}\);
2. the form induces an isomorphism \(\mathfrak{g}\cong(\mathfrak{g})^{*}\);
3. \(\langle[x,y],z\rangle=\langle x,[y,z]\rangle\) for all \(x,y,z\in\mathfrak{g}\);
4. \(\langle x,y\rangle=(-1)^{|x||y|}\langle y,x\rangle\) for all homogeneous \(x,y\in\mathfrak{g}\).
Without loss of generality we choose a bilinear form on \(\mathfrak{g}\) in such a way that
1. \(\langle e_{i},f_{j}\rangle=\delta_{i,j}\) for all \(i,j\in I\);
2. \(\langle e_{\alpha},f_{\beta}\rangle=[\alpha+\beta=0]\) for all \(\alpha,\beta\in\Delta\);
3. the restriction of \(\langle.,.\rangle\) to \(\langle\mathfrak{g}^{\alpha},\mathfrak{g}^{-\alpha}\rangle\) is nondegenerate for all \(\alpha\in\Delta\).
Using the last property, note that for any \(\alpha\in\mathcal{H}^{*}\) there exists a unique element \(h_{\alpha}\in\mathcal{H}\) such that \(\langle h_{\alpha},h\rangle=\alpha(h)\) for all \(h\in\mathcal{H}\). Thus we can define a nondegenerate symmetric bilinear form \((.,.)\) on \(\mathcal{H}^{*}\) by \((\alpha,\beta)=\langle h_{\alpha},h_{\beta}\rangle\) for all \(\alpha,\beta\in\mathcal{H}^{*}\). Now we are able to construct the symmetric Cartan matrix \(C=(c_{ij})_{i,j\in I}\) of \(\mathfrak{g}\) by setting \(C=((\alpha_{i},\alpha_{j}))_{i,j\in I}\). Therefore the following arithmetic conditions for elements of the Cartan matrix \(C\) hold:
1. \(\langle h_{i},h_{j}\rangle=(\alpha_{i},\alpha_{j})=c_{ij}\) for all \(i,j\in I\);
2. \(c_{ij}\in\mathbb{Z}\), for all \(i,j\in I\);
3. \(c_{ij}\leq 0\), if \(i\notin\tau\), for all \(i,j\in I\).
Let \(\{\epsilon_{i},\delta_{j}\mid i\in[1,m],j\in[1,n]\}\) be the dual basis of \(\mathcal{H}^{*}\). Then for \(\mathfrak{g}\) the bilinear form is defined by
\[(\epsilon_{i},\epsilon_{i^{{}^{\prime}}})=\delta_{i,i^{{}^{\prime}}},\ (\delta_{j},\delta_{j^{{}^{\prime}}})=-\delta_{j,j^{{}^{\prime}}},(\epsilon_{i },\delta_{j})=0 \tag{2.6}\]
for all \(i,i^{{}^{\prime}}\in[1,m]\) and \(j,j^{{}^{\prime}}\in[1,n]\). We also require the following well-known
**Lemma 2.1**.:
1. _If_ \(x\in\mathfrak{g}^{\alpha}\)_,_ \(y\in\mathfrak{g}^{-\alpha}\)_, then_ \([x,y]=\langle x,y\rangle h_{\alpha}\) _for all_ \(\alpha\in\Delta\)_._
2. \([\mathfrak{g}^{\alpha},\mathfrak{g}^{-\alpha}]=\langle h_{\alpha}\rangle\) _for all_ \(\alpha\in\Delta\)_._
3. \([h,\mathfrak{g}^{\alpha}]=\pm\langle h,h_{\alpha}\rangle\mathfrak{g}^{\alpha}\) _for all_ \(\alpha\in\Delta^{\pm}\)_._
Using all previous definitions we formulate the well-known result (for more details see [17], [18], [20]):
**Proposition 2.1**.: _The Lie superalgebra \(\mathfrak{g}\) is generated by elements \(h_{i}\), \(e_{i}\) and \(f_{i}\) for \(i\in I\) subject to the relations_
**LS1**: _for_ \(i,j\in I\)__
\[[h_{i},h_{j}]=0,\ [h_{i},e_{j}]=c_{ij}e_{j},\ [h_{i},f_{j}]=-c_{ij}f_{j},\ [e_{i},f_{j}]= \delta_{ij}h_{i} \tag{2.7}\]
_and the standart Serre relations_
**LS2**: _for_ \(i,j\in I\)__
\[[e_{i},e_{i}]=[f_{i},f_{i}]=0,\ \text{for}\ c_{ii}=0, \tag{2.8}\]
\[[e_{i},e_{j}]=[f_{i},f_{j}]=0,\ \text{for}\ i\neq j,\ c_{ij}=0, \tag{2.9}\]
\[[e_{i},[e_{i},e_{j}]]=[f_{i},[f_{i},f_{j}]]=0,\ \text{for}\ c_{ii}\neq 0,\ i \neq m+n,\ j\in\{i-1,i+1\}, \tag{2.10}\]
\[[e_{m+n},[e_{m+n},[e_{m+n},e_{m+n-1}]]]=[f_{m+n},[f_{m+n},[f_{m+n},f_{m+n-1}]] ]=0, \tag{2.11}\]
_plus higher order Serre relations for type \(i\), \(iia\) and \(iib\) vertices_
**LS3**: _for_ \(j-1,j,j+1\in I\)__
\[[[[e_{j+1},e_{j}],e_{j-1}],e_{j}]=[[[f_{j+1},f_{j}],f_{j-1}],f_{j}]=0. \tag{2.12}\]
**Remark 2.1**.: _In the diagrams below, \(\times\) dots stand for either white dots (associated with even roots) or grey dots (associated with isotropic odd roots). A black dot corresponds to a non-isotropic odd root._
**Remark 2.2**.:
1. _Relations (_2.8_) are also valid for_ \(i\notin\tau\) _by (_2.1_)._
2. _Relations (_2.10_) are also valid for_ \(c_{ii}=0\)_,_ \(i\neq m+n\)_, and_ \(j\in\{i-1,i+1\}\)_, but in that case, they already follow from eqs. (_2.1_), (_2.5_) and (_2.8_)._
3. _Relations (_2.12_) are also valid for_ \(c_{jj}\neq 0\) _(in that case_ \(|e_{j}|=|f_{j}|=\bar{0}\)_), but in that case, they already follow from eqs. (_2.1_), (_2.5_), (_2.8_) and (_2.9_) (see_ _[_18_, Lemma 6.1.1]__)._
To see what the Dynkin diagrams and simple root systems look like for all possible Borel subalgebras of \(\mathfrak{g}\), see for example [20].
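As a concrete check of the arithmetic conditions above, the symmetric Cartan matrix can be computed directly from the bilinear form (2.6); the sketch below (ours) assumes the distinguished simple root system \(\delta_{1}-\delta_{2},\ldots,\delta_{n}-\epsilon_{1},\epsilon_{1}-\epsilon_{2},\ldots,\epsilon_{m-1}-\epsilon_{m},\epsilon_{m}\), and also covers \(B(0,n)\), where the last simple root is \(\delta_{n}\).

```python
import numpy as np

def distinguished_cartan_matrix(m, n):
    """Symmetric Cartan matrix C = ((alpha_i, alpha_j)) of B(m,n) computed from
    (2.6) for the distinguished simple roots; coordinates are
    (delta_1..delta_n, eps_1..eps_m)."""
    G = np.diag([-1] * n + [1] * m)      # Gram matrix of the dual basis, per (2.6)
    roots = np.zeros((m + n, m + n), dtype=int)
    for i in range(m + n - 1):           # alpha_i = e_i - e_{i+1}
        roots[i, i], roots[i, i + 1] = 1, -1
    roots[m + n - 1, m + n - 1] = 1      # last simple root: eps_m (delta_n if m = 0)
    return roots @ G @ roots.T

print(distinguished_cartan_matrix(2, 1))
# [[ 0 -1  0]
#  [-1  2 -1]
#  [ 0 -1  1]]
```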
By assumption \(\mathcal{H}=\langle h_{i}\rangle_{i\in I}\). We can construct an orthonormal basis \(\{h^{(k)}\}_{k\in I}\) of \(\mathcal{H}\) with respect to \(\langle.,.\rangle\), i. e. \(\langle h^{(i)},h^{(j)}\rangle=\delta_{ij}\) for all \(i,j\in I\). Then the Casimir operator \(\Omega\in\mathfrak{g}\otimes\mathfrak{g}\) is given by the formula
\[\Omega=\sum_{k\in I}h^{(k)}\otimes h^{(k)}+\sum_{\alpha\in\Delta _{0}^{+}}(e_{\alpha}\otimes f_{\alpha}+f_{\alpha}\otimes e_{\alpha})+\sum_{ \alpha\in\Delta_{1}^{+}}(f_{\alpha}\otimes e_{\alpha}-e_{\alpha}\otimes f_{ \alpha})=\] \[\sum_{k\in I}h^{(k)}\otimes h^{(k)}+\sum_{\alpha\in\Delta^{+}}(- 1)^{|e_{\alpha}|}e_{\alpha}\otimes f_{\alpha}+\sum_{\alpha\in\Delta^{+}}f_{ \alpha}\otimes e_{\alpha}. \tag{2.13}\]
The element \(\Omega\) is even, invariant, and supersymmetric, i. e.
1. \(|\Omega|=0\);
2. \([x\otimes 1+1\otimes x,\Omega]=0\) for all \(x\in\mathfrak{g}\);
3. \(\Omega=\tau_{\mathfrak{g}.\mathfrak{g}}(\Omega)\) where \(\tau_{\mathfrak{g}.\mathfrak{g}}\) is defined by (2.2).
Note that
\[\sum_{k\in I}\langle h^{(k)},h_{i}\rangle h^{(k)}=h_{i},\]
for all \(i\in I\).
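Numerically, the coefficients of such an orthonormal basis can be obtained from the Gram matrix \(C=(\langle h_{i},h_{j}\rangle)\) by an eigendecomposition; complex entries may appear since the form on \(\mathcal{H}\) is indefinite, which is consistent with \(\Bbbk\) being algebraically closed. A sketch (ours), reusing the \(B(2,1)\) Cartan matrix computed above:

```python
import numpy as np

def orthonormal_h(C):
    """Rows of A give h^(k) = sum_i A[k, i] h_i with <h^(k), h^(l)> = delta_kl,
    i.e. A C A^T = I; complex entries handle negative eigenvalues."""
    w, Q = np.linalg.eigh(C.astype(float))
    return np.diag(1 / np.sqrt(w.astype(complex))) @ Q.T

C = np.array([[0, -1, 0], [-1, 2, -1], [0, -1, 1]])  # B(2,1), distinguished Borel
A = orthonormal_h(C)
print(np.round(A @ C @ A.T, 10))  # identity matrix, as required
```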
Figure 2: type \(iia\)

Figure 3: type \(iib\)
## 3 Drinfeld super Yangians
In this section, we present the definition of the Drinfeld super Yangian associated with the Lie superalgebra \(B(m,n)\) relative to all possible Borel subalgebras. Furthermore, we develop a concise framework for expressing this Drinfeld super Yangian in a minimalistic manner.
### Definition of Drinfeld super Yangian
We use the notations introduced in Subsection 2.2. Following [14; 15] (see also [2], [12] and [5]), define the Drinfeld super Yangian of \(\mathfrak{g}\), denoted by \(Y_{h}(\mathfrak{g})\), to be the unital, associative \(\Bbbk[[\hbar]]\)-Hopf superalgebra. It is generated by \(\{h_{i,r},x_{i,r}^{\pm}\}_{i\in I}^{r\in\mathbb{N}_{0}}\). The \(\mathbb{Z}_{2}\)-grading of these generators is specified as follows \(|h_{i,r}|=\bar{0}\), \(|x_{i,r}^{\pm}|=|i|\) for all \(r\in\mathbb{N}_{0}\), \(i\in I\). \(Y_{h}(\mathfrak{g})\) is subject to the following defining relations
1. for all \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\) \[[h_{i,r},h_{j,s}]=0,\] (3.1)
2. for all \(i,j\in I\) and \(s\in\mathbb{N}_{0}\) \[[h_{i,0},x_{j,s}^{\pm}]=\pm c_{ij}x_{j,s}^{\pm},\] (3.2)
3. for \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\) \[[x_{i,r}^{+},x_{j,s}^{-}]=\delta_{i,j}h_{i,r+s},\] (3.3)
4. for \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\); if \(i=j\), then \(|i|=\bar{0}\) or (\(|i|=\bar{1}\), \(c_{ii}\neq 0\) and \(r=0\)) \[[h_{i,r+1},x_{j,s}^{\pm}]-[h_{i,r},x_{j,s+1}^{\pm}]=\pm\frac{c_{ij}\hbar}{2} \{h_{i,r},x_{j,s}^{\pm}\},\] (3.4)
5. for \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\) \[[x_{i,r+1}^{\pm},x_{j,s}^{\pm}]-[x_{i,r}^{\pm},x_{j,s+1}^{\pm}]=\pm\frac{c_{ij} \hbar}{2}\{x_{i,r}^{\pm},x_{j,s}^{\pm}\},\text{ unless }i=j\text{ and }|i|=\bar{1},\] (3.5)
6. for \(i\in I\) and \(r,s\in\mathbb{N}_{0}\) \[[h_{i,r},x_{i,s}^{\pm}]=0\text{ if }c_{ii}=0,\] (3.6)
7. for \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\) \[[x_{i,r}^{\pm},x_{j,s}^{\pm}]=0\text{ if }c_{ij}=0,\] (3.7) as well as cubic super Serre relations
8. for \(i,j\in I\) (\(i\neq m+n\) in case of \(iib\) vertices), \(j\in\{i-1,i+1\}\) and \(r,s,t\in\mathbb{N}_{0}\) \[[x_{i,r}^{\pm},[x_{i,s}^{\pm},x_{j,t}^{\pm}]]+[x_{i,s}^{\pm},[x_{i,r}^{\pm},x_ {j,t}^{\pm}]]=0\text{ if }c_{ii}\neq 0,\] (3.8)
9. in case of \(iib\) vertices, for \(r_{1},r_{2},r_{3},t\in\mathbb{N}_{0}\), where \(S_{3}\) denotes the symmetric group on the set \(\{1,2,3\}\) \[\sum_{\sigma\in S_{3}}[x_{m+n,r_{\sigma(1)}}^{\pm},[x_{m+n,r_{\sigma(2)}}^{\pm},[x_{m+n,r_{\sigma(3)}}^{\pm},x_{m+n-1,t}^{\pm}]]]=0,\] (3.9)
and quartic super Serre relations for type \(i\), \(iia\) and \(iib\) vertices
**Y10**: for \(j-1,j,j+1\in I\) and \(r,s\in\mathbb{N}_{0}\)
\[[[x^{\pm}_{j-1,r},x^{\pm}_{j,0}],[x^{\pm}_{j,0},x^{\pm}_{j+1,s}]]=0. \tag{3.10}\]
**Remark 3.1**.:
1. _Similar to Remark_ 2.2_, cubic super Serre relations (_3.8_) also hold for_ \(c_{ii}=0\)_,_ \(i\neq m+n\)_, and_ \(j\in\{i-1,i+1\}\)_, but in that case they already follow from eqs. (_2.5_) and (_3.7_)._
2. _Similar to Remark_ 2.2_, quartic super Serre relations (_3.10_) also hold for for_ \(c_{jj}\neq 0\)_, but in that case they already follow from eqs. (_2.5_), (_3.7_) and (_3.8_)._
3. _Generalizing the quartic super Serre relations (_3.10_), the following relations also hold_ _[_15_]__:_ \[[[x^{\pm}_{j-1,r},x^{\pm}_{j,k}],[x^{\pm}_{j,l},x^{\pm}_{j+1,s}]]+[[x^{\pm}_{j -1,r},x^{\pm}_{j,l}],[x^{\pm}_{j,k},x^{\pm}_{j+1,s}]]=0.\] _This, in turn, using (_3.8_) can be rewritten as_ \[[[[x^{\pm}_{j-1,r},x^{\pm}_{j,k}],x^{\pm}_{j+1,s}],x^{\pm}_{j,l}]+[[[x^{\pm}_{ j-1,r},x^{\pm}_{j,l}],x^{\pm}_{j+1,s}],x^{\pm}_{j,k}]=0.\]
We note that the universal enveloping superalgebra \(U(\mathfrak{g})\) is naturally embedded in \(Y_{h}(\mathfrak{g})\) as a Hopf superalgebra, and the embedding is given by the formulas \(h_{i}\to h_{i,0}\), \(e_{i}\to x^{+}_{i,0}\), \(f_{i}\to x^{-}_{i,0}\). We shall identify the universal enveloping superalgebra \(U(\mathfrak{g})\) with its image in the Drinfeld super Yangian \(Y_{h}(\mathfrak{g})\).
Let \(Y^{0}_{h}(\mathfrak{g})\) be the subalgebra generated by the elements \(\{h_{i,r}\}_{i\in I,r\in\mathbb{N}_{0}}\).
### Minimalistic presentation for Drinfeld super Yangian
To establish the Hopf superalgebra properties of a Drinfeld super Yangian, we introduce a more convenient presentation of the superalgebra \(Y_{h}(\mathfrak{g})\). Such work was done in [12] for \(Y_{h}(\mathfrak{g})\) associated only with the distinguished Dynkin diagram (the result is stated without proof), and for the non-super case in [4] and [7].
From the defining relations we can see that \(Y_{h}(\mathfrak{g})\) is generated by the elements \(\{h_{ir},x^{\pm}_{i0}\}_{i\in I}^{r\in\{0,1\}}\). We use equations (3.2), (3.3) and (3.4) to get the recurrence formulas
\[x^{\pm}_{i,r+1}=\pm(c_{ij})^{-1}[h_{j1}-\frac{\hbar}{2}h_{j0}^{2},x^{\pm}_{ir}], \tag{3.11}\]
where \(r\in\mathbb{N}_{0}\); if \(c_{ii}\neq 0\), then \(j=i\), and if \(c_{ii}=0\), then \(j=i+1\) (\(i\in I\));
\[h_{ir}=[x^{+}_{ir},x^{-}_{i0}], \tag{3.12}\]
where \(r\geq 2\) and \(i\in I\). We introduce the auxiliary generators for \(i\in I\) by setting
\[\widetilde{h}_{i1}\stackrel{{\rm def}}{{=}}h_{i1}-\frac{\hbar}{ 2}h_{i0}^{2}. \tag{3.13}\]
The \(\mathbb{Z}_{2}\)-grading of these generators is specified as follows \(|\widetilde{h}_{i1}|=|h_{i,r}|=\bar{0}\), \(|x^{\pm}_{i,r}|=|i|\) for all \(r\in\mathbb{N}_{0}\) and \(i\in I\).
**Theorem 3.1**.: \(Y_{h}(\mathfrak{g})\) _is isomorphic to the superalgebra generated by \(\{h_{ir},x^{\pm}_{ir}\}_{i\in I}^{r\in\{0,1\}}\) subject only to the relations_
**MY1**: _for all_ \(i,j\in I\) _and_ \(0\leq r,s\leq 1\)__ \[[h_{ir},h_{js}]=0,\] (3.14)
**MY2**: _for all_ \(i,j\in I\) _and_ \(0\leq s\leq 1\)__ \[[h_{i0},x_{js}^{\pm}]=\pm c_{ij}x_{js}^{\pm},\] (3.15)
**MY3**: _for_ \(i,j\in I\) _and_ \(0\leq r+s\leq 1\)__ \[[x_{ir}^{+},x_{js}^{-}]=\delta_{ij}h_{i,r+s},\] (3.16)
**MY4**: _for_ \(i,j\in I\)__ \[[\widetilde{h}_{i1},x_{j0}^{\pm}]=\pm c_{ij}x_{j1}^{\pm},\text{ unless }i=j\text{ and }c_{ii}=0,\] (3.17)
**MY5**: _for_ \(i,j\in I\)__ \[[x_{i1}^{\pm},x_{j0}^{\pm}]-[x_{i0}^{\pm},x_{j1}^{\pm}]=\pm\frac{c_{ij}\hbar} {2}\{x_{i0}^{\pm},x_{j0}^{\pm}\},\text{ unless }i=j\text{ and }|i|=\bar{1},\] (3.18)
**MY6**: _for_ \(i\in I\) _and_ \(0\leq s\leq 1\)__ \[[h_{i1},x_{is}^{\pm}]=0,\text{ if }c_{ii}=0,\] (3.19)
**MY7**: _for_ \(i,j\in I\)__ \[[x_{i0}^{\pm},x_{j0}^{\pm}]=0,\text{ if }c_{ij}=0,\] (3.20)
**MY8**: _for_ \(i\in I\) _(_\(i\neq m+n\) _in case of iib vertices),_ \(j\in\{i-1,i+1\}\)__ \[[x_{i0}^{\pm},[x_{i0}^{\pm},x_{j0}^{\pm}]]=0,\text{ if }c_{ii}\neq 0,\] (3.21)
**MY9**: _in case of iib vertices_
\[[x_{m+n,0}^{\pm},[x_{m+n,0}^{\pm},[x_{m+n,0}^{\pm},x_{m+n-1,0}^{\pm}]]]=0, \tag{3.22}\]
**MY10**: _for_ \(j-1,j,j+1\in I\) _and for type_ \(i\)_,_ \(iia\) _and_ \(iib\) _vertices_
\[[[x_{j-1,0}^{\pm},x_{j0}^{\pm}],[x_{j0}^{\pm},x_{j+1,0}^{\pm}]]=0. \tag{3.23}\]
Proof.: The proof consists of showing that the defining relations (3.1)-(3.10), together with (3.24), follow from the relations (3.14)-(3.23). The key steps are:
1. The relation (3.1) follows from Lemmas 5.13 and 5.14.
2. The relation (3.2) follows from Lemma 5.4.
3. The relation (3.3) follows from Lemmas 5.7 and 5.13.
4. The relation (3.4) follows from Lemma 5.16.
5. The relation (3.5) follows from Lemma 5.15.
6. The relation (3.6) follows from Lemma 5.19.
7. The relations (3.7) and (3.8) follow from Lemma 5.18.
8. The relation (3.9) follows from Lemma 5.20.
9. The relation (3.10) follows from Lemma 5.21.
10. Finally, the relation (3.24) follows from Lemma 5.11.
Together these establish the claimed isomorphism.
In this superalgebra, we also define elements \(x_{ir}^{\pm}\) (\(r\geq 2\)) and \(h_{ir}\) (\(r\geq 2\)) for \(i\in I\) using (3.11) and (3.12). The \(\mathbb{Z}_{2}\)-grading of these generators is specified as in \(Y_{h}(\mathfrak{g})\).
**Remark 3.2**.: _As noted in [4], in order to prove Theorem 3.1 we must show that the equation_
\[[[\widetilde{h}_{j1},x_{i1}^{+}],x_{i1}^{-}]+[x_{i1}^{+},[\widetilde{h}_{j1}, x_{i1}^{-}]]=0 \tag{3.24}\]
_or_
\[[h_{j1},h_{i2}]=0 \tag{3.25}\]
_can be deduced from relations (3.14) - (3.23), where if \(c_{ii}\neq 0\), then \(j=i\), and if \(c_{ii}=0\), then \(j=i+1\) (\(i\in I\)). Note that it follows from Lemmas 5.5 and 5.9 that (3.24) is equivalent to (3.25)._
## 4 Hopf superalgebra structure on Drinfeld super Yangians
In this section we explicitly describe a Hopf superalgebra structure on the Drinfeld super Yangian of the Lie superalgebra \(B(m,n)\). We use the notations introduced in Subsection 3.2.
**Theorem 4.1**.: \(Y_{h}(\mathfrak{g})\) _is a Hopf superalgebra._
_Counit: the counit \(\epsilon:Y_{h}(\mathfrak{g})\rightarrow\Bbbk\) is defined by the following equations:_
**Co1**: \[\epsilon(1)=1,\] (4.1)
**Co2**: _for_ \(x\in\{h_{ir},x_{ir}^{\pm}\}_{i\in I}^{r\in\mathbb{N}_{0}}\)__
\[\epsilon(x)=0. \tag{4.2}\]
_Comultiplication: The comultiplication \(\Delta:Y_{h}(\mathfrak{g})\to Y_{h}(\mathfrak{g})\otimes Y_{h}( \mathfrak{g})\) is given by the following relations:_
**Com1**: _for all_ \(x\in\mathfrak{g}\)__
\[\Delta(x)=\square(x), \tag{4.3}\]
**Com2**: \[\Delta(h_{i1}) =\square(h_{i1})+\hbar(h_{i0}\otimes h_{i0}+[h_{i0}\otimes 1, \Omega^{+}])\] \[=\square(h_{i1})+\hbar(h_{i0}\otimes h_{i0}-\sum_{\alpha\in \Delta^{+}}(\alpha_{i},\alpha)x_{\alpha}^{-}\otimes x_{\alpha}^{+}).\] (4.4)
**Com3**: _for all_ \(r\in\mathbb{N}_{0}\)_; if_ \(c_{ii}\neq 0\)_, then_ \(j=i\)_, and if_ \(c_{ii}=0\)_, then_ \(j=i+1\) _(_\(i\in I\)_)_
\[\Delta(x_{i,r+1}^{\pm})=\pm(c_{ij})^{-1}[\Delta(h_{j1})-\frac{\hbar}{2}\Delta(h_{j0})^{2},\Delta(x_{ir}^{\pm})], \tag{4.5}\]
**Com4**: _for all_ \(i\in I\) _and_ \(r\geq 2\)__
\[\Delta(h_{ir})=[\Delta(x_{ir}^{+}),\Delta(x_{i0}^{-})], \tag{4.6}\]
_Antipode:_ _The antipode_ \(S:Y_{h}(\mathfrak{g})\to Y_{h}(\mathfrak{g})^{op\;cop}\) _is described by the following equations:_
**Ant1**: _for all_ \(x\in\mathfrak{g}\)__
\[S(x)=-x, \tag{4.7}\]
**Ant2**: \[S(h_{i1})=-h_{i1}+\hbar(h_{i0}^{2}+\sum_{\alpha\in\Delta^{+}}(-1)^{1+|\alpha|}( \alpha_{i},\alpha)x_{\alpha}^{-}x_{\alpha}^{+}).\] (4.8)
**Ant3**: _for all_ \(r\in\mathbb{N}_{0}\)_; if_ \(c_{ii}\neq 0\)_, then_ \(j=i\)_, and if_ \(c_{ii}=0\)_, then_ \(j=i+1\) _(_\(i\in I\)_)_
\[S(x_{i,r+1}^{\pm})=\mp(c_{ij})^{-1}[S(h_{j1})-\frac{\hbar}{2}S(h_{j0})^{2},S(x_{ir}^{\pm})], \tag{4.9}\]
**Ant4**: _for all_ \(i\in I\) _and_ \(r\geq 2\)__
\[S(h_{ir})=-[S(x_{ir}^{+}),S(x_{i0}^{-})]. \tag{4.10}\]
Proof.: Combining the results of Subsection 2.2 with Theorem 3.1, the proof coincides with the one presented in [11, Section 4].
## 5 Proofs of auxiliary results
The proof of Theorem 3.1 is split into several lemmas and propositions. We give all the necessary proofs in this section.
### General results about Drinfeld super Yangians
Set for any \(i\in I\)
\[\widetilde{h}_{i}(t):=\hbar\sum_{r\geq 0}\widetilde{h}_{i,r}t^{-r-1}=\log(1+ \hbar\sum_{r\geq 0}h_{i,r}t^{-r-1})\in Y_{h}(\mathfrak{g})^{0}[[t^{-1}]]. \tag{5.1}\]
**Lemma 5.1**.: _Let (3.1) hold for \(i,j\in I\) and \(0\leq r,s\leq v\), let (3.4) hold for \(i,j\in I\), \(0\leq r\leq v\) and \(s\in\mathbb{N}_{0}\), and let (3.6) hold for \(i\in I\) (\(c_{ii}=0\)), \(0\leq r,s\leq v\). Then for \(i,j\in I\), \(0\leq r\leq v\), \(s\in\mathbb{N}_{0}\),_
\[[\widetilde{h}_{i,r},x_{j,s}^{\pm}]=\pm c_{ij}x_{j,r+s}^{\pm}\pm c_{ij}\sum_{ p=1}^{\lfloor r/2\rfloor}\binom{r}{2p}\frac{(\hbar c_{ij}/2)^{2p}}{2p+1}x_{j,r+s-2p}^ {\pm}. \tag{5.2}\]
Proof.: Note that from (5.1) it follows that for arbitrary \(r\in\mathbb{N}_{0}\) we have \(\widetilde{h}_{i,r}=f(h_{i,0},h_{i,1},...,h_{i,r})\) for some element \(f\in\Bbbk\langle x_{1},...,x_{r+1}\rangle\) of the free algebra. Hence, while deriving (5.2), we may and shall assume that (3.1) and (3.4) hold for all \(r,s\in\mathbb{N}_{0}\). Then the result follows from [3, Lemma 2.7, Lemma 2.9, Remark 3.1].
Set \(\overset{\approx}{h}_{ij,0}=h_{i,0}\) (\(i,j\in I\)), and define inductively for \(r\in\mathbb{N}\)
\[\overset{\approx}{h}_{ij,r}=\widetilde{h}_{i,r}-\sum_{p=1}^{\lfloor r/2 \rfloor}\binom{r}{2p}\frac{(\hbar(\alpha_{i},\alpha_{j})/2)^{2p}}{2p+1} \overset{\approx}{h}_{ij,r-2p}.\]
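Unwinding this recursion for small \(r\) gives, for instance,

\[\overset{\approx}{h}_{ij,1}=\widetilde{h}_{i,1},\qquad\overset{\approx}{h}_{ij,2}=\widetilde{h}_{i,2}-\frac{1}{3}\Big{(}\frac{\hbar(\alpha_{i},\alpha_{j})}{2}\Big{)}^{2}h_{i,0},\]

in agreement with the general pattern (5.4).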
We have
**Lemma 5.2**.: _Suppose that Lemma 5.1 holds. Then in the same notations_
\[[\overset{\approx}{h}_{ij,r},x_{j,s}^{\pm}]=\pm(\alpha_{i},\alpha_{j})x_{j,r+s}^ {\pm}, \tag{5.3}\]
_for \(i,j\in I\), \(0\leq r\leq v\), \(s\in\mathbb{N}_{0}\), and_
\[\overset{\approx}{h}_{ij,r}=h_{i,r}+f(h_{i,0},h_{i,1},...,h_{i,r-1}) \tag{5.4}\]
_for some polynomial \(f(x_{1},x_{2},...,x_{r})\in\Bbbk[x_{1},x_{2},...,x_{r}]\)._
Proof.: The proof is the same as in [11, Lemma 3.16]. It is based on Lemma 5.1.
**Lemma 5.3**.: _Let \(p,n,m\in I\) and \(z\in\mathbb{N}_{0}\). Suppose that \([h_{pz},h_{nv}]=0\), \([h_{pz},h_{mv}]=0\) for \(0\leq v\leq s^{{}^{\prime}}-1\)\((s^{{}^{\prime}}\in\mathbb{N}_{0})\); \((\alpha_{p},\alpha_{n})\neq 0\) and \((\alpha_{p},\alpha_{m})\neq 0\). Moreover, let \([h_{nv_{1}},h_{nv_{2}}]=0\) and \([h_{mv_{1}},h_{mv_{2}}]=0\) for \(0\leq v_{1},v_{2}\leq s^{{}^{\prime}}\) and (3.4) hold for \((i,j)=(n,p)=(m,p)\), \(0\leq r\leq s^{{}^{\prime}}\) and any \(s\in\mathbb{N}_{0}\). Then \([h_{pz},h_{ns^{{}^{\prime}}}]=\frac{(\alpha_{n},\alpha_{p})}{(\alpha_{m}, \alpha_{p})}[h_{pz},h_{ms^{{}^{\prime}}}]\)._
Proof.: The proof is the same as in [11, Lemma 3.17]. It is based on Lemmas 5.1 and 5.2.
### Proof of minimalistic presentation theorem
**Lemma 5.4**.: _The relation (3.2) is satisfied for all \(i,j\in I\) and \(r\in\mathbb{N}_{0}\). Moreover, for the same parameters_
\[[\widetilde{h}_{i1},x_{jr}^{\pm}]=\pm c_{ij}x_{j,r+1}^{\pm}. \tag{5.5}\]
Proof.: The proof is the same as in [11, Lemma 3.1].
**Lemma 5.5**.: _The relation (3.3) holds when \(i=j\), \(0\leq r+s\leq 2\)._
Proof.: From (3.14), (3.16) and Lemma 5.4 for all \(i\in I\) (\(j\in I\) such that \(c_{ij}\neq 0\)), we have
\[0=[h_{i1},\widetilde{h}_{j1}]=[[x_{i1}^{+},x_{i0}^{-}],\widetilde{h}_{j1}]=c_ {ij}([x_{i1}^{+},x_{i1}^{-}]-[x_{i2}^{+},x_{i0}^{-}]).\]
On the other hand,
\[0=[h_{i1},\widetilde{h}_{j1}]=[[x_{i0}^{+},x_{i1}^{-}],\widetilde{h}_{j1}]=c_ {ij}([x_{i0}^{+},x_{i2}^{-}]-[x_{i1}^{+},x_{i1}^{-}]).\]
Therefore
\[[x_{i0}^{+},x_{i2}^{-}]=[x_{i1}^{+},x_{i1}^{-}]=[x_{i2}^{+},x_{i0}^{-}]=h_{i2}.\]
**Lemma 5.6**.: _The relation (3.4) holds when \(i=j\), \((r,s)=(1,0)\), and \(|i|=\bar{0}\), i.e._
\[[h_{i2},x_{i0}^{\pm}]-[h_{i1},x_{i1}^{\pm}]=\pm\frac{c_{ii}\hbar}{2}\{h_{i1},x_{i0}^{\pm}\},\]
Proof.: The proof is analogous to that in [4, Lemma 2.26].
**Lemma 5.7**.: _Let \(i,j\in I\) and \(i\neq j\). The relations (3.3), (3.5), and (3.7) hold for any \(r,s\in\mathbb{N}_{0}\)._
Proof.: We prove (3.5) by induction on \(r\) and \(s\). The initial case \(r=s=0\) is our assumption (3.18). Let \(X^{\pm}(r,s)\) be the result of subtracting the right-hand side of (3.5) from the left-hand side. Suppose that \(X^{\pm}(r,s)=0\). Note that if we apply \([\widetilde{h}_{m1},\cdot]\) and \([\widetilde{h}_{n1},\cdot]\)\((m,n\in I)\) to \(X^{\pm}(r,s)=0\) we get from (5.5)
\[0=(\alpha_{i},\alpha_{m})X^{\pm}(r+1,s)+(\alpha_{j},\alpha_{m})X^{\pm}(r,s+1),\]
\[0=(\alpha_{i},\alpha_{n})X^{\pm}(r+1,s)+(\alpha_{j},\alpha_{n})X^{\pm}(r,s+1).\]
Consider the matrix \(A=\big{(}\begin{smallmatrix}(\alpha_{i},\alpha_{m})&(\alpha_{j},\alpha_{m}) \\ (\alpha_{i},\alpha_{n})&(\alpha_{j},\alpha_{n})\end{smallmatrix}\big{)}\). When the determinant of \(A\) is non-zero, we have \(X^{\pm}(r+1,s)=X^{\pm}(r,s+1)=0\).
In order to determine when the determinant of the matrix \(A\) is non-zero, depending on the grading of the roots, it is sufficient to consider the following Dynkin (sub)diagrams.
1. Select in case 1.1, case 1.2 and case 1.3 \(m=i\), \(n=j\) in order to get the nonzero determinant.
2. Select in case 2.1, case 2.2 and case 2.3 \(|i|=|j|=\bar{0}\Rightarrow m=i,n=j\); \(|i|=\bar{1},|j|=\bar{0}\Rightarrow m=i^{{}^{\prime}},n=j\); \(|i|=\bar{0},|j|=\bar{1}\Rightarrow m=i,n=i^{{}^{\prime}}\).
For case 2.3 when \(|i|=|j|=\bar{1}\Rightarrow m=i^{{}^{\prime}},n=j\). Note that case 2.1 can occur only when \(|I|>3\). Thus in case 2.4 or case 2.5 when \(|i|=|j|=\bar{1}\Rightarrow m=i^{{}^{\prime}}\), \(n=j^{{}^{\prime}}\).
Figure 4: case 1.1
Figure 5: case 1.2
Figure 6: case 1.3
Figure 7: case 2.1
Figure 8: case 2.2
Figure 9: case 2.3
3. Select in case 3.1, case 3.2 and case 3.3 \(|i|=|j|=\bar{0}\Rightarrow m=i,n=j\); \(|i|=\bar{1},|j|=\bar{0}\Rightarrow m=i^{{}^{\prime}},n=j\); \(|i|=\bar{0},|j|=\bar{1}\Rightarrow m=i,n=j^{{}^{\prime}}\); \(|i|=|j|=\bar{1}\Rightarrow m=i^{{}^{\prime}},n=j^{{}^{\prime}}\).
The result follows by induction hypothesis.
We prove (3.3) by induction on \(r\) and \(s\). The initial case \(r=s=0\) is our assumption (3.16). Let \(X^{\pm}(r,s)\) be the result of subtracting the right-hand side of (3.3) from the left-hand side. Suppose that \(X^{\pm}(r,s)=0\). Note that if we apply \([\widetilde{h}_{m1},\cdot]\) and \([\widetilde{h}_{n1},\cdot]\)\((m,n\in I)\) to \(X^{\pm}(r,s)=0\) we get from (5.5) that
\[0=(\alpha_{i},\alpha_{m})X^{\pm}(r+1,s)+(-1)(\alpha_{j},\alpha_{m})X^{\pm}(r, s+1),\]
\[0=(\alpha_{i},\alpha_{n})X^{\pm}(r+1,s)+(-1)(\alpha_{j},\alpha_{n})X^{\pm}(r, s+1).\]
Consider the matrix \(A=\big{(}\begin{smallmatrix}(\alpha_{i},\alpha_{m})&-(\alpha_{j},\alpha_{m}) \\ (\alpha_{i},\alpha_{n})&-(\alpha_{j},\alpha_{n})\end{smallmatrix}\big{)}\). When the determinant of \(A\) is non-zero, we have \(X^{\pm}(r+1,s)=X^{\pm}(r,s+1)=0\). We apply the same arguments as above to deduce that it is always possible to select \(m\) and \(n\) in such a way that \(\det(A)\neq 0\). The result follows by induction hypothesis.
We prove (3.7) by induction on \(r\) and \(s\). Denote the left-hand side of (3.7) by \(X^{\pm}(r,s)\). The initial case \(r=s=0\) is our assumption (3.20). Suppose that \(X^{\pm}(r,s)=0\). We apply \([\widetilde{h}_{m1},\cdot]\) and \([\widetilde{h}_{n1},\cdot]\) to \(X^{\pm}(r,s)=0\) to get
\[0=(\alpha_{i},\alpha_{m})X^{\pm}(r+1,s)+(\alpha_{j},\alpha_{m})X^{\pm}(r,s+1),\]
Figure 11: case 2.5
Figure 12: case 3.1
Figure 13: case 3.2
\[0=(\alpha_{i},\alpha_{n})X^{\pm}(r+1,s)+(\alpha_{j},\alpha_{n})X^{\pm}(r,s+1).\]
Consider the matrix \(A=\big{(}\begin{smallmatrix}(\alpha_{i},\alpha_{m})&(\alpha_{j},\alpha_{m})\\ (\alpha_{i},\alpha_{n})&(\alpha_{j},\alpha_{n})\end{smallmatrix}\big{)}\). When the determinant of \(A\) is non-zero, we have \(X^{\pm}(r+1,s)=X^{\pm}(r,s+1)=0\). We apply the same arguments as above to deduce that it is always possible to select \(m\) and \(n\) in such a way that \(\det(A)\neq 0\). The result follows by induction hypothesis.
**Lemma 5.8**.: _Suppose that \(i,j\in I\) and \(i\neq j\). The equation (3.4) holds for any \(r,s\in\mathbb{N}_{0}\)._
Proof.: The proof is analogous to that in [11, Lemma 3.13].
**Lemma 5.9**.: _For all \(i,j\in I\) and \(s\in\mathbb{N}_{0}\)_
\[[h_{i0},h_{js}]=0.\]
Proof.: By Lemma 5.5 and (3.12)
\[[h_{i0},h_{js}]=[h_{i0},[x^{+}_{js},x^{-}_{j0}]]=(\alpha_{i},\alpha_{j})(h_{js }-h_{js})=0.\]
**Lemma 5.10**.: \([h_{j1},h_{i2}]=0\) _for all \(i,j\in I\)._
Proof.: For any \(j\in I\) we have by Lemmas 5.4, 5.5 and relation (3.19)
\[[h_{j1},h_{i2}]=[h_{j1},[x^{+}_{i1},x^{-}_{i1}]]=(-1)([x^{+}_{i1},[x^{-}_{i1},h_{j1}]]+(-1)^{|i|}[x^{-}_{i1},[h_{j1},x^{+}_{i1}]])=\] \[[x^{+}_{i1},[h_{j1},x^{-}_{i1}]]+[[h_{j1},x^{+}_{i1}],x^{-}_{i1}]=\] \[[x^{+}_{i1},-c_{ij}x^{-}_{i2}-\frac{\hbar}{2}[h^{2}_{j0},x^{-}_{i 1}]]+[c_{ij}x^{+}_{i2}+\frac{\hbar}{2}[h^{2}_{j0},x^{+}_{i1}],x^{-}_{i1}]=\] \[c_{ij}([x^{+}_{i2},x^{-}_{i1}]-[x^{+}_{i1},x^{-}_{i2}])+\frac{ \hbar}{2}([[h^{2}_{j0},x^{+}_{i1}],x^{-}_{i1}]-[x^{+}_{i1},[h^{2}_{j0},x^{-}_{ i1}]])=\] \[c_{ij}([x^{+}_{i2},x^{-}_{i1}]-[x^{+}_{i1},x^{-}_{i2}])+\frac{ \hbar}{2}([[h^{2}_{j0},x^{+}_{i1}],x^{-}_{i1}]+[h^{2}_{j0},h_{i2}]-[[h^{2}_{j0 },x^{+}_{i1}],x^{-}_{i1}])=\] \[c_{ij}([x^{+}_{i2},x^{-}_{i1}]-[x^{+}_{i1},x^{-}_{i2}]). \tag{5.6}\]
On the other hand, by Lemmas 5.5, 5.9
\[[h_{j1},h_{i2}]=[\widetilde{h}_{j1},h_{i2}]=[\widetilde{h}_{j1},[x^{+}_{i1},x^ {-}_{i1}]]=c_{ij}([x^{+}_{i1},x^{-}_{i2}]-[x^{+}_{i2},x^{-}_{i1}]).\]
Thus we get
\[[h_{j1},h_{i2}]=0.\]
**Lemma 5.11**.: _The equation (3.24) holds for all \(i\in I\)._
Proof.: The result follows from (3.25) and Lemma 5.10.
**Lemma 5.12**.: _Equations (3.1), (3.3) \((\)for \(0\leq r+s\leq 3)\) and (3.4) \((\)for \(0\leq r\leq 1;\;s\in\mathbb{N}_{0})\) hold for \(i=j\)\((i\in I)\)._
Proof.: Consider the equation (3.1). The proof follows from (3.14) and Lemmas 5.9, 5.10.
Consider the equation (3.3). It follows from Lemma 5.5 for \(0\leq r+s\leq 2\). By Lemmas 5.5 and 5.10 we have
\[0=[h_{i2},\widetilde{h}_{i^{{}^{\prime}}1}]=[[x^{+}_{i2},x^{-}_{i0}],\widetilde{ h}_{i^{{}^{\prime}}1}]=(\alpha_{i^{{}^{\prime}}},\alpha_{i})([x^{+}_{i2},x^{-}_{i1 }]-[x^{+}_{i3},x^{-}_{i0}]),\]
where if \(c_{ii}\neq 0\), then \(i^{{}^{\prime}}=i\); otherwise \(i^{{}^{\prime}}=i+1\). Apply \([\widetilde{h}_{i^{{}^{\prime}}1},\cdot]\) to
\[[x^{+}_{i2},x^{-}_{i0}]=[x^{+}_{i1},x^{-}_{i1}]=[x^{+}_{i0},x^{-}_{i2}].\]
Then
\[0=[x^{+}_{i2},x^{-}_{i1}]-[x^{+}_{i3},x^{-}_{i0}]=[x^{+}_{i1},x^{-}_{i2}]-[x^ {+}_{i2},x^{-}_{i1}]=[x^{+}_{i0},x^{-}_{i3}]-[x^{+}_{i1},x^{-}_{i2}].\]
Thus we get
\[[x^{+}_{i3},x^{-}_{i0}]=[x^{+}_{i2},x^{-}_{i1}]=[x^{+}_{i1},x^{-}_{i2}]=[x^{+} _{i0},x^{-}_{i3}].\]
Consider the equation (3.4). The case \(0\leq r\leq 1\), \(s=0\) follows from (3.17) and Lemmas 5.4, 5.6. We prove by induction that (3.4) holds for \(0\leq r\leq 1\) and all \(s\in\mathbb{N}_{0}\). Suppose that (3.4) holds for \(0\leq r\leq 1\) and a given \(s\in\mathbb{N}_{0}\). Then we apply \([\widetilde{h}_{i1},\cdot]\) to get
\[[\widetilde{h}_{i1},[h_{i,r+1},x^{\pm}_{is}]]=[\widetilde{h}_{i1},[h_{ir},x^{ \pm}_{i,s+1}]\pm\frac{c_{ii}\hbar}{2}\{h_{ir},x^{\pm}_{is}\}]\Leftrightarrow\]
\[[h_{i,r+1},x^{\pm}_{i,s+1}]=[h_{ir},x^{\pm}_{i,s+2}]\pm\frac{c_{ii}\hbar}{2}\{ h_{ir},x^{\pm}_{i,s+1}\}.\]
Thus by induction (3.4) is true for \(0\leq r\leq 1\) and all \(s\in\mathbb{N}_{0}\).
**Lemma 5.13**.: _Let \(i,j\in I\). Equations (3.1) and (3.3) hold for \(i=j\) and all \(r,s\in\mathbb{N}_{0}\)._
Proof.: The proof is the same as in [7] (see the considerations after Lemma 2.2), where it proceeds by mathematical induction. We therefore only explain how to handle the equations (2.25)-(2.29) of [7], where some extra arguments are needed for the case \(|i|=\bar{1}\). First note that the base of the induction is provided by Lemma 5.12. Throughout this proof we use, in each equation, the same notations as in the cited paper. From now on suppose that \(|i|=\bar{1}\), and if \(c_{ii}=0\) then \(i^{{}^{\prime}}=i+1\); otherwise \(i^{{}^{\prime}}=i-1\).
Consider the equation (2.25). We are able to define \(\overset{\approx}{h}_{i^{{}^{\prime}}i,p}\) by Lemma 5.1 and to use Lemma 5.3 to get
\[0=[h_{i^{{}^{\prime}}p},h_{i^{{}^{\prime}}p}]=\frac{(\alpha_{i^{{}^{\prime}}},\alpha_{i^{{}^{\prime}}})}{(\alpha_{i},\alpha_{i^{{}^{\prime}}})}[h_{ip},h_{i^{{}^{\prime}}p}]=[h_{ip},\overset{\approx}{h}_{i^{{}^{\prime}}i,p}].\]
Further steps are the same as in the paper. In the equation (2.26) we use \([\widetilde{h}_{i^{{}^{\prime}}1},\cdot]\). Further steps are the same.
In the equation between (2.26) and (2.27):
\[[h_{i^{{}^{\prime}},r-q},h_{i^{{}^{\prime}},q+1}]=\frac{(\alpha_{i^{{}^{\prime}}},\alpha_{i^{{}^{\prime}}})}{(\alpha_{i},\alpha_{i^{{}^{\prime}}})}[h_{i,r-q},h_{i^{{}^{\prime}},q+1}]=\frac{(\alpha_{i^{{}^{\prime}}},\alpha_{i^{{}^{\prime}}})}{(\alpha_{i},\alpha_{i^{{}^{\prime}}})}[h_{i,r-q},\overset{\approx}{h}_{i^{{}^{\prime}}i,q+1}].\]
Further steps are the same.
Consider the equation (2.27). Here we use \([\widetilde{h}_{i^{{}^{\prime}}1},\cdot]\). Further steps are the same. Consider the equation (2.28). We are able to define \(\overset{\approx}{h}_{i^{{}^{\prime}}i,p}\) by Lemma 5.1 and to use Lemma 5.3 to get
\[[h_{i^{{}^{\prime}},p+1},h_{i^{{}^{\prime}}p}]=\frac{(\alpha_{i^{{}^{\prime}}},\alpha_{i^{{}^{\prime}}})}{(\alpha_{i},\alpha_{i^{{}^{\prime}}})}[h_{i,p+1},h_{i^{{}^{\prime}}p}]=\frac{(\alpha_{i^{{}^{\prime}}},\alpha_{i^{{}^{\prime}}})}{(\alpha_{i},\alpha_{i^{{}^{\prime}}})}[h_{i,p+1},\overset{\approx}{h}_{i^{{}^{\prime}}i,p}].\]
Further steps are the same.
In the equation (2.29) we use the same arguments as in (2.28), together with Lemma 5.8, to get
\[[h_{i^{{}^{\prime}},p+1},h_{i^{{}^{\prime}}p}]=\frac{(\alpha_{i^{{}^{\prime}}}, \alpha_{i^{{}^{\prime}}})}{(\alpha_{i},\alpha_{i^{{}^{\prime}}})}[h_{i^{{}^{ \prime}},p+1},h_{ip}].\]
Further steps are the same. It is easy to see that all considerations after the equation (2.29) are also true in our case.
**Lemma 5.14**.: _Suppose that \(i,j\in I\) and \(i\neq j\). The equation (3.1) holds for any \(r,s\in\mathbb{N}_{0}\)._
Proof.: The proof is the same as in [11, Lemma 3.19].
**Lemma 5.15**.: _The relation (3.5) is satisfied for all \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\)._
Proof.: The case \(i\neq j\) (\(i,j\in I\)) is proved in Lemma 5.7. Suppose that \(i=j\). Note that \(|i|=\bar{0}\). We prove by induction on \(r\) and \(s\in\mathbb{N}_{0}\). The initial case \((r,s)=(0,0)\) is our initial assumption (3.18). Let \(X^{\pm}(r,s)\) be the result of subtracting the right-hand side of (3.5) from the left-hand side. We are able to define \(\overset{\approx}{h}_{i^{{}^{\prime}}i,r}\) (\(i\in I\), \(|i^{{}^{\prime}}-i|=1\), \(r\in\mathbb{N}_{0}\)) by Lemma 5.1. Using the relation (5.3) we have for an arbitrary \(r\in\mathbb{N}_{0}\):
\[0=[\overset{\approx}{h}_{i^{{}^{\prime}}i,r},X^{\pm}(0,0)]=\pm c_{i^{{}^{\prime}}i}(X^{\pm}(r,0)+X^{\pm}(0,r))=\]
\[\pm c_{i^{{}^{\prime}}i}2X^{\pm}(r,0)\Rightarrow X^{\pm}(r,0)=0.\]
Now for an arbitrary \(s\in\mathbb{N}_{0}\):
\[0=[\overset{\approx}{h}_{i^{{}^{\prime}}i,s},X^{\pm}(r,0)]=\pm c_{i^{{}^{\prime}}i}(X^{\pm}(r+s,0)+X^{\pm}(r,s))=\]
\[\pm c_{i^{{}^{\prime}}i}X^{\pm}(r,s)\Rightarrow X^{\pm}(r,s)=0.\]
The result follows by induction hypothesis.
**Lemma 5.16**.: _The equation (3.4) holds for all \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\)._
Proof.: The case \(i\neq j\) is proved in Lemma 5.8. Suppose that \(i=j\). We prove by induction on \(r\) and \(s\in\mathbb{N}_{0}\). Let \(X^{\pm}(r,s)\) be the result of subtracting the right-hand side of (3.4) from the left-hand side. The case \(r=0\) and arbitrary \(s\in\mathbb{N}_{0}\) follows by Lemma 5.4. Suppose that \(r\geq 1\) and \(X^{\pm}(r,s)=0\) for all \(s\in\mathbb{N}_{0}\). The case \(r=1\) follows from Lemma 5.12. Note that \(|i|=\bar{0}\). We consider the case \(\pm=+\) (the case \(\pm=-\) is proved in the same way). Apply \([x^{-}_{i1},\cdot]\) to (3.5) (we can do it by Lemma 5.15) to get by Lemma 5.13
\[[x^{-}_{i1},[x^{+}_{i,r+1},x^{+}_{i,s}]-[x^{+}_{i,r},x^{+}_{i,s+1}]]=\frac{c_ {ii}\hbar}{2}[x^{-}_{i1},\{x^{+}_{i,r},x^{+}_{i,s}\}]\iff\]
\[[h_{i,s+1},x^{+}_{i,r+1}]-[h_{i,r+2},x^{+}_{i,s}]-[h_{i,s+2},x^{+}_{ir}]+[h_{i,r+1},x^{+}_{i,s+1}]=\]
\[\frac{c_{ii}\hbar}{2}(-1)(\{h_{i,r+1},x^{+}_{is}\}+\{h_{i,s+1},x^{+}_{i,r}\}) \iff\]
\[X^{+}(r+1,s)+X^{+}(s+1,r)=0.\]
Select \(s=0\) and note that by Lemma 5.12 we have \(X^{+}(1,r)=0\). Thus \(X^{+}(r+1,0)=0\). Suppose that \(X^{\pm}(r+1,s)=0\) for \(s\geq 0\). Apply \([\widetilde{h}_{i,1},\cdot]\) to \(X^{\pm}(r+1,s)=0\). By Lemma 5.13 we have
\[[\widetilde{h}_{i1},[h_{i,r+2},x_{is}^{\pm}]]=[\widetilde{h}_{i1},[h_{i,r+1}, x_{i,s+1}^{\pm}]\pm\frac{c_{ii}\hbar}{2}\{h_{i,r+1},x_{is}^{\pm}\}]\iff\]
\[[h_{i,r+2},x_{i,s+1}^{\pm}]=[h_{i,r+1},x_{i,s+2}^{\pm}]\pm\frac{c_{ii}\hbar}{2} \{h_{i,r+1},x_{i,s+1}^{\pm}\}.\]
Thus \(X^{\pm}(r+1,s+1)=0\). The result follows by induction hypothesis.
**Lemma 5.17**.: _Relation (3.8) holds in the following cases:_
1. \((r,s,t)=(0,0,z)\) _(_\(z\in\mathbb{N}_{0}\)_);_
2. \((r,s,t)=(1,0,z)\) _(_\(z\in\mathbb{N}_{0}\)_)._
_Equation (3.9) is satisfied for_
1. \((r_{1},r_{2},r_{3},t)=(0,0,0,z)\) _(_\(z\in\mathbb{N}_{0}\)_);_
2. \((r_{1},r_{2},r_{3},t)=(1,0,0,z)\) _(_\(z\in\mathbb{N}_{0}\)_)._
Proof.: The proof is the same as in [4, Lemma 2.33]: in both cases one constructs a system of homogeneous linear equations that admits only the trivial solution.
**Lemma 5.18**.: _Relations (3.7) and (3.8) are satisfied for all \(i,j\in I\) and \(r,s,t\in\mathbb{N}_{0}\)._
Proof.: The proof is the same as in [11, Lemma 3.21].
**Lemma 5.19**.: _The relation (3.6) is satisfied for all \(i,j\in I\) and \(r,s\in\mathbb{N}_{0}\)._
Proof.: The proof is the same as in [11, Lemma 3.22].
**Lemma 5.20**.: _The relation (3.9) is satisfied for all \(r_{1},r_{2},r_{3},t\in\mathbb{N}_{0}\)._
Proof.: Let \(X^{\pm}(r_{1},r_{2},r_{3},t)\) be the left-hand side of (3.9). We prove by induction on \(r_{1}\), \(r_{2}\), \(r_{3}\), and \(t\in\mathbb{N}_{0}\). The initial case \((r_{1},r_{2},r_{3},t)=(0,0,0,0)\) is our initial assumption (3.22). We have proved in Lemma 5.17 that \(X^{\pm}(0,0,0,t)=0\) for all \(t\in\mathbb{N}_{0}\). Note that for any \(r,t\in\mathbb{N}_{0}\)
\[0=[\overset{\approx}{h}_{m+n-1,m+n,r},X^{\pm}(0,0,0,t)]=\sum_{z=t}^{t+r}a_{z}X^{\pm}(0,0,0,z)\pm(\alpha_{m+n-1},\alpha_{m+n})X^{\pm}(r,0,0,t),\]
where \(a_{z}\in\Bbbk\) (\(t\leq z\leq t+r\)). Since \(X^{\pm}(0,0,0,z)=0\), this reduces to \(\pm(\alpha_{m+n-1},\alpha_{m+n})X^{\pm}(r,0,0,t)=0\), and hence \(X^{\pm}(r,0,0,t)=0\).
It follows that for any \(r,s,t\in\mathbb{N}_{0}\)
\[0=[\overset{\approx}{h}_{m+n-1,m+n,s},X^{\pm}(r,0,0,t)]=\sum_{z=t}^{t+s}a_{z}X^{\pm}(r,0,0,z)\pm(\alpha_{m+n-1},\alpha_{m+n})\big{(}X^{\pm}(r+s,0,0,t)+X^{\pm}(r,s,0,t)\big{)},\]
where \(a_{z}\in\Bbbk\) (\(t\leq z\leq t+s\)). The terms \(X^{\pm}(r,0,0,z)\) and \(X^{\pm}(r+s,0,0,t)\) vanish by the previous step, so \(X^{\pm}(r,s,0,t)=0\).
Suppose that \(X^{\pm}(r,s,l,t)=0\) for any \(r,s,t\in\mathbb{N}_{0}\) and for \(0\leq l\leq u\). Then
\[0=[\widetilde{h}_{m+n,1},X^{\pm}(r,s,u,t)]=(\alpha_{m+n},\alpha_{m+n-1})X^{\pm} (r,s,u,t+1)+\]
\[(\alpha_{m+n},\alpha_{m+n})(X^{\pm}(r+1,s,u,t)+X^{\pm}(r,s+1,u,t)+X^{\pm}(r,s, u+1,t))=\]
\[(\alpha_{m+n},\alpha_{m+n})X^{\pm}(r,s,u+1,t)\Rightarrow X^{\pm}(r,s,u+1,t)=0.\]
Thus the result follows by induction hypothesis.
**Lemma 5.21**.: _The relation (3.10) is satisfied for all \(j\in I\) and \(r,s\in\mathbb{N}_{0}\)._
Proof.: Let \(X^{\pm}(r,0,0,s)\) be the left-hand side of (3.10). We prove by induction on \(r\) and \(s\in\mathbb{N}_{0}\). The initial case \((r,s)=(0,0)\) is our initial assumption (3.23). Note that if we apply \([\widetilde{h}_{m1},\cdot]\), \([\widetilde{h}_{n1},\cdot]\) and \([\widetilde{h}_{k1},\cdot]\)\((m,n,k\in I)\) to \(X^{\pm}(r,0,0,s)\) we get from (5.5)
\[0=(\alpha_{m},\alpha_{j-1})X^{\pm}(r+1,0,0,s)+(\alpha_{m},\alpha_{j})(X^{\pm} (r,1,0,s)+X^{\pm}(r,0,1,s))+(\alpha_{m},\alpha_{j+1})X^{\pm}(r,0,0,s+1),\]
\[0=(\alpha_{n},\alpha_{j-1})X^{\pm}(r+1,0,0,s)+(\alpha_{n},\alpha_{j})(X^{\pm} (r,1,0,s)+X^{\pm}(r,0,1,s))+(\alpha_{n},\alpha_{j+1})X^{\pm}(r,0,0,s+1),\]
\[0=(\alpha_{k},\alpha_{j-1})X^{\pm}(r+1,0,0,s)+(\alpha_{k},\alpha_{j})(X^{\pm} (r,1,0,s)+X^{\pm}(r,0,1,s))+(\alpha_{k},\alpha_{j+1})X^{\pm}(r,0,0,s+1).\]
Consider the matrix
\[A=\begin{pmatrix}(\alpha_{m},\alpha_{j-1})&(\alpha_{m},\alpha_{j})&(\alpha_{m },\alpha_{j+1})\\ (\alpha_{n},\alpha_{j-1})&(\alpha_{n},\alpha_{j})&(\alpha_{n},\alpha_{j+1})\\ (\alpha_{k},\alpha_{j-1})&(\alpha_{k},\alpha_{j})&(\alpha_{k},\alpha_{j+1}) \end{pmatrix}\]
When the determinant of \(A\) is nonzero, we have \(X^{\pm}(r+1,0,0,s)=X^{\pm}(r,0,0,s+1)=0\). In order to determine when the determinant of \(A\) is nonzero, depending on the grading of the roots, it is sufficient to consider the following Dynkin (sub)diagrams: \(i\), \(iia\) and \(iib\).
Select \(m=j-1,n=j,k=j+1\) for \(iia\) and \(iib\). Then \(\det(A)=-((\alpha_{j-1},\alpha_{j-1})+(\alpha_{j+1},\alpha_{j+1}))\). It is easy to see that the determinant is nonzero in that case.
Note that case \(i\) can occur only when \(|I|>3\).
1. Select for case 1 \(|j-1|=|j+1|=\bar{0}\Rightarrow m=j-2,n=j,k=j+1\); \(|j-1|=\bar{1},|j+1|=\bar{0}\Rightarrow m=j-1,n=j,k=j+1\); \(|j-1|=\bar{0},|j+1|=\bar{1}\Rightarrow m=j-1,n=j,k=j+1\); \(|j-1|=|j+1|=\bar{1}\Rightarrow m=j-2,n=j,k=j+1\).
2. Select for case 2 \(|j-1|=|j+1|=\bar{0}\Rightarrow m=j+2,n=j,k=j+1\); \(|j-1|=\bar{1},|j+1|=\bar{0}\Rightarrow m=j-1,n=j,k=j+1\); \(|j-1|=\bar{0},|j+1|=\bar{1}\Rightarrow m=j-1,n=j,k=j+1\); \(|j-1|=|j+1|=\bar{1}\Rightarrow m=j+2,n=j,k=j+1\).
Figure 15: case 1
Figure 16: case 2
It is easy to see that the determinant is nonzero in all these cases. The result follows by induction hypothesis.
|
2309.16615 | Anatomy of real intermediate state-subtraction scheme | We study the origin of the real intermediate state subtraction problem and
compare its different solutions. We show that the ambiguity in subtraction
schemes arises from the on-shell approximation for the 2-point functions that
reduces the Schwinger-Dyson equations to the Boltzmann limit. We also suggest a
new subtraction scheme which, unlike the earlier definitions, never leads to
negative scattering rates. This scheme also quantifies the validity of the
on-shell limit in terms of an effective one-particle weight function $R(\Delta
)$, where $\Delta$ measures the region around the resonance associated with the
real state. | Kalle Ala-Mattinen, Matti Heikinheimo, Kimmo Kainulainen, Kimmo Tuominen | 2023-09-28T17:14:11Z | http://arxiv.org/abs/2309.16615v2 | # Anatomy of real intermediate state-subtraction scheme
###### Abstract
We study the origin of the real intermediate state subtraction problem and compare its different solutions. We show that the ambiguity in subtraction schemes arises from the on-shell approximation for the 2-point functions that reduces the Schwinger-Dyson equations to the Boltzmann limit. We also suggest a new subtraction scheme which, unlike the earlier definitions, never leads to negative scattering rates. This scheme also quantifies the validity of the on-shell limit in terms of an effective one-particle weight function \(R(\Delta)\), where \(\Delta\) measures the region around the resonance associated with the real state.
+
Footnote †: preprint: HIP-2023-14/TH
## I Introduction
There is a long-standing issue in setting up consistent kinetic equations for particle distribution functions when relevant scattering processes involve real particles that also appear as intermediate states. The problem is that the resonant on-shell scattering processes overlap with the decay contributions in the kinetic equations, which results in double counting unless some subtraction procedure is established for the scattering rates. Such a procedure, referred to as the real intermediate state (RIS) subtraction, was first set up in [1; 2; 3]. The RIS subtraction has since been discussed in various different forms [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], and an alternative method that does not rely on the RIS subtraction was recently suggested in [31]. Typically the RIS subtraction is performed at the level of Boltzmann equations, implementing some formal way to isolate and remove the on-shell part, \(D_{\rm on}(s)\), from the full Breit-Wigner (BW) propagator \(D_{\rm BW}(s)\), thus replacing it with the off-shell part, \(D_{\rm off}(s)\), that is used for scattering amplitudes. However, this procedure is inherently ambiguous due to the ambiguity in the definition of a "real" particle with unstable states. As a result, there is large freedom in the definition of the RIS subtraction, and different RIS subtraction schemes are even known to lead to apparently unphysical negative scattering cross sections [24].
In this article we study and compare different solutions to the RIS problem. We also study the emergence of the problem in the context of Boltzmann equations (BE) and then from a more fundamental perspective of the Schwinger-Dyson (SD) equations for the 2-point functions. SD equations are a fully consistent setup for studying unstable particles with no notion of the real intermediate states. We will show how the problem arises when the SD equations are reduced to the spectral limit. This also makes the effect of RIS subtraction to different reaction channels evident. We then suggest a new definition for the RIS subtraction, which does not suffer from the negative cross sections. The new scheme quantifies the ambiguity in the RIS subtraction in terms of an effective one-particle weight function \(R(\Delta)<1\) for the approximate real states, where \(\Delta\) measures the size of the kinematic region around the resonance that is counted to contribute to the real particle. For the real-particle picture to be a good approximation, one should be able to choose \(\Delta\), which is much smaller than the characteristic energy scale in the system, such that \(R(\Delta)\approx 1\). Failing to satisfy these conditions signals the breakdown of the on-shell limit, and a BE network with the decay channel should not be used, or should be interpreted with due care.
The article is structured as follows: We begin by revisiting the main RIS subtraction schemes in Sec. II, where we lay out the RIS subtraction at the level of propagators via formal propagator modifications. We then apply this method to an explicit example in Sec. III and demonstrate how the standard RIS contribution emerges from the \(s\)-channel interaction. In Sec. IV we study the origin of the RIS problem in the Schwinger-Dyson formalism in the case of the Yukawa theory. We show how the RIS problem arises in the on-shell limit, when the resonance overlaps with the kinematic region of interest for the problem at hand. This then motivates us to suggest our new RIS-subtraction scheme in Sec. V. In Sec. VI we perform numerical comparisons between different subtraction methods including our new proposal and conclude in Sec. VII.
## II Review of the RIS-subtraction schemes
The goal of the RIS-subtraction procedure is to somehow eliminate the on-shell part from the finite-width Breit-Wigner propagator:
\[D_{\rm BW}(s)=\frac{1}{s-m^{2}+im\Gamma}\to D_{\rm off}(s)\,, \tag{1}\]
where \(s\) is the Mandelstam variable, \(m\) is the mass of the propagating intermediate particle, and \(\Gamma\) is its total decay width. The on-shell part \(D_{\rm on}(s)=D_{\rm BW}(s)-D_{\rm off}(s)\) is then reduced to a delta function in the limit \(\Gamma\to 0\). It is associated with the decay processes, while the remainder \(D_{\rm off}(s)\) is thought to describe the off-shell scattering processes. For this task, different
technical schemes have been proposed. Let us begin with the _standard RIS-subtraction_ (SRS) scheme that is applied at the level of the propagator function squared.
The standard subtraction scheme is based on the simple observation that the squared BW propagator itself has a formal delta-function limit:
\[|D_{\mathrm{BW}}(s)|^{2}=\frac{1}{m\Gamma}\frac{m\Gamma}{(s\!-\!m^{2})^{2}\!+ (m\Gamma)^{2}}\approx\frac{\pi}{m\Gamma}\delta(s\!-\!m^{2})\,, \tag{2}\]
where the last equality holds approximately for small \(\Gamma\). Interpreting the right-hand side of (2) as the on-shell propagator for a finite \(\Gamma\) can then be used as the definition for the _squared_ off-shell propagator:
\[|D_{\mathrm{off}}^{\mathrm{SRS}}(s)|^{2}=|D_{\mathrm{BW}}(s)|^{2}-\frac{\pi}{m \Gamma}\delta(s-m^{2}). \tag{3}\]
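As a quick numerical illustration of the narrow-width limit (2), one can compare the full integral of the squared BW propagator, weighted by a smooth function, against the delta-function replacement. The following is a minimal sketch: the resonance parameters and the test function are arbitrary choices of ours, not tied to any specific model.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters in arbitrary units, chosen so that Gamma << m
m, Gamma = 100.0, 1.0

def D2_BW(s):
    """Squared Breit-Wigner propagator |D_BW(s)|^2 of Eq. (1)."""
    return 1.0 / ((s - m**2)**2 + (m * Gamma)**2)

def F(s):
    """Smooth test function standing in for the rest of a rate integrand."""
    return np.exp(-s / (5.0 * m**2))

# Full integral versus the narrow-width (delta-function) limit of Eq. (2)
full, _ = quad(lambda s: F(s) * D2_BW(s), 0.0, 50.0 * m**2,
               points=[m**2], limit=500)
on_shell = np.pi * F(m**2) / (m * Gamma)

print(f"full integral   = {full:.6e}")
print(f"delta-fn limit  = {on_shell:.6e}")
print(f"rel. deviation  = {abs(full - on_shell) / on_shell:.2e}")
```

The small residual deviation is precisely the off-shell contribution that the subtraction in Eq. (3) isolates.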
This procedure works for the resonances isolated in a single channel. However, we may have for example a process with \(s\)- and \(t\)-channel contributions where the \(s\)-channel is resonant. Then the collision integral is proportional to
\[|\mathcal{M}_{\mathrm{s}}|^{2}+|\mathcal{M}_{\mathrm{t}}|^{2}+2\mathfrak{Re} [\mathcal{M}_{\mathrm{s}}^{*}\mathcal{M}_{\mathrm{t}}]\,. \tag{4}\]
The rule (3) is not sufficient to deal with the resonant propagator in the mixing term, as it only tells how to remove the on-shell part from the propagator squared contributions.
To avoid this issue, a subtraction procedure at the level of the propagator is needed. This was discussed in [4] but, to our knowledge, first properly formulated in [5]. Here one starts by writing
\[D_{\mathrm{BW}}(s) =\frac{s-m^{2}}{(s-m^{2})^{2}+(m\Gamma)^{2}}-i\frac{m\Gamma}{(s-m ^{2})^{2}+(m\Gamma)^{2}}\] \[=\mathfrak{Re}[D_{\mathrm{BW}}(s)]+i\mathfrak{Im}[D_{\mathrm{BW} }(s)]\] \[\to\mathfrak{Re}[D_{\mathrm{BW}}(s)]-i\pi\delta(s-m^{2}). \tag{5}\]
In the last line the limit \(\Gamma\to 0\) was assumed (only) in the imaginary part of the propagator. One can use this result to complete the SRS scheme for the interference term in (4), setting
\[D_{\mathrm{off}}^{\mathrm{SRS}}(s)\equiv D_{\mathrm{BW}}(s)+i\pi\delta(s-m^{2}) \tag{6}\]
at the propagator level, whenever a single resonant propagator (as opposed to a squared one) is encountered. In the _principal value subtraction_ (PVS) scheme suggested in [5] one reverses the logic and assumes that the off-shell propagator is just the real part of the Breit-Wigner propagator:
\[D_{\mathrm{off}}^{\mathrm{PVS}}(s)\equiv\mathfrak{Re}[D_{\mathrm{BW}}(s)]\,. \tag{7}\]
Also in the PVS scheme different definitions for the propagator and for the squared propagator are needed, which require some care. One might naively think that the on-shell part of the squared propagator would be just \(|D_{\mathrm{on}}|^{2}\equiv|D_{\mathrm{BW}}|^{2}-|\mathfrak{Re}[D_{\mathrm{BW} }]|^{2}=|\mathfrak{Im}[D_{\mathrm{BW}}]|^{2}\), but this is _not_ the correct subtraction, because the imaginary part squared produces only half of the on-shell contribution:
\[|\mathfrak{Im}[D_{\mathrm{BW}}]|^{2}\to\frac{\pi}{2m\Gamma}\delta(s-m^{2})\,. \tag{8}\]
As pointed out in [12; 13; 14], this issue is particularly relevant in the resonant leptogenesis literature.
The problem is that when we extract a distribution from a function the remainder is also a distribution, and one must be careful when taking the square. The explicit case at point here is that
\[\Big{(}\mathrm{PV}\!\Big{\{}\frac{1}{x}\Big{\}}\Big{)}^{2}\neq\mathrm{PV}\! \Big{\{}\frac{1}{x^{2}}\Big{\}}\,, \tag{9}\]
where PV refers to the principal value part of the function. Indeed, while \(1/x\) has the principal value sequence \(\mathrm{PV}(1/x)=\mathfrak{Re}(1/(x+i\varepsilon))=x/(x^{2}+\epsilon^{2})\), the corresponding sequence for \(1/x^{2}\) is
\[\mathrm{PV}\!\Big{\{}\frac{1}{x^{2}}\Big{\}}=\,\mathfrak{Re}\Big{(}\frac{1}{( x\pm i\varepsilon)^{2}}\Big{)}=\frac{x^{2}-\varepsilon^{2}}{(x^{2}+\varepsilon^{2})^{ 2}}\,. \tag{10}\]
In these expressions, the limit \(\varepsilon\to 0\) is of course assumed. From this exercise one infers that the correct squared propagator in the PVS scheme for a finite \(\Gamma\) is
\[|D_{\mathrm{off}}^{\mathrm{PVS}}(s)|^{2}=\frac{(s-m^{2})^{2}-(m\Gamma)^{2}}{((s-m^{2})^{2}+(m\Gamma)^{2})^{2}} \tag{11}\]
as was indeed suggested in [6]. Subtracting this off-shell part from the squared BW propagator one finds that
\[|D_{\mathrm{on}}^{\mathrm{PVS}}(s)|^{2} =|D_{\mathrm{BW}}(s)|^{2}-|D_{\mathrm{off}}^{\mathrm{PVS}}(s)|^{2}\] \[=\frac{2(m\Gamma)^{2}}{((s-m^{2})^{2}+(m\Gamma)^{2})^{2}}\] \[\to\frac{\pi}{m\Gamma}\delta(s-m^{2})\,. \tag{12}\]
That is, the narrow-width limit in the PVS scheme requires the use of the delta sequence \(\pi\delta(x)=2\epsilon^{3}/[x^{2}+\varepsilon^{2}]^{2}\).
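The normalization of this delta sequence is easy to verify symbolically. The following minimal check (our own illustration) integrates the on-shell remainder of Eq. (12) over the full range of \(s\):

```python
import sympy as sp

s, m, G = sp.symbols('s m Gamma', positive=True)

# On-shell remainder of the PVS scheme, second line of Eq. (12)
on_shell_sq = 2*(m*G)**2 / ((s - m**2)**2 + (m*G)**2)**2

# The total weight must match the pi/(m*Gamma) prefactor of the
# delta function in the narrow-width limit of Eq. (12)
weight = sp.integrate(on_shell_sq, (s, -sp.oo, sp.oo))
print(sp.simplify(weight))   # expected output: pi/(G*m)
```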
The formulas for the SRS scheme (3) and (6) and for the PVS scheme (7) and (11) are equivalent up to order \(\Gamma^{2}\). The PVS scheme also has a well defined \(\Gamma\to 0\) limit for the off-shell propagators:
\[D_{\mathrm{off}}^{\Gamma=0}(s) \to\mathrm{PV}\Big{\{}\frac{1}{s-m^{2}}\Big{\}}\] \[|D_{\mathrm{off}}^{\Gamma=0}(s)|^{2} \to\mathrm{PV}\Big{\{}\frac{1}{(s-m^{2})^{2}}\Big{\}}. \tag{13}\]
The limiting case (13) can be understood as yet another subtraction scheme. We stress that the subtraction scheme dependence affects only the off-shell propagators. All schemes were designed to have the same on-shell limits:
\[D_{\mathrm{on}}(s) =-i\pi\delta(s-m^{2})\] \[|D_{\mathrm{on}}(s)|^{2} =\frac{\pi}{m\Gamma}\delta(s-m^{2}). \tag{14}\]
There is no _a priori_ reason to prefer one scheme over the other, although the limiting scheme (13) is perhaps conceptually most consistent, as we shall argue later. It seems that most confusion related to RIS subtraction in the literature has resulted from a failure to realize that each scheme requires separate formulas for the off-shell propagator and the squared off-shell propagator
functions. Finally, we point out that the above discussion is not restricted to \(s\)-channel processes. Resonances can appear also in \(t\)- and \(u\)-channels. When this happens, the subtraction should be performed as in the \(s\)-channel case.
A fundamental problem in all of the above schemes is that they can occasionally lead to _negative_ reaction rates. This can happen because the squared off-shell propagators are negative near the on-shell point; if the scattering process is enhanced there, the integrated rates may also become negative. This is an apparently unphysical result, but it does not necessarily lead to a failure of the kinetic equation network. We shall return to these issues in Sec. VI.
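To make the last point concrete, the sketch below (illustrative parameters, with a hypothetical Gaussian weight standing in for a physical integrand) evaluates a mock rate integral against the squared PVS off-shell propagator (11). Since the kernel is negative for \(|s-m^{2}|<m\Gamma\), a weight peaked at the resonance yields a negative "rate", whereas one peaked away from it stays positive:

```python
import numpy as np
from scipy.integrate import quad

m, Gamma = 100.0, 1.0   # illustrative units; m*Gamma sets the resonance width

def D2_off_PVS(s):
    """Squared PVS off-shell propagator, Eq. (11); negative for |s-m^2| < m*Gamma."""
    return ((s - m**2)**2 - (m * Gamma)**2) / ((s - m**2)**2 + (m * Gamma)**2)**2

def mock_rate(center, width):
    """Mock 'rate' with a hypothetical Gaussian weight (illustration only)."""
    F = lambda s: np.exp(-((s - center) / width)**2)
    val, _ = quad(lambda s: F(s) * D2_off_PVS(s),
                  center - 8 * width, center + 8 * width,
                  points=[m**2], limit=500)
    return val

w = m * Gamma
print(f"weight on resonance : {mock_rate(m**2, w):+.3e}")          # negative
print(f"weight off resonance: {mock_rate(m**2 + 5 * w, w):+.3e}")  # positive
```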
## III A minimal example of double counting
Here we show that the decay contribution to kinetic equations indeed corresponds to the contribution from the on-shell propagator in the scattering channel. To this end we consider a setting with unspecified particles \(a\), \(b\) and \(c\), and for demonstrative purposes focus only on the processes shown in Fig. 1. We want to track the distribution function of species \(c\), whose kinetic equation can be written as
\[\mathrm{L}\left[f_{c}(p_{1})\right]=\mathcal{C}^{c}_{aa\leftrightarrow cc}(p _{1})+\mathcal{C}^{c}_{b\leftrightarrow cc}(p_{1})\,, \tag{15}\]
where the Liouville operator \(\mathrm{L}\left[f(p)\right]\equiv(\partial_{t}-Hp\partial_{p})f(p)\). To allow for analytic calculation, we work in the Maxwell-Boltzmann (MB) approximation, \(1\pm f_{i}\approx 1\) and \(f_{i}^{\mathrm{eq}}=e^{-\beta E_{i}}\) for \(i=a,b,c\), and assume that particles \(a\) and \(b\) are in equilibrium. We will prove that when the intermediate \(b\) particle is treated in the idealized resonance limit, the equilibrium contribution of the scattering term reduces exactly to the equilibrium decay term.
We start by evaluating the equilibrium scattering term in the MB limit:
\[\mathcal{C}^{c}(a_{\mathrm{eq}}a_{\mathrm{eq}}\leftrightarrow cc)\] \[=\frac{1}{2E_{p_{1}}^{c}}\int_{p_{2},k_{1},k_{2}}(2\pi)^{4}\, \delta^{(4)}(k_{1}+k_{2}-p_{1}-p_{2})\] \[\quad\times|\mathcal{M}_{aa\leftrightarrow cc}|^{2}\left[f_{a}^{ \mathrm{eq}}(k_{1})f_{a}^{\mathrm{eq}}(k_{2})-f_{c}(p_{1})f_{c}(p_{2})\right], \tag{16}\]
where \(\int_{p}\equiv\int\mathrm{d}^{3}p/[2E_{p}(2\pi)^{3}]\). The integral over momenta \(k_{1}\) and \(k_{2}\) is easily performed when one demands that the detailed balance holds: \(f_{a}^{\mathrm{eq}}(k_{1})f_{a}^{\mathrm{eq}}(k_{2})=f_{c}^{\mathrm{eq}}(p_{1})f_{c}^{\mathrm{eq}}(p_{2})\). Furthermore, factoring the squared matrix element as \(|\mathcal{M}_{aa\leftrightarrow cc}|^{2}=|\mathcal{M}_{aa\leftrightarrow b}|^{2}|D_{b}|^{2}|\mathcal{M}_{b\leftrightarrow cc}|^{2}\) we end up with

\[\mathcal{C}^{c}(a_{\mathrm{eq}}a_{\mathrm{eq}}\leftrightarrow cc)=\frac{1}{2E_{p_{1}}^{c}}\int_{p_{2}}\frac{\lambda^{1/2}(s,m_{a}^{2},m_{a}^{2})}{8\pi s}\,|\mathcal{M}_{aa\leftrightarrow b}|^{2}|D_{b}(s)|^{2}\,|\mathcal{M}_{b\leftrightarrow cc}|^{2}\left[f_{c}^{\mathrm{eq}}(p_{1})f_{c}^{\mathrm{eq}}(p_{2})-f_{c}(p_{1})f_{c}(p_{2})\right], \tag{17}\]
where \(s\equiv(p_{1}+p_{2})^{2}\) and \(\lambda(x,y,z)=(x-y-z)^{2}-4yz\). To compare with the decay term, we need to evaluate
\[\mathcal{C}^{c}(b_{\mathrm{eq}}\leftrightarrow cc)=\frac{1}{2E_{p_{1}}^{c}}\int_{p_{2},q}(2\pi)^{4}\,\delta^{(4)}(q-p_{1}-p_{2})\] \[\quad\times|\mathcal{M}_{b\leftrightarrow cc}|^{2}\left[f_{b}^{\mathrm{eq}}(q)-f_{c}(p_{1})f_{c}(p_{2})\right]\,. \tag{18}\]
The detailed balance condition again allows us to replace \(f_{b}^{\mathrm{eq}}(q)\) by \(f_{c}^{\mathrm{eq}}(p_{1})f_{c}^{\mathrm{eq}}(p_{2})\). Writing the three-dimensional integral over momentum \(q\) as a four-dimensional integral with the measure \(\delta(q^{2}-m_{b}^{2})\,(2\pi)^{-3}\mathrm{d}^{4}q\) and using \(\delta^{(4)}(q-p_{1}-p_{2})\) to carry out integration over \(\mathrm{d}^{4}q\), the decay term then becomes
\[\mathcal{C}^{c}(b_{\mathrm{eq}}\leftrightarrow cc)=\frac{1}{2E_{p_{1}}^{c}}\int_{p_{2}}2\pi\,\delta\big{(}(p_{1}+p_{2})^{2}-m_{b}^{2}\big{)}\,|\mathcal{M}_{b\leftrightarrow cc}|^{2}\left[f_{c}^{\mathrm{eq}}(p_{1})f_{c}^{\mathrm{eq}}(p_{2})-f_{c}(p_{1})f_{c}(p_{2})\right]. \tag{19}\]

Comparing with the scattering term (17), we see that the two coincide once the squared propagator \(|D_{b}(s)|^{2}\) in (17) is replaced by its on-shell part \(\pi\delta(s-m_{b}^{2})/(m_{b}\Gamma_{b})\) from Eq. (14): using \(\Gamma_{b\to aa}=\lambda^{1/2}(m_{b}^{2},m_{a}^{2},m_{a}^{2})|\mathcal{M}_{aa\leftrightarrow b}|^{2}/(16\pi m_{b}^{3})\), the on-shell scattering term reproduces the decay term (19) multiplied by the branching ratio \(\Gamma_{b\to aa}/\Gamma_{b}\). This is precisely the double counting that the RIS subtraction is designed to remove.
## IV The origin of the RIS problem
The previous example verified the double counting between the decay and scattering channels, and showed that the on-shell \(s\)-channel propagator defined in (14) indeed corresponds to the decay contribution. In this section we shall take a look at the problem from a more general point of view.
### Spectral functions
A simple interacting system where the RIS problem may arise is the Yukawa theory with a scalar field \(\varphi\) and a fermion field \(\psi\). If \(\varphi\) is too light to decay into a fermion pair, isolated poles exist for both \(\varphi\) and \(\psi\), and no RIS problem arises. In this case the vacuum spectral functions \(\mathcal{A}_{\varphi}\) and \(\mathcal{A}_{\psi}\) can be written as follows:
\[\mathcal{A}_{\varphi} =\pi\epsilon(k_{0})\big{(}R_{\varphi}\delta(s-m_{\varphi}^{2})+\rho_{\varphi}(s)\big{)}\] \[\mathcal{A}_{\psi} =\pi\epsilon(k_{0})(\not{k}+m_{\psi})\Big{(}R_{\psi}\delta(s-m_{\psi}^{2})+\rho_{\psi}(s)\Big{)}, \tag{25}\]
where \(\rho_{\varphi,\psi}(s)\) are the continuum (off-shell) parts to the spectral functions and the one-particle weight functions \(R_{\varphi,\psi}\) are constrained to be less than unity by the spectral sum rules. Indeed, \(\frac{1}{\pi}\int\mathrm{d}k_{0}k_{0}\mathcal{A}_{\varphi}=1\) and \(\frac{1}{\pi}\int\mathrm{d}k_{0}\mathcal{A}_{\psi}=\gamma^{0}\) imply that
\[R_{\varphi,\psi}=1-\frac{1}{2\pi}\int\mathrm{d}s\rho_{\varphi,\psi}(s)<1. \tag{26}\]
The continuum parts \(\rho_{\varphi,\psi}(s)\) may to a good accuracy be computed perturbatively using spectral free-theory propagators. In this case one can also derive a good BE approximation for the kinetic equations by reducing the SD equations to the on-shell limit using the spectral propagators:
\[i\Delta_{\varphi}^{<} =2\pi\epsilon(k_{0})R_{\varphi}f_{\varphi}(k)\delta(s-m_{\varphi}^{2})\] \[iS_{\psi}^{<} =2\pi\epsilon(k_{0})(\not{k}+m_{\psi})R_{\psi}f_{\psi}(k)\delta(s-m_{\psi}^{2}) \tag{27}\]
and moreover
\[\Delta_{\varphi}^{\mathrm{H}} =\mathrm{PV}\Big{\{}\frac{R_{\varphi}}{s-m_{\varphi}^{2}}\Big{\}}\] \[i\Delta_{\varphi}^{11} =i\Delta_{\varphi}^{\mathrm{H}}+2\pi\epsilon(k_{0})R_{\varphi}( f_{\varphi}(k)+\tfrac{1}{2})\delta(s-m_{\varphi}^{2}), \tag{28}\]
where \(\Delta_{\varphi}^{11}\) is the Feynman propagator, whose Hermitian component is given by \(\Delta_{\varphi}^{\mathrm{H}}\). Similar equations hold for the fermion \(\psi\). One can usually set \(R_{\varphi,\psi}=1\) to a good approximation for perturbative couplings. We kept them here to emphasize that in an interacting theory the one-particle states do not exhaust the entire state space of the system. However, in the current setup where isolated poles exist, the on-shell formulas (27) and (28) provide a good parametrization for the system.
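To illustrate the sum rule (26) numerically, the sketch below evaluates \(R_{\varphi}\) for a purely hypothetical continuum profile \(\rho_{\varphi}(s)\); both the shape and the prefactor are ad hoc choices for illustration only, whereas the actual \(\rho_{\varphi}\) would follow from the perturbative calculation described above:

```python
import numpy as np
from scipy.integrate import quad

m_psi = 1.0
s_th = (2.0 * m_psi)**2   # two-fermion threshold in s
c = 0.05                  # hypothetical continuum strength (toy value)

def rho_toy(s):
    """Toy continuum: vanishes at threshold, falls off as 1/s^2 at large s."""
    return c * np.sqrt(1.0 - s_th / s) / s**2

integral, _ = quad(rho_toy, s_th, np.inf)
R_phi = 1.0 - integral / (2.0 * np.pi)   # sum rule (26)
print(f"one-particle weight R_phi = {R_phi:.6f}")   # slightly below unity
```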
When \(m_{\varphi}>2m_{\psi}\), \(\varphi\) is unstable and no longer has an isolated pole. In this case the spectral solutions do not strictly speaking exist, and it is not possible to derive the BE limit for the \(\varphi\) field without further approximations. For example, if the energy scales one is interested in do not overlap with the \(\varphi\) resonance, one may be able to derive a good BE limit for the problem based on the spectral parametrization for the fermion and treating \(\varphi\) only as an intermediate resonance. The RIS problem becomes acute only when the relevant energy scales _do_ overlap with the resonance _and_ one implements an on-shell kinetic equation also for \(\varphi\), as we shall see in the next section.
The on-shell limit can often be a good approximation also for unstable particles. When this is so, it would seem natural to use the spectral limit for all propagator functions, including the Hermitian parts. This would give rise to the limiting subtraction prescription (13), which in this sense is the most natural scheme to use. But this choice is not unique, as we shall see below.
### Schwinger-Dyson equations
We now discuss the RIS problem in the context of SD equations. We emphasize that the full SD equations, which span the entire phase space of the 2-point functions, do not need RIS subtraction. The issue emerges only when the SD equations for unstable particles are reduced to the on-shell limit. The SD equations for the Yukawa theory can be schematically written as
\[\Delta_{\varphi} =\Delta_{\varphi,0}+\Delta_{\varphi,0}\otimes\Pi_{\varphi}\otimes\Delta_{\varphi}\] \[S_{\psi} =S_{\psi,0}+S_{\psi,0}\otimes\Sigma_{\psi}\otimes S_{\psi} \tag{29}\]
where \(\Delta_{\varphi,0}\) and \(S_{\psi,0}\) are the free propagators and \(\Delta_{\varphi}\) and \(S_{\psi}\) are the full propagators. In the direct space representation the convolutions are defined as \((A\otimes B)(x,y)\equiv\int\mathrm{d}^{4}zA(x,z)B(z,y)\), where \(z_{0}\) lives on the complex time contour. In real time the SD equations split into Kadanoff-Baym (KB) equations for the pole functions and for the Wightman functions. In the bosonic case we get:
\[(\Delta_{\varphi,0}^{-1}-\Pi_{\varphi}^{p})\otimes\Delta_{\varphi }^{p} =1 \tag{30}\] \[(\Delta_{\varphi,0}^{-1}-\Pi_{\varphi}^{\mathrm{H}})\otimes\Delta _{\varphi}^{s} =\Pi_{\varphi}^{s}\otimes\Delta_{\varphi}^{\mathrm{H}}+\mathcal{C}_{ \varphi}^{s},\]
where \(p=r,a\) refer to the retarded and advanced pole functions and \(s=<,>\) to the statistical Wightman functions, and we defined the collision terms
\[\mathcal{C}_{\varphi}^{<}=\frac{1}{2}\big{(}\Pi_{\varphi}^{>}\otimes\Delta_{ \varphi}^{<}-\Pi_{\varphi}^{<}\otimes\Delta_{\varphi}^{>}\big{)}, \tag{31}\]
and \(\mathcal{C}_{\varphi}^{>}=-\mathcal{C}_{\varphi}^{<}\). Similar decomposition can be derived for the fermionic correlation functions \(S_{\psi}^{p,s}\); see _e.g._[32]. The KB equations usually cannot be solved without further approximations, such as the spectral ansatz (27). The reduction of the KB equations to the spectral limit is a delicate task [32; 33; 34]. However, to understand the RIS problem, it is sufficient to concentrate on the collision terms \(\mathcal{C}_{\varphi,\psi}^{<}\), which in the end emerge as the collision integrals in Boltzmann equations.
In Fig. 2 we show the diagrams contributing to Eq. (29) up to two loops in the two-particle irreducible (2PI) expansion. Dashed lines correspond to \(\varphi\)-propagators and continuous ones to \(\psi\)-propagators, and thin (thick) lines correspond to the free (full) propagators. We emphasize that the two-loop diagrams shown in Fig. 3 are _not_ included in full SD equations.
They are already accounted for by the one-loop diagrams, because the full SD equations sum all perturbative corrections associated with the diagrams included in the 2PI expansion. However, when simplifying approximations are imposed, the consistency of the 2PI expansion breaks down and some non-2PI diagrams may need to be inserted by hand. The simplest example is the case where we treat the \(\varphi\)-field as a resonance and drop it from the SD equation network. In this case the scalar decay diagrams are not summed by the SD equations and the last diagram in Fig. 3 must be included perturbatively into the SD equation for the \(\psi\)-field.
The spectral limit is more delicate. Here the SD equation for the \(\varphi\)-field is kept, but all collision integrals are reduced to the on-shell limit, where the one-loop terms reduce to the decay terms in the BEs. While the BEs still sum these on-shell contributions to all orders, this summation completely misses all off-shell contributions. The 2PI expansion is again broken and _the off-shell parts_ of the diagrams shown in Fig. 3 must be included perturbatively. The RIS problem has thus entered.
The cuts in two-loop diagrams indicated by blue dashed lines in Fig. 2 give only the interference terms (_e.g._ between the \(s\)- and \(t\)-channels) in the collision integrals, while the squared matrix elements in each channel emerge from the cuts in the forbidden diagrams of Fig. 3. The summation argument applies to all noncut internal propagators, and so the need for subtraction concerns all channels and all matrix elements including the interference terms in collision integrals. In the present example the on-shell condition is kinematically forbidden from appearing in fermion lines, but it affects the scalar propagators in the last diagrams in Figs. 2 and 3.
The standard on-shell reduction of the SD equations to BEs explicitly uses the spectral propagators of Eqs. (27) and (28). However, one _can_ replace the Hermitian principal value propagators by other off-shell propagators with no change to the reduction procedure. Indeed, the different subtraction schemes discussed so far are but different definitions for the Hermitian propagator functions in the spectral ansatz. From this point of view all schemes are equally good. In the next section we will introduce another subtraction scheme that is free of the issue of negative rates and provides a quantitative estimate for the validity of the BE limit.
## V Cut-subtraction scheme
In the previous section we saw that both the need for and the ambiguity of the RIS subtraction arise when the full solutions to the SD equations are approximated by spectral solutions. The validity of the spectral limit clearly depends on the resonance width \(\Gamma\). For a small width, excitations around the resonance may give nearly identical contributions to physical processes, allowing them to be treated as an effective single-particle state. If \(\Gamma\) is very small, the integrated weight of such states may even saturate the spectral sum rule. This is the case when the usual spectral reduction of the SD equations to the BE limit with free theory normalization \(R=1\) is valid. If the physical process changes significantly over the resonance region, however, the spectral approximation starts to break down.
We quantify these simple observations by defining the following cut-subtraction scheme:
\[D_{\rm off}^{\Delta}(a)=\big{[}1-\Theta(a\!-\!m^{2},\Delta)\big{]}D_{\rm BW}(a), \tag{32}\]
where \(a=s,t\), or \(u\), depending on the channel one is looking at, and the cut function \(\Theta\) singles out a region around the resonance. The simplest choice is the top-hat function:
\[\Theta(x,\Delta)=\theta(\Delta-x)\theta(x+\Delta). \tag{33}\]
The squared off-shell propagator does not need a separate rule in this scheme. The propagator in Eq. (32) is again assumed to replace the Hermitian parts of the Feynman and anti-Feynman propagators in the spectral decomposition, _e.g._ in Eq. (28). Combined with this prescription, the statistical propagators in Eq. (27) (as well as the spectral parts of the Feynman and anti-Feynman propagators) are rescaled by a weight function:
\[R(\Delta)\equiv\frac{1}{\pi}\int{\rm d}s\,\Theta(s\!-\!m^{2},\Delta)\,m\Gamma |D_{\rm BW}(s)|^{2}. \tag{34}\]
It should be obvious that none of these changes interfere with the on-shell reduction of the SD equations.
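For the top-hat cut, the weight function can be evaluated in closed form, which gives a quick sanity check on Eq. (34). The following minimal Python sketch assumes the usual relativistic Breit-Wigner form \(D_{\rm BW}(s)=(s-m^{2}+im\Gamma)^{-1}\) (not spelled out above) and placeholder mass and width values:

```python
import numpy as np
from scipy.integrate import quad

m, Gamma = 125.0, 0.004            # placeholder resonance mass and width

def bw2(s):
    """|D_BW(s)|^2 for D_BW(s) = 1/(s - m^2 + i*m*Gamma)."""
    return 1.0 / ((s - m**2) ** 2 + (m * Gamma) ** 2)

def R(Delta):
    """Weight function of Eq. (34) with the top-hat cut of Eq. (33)."""
    val, _ = quad(lambda s: m * Gamma * bw2(s) / np.pi,
                  m**2 - Delta, m**2 + Delta, points=[m**2])
    return val

for n in (1, 10, 100):             # cut width Delta in units of m*Gamma
    exact = (2 / np.pi) * np.arctan(n)   # closed form for this D_BW
    print(n, R(n * m * Gamma), exact)    # the two columns agree
```

In this convention \(R(\Delta)\to 1\) as \(\Delta\to\infty\), which is just the saturation of the spectral sum rule mentioned above.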
Figure 3: Additional perturbative diagrams needed in the on-shell limit. The numbers are the closed time path indices singling out the embedded self-energy diagram \(\Sigma^{>}=\Sigma^{21}\), and blue dashed lines indicate cuts corresponding to these indices. Red dotted lines in the internal boson propagators \(\Delta_{11}\) and \(\Delta_{22}\) indicate that these propagators are included without their on-shell parts.
Figure 2: The diagrams contributing to the SD equations in the Yukawa theory to the two-loop order in the 2PI expansion. Thin lines indicate the free and thick lines the full propagators. Blue dashed lines express cuts defined similarly to Fig. 3.
The precise form of the cut function in Eq. (33) is not relevant, and the characteristic width \(\Delta\) will depend on the problem. The weight function will give a _quantitative measure_ for the validity of the spectral limit. To see how this works, consider some physical quantity \(\mathcal{F}\), which is an integral over, say, the Mandelstam variable \(s\):
\[\mathcal{F}=\int\mathrm{d}s\,F(s)\,|D_{\mathrm{BW}}(s)|^{2}\,. \tag{35}\]
For example, the vacuum contribution to the \(\psi\) collision integral from the process \(\psi\bar{\psi}\to\psi\bar{\psi}\), coming from the cut in the last diagram in Fig. 3, would have this form. We can now divide \(\mathcal{F}\) into on- and off-shell contributions \(\mathcal{F}=\mathcal{F}_{\mathrm{on}}+\mathcal{F}_{\mathrm{off}}\), where
\[\mathcal{F}_{\mathrm{off}}=\int\mathrm{d}s\,F(s)\,|D_{\mathrm{off}}^{\Delta}(s)|^{2}\,, \tag{36}\]
and
\[\mathcal{F}_{\mathrm{on}} =\frac{1}{m\Gamma}\int\mathrm{d}s\,F(s)\,\Theta(s\!-\!m^{2},\Delta)\,m\Gamma|D_{\mathrm{BW}}(s)|^{2}\] \[\approx R(\Delta)\,\pi\,\frac{F(m^{2})}{m\Gamma}\,. \tag{37}\]
In the second line we assumed that \(F(s)\) remains essentially a constant in the cut region and then used Eq. (34). This clearly corresponds to replacing \(|D_{\mathrm{BW}}(s)|^{2}\) inside the on-shell region with a weighted delta function: \(\pi R(\Delta)\delta(s-m^{2})/(m\Gamma)\).
If \(\mathcal{F}\) indeed described the process \(\psi\bar{\psi}\to\psi\bar{\psi}\), then the on-shell part in Eq. (37) would be exactly canceled by the on-shell vacuum decay term \(\varphi\to\psi\bar{\psi}\) arising from the one-loop diagram in the fermion SD equation in Fig. 2. Indeed, because of our rule for weighting the statistical propagators by \(R(\Delta)\), the latter is given by the standard vacuum decay term multiplied by \(R(\Delta)\). The exact calculation equating the two is essentially the same as the one we presented in Sec. 3, with \(b=\varphi\) and \(aa,cc\to\psi\bar{\psi}\).
Our scheme thus corresponds to setting \(\Delta_{\varphi}^{\mathrm{H}}\to D_{\mathrm{off}}^{\Delta}\) _and_ \(R_{\varphi}\to R(\Delta)\) in Eqs. (27) and (28). All collision integrals can then be computed from these functions following standard finite temperature field theory methods. One then recovers the standard tree level rules for computing collision integrals, augmented with the scaling of the phase space factors \(f_{\varphi}\to R(\Delta)f_{\varphi}\) and the replacement \(D_{\mathrm{BW}}\to D_{\mathrm{off}}^{\Delta}\) for all intermediate scalar propagators. Extensions to more complicated theories where fermions can also be unstable should be obvious.
We should emphasize one important difference between our scheme and the other subtraction schemes. Consider the case where \(F\) is a constant. In this case all standard schemes give \(\mathcal{F}_{\mathrm{off}}=0\), whereas in the cut scheme \(\mathcal{F}_{\mathrm{off}}=F(1-R(\Delta))\). The latter is the qualitatively correct result. The standard schemes thus _overestimate_ the weight of the effective one-particle states, which forces the flat-weight integrals of the off-shell parts to vanish in compensation. This effect also underestimates the off-shell contributions for a nonconstant \(F(s)\) and may even lead to a negative cross section if the corresponding \(F(s)\) is enhanced near the resonance. The cut-subtraction scheme does not suffer from these problems by construction.
The particle picture makes sense only if the approximation in Eq. (37) holds with a large enough \(\Delta\), such that \(R(\Delta)\simeq 1\). Indeed, if \(F(s)\) changed rapidly on the scale \(m\Gamma\), we would be forced to use a very small \(\Delta\ll m\Gamma\) to extract \(F(s)\) outside the integral. This would then give \(R(\Delta)\ll 1\), showing that the one-particle contribution to the process is very small. Of course the particle picture would not make sense in this limit. Nevertheless, the cut-subtraction procedure allows for a continuous mapping between the limit where the particle picture is valid [\(R(\Delta)\simeq 1\)] and the resonance limit where \(\varphi\) no longer is part of the thermal bath [\(R(\Delta)\simeq 0\)].
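The on/off-shell split can also be checked numerically. The sketch below, using the same placeholder Breit-Wigner as before and a slowly varying test function \(F(s)\), verifies that the on-shell term of Eq. (37) plus the off-shell remainder of Eq. (36) reproduces the full integral of Eq. (35) for any choice of \(\Delta\):

```python
import numpy as np
from scipy.integrate import quad

m, Gamma = 125.0, 0.004
bw2 = lambda s: 1.0 / ((s - m**2) ** 2 + (m * Gamma) ** 2)
F = lambda s: 1.0 + 1e-6 * (s - m**2)       # slowly varying test function
lo, hi = 0.5 * m**2, 2.0 * m**2             # integration window around the peak

def on_plus_off(Delta):
    """F_on of Eq. (37) plus F_off of Eq. (36) in the cut-subtraction scheme."""
    R = (2 / np.pi) * np.arctan(Delta / (m * Gamma))
    on = R * np.pi * F(m**2) / (m * Gamma)
    off = (quad(lambda s: F(s) * bw2(s), lo, m**2 - Delta)[0]
           + quad(lambda s: F(s) * bw2(s), m**2 + Delta, hi)[0])
    return on + off

full = quad(lambda s: F(s) * bw2(s), lo, hi, points=[m**2])[0]
for n in (2, 10, 100):
    print(n, on_plus_off(n * m * Gamma), full)   # agreement for every Delta
```

For a flat \(F\) the sum reproduces the full integral for any \(\Delta\), which is precisely the property the standard schemes lack.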
## VI Numerical examples
In this section we compare quantitatively the predictions of the SRS and PVS schemes and study the cut scheme as a function of \(\Delta\) in a physical system where RIS subtraction is needed, at least in principle. To be specific, and to keep the discussion as simple as possible, we consider the dark matter freeze-out in the singlet extension of the standard model (_e.g._ [35; 36; 37; 38; 39]), where the DM-abundance calculations have been recently studied to high precision [30; 31; 40; 41; 42]. The model is described by the Lagrangian
\[\mathcal{L}=\tfrac{1}{2}(\partial_{\mu}S)^{2}-\tfrac{1}{2}\lambda_{hs}h^{2}S^{ 2}+\ldots\,, \tag{38}\]
where \(h\) is the Standard Model Higgs, \(S\) is a singlet scalar, and dots refer to other terms whose precise form is not relevant here, including the SM Lagrangian. We focus on the \(h\) and \(S\) particle densities governed by the following set of coupled Boltzmann equations:
\[\begin{split}\mathrm{L}\left[f_{h}(p_{1})\right]&= \mathcal{C}_{h\leftrightarrow SS}^{h}(p_{1})+\mathcal{C}_{h\leftrightarrow( \mathrm{SM})}^{h}(p_{1})+\mathcal{C}_{\mathrm{other}}^{h}\,,\\ \mathrm{L}\left[f_{S}(k_{1})\right]&=\mathcal{C}_{SS \leftrightarrow h}^{S}(k_{1})+\mathcal{C}_{SS\leftrightarrow(\mathrm{SM})}^{S}(k _{1})+\mathcal{C}_{\mathrm{other}}^{S}\,.\end{split} \tag{39}\]
The scattering process \(SS\leftrightarrow(\mathrm{SM})\) describes the annihilation of the \(S\) scalars to the standard model (SM) particles via \(s\)-channel Higgs boson resonance. Other channels that could affect these densities, not relevant for the present discussion, are contained in \(\mathcal{C}_{\mathrm{other}}^{h,S}\). The RIS subtraction is necessary in this model, because the decay and inverse decay terms \(h\leftrightarrow SS\) and \(h\leftrightarrow(\mathrm{SM})\) already account for the on-shell part of the scattering term as discussed in previous sections.
To perform a full comparison we should solve Eq. (39) using the different schemes for the off-shell propagator in the scattering integral. In the momentum-dependent case the collision integrals should be arranged to contain an integral over \(s\), as was done _e.g._ in [41], to facilitate the cut regularization. We will instead concentrate on the simpler momentum-integrated version of Eq. (39), where the relevant dynamics are given by the thermally averaged cross section [43]1
Footnote 1: In thermal equilibrium this rate is exact when the \(S\) scalars are treated in the spectral approximation and we assume Maxwell-Boltzmann (MB) statistics. Indeed, in thermal equilibrium the full solution to the Higgs SDE is \(i\Delta_{h}^{\ast}=2f_{\rm so}A_{h}\), where \(\mathcal{A}_{h}=\epsilon(k_{0})m\Gamma D_{\rm BW}(s)\). With these assumptions the one-loop collision term in the SD equation for \(S\) is given by Eq. (19), with the delta function replaced by \((m\Gamma/\pi)D_{\rm BW}(s)\). Tracing the derivation in Sec. 3 backwards, one can establish the equality of that result and Eq. (17) which, when integrated over the initial momentum, gives (40) with the BW propagator. From this result one then must subtract the on-shell part, which eventually gives (40).
\[\langle v_{\text{M\o l}}\sigma_{x}\rangle=\frac{1}{8m_{\rm S}^{4}TK_{2}^{2}(\frac{m_{\rm S}}{T})}\int_{4m_{\rm S}^{2}}^{\infty}\mathrm{d}s\,\sigma_{x}(s)\,(s-4m_{\rm S}^{2})\,\sqrt{s}\,K_{1}\Big{(}\frac{\sqrt{s}}{T}\Big{)}, \tag{40}\]
where \(K_{1}\) and \(K_{2}\) are modified Bessel functions of the second kind.
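A direct numerical implementation of Eq. (40) is straightforward. The sketch below leaves the cross section \(\sigma_{x}(s)\) as a user-supplied function; the Breit-Wigner example at the end is a schematic placeholder with arbitrary normalization, not the actual singlet-model cross section:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

def sigma_v(T, mS, sigma_of_s, points=None):
    """Thermally averaged cross section, Eq. (40); kn(1, .) and kn(2, .)
    are the modified Bessel functions K_1 and K_2."""
    smax = (4 * mS + 30 * T) ** 2       # K_1 suppresses the integrand beyond this
    integrand = lambda s: (sigma_of_s(s) * (s - 4 * mS**2)
                           * np.sqrt(s) * kn(1, np.sqrt(s) / T))
    val, _ = quad(integrand, 4 * mS**2, smax, points=points, limit=200)
    return val / (8 * mS**4 * T * kn(2, mS / T) ** 2)

# schematic s-channel Breit-Wigner cross section (placeholder normalization)
mh, Gh, mS = 125.0, 0.004, 60.0
sigma_bw = lambda s: 1.0 / ((s - mh**2) ** 2 + (mh * Gh) ** 2)
print(sigma_v(50.0, mS, sigma_bw, points=[mh**2]))
```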
\(0.09\), which implies that one cannot choose as large a \(\Delta\) as in the first example. The deviation of the cut contributions from the exact result, given by \(1-r(\Delta)\), is shown in the inset of Fig. 7. We see that already for \(\Delta=10\epsilon_{\rm os}\) we have a percent level deviation from the accurate result, while \(R(10\epsilon_{\rm os})\approx 0.93\) is still well below unity. In this case the best spectral modeling of the system would correspond to a cut scheme with \(\Delta\approx 5\) and \(R\approx 0.9\); _i.e._ there should be a suppression in the decay channel contributions to the collision integrals.
Finally, in Fig. 8 we present results for the same case as in Fig. 6, but at a lower temperature of \(T=5\) GeV. Here the SRS and the PVS scenarios again give positive results with a dispersion of about 25%. One might then think that the division into on-shell and off-shell contributions still makes sense, but this is not the case, as can be seen from the cut-scheme results. The sum of the cut contributions starts to deviate from the full result already when \(\Delta>\epsilon_{\rm os}\) and the off-shell contribution gets completely erased before the on-shell result reaches its maximum. In this case the Higgs resonance cannot be treated as an on-shell particle. Or conversely, if one insists on doing so, one should use \(\Delta<\epsilon_{\rm os}\), giving strong suppression to the decay contributions: \(R<0.5\).
To conclude this section we note that none of the scenarios discussed above was realistic in the sense that one _needed_ to deal with the problem of on- and off-shell division. In the singlet model the accurate evaluation of the final dark matter abundance only requires that the kinetic description is valid near the freeze-out temperature, which does not require a kinetic equation for the on-shell excitations of the Higgs field. There are other realistic settings [24; 30; 44], where these effects may be relevant, however, and our examples display qualitatively the problems one might encounter. Overall the standard schemes work rather well and are indistinguishable when applied correctly. Moreover, even the negative cross sections are not necessarily fatal, although the accuracy of schemes leading to them is difficult to assess without more detailed computations.
## VII Conclusions
In this article we revisited the well-known RIS subtraction problem. The RIS subtraction is necessary for the resonant scattering channels when the intermediate state causing the resonance is also included as a real state in the on-shell kinetic equation network. Over the years many different methods to perform the RIS subtraction have been proposed, with some of them leading to erroneous results and sometimes to negative scattering cross sections.
We took a fresh look into the different RIS schemes that have appeared in the literature. We pointed out how to extend the standard RIS-subtraction scheme to interference matrix elements and emphasized the need to use the correct sequences for different propagator functions in the principal value scheme. It indeed seems that most errors in the literature associated with these schemes have arisen from not realizing that the off-shell propagator and the squared off-shell propagator need separate definitions.
Figure 8: The same as in Fig. 6, but now at the temperature \(T=5\) GeV.
Figure 6: Same as in Fig. 5, but for parameters \(\lambda_{hs}=0.01\) and \(T=50\) GeV. Note that the standard subtraction scheme results are all negative in this case.
Figure 7: Shown is the weight function \(R(\Delta)\) for the parameters used in Fig. 6. The inset shows the deviation of the sum of cut contributions from the full thermal integral, as explained in the text.
We then discussed the origin of the RIS problem from the Schwinger-Dyson equation point of view. The freedom in the definition of the RIS-subtracted propagators corresponds to a freedom in defining the Hermitian part of the pole functions when Schwinger-Dyson equations for unstable particles are reduced to the spectral limit. We used this insight to propose a more physical cut-subtraction scheme, where the modes within distance \(\Delta\) from the resonance are associated with the on-shell state and those outside this interval are treated as virtual states. This method avoids the negative cross sections and provides a direct measure, in terms of an on-shell weight function \(R(\Delta)\), to estimate when the on-shell particle picture is a consistent description.
We finished with a detailed numerical comparison of the different subtraction schemes in a toy model that qualitatively reproduces the various issues one can encounter with them. We pointed out that the negative cross sections in the SRS and PVS schemes, while unphysical at face value, arise to compensate for the overestimated on-shell contributions in these schemes. This problem never appears in the cut scheme. It would obviously be interesting to make detailed comparisons of the different approaches in more realistic setups and with full out-of-equilibrium computations.
## Acknowledgements
This work was supported by the Research Council of Finland Grant No. 342777 and Jenny and Antti Wihuri Foundation.
|
2309.00063 | A compact cryogenic configurable slit unit for a multi-object infrared
spectrograph: Design and Development of a prototype at TIFR | We present a cryogenic configurable slit unit (CSU) for a multi object
infrared spectrograph with an effective field of view of 9.1 arcmin x 9.1
arcmin that was completely conceived and designed in the laboratory at TIFR.
Several components of the CSU including the controller for the commercially
procured piezo-walkers, controlled loop position sensing mechanism using
digital slide callipers and a cryogenic test facility for the assembled
prototype were also developed in-house. The principle of the CSU involves
division of the field of view of the spectrometer into contiguous and parallel
spatial bands, each one associated with two opposite sliding metal bars that
can be positioned to create a slit needed to make spectroscopic observations of
one astronomical object. A three-slit prototype of the newly designed CSU was
built and tested extensively at ambient and cryogenic temperatures. The
performance of the CSU was found to be as per specifications. | P. P. Madhwani, A. P. K. Kutty, B. Mookerjea, J. V. Parmar, V. N. Kurhade, S. L. D'Costa, P. Manoj, A. Surya | 2023-08-31T18:07:27Z | http://arxiv.org/abs/2309.00063v1 | A compact cryogenic configurable slit unit for a multi-object infrared spectrograph: Design and Development of a prototype at TIFR
###### Abstract
We present a cryogenic configurable slit unit (CSU) for a multi object infrared spectrograph with an effective field of view of 9\(\farcm\)1\(\times\)9\(\farcm\)1 that was completely conceived and designed in the laboratory at TIFR. Several components of the CSU including the controller for the commercially procured piezo-walkers, a closed-loop position sensing mechanism using digital slide callipers and a cryogenic test facility for the assembled prototype were also developed in-house. The principle of the CSU involves division of the field of view of the spectrometer into contiguous and parallel spatial bands, each one associated with two opposite sliding metal bars that can be positioned to create a slit needed to make spectroscopic observations of one astronomical object. A three-slit prototype of the newly designed CSU was built and tested extensively at ambient and cryogenic temperatures. The performance of the CSU was found to be as per specifications.
Infrared; Multi-Object Spectrograph; Configurable Slit Unit; Piezowalker-based positioning
P. P. Madhwani, A. P. K. Kutty, B. Mookerjea, J. V. Parmar, V. N. Kurhade, S. L. D'Costa, P. Manoj, A. Surya
## 1 Introduction
Near-infrared (NIR) spectroscopy is particularly well-suited for observation of vibrational and rotational molecular lines that are indicative of the chemo-dynamical structure of stars of the Milky Way and can be used to study stellar radial velocities and abundances [Wallace & Hinkle, 1997]. The NIR spectra of young stellar objects and stars provide important constraints on the accretion of material and the structure of protostellar disks that eventually form planets [Scoville et al., 1983, Alcala et al., 2014]. Spectroscopic observations in the NIR can also be used to access high-redshift galaxies whose important rest-frame optical diagnostic emission lines are redshifted into the NIR [Petitjean, 2011].
Multi Object Spectroscopy (MOS) is an efficient way to observe the spectra of an aggregate of sources that are within the field of view of an instrument at a single pointing. With the advent of technologies enabling fabrication of large format detector arrays, the demand for capabilities to perform simultaneous spectroscopy of multiple objects within fields of view (FOV) covering several square arcminutes up to several square degrees on large telescopes has been on the rise [Ellis & Parry, 1988, Allington-Smith, 2006].
In the instruments capable of MOS, the light from the source is focused by the telescope onto the focal plane, where narrow slits or fiber optics are used to isolate the source light, which is finally dispersed using a spectrograph with a dispersing element like a prism or a diffraction grating. The spectra are then imaged onto a detector. The data undergo detector level processing and extraction, as well as wavelength and flux calibration, typically using software designed specifically to handle the multiplexed spectra. Presently, most large 8-10 meter class ground-based observatories offer at least one MOS instrument option. In ground-based instruments, apertures are either created using slit masks or using fibers arranged into patterns on the sources in the field. For optical telescopes the field-specific slit masks are milled out on-site prior to the observations. Such pre-fabricated slit masks are inconvenient in the near-infrared (1-2 \(\mu\)m) where there is
a strong contribution from background thermal emission that necessitates cooling the slits to temperatures of 120 K or below along with the entire spectrograph. A plausible solution for such instruments is the use of a configurable slit unit (CSU) to precisely position the slit mask that can be configured in-situ in the cryogenically cooled environment shortly before observations begin. Configurable slit spectrographs have a number of benefits over slit mask-based spectrographs, which include flexibility, efficiency (easy to customize the slit width based on the properties of the target) and reduced overhead (in-situ configuration in cryogenic environment). Such configurable slit units are currently installed on the MOSFIRE instrument on the Keck telescope [McLean et al., 2010] and also on the Espectrografo Multiobjeto Infra-Rojo (EMIR) instrument on the Gran Telescopio de Canarias (GTC). MOSFIRE can observe up to 46 objects in a 6\(\farcm\)12\(\times\)6\(\farcm\)12 FOV and EMIR on GTC can observe up to 53 objects in a FOV of 6\(\farcm\)64\(\times\)4\({}^{\prime}\) [Garzon et al., 2022]. The physical size of the EMIR FOV in the focal plane is 307 mm\(\times\)307 mm and for the MOSFIRE the FOV is 267 mm\(\times\)267 mm.
We present here the design of a configurable slit unit operational at cryogenic temperatures (120 K) meant for a Multi Object Infrared Spectrograph (MOIS) [Surya et al., 2023] operational between 0.8-2.5 \(\mu\)m, to be installed on an Indian optical/near-infrared observing facility. The present prototype for the CSU has 5 configurable slits designed as per the optics of the 3.6 m Devasthal Optical Telescope (DOT). However, the modular and scalable nature of the design will enable installation of the CSU on future 10 m class Indian telescopes that are being planned. The entire design has been developed in-house and a prototype consisting of three slits has already been fabricated and tested at cryogenic temperatures.
## 2 Design of the CSU
The design goal for the CSU was to create 5 slits using 10 bars, each of which can have a maximum displacement of up to 100 mm, for a field of view of 85.77 mm\(\times\)85.77 mm. The position of the slit in the FOV and its width are decided based on the location and size of the astronomical source. The CSU is required to generate a multi-slit configuration, a long slit, or an imaging aperture while at cryogenic temperatures, as necessary for its operation at infrared wavelengths.
Each slit bar assembly for the CSU consists of i) Vernier scale, ii) Piezo-walker, iii) Piezo-walker mounting base, iv) Titanium link rods, v) Slit bar and slit edge, and vi) Controller electronics (Figure 1). In the following text we present details of the crucial components of this assembly.
Figure 1: Single slit pair assembly along with the drive electronics. The functional components of the bar assembly are labeled.
### Piezo-walker based drive mechanism
At the operating wavelengths of the MOIS it is necessary to suppress the background emission from the components of the spectrometer, hence the slit bars need to be in vacuum and in a cryogenic environment. This requirement of low temperatures imposes major constraints on the drive mechanisms that can be used for the motion of the slit bars. Additionally, it is preferable that the power dissipation from the bars and the driving motors is negligible when the bars are not in motion. Based on these considerations we have identified piezo-walkers as the drivers for the slit bars, which further have the advantage of negligible backlash error as compared to regular rotary stepper motors that are also used for such purposes. The PI NEXACT-310 piezo-walkers identified for this purpose provide a travel range of 10 to 125 mm and are capable of generating a force of up to 10 N. The piezo-walker drives a ceramic runner which was mechanically interfaced with the slit bar through a titanium link rod, and the motion is controlled by a controller designed in-house. The NEXACT-310 piezo-walkers are functional only down to a temperature of 273 K, hence the piezo-walkers were isolated from thermal radiation using enclosures made out of space-grade multi-layer insulating (MLI) sheet. Only the slit bars were cooled using copper braids connected to a cold plate at the liquid N\({}_{2}\) temperature of 77 K (Fig. 1).
### Closed-loop Vernier-scale based accurate Positioning of the Slit Bars
Accurate positioning of the slit bars was achieved using Grid-Capacitance Linear Synchronous Sensors which consist of a set of grid-capacitances and a signal processing circuit similar to those used in commercially available digital vernier calipers. The position information was tapped from the readout to enable closed-loop correction to achieve an absolute positioning accuracy of 10 \(\mu\)m for each slit bar. The positioning mechanism was also designed and developed in-house. The slit length is fixed (17.15 mm) and the width can range from 0 (fully shut) to 90 mm (fully open). The slit positioning accuracy is \(\pm\)10 \(\mu\)m and the slit width accuracy is \(\pm\)14 \(\mu\)m. The design goal is a slit width of 120 \(\mu\)m. The motion of the slit-bars is restricted within the desired range by using limit switches, which when depressed stop the movement in a particular direction, after which only motion in the reverse direction is possible. The positioning accuracy of the CSU was tested by shining light through the assembly, and the slits of different widths that were formed at different positions of the FOV were imaged using a high resolution camera fitted to a microscope. The attained positions and widths of the slits were accurate to within 15 \(\mu\)m and were found to be consistently reproducible.
### Mechanical Design of the Slit Bars
Figure 2: Cross-section of the single slit bar designed to allow smooth motion without light leak. The copper braids used to cool the slit bars are also shown.
The mechanical design of the slit bars was challenging since the movement of the slit bars had to be smooth while ensuring that there is no light leak through them. In order to meet both design goals, a special design for the cross-section of the slit bars was devised (Fig. 2) which ensured complete overlap of the bars at their edges as well as free movement. Each slit bar consists of two parts, the main body or the bar and the edge. The bar, made of stainless steel, has poor thermal conductivity. The edge of the slit bar is made from a hard aluminium alloy plate with a thickness of \(800\,\mu\)m attached to an overlapping machined edge having a thickness of \(300\,\mu\)m. The choice of aluminium alloy for the slit edge is to ensure efficient cooling to cryogenic temperatures using copper braids connected to the cold plate of the liquid N\({}_{2}\) cylinder. The titanium link rods, which were machined as per the precision requirements, play an important role in the assembly because these isolate the slit bars from the rest of the assembly, thus minimizing the thermal load on the copper braids. The titanium rod in the slit bar assembly works as a link between the Vernier scale, the piezo-walker and the slit bar.
#### 2.3.1 Controller Electronics
The prototype CSU model developed comprises 3 pairs of slit bar assemblies, and each slit bar assembly has a left arm and a right arm. The runner of the piezo walker is coupled to the slit bar as well as an electronic Vernier scale which provides the position information with an accuracy of \(10\,\mu\)m. Each of the six piezo walkers is driven individually by a drive controller consisting of closed-loop circuits developed in-house. The working principle of the control circuit can be understood from the functional block diagram of a single slit assembly (Fig. 3). In the following, although the description is for the three-slit assembly for which the mechanical parts have been fabricated, the electronics has already been designed, built and tested for the five-slit final CSU configuration.
The piezo walker has four legs which, when excited with a \(50\,\)V trapezoidal wave in a particular sequence, produce a linear motion in the runner. The controller circuit is built around a microcontroller which generates square waves of the desired frequency, which are then converted to sine waves by passing through low pass filters. The signals are further amplified, processed and level shifted to produce the trapezoidal
Figure 3: Schematic to show the working principle of the functional blocks of the controller electronics for the piezowalkers.
waveforms of 50 V as required to drive the four legs of the piezo walker. The functional block diagram represents a single slit assembly; two more such assemblies together complete the three-slit assembly. These three slit assemblies, termed slaves, are controlled through a master which is commanded through a user interface.
Operationally, when powered up, all 6 slit bars are moved to the retracted position so that the FOV of the detector is exposed; when fully open, the limit switch is depressed, setting the Vernier readings to a reference value. At the engineering model level, using the user interface it is possible to provide the location and size of all 3 slits by providing 6 parameters, i.e. the 3 positions corresponding to the location of the left arm of each slit and the 3 slit widths. The parameters thus supplied are used by the master controller to compute the distances to be traversed by the 6 arms of the CSU. The computed values are then sent to the corresponding slaves, which enable the movement of the slit bars by the piezos till the desired position and width of all slits are achieved. The configuration of slits on the FOV so achieved remains unchanged till a new set of parameters is transmitted by the user to the master controller. The master controller stores the current location of the slits, so that when a new set of parameters is entered it calculates the range of motion required for each slit bar relative to that position. The controller is programmed to enable motion of the slit bars with a coarse and a fine step depending on the distance between the actual and the desired positions of the bars. The controller is also designed and programmed to avoid accidental collisions of the slit bars.
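To make the control flow concrete, the following minimal Python sketch mimics the closed-loop positioning logic described above. The hardware-facing helpers `read_vernier` and `step_piezo` are hypothetical stand-ins for the in-house electronics (the real logic runs on the microcontroller), and limit-switch and collision handling are omitted for brevity:

```python
TOL = 0.010                # positioning tolerance [mm], i.e. the 10 micron spec
COARSE, FINE = 1.0, 0.02   # coarse/fine step sizes [mm]; values are illustrative

def move_arm(arm, target_mm):
    """Drive one slit-bar arm to target_mm, using the Vernier scale as
    closed-loop feedback: coarse steps far from the target, fine steps near it."""
    while True:
        err = target_mm - read_vernier(arm)       # hypothetical position readout
        if abs(err) <= TOL:
            return
        step = COARSE if abs(err) > 10 * COARSE else FINE
        step_piezo(arm, step if err > 0 else -step)   # hypothetical piezo command

def configure_slits(left_edges, widths):
    """Master-controller step: 3 left-edge positions and 3 widths are turned
    into targets for the left and right arms of each slit."""
    for i, (left, width) in enumerate(zip(left_edges, widths)):
        move_arm(("L", i), left)
        move_arm(("R", i), left + width)
```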
Figure 4 shows the 3-slit CSU prototype being tested in the laboratory. In the final configuration the drive electronics consisting of ten cards will be kept outside of the cryostat and will be used to control the movement of the slits, monitor the positions and temperatures of the different parts of the CSU through hermetically sealed connectors.
## 3 Cryogenic Functionality Testing of the CSU in the laboratory
A dedicated aluminium dewar (supporting a vacuum of up to \(10^{-6}\) mbar) was designed and fabricated at TIFR for the cryogenic testing of the CSU. The dewar consists of a top plate with a flange with four hermetically sealed connectors each with 19 pins as well as an inlet for the 3 litre liquid N\({}_{2}\) tank inside. The inlet pipe to the liquid nitrogen tank was fitted with a solenoid valve based system for automated round-the-clock filling (without spillage) of N\({}_{2}\) that was developed in-house. The base of the slit bar assembly was mounted on the bottom plate of the dewar with teflon rings to isolate the two plates thermally. For ease
Figure 4: Completed CSU prototype creating 3-slits at the FOV in the laboratory. The slit bar assembly on the top will be inside the cryogenic dewar while the drive and monitoring electronics will be outside.
of mounting, the dewar was suspended upside-down and the base plate with the CSU attached to it was attached to the dewar (Fig. 5). Multiple tests of the performance of the cooling scheme as well as of copper braids of different sizes (and hence thermal efficiencies) were carried out in the laboratory using this set-up. No major issues with the motion of the CSU in the final design presented here were noticed during the cryogenic tests. We present the results of one of the tests which was continued uninterrupted for over 72 hours; the motion of the slit bars, measured at regular intervals, was found to be as per specifications for the entire duration of the test. Figure 5 shows the results of the temperature profiles of the slit, the piezo-walker and the coldplate measured using thermistors affixed at different parts of the CSU. In order to maintain the piezo-walkers at a temperature above 273 K they were locally heated in an automated manner. The temperatures measured in the rest of the CSU do not reflect any changes due to this heating, which suggests that the MLI enclosure of the piezo-walkers may be adequate. A more detailed analysis of the thermal background is currently in progress.
## 4 Summary and Future Directions
We presented here the new design of a configurable slit unit for a multi-object infrared spectrograph. The current technical specifications are based on the optics of the DOT and on a five-slit unit. However, the modular nature of the CSU design makes it easily scalable to much larger FOV spectrometers meant for bigger (10 m or larger) telescopes and to larger numbers of slits. The mechanical and electronic designs of the CSU presented here were developed completely indigenously. Except for the commercially procured piezo-walkers, all mechanical and electronic parts were fabricated/built in-house for the three-slit prototype of the CSU presented here. Based on a series of tests performed in the laboratory we conclude that the CSU performs as per specification both at room and cryogenic temperatures. Keeping in mind the constraint imposed by the piezo-walkers being non-functional at temperatures below 273 K, we are working on a design using cryogenic piezo-drives. A definite conclusion on whether a cryogenic piezo-drive is really needed will be possible only after the opto-mechanical design of the spectrometer, including the thermal modelling, is completed. In this context we point out that the commercially available cryo piezo-drives typically have travel ranges of only up to 50 mm, whereas for our design a movement range of up to 100 mm is needed. Thus specially customized commercially available piezo-drives are currently being explored.
## Acknowledgments
The project was entirely funded by Department of Atomic Energy, Government of India, under Project Identification No. RTI 4002. The authors acknowledge the contributions of G. S. Meshram towards the
Figure 5: (_Left_) The CSU connected to the cold plate (at the bottom of the liquid N\({}_{2}\) tank) inside the dewar prior to the cryogenic testing of the unit in the laboratory. The shining surface around the cold plate is a radiation shield created to ensure efficient cooling by reflecting the heat due to the uncooled parts of the dewar. (_Right_) Typical temperature profiles of the slit bar, the body of the piezo-walker and the coldplate measured during one of the cryogenic tests in the laboratory.
initial design of the slit bar edges and also the support and help received from H. Shah, R. Jadhav, S. Poojary, S. B. Bhagat and S. Gharat of the IR Astronomy group, TIFR.
|
2309.10270 | Hybrid SBI or How I Learned to Stop Worrying and Learn the Likelihood | We propose a new framework for the analysis of current and future
cosmological surveys, which combines perturbative methods (PT) on large scales
with conditional simulation-based implicit inference (SBI) on small scales.
This enables modeling of a wide range of statistics across all scales using
only small-volume simulations, drastically reducing computational costs, and
avoids the assumption of an explicit small-scale likelihood. As a
proof-of-principle for this hybrid simulation-based inference (HySBI) approach,
we apply it to dark matter density fields and constrain cosmological parameters
using both the power spectrum and wavelet coefficients, finding promising
results that significantly outperform classical PT methods. We additionally lay
out a roadmap for the next steps necessary to implement HySBI on actual survey
data, including consideration of bias, systematics, and customized simulations.
Our approach provides a realistic way to scale SBI to future survey volumes,
avoiding prohibitive computational costs. | Chirag Modi, Oliver H. E. Philcox | 2023-09-19T02:51:26Z | http://arxiv.org/abs/2309.10270v1 | # Hybrid SBI or How I Learned to Stop Worrying and Learn the Likelihood
###### Abstract
We propose a new framework for the analysis of current and future cosmological surveys, which combines perturbative methods (PT) on large scales with conditional simulation-based implicit inference (SBI) on small scales. This enables modeling of a wide range of statistics across all scales using only small-volume simulations, drastically reducing computational costs, and avoids the assumption of an explicit small-scale likelihood. As a proof-of-principle for this hybrid simulation-based inference (HySBI) approach, we apply it to dark matter density fields and constrain cosmological parameters using both the power spectrum and wavelet coefficients, finding promising results that significantly outperform classical PT methods. We additionally lay out a roadmap for the next steps necessary to implement HySBI on actual survey data, including consideration of bias, systematics, and customized simulations. Our approach provides a realistic way to scale SBI to future survey volumes, avoiding prohibitive computational costs.
Current and upcoming cosmological surveys such as the Dark Energy Spectroscopic Instrument [1; 2; 3], the ESA _Euclid_ satellite mission [4], Vera C. Rubin Observatory (LSST) [5], and the NASA Nancy Grace Roman Space Telescope [Roman; 6; 7], will probe the Universe across a vast array of scales providing high-resolution measurements with the potential to constrain cosmological parameters to unprecedented precision. The past iteration of cosmological data has been analyzed primarily through traditional analytical methods such as perturbation theory (PT); whilst this has been highly successful [8; 9; 10], the approach is necessarily limited to low-order clustering statistics (such as the two- and three-point functions), and to linear and quasi-linear scales. If we wish to fully exploit the information contained within upcoming large scale structure surveys, new methods are needed.
Recently, simulation-based inference (SBI) has emerged as a promising alternative to overcome the limitations of traditional methods [11; 12; 13; 14; 15]. SBI uses numerical simulations to model observables and their summary statistics, before these are combined with neural density estimation methods to efficiently infer cosmological parameters. As a result, it can be applied to any summary statistic that can be simulated and evaluated, including those deep in the non-linear regime, inaccessible to PT, and those whose likelihoods are far from Gaussian. A major limitation of SBI is that its training requires a large number of high-fidelity simulations, spanning a wide range of cosmologies. This makes it a computationally expensive exercise; a drawback that is further exacerbated by the increasing simulation volumes and resolutions needed to model upcoming data. The largest simulation suite currently available for training SBI for galaxy clustering (quijote simulations [16]) has moderately low mass resolution and is only \(1\,h^{-3}\text{Gpc}^{3}\) in volume, which is smaller than the SDSS III-BOSS survey [17]. Creating analogous simulations appropriate for the next generation of cosmological surveys is thus likely to be _computationally prohibitive_.
The primary challenge in scaling up simulations stems from the simultaneous requirements of large volumes, high resolution, and accurate small-scale dynamics. On the largest scales, however, physics is well understood analytically, thus simulation-based approaches are not strictly necessary. In this regime, the density field is close to Gaussian and with quasi-linear perturbations that can be well modeled using perturbation theory. Furthermore, traditional statistics like the power spectrum and bispectrum are close to optimal on such scales [18], thus any more nuanced analysis schemes will not yield significant additional information. It is only the small, non-linear scales where higher-order statistics dominate and thus SBI is needed. The conclusion is clear: if we can combine perturbative methods for analyzing large scales with simulation-based methods for analyzing small scales, we can mitigate the computational cost, and thus make feasible the full analysis of upcoming generations of surveys.
In this work, we propose such a framework for hybrid simulation-based inference (HySBI), and demonstrate its utility in predicting cosmological parameters from summary statistics of the matter density field. This is depicted schematically in Fig. 1.
## Simulation-based Inference
SBI begins with a training dataset composed of \((\mathbf{\theta},\mathbf{x})\) pairs where \(\mathbf{\theta}\) are the underlying model parameters to be inferred (here, cosmology parameters and any relevant nuisance parameters), and \(\mathbf{x}\) is the corresponding data-vector generated from a simulator (such as an \(N\)-body realization). Analysis usually proceeds by training a conditional neural density estimator \(q_{\phi}\) (with parameters \(\phi\)) to learn: (a) the likelihood model, \(p(\mathbf{x}|\mathbf{\theta})\), by maximizing the conditional log-probability of the data \(\mathbf{x}\) conditioned on the parameters \(\mathbf{\theta}\) (known as a neural likelihood estimator; NLE); or (b) the posterior itself, \(p(\mathbf{\theta}|\mathbf{x})\), by maximizing the conditional log-probability of the model parameters \(\mathbf{\theta}\) conditioned on the data \(\mathbf{x}\), over the training data (known as a neural posterior estimator; NPE).
When performing inference on a test observation \(\mathbf{x}^{\prime}\) using NPE, we can directly query the trained estimator \(q_{\phi^{*}}\) to generate samples from the posterior, _i.e._ \(\mathbf{\theta}\sim q_{\phi^{*}}(\mathbf{\theta}|\mathbf{x}^{\prime})\). In NLE, we instead need to combine the learnt likelihood model \(q_{\phi^{*}}(\mathbf{x}|\mathbf{\theta})\) with the prior \(p(\mathbf{\theta})\), then use Markov chain Monte Carlo (MCMC) algorithms to generate samples from this posterior. NPE is often preferred over NLE for two reasons: (1) often the dimensionality of the posterior distribution is smaller than that of the likelihood, which can lead to asymptotically better learning; (2) NPE does not require the additional step of running MCMC. In the HySBI framework, however, we use NLE, since it enables us to combine the learnt neural likelihood on small scales with the analytic likelihood on large scales.
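As a concrete reference point, the snippet below sketches the basic NLE workflow with the sbi package used later in this work; the toy simulator and prior are placeholders, and the exact class names may differ between package versions:

```python
import torch
from sbi.inference import SNLE
from sbi.utils import BoxUniform

prior = BoxUniform(low=torch.zeros(5), high=torch.ones(5))
theta = prior.sample((2000,))
x = theta + 0.1 * torch.randn_like(theta)       # stand-in for an N-body simulator

inference = SNLE(prior=prior)
inference.append_simulations(theta, x).train()  # fits q_phi(x | theta)
posterior = inference.build_posterior()         # MCMC over learned likelihood x prior
samples = posterior.sample((1000,), x=x[0])     # posterior draws for one observation
```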
## II Hybrid simulation-based inference
Consider the scenario in which the data-vector \(\mathbf{x}\) can be split into two components - large scales \(\mathbf{x}_{L}\), and small scales \(\mathbf{x}_{S}\) - such that \(\mathbf{x}=\{\mathbf{x}_{L},\mathbf{x}_{S}\}\). Such a split is natural for the power spectrum (where \(\mathbf{x}_{L}\) would comprise measurements with wavenumbers below some transition scale, _i.e._\(\{P(k):k\leq k_{*}\}\)); however, we note that the following discussion is applicable to any combination of large- and small-scale statistics (e.g., power spectra on large scales and wavelets on small scales). The data likelihood can then be decomposed as the product of a large-scale likelihood and the _conditional_ likelihood of small-scales given large-scales:
\[p(\mathbf{x}|\mathbf{\theta})=p(\mathbf{x}_{L},\mathbf{x}_{S}|\mathbf{\theta})=p(\mathbf{x}_{L}|\mathbf{ \theta})p(\mathbf{x}_{S}|\mathbf{x}_{L},\mathbf{\theta}). \tag{1}\]
In HySBI, we propose to use classical statistics such as the power spectrum and bispectrum on large scales for which the likelihood, \(p(\mathbf{x}_{L}|\mathbf{\theta})\), can be modeled analytically with perturbation theory [e.g., 19, 8, 9]. As such, only the likelihood term on small scales, \(p(\mathbf{x}_{S}|\mathbf{x}_{L},\mathbf{\theta})\), needs to be learnt with simulations.1 We stress that, in contrast to global SBI approaches, this likelihood term depends not only on the model parameters \(\mathbf{\theta}\) but also on the large scale statistic \(\mathbf{x}_{L}\).
Footnote 1: This differs from previous hybrid approaches, such as Ref. [20, 21], who use small-volume simulations to model the signal, but assume a Gaussian likelihood; this fails for many higher-order statistics.
As discussed above, the main advantage of learning only the small-scale likelihood with simulations is that, in principle, it can be done by simulating only a small fractional volume at full fidelity instead of the whole simulation box, creating a surrogate likelihood that mitigates the increasing computational cost of scaling SBI to upcoming cosmological surveys. For a sufficiently large sub-volume, we still have enough modes on small scales that the associated sample variance is sub-dominant compared to other errors such as data noise and stochasticity in training. However, two new issues arise: (1) to correctly learn
Figure 1: Schematic overview of this work. Large-scale statistics are modeled using classical perturbative techniques, whilst small-scale summaries make use of simulation-based inference, trained on small sub-volume simulations, obviating the need to run costly high-resolution simulations of the entire survey volume.
the conditional dependence \(p(\mathbf{x}_{S}|\mathbf{x}_{L},\mathbf{\theta})\), one needs to be able to estimate the correct large-scale statistics \(\mathbf{x}_{L}\) for the exact large-scale realization in which the small-scale Universe forms, using only coarse simulations of the full volume and full-fidelity simulations of the small volume; (2) a given sub-volume suffers from increased sample variance and super-sample effects from the large box that will impact the evolution in a given sub-volume, thus adding noise to the small scale statistics \(\mathbf{x}_{S}\) used for training SBI. Thus we learn a _surrogate likelihood_ for the small scale data which has the same mean but a larger variance and hence is more conservative (effectively trading bias from poor quality simulations for variance from a smaller simulation box). In this first work, we present a proof-of-principle of our methodology and focus on studying the impact of this super-sample variance. In the discussion below, we will comment on other subtleties associated with this approach.
## III Data analysis
Setup - In order to present our methodology and its advantages, this work considers constraining the \(\Omega_{m}\) and \(\sigma_{8}\) parameters from the three-dimensional dark matter distribution. Our motivation to use the dark matter density field rather than a more realistic halo or galaxy sample for this first exercise is to avoid the need to infer non-cosmological parameters (such as those of the halo-occupation distribution), allowing us to focus solely on the impact of super-sample variance.
To train the small-scale likelihood we use the quijote latin-hypercube suite [16], which contains 2000 \(N\)-body simulations varying five cosmology parameters: \(\{\Omega_{\rm m},\,\sigma_{8},\,\Omega_{\rm b},\,n_{s},h\}\). These simulations evolve \(1024^{3}\) cold dark matter particles in a \(1\,h^{-3}{\rm Gpc}^{3}\) box with the TreePM Gadget-III code [22]. To mimic a small-volume simulation, we split each simulation into eight sub-volumes with side-length \(500\,h^{-1}{\rm Mpc}\). 1500 simulations are used for training, 200 for validation, and the remaining 300 for testing. We emphasize that we use the sub-volumes to estimate small-scale statistics only for _training_ HySBI; in tests, we always analyze the statistics estimated from the whole volume as the simulated data.
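For reference, carving the eight training sub-volumes out of a simulated density mesh is a one-liner; the sketch below assumes a cubic mesh with an even number of cells per side:

```python
import numpy as np

def subvolumes(field):
    """Split an (N, N, N) density mesh into its eight (N/2)^3 sub-cubes."""
    n = field.shape[0] // 2
    return [field[i*n:(i+1)*n, j*n:(j+1)*n, k*n:(k+1)*n]
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]

subs = subvolumes(np.zeros((128, 128, 128)))    # eight 64^3 sub-boxes
```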
Large-scale statistics - Here, we set \(\mathbf{x}_{L}\) equal to the observed power spectrum of the dark matter density field, \(\hat{P}(k)\), on scales \(k\in[0.007,0.15]\,h{\rm Mpc}^{-1}\), obtained from the full simulation volume of \(1\,h^{-3}{\rm Gpc}^{3}\) with \(\Delta k=2\pi/1000\,h\,{\rm Mpc}^{-1}\), _i.e._ the fundamental mode of the box. This scale cut is chosen to balance two competing factors: (a) perturbation theory predictions are accurate only on large scales (favoring lower \(k_{\rm max}\)); (b) to suppress sample variance in the small-scale sub-box statistics, we require many modes and thus larger \(k_{\rm max}\). Power spectra are estimated with nbodykit, using cloud-in-cell interpolation on an \(N=256\) side mesh [23].
We adopt a Gaussian model for the large-scale likelihood, as specified by the Effective Field Theory of Large Scale Structure:
\[-2\log p(\mathbf{x}_{L}|\mathbf{\theta}) = \sum_{k}\left[\frac{P_{\rm loop}(k)-2c_{s}^{2}P_{\rm ct}(k)-\hat{ P}(k)}{\sigma_{P}(k)}\right]^{2} \tag{2}\]
where the variance term is specified by the linear theory power spectrum, evaluated at the true cosmology of the test simulations. Our model for the theoretical power spectrum is given by 1-loop standard perturbation theory (\(P_{\rm loop}(k)\)) in addition to a renormalization contribution \(-2c_{s}^{2}P_{\rm ct}(k)\), depending on the counter-term \(c_{s}^{2}\), which must be treated as a free parameter and marginalized over [e.g., 24, 25].2
Footnote 2: The PT models are computed via loop integrals over the linear power spectrum (with e.g., class-pt[26]), itself computed using Boltzmann solvers such as class or camb. For the sake of speed, here we instead train a simple neural network emulator to predict \(P_{\rm loop}\) and \(P_{\rm ct}\) as a function of cosmological parameters (given a grid of training data from class-pt). We have validated that this does not impact the final inference but speeds up our MCMC chains by a factor \(\approx 100\).
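In code, Eq. (2) is a one-line chi-squared. The sketch below treats the emulator of the 1-loop spectra as a user-supplied function, a hypothetical stand-in for the network described in the footnote:

```python
import numpy as np

def log_like_large(theta, cs2, Pk_hat, sigma_P, emulator):
    """Gaussian large-scale likelihood of Eq. (2); `emulator(theta)` is a
    hypothetical stand-in returning (P_loop, P_ct) on the k-grid of Pk_hat."""
    P_loop, P_ct = emulator(theta)
    resid = (P_loop - 2.0 * cs2 * P_ct - Pk_hat) / sigma_P
    return -0.5 * np.sum(resid**2)
```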
Small-scale statistics - We consider two different choices for the small-scale statistic \(\mathbf{x}_{S}\): (1) the power spectrum for \(k\leq 0.5\,h\,{\rm Mpc}^{-1}\); (2) wavelet coefficients.3 In both cases, the conditional likelihood model \(p(\mathbf{x}_{S}|\mathbf{x}_{L},\mathbf{\theta})\) is learnt using SBI, given the above \(\mathbf{x}_{L}\) definition. To train the likelihood model, we estimate the small-scale statistics using only the sub-volumes of the simulation, one at a time; this emulates the realistic setting when it is practical only to compute reduced-volume simulations. We thus have eight estimates of \(\mathbf{x}_{S}\) from each simulation which generically differ due to sample and super-sample variance effects. To test our methodology, however, we generate \(\mathbf{x}_{S}\) from the full simulation volume of \(1\,h^{-3}{\rm Gpc}^{3}\) such that all the observational data is used.
Footnote 3: Since the wavelets are not localized in Fourier space, it is hard to define a strict cut-off. The coefficients used here are constructed using a mother wavelet whose weight falls to zero at the Nyquist frequency \(k=0.8\,h\,{\rm Mpc}^{-1}\).
Here, we measure the power spectrum on scales \(k\in[0.15,0.5]\,h\,{\rm Mpc}^{-1}\) with \(\Delta k\) fixed to the fundamental mode of the small sub-volume, _i.e._ \(2\pi/500\,h\,{\rm Mpc}^{-1}\); this allows for sub-volume simulations and full-volume 'data' to be robustly compared. Wavelet coefficients are defined for a statistically homogeneous real field \(F\) as \(S_{0}=\langle|F|^{p}\rangle\) and \(S_{1}=\langle|F*\psi_{j}|^{p}\rangle\), where the dilation index \(j\) varies as \(q/Q\) with \(q\in\{0,\ldots,J\times Q-1\}\). The parameter \(J\) is the total number of octaves, _i.e._ doublings in scale of the mother wavelet, and \(Q\) is the quality factor, which controls the number of scales per octave. We use the package galactic-wavelets4 to estimate these wavelet coefficients. Our mother wavelet is inspired by [27] and
we take \(J=3\), \(Q=4\), and \(p\in\{0.5,1,1.5,2\}\), resulting in a total of four \(S_{0}\) and 48 \(S_{1}\) wavelet coefficients [28].
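These coefficients can be computed with plain FFTs; the sketch below uses a log-Gaussian band-pass filter as a stand-in for the actual mother wavelet of [27] implemented in galactic-wavelets, and with \(J=3\), \(Q=4\) and the four powers it returns the same 4 + 48 coefficient count quoted above:

```python
import numpy as np

def wavelet_coeffs(field, J=3, Q=4, powers=(0.5, 1.0, 1.5, 2.0)):
    n = field.shape[0]
    k1d = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij", sparse=True)
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = 1e-12                     # regularize the zero mode
    fk = np.fft.fftn(field)
    S0 = [np.mean(np.abs(field) ** p) for p in powers]
    S1 = []
    for q in range(J * Q):                    # dilation index j = q / Q
        k0 = (n / 4) / 2 ** (q / Q)           # central scale of the dilated filter
        psi_hat = np.exp(-0.5 * (Q * np.log(kmag / k0)) ** 2)  # log-Gaussian band-pass
        conv = np.fft.ifftn(fk * psi_hat)     # F * psi_j
        S1 += [np.mean(np.abs(conv) ** p) for p in powers]
    return np.array(S0), np.array(S1)

S0, S1 = wavelet_coeffs(np.random.randn(64, 64, 64))
print(S0.shape, S1.shape)                     # (4,) and (48,)
```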
Non-periodic boundary conditions - Being cut-outs from a larger simulation, the sub-volumes do not have periodic boundary conditions, which must be accounted for when estimating the small-scale power spectrum and wavelet coefficients. For the power spectrum, this is achieved via a (square) window-function matrix \(M\) such that the true small-scale power is \(P(k)=[M^{-1}\tilde{P}](k)\), where \(\tilde{P}\) is the measured small-scale power spectrum on the sub-volume, assuming periodicity. This matrix is estimated by generating Gaussian white-noise fields over the full volume with power injected only in a single \(k\)-bin, and then measuring the leakage of this power into other bins when the power spectrum is evaluated on a sub-volume.
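A minimal Monte-Carlo version of this estimate reads as follows; both helpers are hypothetical placeholders for the steps just described (a Gaussian field with unit power in a single \(k\)-bin, and the periodicity-assuming power spectrum of its sub-volume cut-out):

```python
import numpy as np

def estimate_window_matrix(nbins, field_in_bin, measure_pk_sub, nreal=25):
    """Monte-Carlo estimate of M, defined by P_sub = M @ P_true."""
    M = np.zeros((nbins, nbins))
    for b in range(nbins):
        for _ in range(nreal):
            M[:, b] += measure_pk_sub(field_in_bin(b)) / nreal
    return M

# deconvolution of a measured sub-volume spectrum:
# P_true = np.linalg.solve(M, P_sub_measured)
```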
Performing similar corrections for the wavelet coefficients is easier since they are localized in configuration space. Here, we achieve this by masking the pixels near the boundary such that for any wavelet centered on a pixel within the mask, the fraction of weight that leaks outside the sub-volume is less than a pre-defined small threshold. This implies that the larger the wavelets, the more pixels are masked. As such, the threshold determines the bias-variance trade-off in our estimator, and is here set to 0.02.
SBI Implementation - We use the sbi5 package to train masked auto-regressive flows as conditional neural density estimators and learn the likelihood, \(q_{\phi}(\mathbf{x}_{S}|\mathbf{x}_{L},\mathbf{\theta})\sim p(\mathbf{x}_{S}|\mathbf{x}_{L},\mathbf{\theta})\). To minimize stochasticity in training, we train 400 networks for each statistic by varying hyperparameters such as the network width, number of layers, learning rate and batch size. We use the Weights-and-Biases6 package for the hyperparameter search. After training, we collect the ten neural density estimators with the best validation loss and use them as an ensemble.
Footnote 5: [https://github.com/mackelab/sbi](https://github.com/mackelab/sbi)
Footnote 6: [https://wandb.ai/site](https://wandb.ai/site)
Posterior Inference - For inference over a new observation \(\mathbf{x}^{\prime}\), we combine the learnt small-scale likelihood with the Gaussian likelihood for the large scales, \(p(\mathbf{x}_{L}|\mathbf{\theta})\) (via Eq. 1), and a uniform prior on cosmology parameters (dictated by the bounds of the quijote latin-hypercube). The counter-term \(c_{s}\) has a Gaussian prior \(\mathcal{N}(0,10)\) [e.g., 8]. To generate samples from the posterior, we use the emcee package [29] for MCMC sampling and run 20 chains with 11,000 samples each.
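Putting the pieces together, the posterior passed to emcee has the following shape; `log_like_large` is the PT likelihood sketched earlier, while `nle_log_prob`, the data vectors and the prior-box values are hypothetical stand-ins for the trained ensemble and the quijote bounds:

```python
import numpy as np
import emcee

theta_lo = np.array([0.1, 0.03, 0.5, 0.8, 0.6])   # placeholder prior bounds
theta_hi = np.array([0.5, 0.07, 0.9, 1.2, 1.0])

def log_posterior(params, xL, xS):
    """Eq. (1): analytic large-scale likelihood times the learned conditional
    small-scale likelihood, plus the priors described in the text."""
    theta, cs2 = params[:-1], params[-1]
    if np.any(theta < theta_lo) or np.any(theta > theta_hi):
        return -np.inf                             # uniform prior on cosmology
    lp = -0.5 * (cs2 / 10.0) ** 2                  # N(0, 10) prior on c_s^2
    lp += log_like_large(theta, cs2, xL, sigma_P, emulator)
    lp += nle_log_prob(xS, xL, theta)              # hypothetical trained ensemble
    return lp

# xL_obs, xS_obs, sigma_P, emulator and nle_log_prob are defined elsewhere
ndim = len(theta_lo) + 1
p0 = np.random.uniform(np.append(theta_lo, -1), np.append(theta_hi, 1), (20, ndim))
sampler = emcee.EnsembleSampler(20, ndim, log_posterior, args=(xL_obs, xS_obs))
sampler.run_mcmc(p0, 11000)
```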
## III Results
In Fig. 2, we show the results of HySBI using the power spectrum and wavelet coefficients for inferring \(\Omega_{m}\) and \(\sigma_{8}\) on one of the test simulations7. To minimize the impact of super-sample variance, the SBI component of HySBI is trained using all eight sub-volumes of the simulations, _i.e._ we take the mean of the small-scale statistics estimated from all the individual sub-volumes for training.8 As discussed in the previous section, the test data is generated from the full \(1\,h^{-3}\text{Gpc}^{3}\) simulation volume.
Footnote 7: Results are similar on all the 160 simulations in the test dataset.
Footnote 8: The mean of all subvolume statistics well approximates the statistic computed on the total volume; differences appear only from boundary effects, which are small if the characteristic scale of \(\mathbf{x}\) is small compared to the subbox size.
For comparison, we also show the results of the PT analysis, which is restricted to the large scales (\(k\in[0.007,0.15]\,h\,\text{Mpc}^{-1}\)), and a global SBI power spectrum analysis (hereafter NLE) that covers the same scales as HySBI (\(k\in[0.007,0.5]\,h\,\text{Mpc}^{-1}\)). As expected, HySBI significantly outperforms the classical PT analysis, clearly demonstrating the utility of our method. That said, the NLE power spectrum constraints on \(\sigma_{8}\) are somewhat tighter than those of HySBI over the same range; this is because, in our scale-decoupled formalism, HySBI also infers and marginalizes over the large-scale PT counter-term \(c_{s}^{2}\), unlike global SBI. This can in principle be improved upon by learning a prior on \(c_{s}^{2}\) conditioned on small-scale statistics and cosmology, but we do not explore that in
Figure 2: Posterior distribution of \(\Omega_{m}\) and \(\sigma_{8}\) inferred from a power spectrum analysis using classical perturbative methods on large-scales (PT), full simulation-based inference (NLE), and our new hybrid scheme (HySBI). We also show HySBI results including small-scale wavelet statistics (purple). Results are shown for a representative test simulation (quijote LH-872). HySBI strongly outperforms classical techniques, and is able to robustly infer cosmological parameters without large-volume simulations.
Finally, HySBI with wavelet coefficients on small scales outperforms all the power spectrum analyses, emphasizing the advantage of using higher-order statistics.
To study the impact of super-sample covariance, we repeat the HySBI analysis, training the small-scale likelihood using different numbers of sub-volumes (each of size \(500^{3}\,h^{-3}\text{Mpc}^{3}\)). The corresponding constraints on \(\Omega_{m}\) and \(\sigma_{8}\) are shown in Fig. 3 for a single representative simulation, using the power spectrum as the summary statistic for both small and large scales. Averaging over all test simulations, the uncertainties when using four, two, and one sub-volumes are inflated by \(\sim 6,\,9,\,11\%\) (\(\sim 45,\,80,\,130\%\)) for \(\Omega_{m}\) (\(\sigma_{8}\)) compared to those using all eight sub-volumes. The inference of \(\sigma_{8}\) is thus more sensitive to super-sample effects than that of \(\Omega_{m}\); this matches expectations, since most of the constraining power on the latter comes from the large scales, which are modeled with PT, whilst \(\sigma_{8}\) constraints are most informed by small scales.
In Fig. 4, we show the same analysis using wavelet coefficients on small scales. In this case, we find that constraints are inflated by \(\sim 20,\,24,\,50\%\) (\(\sim 40,\,80,\,100\%\)) for \(\Omega_{m}\) (\(\sigma_{8}\)); here the \(\Omega_{m}\) analysis is more sensitive to super-sample effects than before, since the wavelets carry significant information on the matter density on all scales. On the other hand, \(\sigma_{8}\) constraints are slightly less sensitive for wavelet coefficients than for the power spectrum, since wavelets include statistics corresponding to \(\delta^{p}\) for \(p=0.5,1,1.5\) (where \(\delta\) is the overdensity field), whilst the power spectrum only measures squared statistics, _i.e.,_ \(\delta^{2}\). The former lead to lower sample variance across the different sub-volumes.
## IV Discussion and Outlook
Simulation-based inference provides a promising avenue through which to overcome the limitations of traditional analytic methods for cosmological analysis, and make use of the upcoming tranche of small-scale data. However, the computational cost of generating the high-fidelity and high-volume simulations needed to train SBI at the precision required for the next generation of cosmological surveys will likely be prohibitive.
To mitigate this challenge, we propose Hybrid simulation-based inference (HySBI); this combines perturbative methods on large scales (using quasi-optimal statistics such as the power spectrum and bispectrum), with simulation-based techniques on small scales (including arbitrary higher-order statistics which cannot be modeled analytically). This is achieved by decomposing the data likelihood as the product of a large-scale likelihood modeled analytically with perturbation theory, and a _conditional_ likelihood of small scales given large scales; the latter is learnt using simulations. This approach can lead to significant computational gains, since we need to simulate only a small fraction of the data volume in order to model the whole.
A natural drawback is that modeling small scales using only one (or a few) sub-volumes of a simulation adds super-sample effects from the large-scale modes in the global simulation box. As a result, we learn a conservative surrogate likelihood for small scales which has a larger variance (but removes the bias that would result from simulating the entire volume at low resolution). In this first work, we have presented a proof-of-principle of HySBI and studied the impact of this super-sample variance by inferring \(\sigma_{8}\) and \(\Omega_{m}\) using the three-dimensional distribution of the dark-matter density field, analyzed with the large-scale power spectrum, and the small-scale power spectrum or wavelet coefficients.
Figure 4: As Fig. 3, but using wavelet coefficients as the small-scale statistic. The improvement in \(\sigma_{8}\) with number of sub-volumes is more muted in this case.
Figure 3: As Fig. 2, but comparing the power spectrum HySBI constraints when the SBI is trained using one, two, four, and eight sub-volumes of every simulation. This reduces super-sample variance in the small-scale training data, and is found to somewhat enhance \(\sigma_{8}\) constraints.
Upon splitting the entire \(1h^{-3}\)Gpc\({}^{3}\) simulation box into eight sub-volumes, we found that using even one of the eight sub-volumes to train the small-scale likelihood leads to significant gains over state-of-the-art power spectrum analyses.
Performing a global SBI analysis leads to slightly better results than our hybrid scheme; this occurs since we have enforced that the large-scale likelihood is not influenced by small scales, and thus marginalized over a necessary perturbative counter-term parameter appearing on large scales. Such an assumption must be altered for realistic observables such as galaxies, for which there are nuisance parameters in both PT and SBI (encoding bias and halo-occupation distributions). These parameters, however, are not independent and should be inferred jointly. This will be explored in future work.
To fully realize the computational gains promised by HySBI, two technical hurdles must be overcome. As mentioned briefly above, a key ingredient of our SBI framework is accurate, high-fidelity small sub-volume training simulations which "know" about the large-volume Universe in which they are embedded, _i.e._ those from which we can measure both the small-scale statistics, \(\mathbf{x}_{S}\), and the corresponding realization of the large-scale data, \(\mathbf{x}_{L}\), but without running full-fidelity simulations on the full volume for the latter. We plan on developing these simulations by building upon promising techniques that are already available. For example, the s-COLA method creates coarse PT simulations on large scales (which provide the global \(\mathbf{x}_{L}\) measurements), and evolves small sub-volumes with high-resolution particle-mesh simulations [30, 31]. The FlowPM simulations, which use multi-grid schemes to estimate small-scale forces independently in different sub-volumes, can also be adapted to this end [32]. A third avenue is to build upon the zoom-in simulations for galaxy formation [33]. Crucially, sub-volumes can be independently simulated in these frameworks, and thus we can straightforwardly run a single small-volume simulation with a known (coarsened) large-volume realization. Since these can be run on a single GPU, the resulting pipeline will not require multi-node communications and can enjoy GPU acceleration; this will lead to significant computational gains even if we simulate all the sub-volumes of a given simulation to minimize super-sample effects [34, 32].
A second technical hurdle to be overcome is the inclusion of systematic effects such as survey masks, fibre collisions, _et cetera_. These often couple small and large scales and will be considered in future works. Finally, we currently assume a clear division between the large- and small-scale statistics (such that \(\mathbf{x}_{S}\) can be measured from sub-boxes), which excludes some types of statistics such as squeezed bispectra; similar techniques can likely be developed for such modified analyses (often of use in inflationary studies) [e.g., 35, 36, 37, 38]. Though much work remains to be done, it seems likely that hybrid approaches such as HySBI can create a promising symbiosis of old and new techniques, significantly enhancing the science output of future surveys without requiring prohibitive computational costs.
###### Acknowledgements.
OHEP is a Junior Fellow of the Simons Society of Fellows and thanks the office of Chuck Schumer for immigration assistance. CM would like to thank Dr. Strangelove for the title, and Ben Wandelt, Michael Eickenberg, and Mikhail Ivanov for useful discussions. We would especially like to thank Bruno Regaldo-Saint Blancard for setting up and sharing the code for estimating wavelet coefficients with erosion. We additionally thank Sebastian Wagner-Carena for insightful comments on the draft. This work was partly conceived during the "Beyond 2-point Challenge" workshop at the University of Arizona in February 2023.
|
2309.06412 | Quantum transport in a multi-path graphene Aharonov-Bohm inteferometer | We investigate the quantum transport dynamics of electrons in a multi-path
Aharonov-Bohm interferometer comprising several parallel graphene nanoribbons.
At low magnetic field strengths, the conductance displays a complex oscillatory
behavior stemming from the interference of electron wave functions from
different paths, reminiscent of the diffraction grating in optics. With
increasing magnetic field strength, certain nanoribbons experience transport
blockade, leading to conventional Aharonov-Bohm oscillations arising from
two-path interference. We also discuss the impact of edge effects and the
influence of finite temperature. Our findings offer valuable insights for
experimental investigations of quantum transport in multi-path devices and
their potential application for interferometry and quantum sensing. | Cynthia I. Osuala, Zitao Tang, Stefan Strauf, Eui-Hyeok Yang, Chunlei Qu | 2023-09-12T17:25:16Z | http://arxiv.org/abs/2309.06412v1 | # Quantum transport in a multi-path graphene Aharonov-Bohm interferometer
###### Abstract
We investigate the quantum transport dynamics of electrons in a multi-path Aharonov-Bohm interferometer comprising several parallel graphene nanoribbons. At low magnetic field strengths, the conductance displays a complex oscillatory behavior stemming from the interference of electron wave functions from different paths, reminiscent of the diffraction grating in optics. With increasing magnetic field strength, certain nanoribbons experience transport blockade, leading to conventional Aharonov-Bohm oscillations arising from two-path interference. We also discuss the impact of edge effects and the influence of finite temperature. Our findings offer valuable insights for experimental investigations of quantum transport in multi-path devices and their potential application for interferometry and quantum sensing.
+
Footnote †: Corresponding author. [email protected]
## I Introduction
Graphene is a compelling contender for next-generation compact and high-performance electronic devices due to its exceptional properties, including the atomically thin monolayer structure, linear dispersion relation near the Dirac points, absence of an energy gap, and high carrier mobility [1; 2]. These properties have enabled a myriad of innovative applications ranging from ultrafast electronics and flexible optoelectronics to advanced sensing technologies. Recent research has also unveiled graphene's potential as a versatile playground for exploring intriguing quantum interference phenomena similar to those observed in optics, yet operating on the nanoscale with matter waves. Notable examples of electron optics with graphene include the realization of Fabry-Perot interferometers, Mach-Zehnder interferometers, and Veselago lenses, which have paved the way for fabricating chip-scale electronic interferometers [3; 4; 5].
The Aharonov-Bohm (AB) effect is a quintessential example of quantum interference that has garnered substantial attention across various materials systems, including metals [6], semiconductor heterostructures [7], carbon nanotubes [8], and topological insulators [9; 10; 11; 12]. It arises from the interference of wave functions of charged particles encircling a ring structure in the presence of a perpendicular magnetic field. Due to the presence of magnetic flux, charged particles pick up a different phase \(\Delta\varphi=2\pi BS/\phi_{0}\) when traversing along the two paths where \(\phi_{0}=h/e\) denotes the flux quantum, \(S\) is the area enclosed by the ring, \(B\) is the strength of the applied perpendicular magnetic field [13]. Consequently, by varying the magnetic field, the conductance exhibits a periodic oscillation with an oscillation frequency given by \(f_{B}=S/\phi_{0}\). AB oscillations have been observed in various graphene materials, including mechanically exfoliated graphene [14; 15; 16; 17; 18], epitaxial graphene [19], CVD graphene [20; 21], exfoliated bilayer graphene [22; 23]. However, most investigations have focused on two-path interferometry setups, which inherently limit the complexity of achievable interference patterns [24; 25; 26; 27]. The natural question is, what happens when considering a multi-path interferometer involving interconnected quantum pathways? Despite its conceptual importance and simplicity, multi-path AB oscillations have been relatively underexplored in the literature.
Here, we systematically investigate quantum transport in a multi-path AB interferometer comprising several parallel graphene nanoribbons (GNRs). We found that the magnetoconductance exhibits complex oscillatory behavior due to the intricate interplay between electron trajectories, magnetic flux, and the quantum Hall effect. When the magnetic field is weak, electrons can be transported from source to drain through all pathways, giving rise to AB interference reminiscent of the diffraction grating effect in optics. As the magnetic field increases, the bulk current evolves into chiral edge states due to the quantum Hall effect, resulting in unidirectional flow and the obstruction of specific pathways. In this regime, we demonstrate a dramatic change in the AB oscillation that evolves from multi-path to two-path interference. Our results showcase the complexity achievable in multi-path interferometers and offer new avenues for harnessing the potential of graphene-based systems for quantum-enhanced technologies.
## II Quantum transport at \(T=0\)
The carbon atoms of graphene arrange themselves in a 2D honeycomb lattice, with two atoms per unit cell, giving rise to remarkable properties and behaviors of graphene. To compute the electronic transport in GNR rings we employ the following spinless nearest-neighbor tight-binding model [2]
\[H=\sum_{i}\epsilon_{i}c_{i}^{\dagger}c_{i}+\sum_{\langle i,j\rangle}t_{ij}(c_ {i}^{\dagger}c_{j}+h.c.) \tag{1}\]
where \(c_{i}^{\dagger}\) (\(c_{i}\)) represents the creation (annihilation) operator for an electron at the \(i\)-th lattice site, with an associated on-site potential energy \(\epsilon_{i}\) which we have set to zero in this work. The summation \(\sum_{\langle i,j\rangle}\) is limited to the nearest-neighbor atoms on the honeycomb lattice with a lattice constant of \(a_{0}=0.246\) nm. Under the influence of a uniform perpendicular magnetic field \(\mathbf{B}=(0,0,B)\), the tight-binding Hamiltonian \(H\) is modified through the Peierls substitution [28]
\[t_{ij}=t_{0}\,e^{-i\frac{2\pi}{\phi_{0}}\int_{\mathbf{r}_{i}}^{\mathbf{r}_{j}}\mathbf{A}(\mathbf{r})\cdot d\mathbf{r}} \tag{2}\]
where \(t_{0}\approx 2.7\) eV is the hopping parameter and \(\mathbf{A}\) is the gauge field specifically in the Landau gauge as \(\mathbf{A}=(-By,0,0)\). To enhance computational efficiency, we employ a scaling factor of \(s=5\) with the hopping parameter rescaled to \(t=t_{0}/s\) and lattice spacing \(a=a_{0}s\)[29]. At zero temperature the conductance is calculated using the Landauer formula \(G=(e^{2}/h)\sum_{\alpha\beta}\mid t_{\alpha\beta}\mid^{2}\) with \(t_{\alpha\beta}\) representing the probability amplitude for the transmission from mode \(\beta\) in the input lead to mode \(\alpha\) in the output lead. In this work, we employ the Kwant package to numerically simulate the quantum transport dynamics of the multi-path graphene system [30].
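A minimal Kwant sketch of this setup for the 2slits geometry of Fig. 1(a), combining the Peierls phase of Eq. (2) with the Landauer conductance; the shape function, unit bookkeeping, and parameter choices below are illustrative assumptions rather than the exact implementation:

```python
import numpy as np
import kwant

S = 5                                  # scaling factor
A0, T0 = 0.246, 2.7                    # lattice constant (nm), hopping (eV)
a, t = A0 * S, T0 / S                  # rescaled parameters
PHI0 = 4135.667696                     # flux quantum h/e in T nm^2
L, W_LEAD, ARM = 500.0, 300.0, 40.0    # geometry of Fig. 1(a), in nm

lat = kwant.lattice.honeycomb(a, norbs=1)

def hopping(site1, site2, B):
    # Peierls substitution in the Landau gauge A = (-B*y, 0, 0):
    # t_ij = t * exp(-i (2*pi/phi0) * integral of A along the bond).
    (x1, y1), (x2, y2) = site1.pos, site2.pos
    phase = 2 * np.pi / PHI0 * B * 0.5 * (y1 + y2) * (x2 - x1)
    return t * np.exp(1j * phase)

def two_slit(pos):
    # Rectangle with a central hole: two parallel nanoribbon arms.
    x, y = pos
    if not (0 <= x <= L and 0 <= y <= W_LEAD):
        return False
    return not (ARM < y < W_LEAD - ARM and ARM < x < L - ARM)

syst = kwant.Builder()
syst[lat.shape(two_slit, (a, a))] = 0.0          # on-site energies set to zero
syst[lat.neighbors()] = hopping

lead = kwant.Builder(kwant.TranslationalSymmetry(lat.vec((-1, 0))))
lead[lat.shape(lambda p: 0 <= p[1] <= W_LEAD, (0, a))] = 0.0
lead[lat.neighbors()] = hopping
syst.attach_lead(lead)
syst.attach_lead(lead.reversed())
fsyst = syst.finalized()

# Landauer conductance G = (e^2/h) * sum |t_ab|^2 versus magnetic field:
for B in np.linspace(0.0, 0.1, 5):
    sm = kwant.smatrix(fsyst, energy=2e-3, params=dict(B=B))
    print(B, sm.transmission(1, 0))              # in units of e^2/h
```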
Figure 1 depicts a parallel zigzag GNR ring with multiple paths for electron matter wave interference. Figure 1(a) displays the configuration with two paths, which we will refer to as a 2slits system. Furthermore, we introduce a third and fourth pathway, yielding a 3slits system (Fig. 1(b)) and a 4slits system (Fig. 1(c)). Two semi-infinite leads, indicated in red, are symmetrically attached to each end of the ribbon. The dimensions of the rectangular zigzag GNR are defined by the parameters displayed in Fig. 1(a).
In Fig. 2(a), we present the band structure of the lead (which is the same for the three configurations) with a zigzag edge at \(B=0\)T. Only the lowest three conduction subbands are shown as they are relevant to the quantum transport dynamics. The corresponding transmission probability for the 3slits system is depicted in Fig. 2(b). Within the low-energy regime, corresponding to the first subband, a distinct transmission probability pattern, characterized by periodic oscillations of varying amplitudes, emerges. This pattern diminishes as the Fermi energy is increased to cross the second subband. Upon further increase in the Fermi energy, the transmission probability demonstrates regular oscillations similar to those observed in the 2slits system. An interesting phenomenon happens at B=1.5 T. In this scenario, the oscillatory pattern exhibited at low Fermi energies undergoes a process of smearing out and suppression as the Fermi energy increases. At higher Fermi energies, multiple sub-bands contribute to the transport dynamics of electrons, leading to interference patterns becoming less pronounced or even smeared out. In the following, we shall focus on the low Fermi energy regime.
In Fig. 3, the conductance \(G\) is presented as a function of the magnetic field \(B\) for nanoribbon rings. The conductance plot in Fig. 3(a) for the 2slits system exhibits regular oscillations that arise from the two-path AB effect. The oscillation period \(\Delta B\) is approximately 34.5 mT, in excellent agreement with the theoretical expression \(\phi_{0}/S\), where \(S\approx L\times W_{L}\).
Figure 1: Schematic illustration of our GNR rings subjected to a uniform magnetic field along the perpendicular direction with (a) two paths, (b) three paths, and (c) four paths. The length is denoted as \(L=500\)nm, the width of the lead is \(W_{L}=300\)nm, and the width of the ring arm is \(W=40\)nm. The ring arms have the same width and enclose the same total area.
Figure 2: (a) Band structure of the lead and (b, c) the corresponding transmission probability of the 3slits system at \(B=0\)T and \(B=1.5\)T, respectively
The conductance plots in Fig. 3(b, c) exhibit notable and distinctive features. In the case of the 3slits configuration, two types of oscillation patterns are observed. Within the yellow-shaded region, the amplitudes of the conductance oscillation exhibit an alternating trend, where they undergo systematic increments and decrements as a function of the magnetic field strength. This behavior is prominent for very small values of \(B\). However, as the magnetic field strength increases beyond a certain threshold, which depends on the Fermi energy \(E_{F}\) of the incident electron, a regular oscillation pattern emerges (indicated by the purple-shaded region), corresponding to the behavior observed in the 2slits system. For the 4slits system, three types of oscillation patterns can be identified, as indicated by the three different colors in Fig. 3(c). The conductance clearly consists of three oscillation frequencies at a very low magnetic field. As the magnetic field strength is increased, the behavior evolves into that for the 3slits and 2slits systems, respectively.
To quantitatively analyze the complex magnetoconductance oscillations presented in Fig. 3, we have performed a fast Fourier transform (FFT). As shown in Fig. 4(a), the conductance oscillations for the 2slits system exhibit a prominent peak at a frequency of 30/T, which agrees with the conventional AB oscillations arising from electron matter wave interference along the two pathways. Higher harmonics, such as the peak at around 60/T, are also visible. For the 3slits system, in addition to the 30/T frequency component, other frequencies emerge, with the one at 15/T dominating. This suggests that the magnetoconductance oscillation of the 3slits system (Fig. 3(b)) results from two types of interference: one between neighboring slits, resulting in slower oscillations at 15/T, and the other between the top and bottom slits, yielding faster oscillations at 30/T. The latter frequency component diminishes at relatively higher magnetic field strengths (indicated by the purple-shaded area in Fig. 3(b)), implying that either the top or bottom slit becomes insulating. Similarly, in Fig. 3(c), three distinct frequency components emerge at (i) 10/T, (ii) 20/T, and (iii) 30/T, corresponding to interference between electron matter waves (i) among neighboring slits, (ii) among the next nearest-neighboring slits, and (iii) between the top and bottom slits, respectively.
Figure 3: Conductance \(G\) as a function of the magnetic field \(B\) for Fermi energies \(E_{F}=0.1\)meV (solid line) and \(E_{F}=2\)meV (dashed line) at temperature \(T=0\) for the GNR rings with (a) 2 slits, (b) 3 slits, and (c) 4 slits. The purple-shaded area corresponds to interference between two paths, the yellow-shaded area corresponds to interference between three paths, and the red-shaded area corresponds to interference between four paths. The curves for \(E_{F}=2\)meV have been shifted by \(1e^{2}/h\) for improved visualization.
Figure 4: Fourier transform of the magnetoconductance oscillations shown in Fig. 3, with blue, yellow, and red vertical lines indicating the frequencies resulting from interference between the top and bottom slits, neighboring slit (b) and next nearest-neighboring slits (c), and neighboring slits, respectively.
The evolving oscillations in Fig. 3(c) indicate that one, and subsequently two, pathways of the 4slits system become insulating under intermediate and strong magnetic field strengths.
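The frequency decomposition itself is a plain discrete Fourier transform of \(G(B)\); a minimal sketch with a synthetic three-component trace standing in for the 4slits signal:

```python
import numpy as np

# Synthetic stand-in for a 4slits trace with components at 10, 20, and
# 30 per Tesla; B is assumed to be sampled uniformly.
B = np.arange(0.0, 1.0, 1e-3)                    # magnetic field (T)
G = (np.cos(2*np.pi*10*B) + 0.6*np.cos(2*np.pi*20*B)
     + 0.3*np.cos(2*np.pi*30*B))

amp = np.abs(np.fft.rfft(G - G.mean()))          # remove the DC component
freq = np.fft.rfftfreq(B.size, d=B[1] - B[0])    # frequencies in 1/T

print(np.sort(freq[np.argsort(amp)[-3:]]))       # -> [10. 20. 30.]
```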
To gain further insight into the conductance behavior in Fig. 3, we examine the current flow of the whole system under various magnetic field strengths. The current between two sites is defined as [25; 31]
\[J_{ij}=i\sum_{\alpha}(\psi_{\alpha i}^{*}t_{ij}\psi_{\alpha j}-\psi_{\alpha j}^{*}t_{ij}\psi_{\alpha i}) \tag{3}\]
where \(\psi_{\alpha i}\) is the wave function at the \(i\)-th site, and \(\alpha\) denotes the index of the conducting channels of the two leads, spanning all available energy modes up to the Fermi energy \(E_{F}\).
Figure 5 presents the current flow distribution at four different magnetic field strengths, corresponding to the four points denoted as I, II, III, and IV of the 3slits magnetoconductance oscillation curve (Fig. 5(a)), respectively. At \(B=0\) T, the currents in the three pathways flow in the same direction, giving rise to the maximum flow of current and the highest conductance amplitude (Fig. 5(b)). In analogy to the optical grating effect, this scenario corresponds to the constructive interference of electron matter waves along the three paths. At \(B=0.022\) T, the current flows to the right lead via the top pathway and returns through the two lower pathways, leading to a minimal net current and vanishing conductance (Fig. 5(c)). This corresponds to the destructive interference of the electron matter waves along the three paths. At \(B=0.032\) T, as the Peierls phase continues to accumulate, the current in the lowest path is reversed again, leading to partial constructive interference among the three paths and finite conductance (Fig. 5(d)). As the magnetic field increases, the grating effect gradually fades out, and the current along the top pathway is blocked (Fig. 5(e)). This is related to the formation of Landau levels and the associated chiral edge state that flows along the lower part of the GNR. When the magnetic field is increased to even larger values, the middle slit will also be blocked due to the further localization of the edge state.
Figure 6 shows a contour plot of the conductance as a function of Fermi energy and magnetic field strength. In the case of the 2slits system, the conductance displays pronounced AB oscillation, with a frequency that is largely unaffected by changes in Fermi energy. This phenomenon arises from our choice of Fermi energies near the Dirac point. When we vary the Fermi energy while keeping the magnetic field constant, the conductance of the 2slits system exhibits additional oscillations.
Figure 5: (a) Conductance G as a function of the magnetic field B for the 3slit configuration at \(E_{F}=2meV\). The corresponding current distributions are illustrated in (b-e), each associated with distinct magnetic field strength. Electrons enter the ribbon from the left lead. Darker blue shades depict lower current density with narrower streamlines, while brighter yellow hues represent higher current density with wider streamlines. This interplay of streamlines and colors visually illustrates flow speed and current distribution within the system.
Figure 6: Contour plot of the conductance (in unit of \(e^{2}/h\)) as a function of the Fermi energy \(E\) and the magnetic field strength \(B\) for the (a) 2slits and (b) 3slits configurations.
These oscillations are attributed to resonant tunneling through the bound states within the GNR, a consequence of confinement along the longitudinal direction. In contrast, the 3slits system exhibits a more complex oscillatory pattern owing to multi-path interference effects. A diffraction grating effect is evident across a wide spectrum of Fermi energies.
In our previous analysis, we explored the GNR with a zigzag edge, as depicted in Fig. 1. To provide a comprehensive study, Fig. 7 showcases the conductance plot and its FFT for the armchair edge configuration. Fig. 7(a) presents the conductance plotted against the magnetic field for the same three configurations. In contrast to the systems with zigzag edges, the conductance is small at low magnetic fields and gradually increases to \(1e^{2}/h\) at higher \(B\). This phenomenon is related to the distinct band structures of the GNRs. Specifically, GNRs with zigzag edges are gapless due to the presence of a zero-energy edge state, while those with armchair edges possess a finite energy gap, and our choice of Fermi energy lies below this gap. As \(B\) increases, the lowest conduction subband becomes lower and flatter, forming Landau levels; consequently, the conductance gradually increases as a function of \(B\). In contrast to the case with the zigzag edge, the oscillation pattern displayed in the 3slits and 4slits systems extends to a wider range of magnetic fields, demonstrating the robustness of this phenomenon. The corresponding Fourier spectrum in Fig. 7(b) also reveals the presence of higher harmonics.
## III Influence of temperature
Extending our analysis to finite temperatures is essential to gain a comprehensive understanding of the multi-path quantum transport dynamics under realistic conditions. This investigation is particularly pertinent to practical applications of graphene-based devices, which often operate at nonzero temperatures.
Figure 7: (a) Conductance as a function of the magnetic field for armchair GNR rings in the setup depicted in Fig. 1 and (b) the corresponding Fourier spectrum for the Fermi energy \(E_{F}=2.5\) meV. The curves in (a) have been shifted by \(1e^{2}/h\) for improved visualization.
Figure 8: (a) and (c) display the Fourier transforms for the 2slits and 3slits systems, respectively, calculated at finite temperatures ranging from \(0.3\) K to \(4\) K, with \(E_{F}=0.1\) meV. The integrated amplitudes of the FFT peaks as functions of temperature are presented in (b) and (d) for the corresponding systems. Specifically, for the 2slits system, (b) displays the integrated amplitude of the peak at \(30\)/T, with the integral in the frequency domain ranging from \((25-35)/T\). In contrast, for the 3slits system, (d) illustrates the integrated amplitudes at \(15\)/T and \(30\)/T, with the integral ranges of \((10-20)/T\) and \((25-35)/T\), respectively.
To incorporate finite-temperature effects, we integrate the product of the transmission and the thermal broadening function over an energy window (\(E_{F}\pm 15k_{B}T\)), with \(k_{B}T\) representing the thermal energy [32]:
\[G(B,E_{F},T)=\frac{e^{2}}{h}\int T(E,B)F_{T}(E-E_{F})dE \tag{4}\]
where \(F_{T}(E-E_{F})=-\frac{\partial f_{E_{F}}(E)}{\partial E}\) denotes the thermal broadening function and \(f_{E_{F}}(E)=[1+e^{(E-E_{F})/k_{B}T}]^{-1}\) is the Fermi-Dirac distribution function. In standard cryogenic experiments, a temperature of 4 K corresponds to a thermal energy of approximately 0.3 meV, providing a useful reference for the energy scales involved in low-temperature simulations. In the calculation, we fix the Fermi energy at 0.1 meV and vary the temperature across the range from 0 K to 4 K. As shown in Fig. 8(a,c), the integrated amplitude of the FFT peaks at 30/T decreases for both the 2slits and 3slits configurations. In contrast, the FFT peaks at 15/T do not decrease monotonically for the 3slits configuration; the peaks at 15/T may even be enhanced at 0.3 K. This nontrivial temperature effect is related to the interference of electron matter waves among nearest-neighbor paths in the 3slits configuration. Additionally, the peak values at 15/T (without integration) are usually larger than those at 30/T, despite the peaks being narrower.
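A direct numerical sketch of Eq. (4), using the closed form \(F_{T}(E-E_{F})=[4k_{B}T\cosh^{2}((E-E_{F})/2k_{B}T)]^{-1}\) and assuming a zero-temperature transmission function is available (e.g. wrapping kwant.smatrix); the grid size and the toy transmission are arbitrary choices:

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant in eV/K

def finite_T_conductance(transmission, E_F, T, B, n=801):
    """Thermally broadened conductance, Eq. (4), in units of e^2/h.
    `transmission(E, B)` returns the zero-temperature transmission."""
    if T == 0:
        return transmission(E_F, B)
    # Integration window E_F +/- 15 k_B T, as used in the text:
    E = np.linspace(E_F - 15*KB*T, E_F + 15*KB*T, n)
    # Thermal broadening function F_T = -df/dE (unit integral, peaked at E_F):
    F_T = 1.0 / (4*KB*T * np.cosh((E - E_F) / (2*KB*T))**2)
    T_E = np.array([transmission(Ei, B) for Ei in E])
    return float(np.sum(T_E * F_T) * (E[1] - E[0]))

# Toy usage with a smooth step-like transmission (energies in eV):
step = lambda E, B: 1.0 / (1.0 + np.exp(-(E - 1e-4) / 5e-5))
print(finite_T_conductance(step, E_F=1e-4, T=4.0, B=0.0))
```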
## IV Conclusion
In summary, we have systematically studied the quantum transport dynamics of electrons within a multi-path AB interferometer constructed from parallel graphene nanoribbons. We unveil intriguing oscillatory behavior in conductance at low magnetic field strengths, reminiscent of the diffraction grating effect in optics, underscoring the interplay between electron trajectories, magnetic flux, and the quantum Hall effect. With increasing magnetic fields, certain pathways are blocked, and the system evolves into a two-path interferometer due to the formation of Landau levels and the associated chiral edge states. One important generalization of our work is to include the disorder that is always present in real materials. Disorder can affect the coherence length, the mean free path, and, consequently, the multi-path interference phenomena. Furthermore, our exploration can be expanded to encompass spin-orbit coupling in the tight-binding model and applied to other 2D materials. Our findings hold promise for the advancement of interferometry and quantum sensing technologies. They are poised to stimulate further theoretical and experimental investigations of multi-path interference with electronic matter waves.
###### Acknowledgements.
We would like to thank Fan Zhang and Yang-Zhi Chou for stimulating comments and discussions. This work is supported by the ACC-New Jersey under Contract No. W15QKN-18-D-0040 and the Stevens Startup Fund.
|
2309.12467 | CRIRES-POP: a library of high resolution spectra in the near-infrared.
III. Line identification in the K-giant 10 Leo | Context: High-resolution spectra in the near-infrared (NIR) are an important
tool for the detailed study of stellar atmospheres. The accurate identification
of elements and molecules in these spectra can be used to determine chemical
abundances and physical conditions in the photosphere of the observed star.
Such identifications require precise line positions and strengths of both
atomic and molecular features. Aims: This work focusses on the full
identification of absorption lines in the NIR spectrum of the K-giant 10 Leo,
including previously unidentified lines. The large number and complexity of the
observed absorption lines require a deep search for potential spectral
signatures to enable an unambiguous assignment to specific elements or
molecular species. We aim to improve the published line lists of metals, some
of which are determined by model calculations only, and many of which presently
lack the completeness and accuracy of line parameters. Methods: The CRIRES-POP
project provided high-resolution, high signal-to-noise ratio (S/N) spectra of
several bright stars in the 1 to 5 $\mu$m range. For the K-giant 10 Leo, a
spectrum corrected for telluric absorption and with precise wavelength
calibration is available. This has been analysed by comparison with model
spectra and up-to-date line lists. Results: We identified lines of 29 elements
and eight molecular species. While the positions of many known lines could be
confirmed, about 6% of all lines detected in 10 Leo could not be attributed to
any known feature. For CO and its isotopologues, molecular constants could be
derived and several additional lines identified. We report major
inconsistencies for some prominent lines. In addition, abundances for several
key elements in 10 Leo are provided. | Manfred Zendel, Thomas Lebzelter, Christine Nicholls | 2023-09-21T20:21:46Z | http://arxiv.org/abs/2309.12467v1 | CRIRES-POP: a library of high resolution spectra in the near-infrared. III. Line identification in the K-giant 10 Leo
###### Abstract
Context:High-resolution spectra in the near-infrared (NIR) are an important tool for the detailed study of stellar atmospheres. The accurate identification of elements and molecules in these spectra can be used to determine chemical abundances and physical conditions in the photosphere of the observed star. Such identifications require precise line positions and strengths of both atomic and molecular features.
Aims:This work focusses on the full identification of absorption lines in the NIR spectrum of the K-giant 10 Leo, including previously unidentified lines. The large number and complexity of the observed absorption lines require a deep search for potential spectral signatures to enable an unambiguous assignment to specific elements or molecular species. We aim to improve the published line lists of metals, some of which are determined by model calculations only, and many of which presently lack the completeness and accuracy of line parameters.
Methods:The CRIRES-POP project provided high-resolution, high signal-to-noise ratio (S/N) spectra of several bright stars in the 1 to 5 \(\mu\)m range. For the K-giant 10 Leo, a spectrum corrected for telluric absorption and with precise wavelength calibration is available. This has been analysed by comparison with model spectra and up-to-date line lists.
Results:We identified lines of 29 elements and eight molecular species. While the positions of many known lines could be confirmed, about 6% of all lines detected in 10 Leo could not be attributed to any known feature. For CO and its isotopologues, molecular constants could be derived and several additional lines identified. We report major inconsistencies for some prominent lines. In addition, abundances for several key elements in 10 Leo are provided.
Conclusions:
## 1 Introduction
Late-type stars emit the majority of their flux in the infrared, and modern infrared detectors allow for them to be studied - within the atmospheric windows - at a high signal-to-noise ratio (S/N) and with limited telluric absorption. For cool giants, a large number of molecular lines of abundant species such as CO, OH, CN, H\({}_{2}\)O, and SiO populate the range between 1 and 5 \(\mu\)m. These are of key interest as tracers of the temperature stratification of the extended atmospheres, atmospheric dynamics, nucleosynthesis, and mixing processes. In addition, the infrared harbours a wealth of atomic lines; many of them, in particular within the atmospheric windows in the YJ band, have little blending.
However, to extract this kind of information from the stellar spectra, a complete and accurate catalogue of spectral lines that appear in that wavelength range is necessary. Any analysis relying on synthetic spectra to derive stellar parameters and elemental abundances is limited by the quality of the line data and the knowledge of line blends. This is true not only for high-resolution studies, but also for work based on low- to medium-resolution spectroscopy.
High-resolution near-infrared (NIR) spectroscopy became available with the Fourier Transform Spectrometers (FTS) in the 1970s, and even more so with the recent class of NIR echelle spectrographs at 8m-class telescopes, offering access to a wide range of objects to be studied. FTS spectra led to the first high-resolution spectral atlases with the Arcturus atlas (Hinkle et al. 1995b) providing the most extensive compendium to date of line identifications of a cool star in the NIR. While the lasting importance of this compendium is indisputable, similar studies are needed for a wider range of stellar reference objects to constrain effects of temperature, surface gravity, and metallicity.
The CRIRES-POP project (Lebzelter et al. 2012) obtained an observational library of high-resolution infrared stellar spectra covering the wavelength range from 1 to 5 \(\mu\)m for a set of stars throughout the Hertzsprung-Russell (HR) diagram. The spectra have a typical S/N of 200 at a spectral resolution R=90 000. The first star published with a full data reduction including a careful reanalysis of the wavelength calibration and telluric correction was the K-giant 10 Leo (Nicholls et al. 2017). A summary of the stellar parameters of 10 Leo is provided in that paper.
We briefly summarise and provide an update on the star's most important properties in the following (Table 1): 10 Leo is a K1 giant with an effective temperature of 4800 K (da Silva et al. 2011). It is a spectroscopic binary, but no lines from the companion have been found in the infrared spectrum. The CRIRES-POP spectrum has been corrected for the corresponding orbital motion. According to Gaia Early Data Release 3 (EDR3) data (Bailer-Jones et al. 2021), the distance of 10 Leo is 77.3\(\pm\)1.5 pc, which is a minor increase compared to the earlier Hipparcos distance of 75 pc used in Nicholls et al. (2017). da Silva et al. (2011) found a metallicity close to solar. However, we note that the referenced age and log \(g\) in Table 1 for 10 Leo are not consistent with the other stellar parameters, which suggest a lower radius for 10 Leo and an age around 2.5 Gyr based on the mass of 2 M\({}_{\odot}\), the reported luminosity, distance, and T\({}_{\rm eff}\).
10 Leo differs in some aspects from the K-giant Arcturus. Table 1 allows for a direct comparison of the two stars. The most significant differences for our study are likely the higher temperature and the higher metallicity of 10 Leo compared to Arcturus.
In this paper, we present a detailed exploration of the spectroscopic line content in the NIR for the K-giant 10 Leo. We present a compendium of line identifications, including a substantial number of lines not identified in the Arcturus atlas; list lines lacking identification to date; and derive parameters for the molecular bands of CO and CN between 1 and 5 \(\mu\)m. Furthermore, we discuss the differences between the spectra of 10 Leo and Arcturus, and derive elemental abundances in 10 Leo for a set of prominent atomic species. More extensive documentation of our data analysis is provided in Zendel (2021).
## 2 Spectral lines
### Methods of line identification
We applied two approaches to identify the lines found in the NIR spectrum of 10 Leo. First, we used the Vienna atomic line database (VALD3) provided by Ryabchikova et al. (2015) to determine all of the lines that are known or expected in the wavelength range covered. For the molecular lines of CO, CN, and OH, tables from Goorvitch (1994), Brooke et al. (2014), and the HIgh-resolution TRANsmission molecular absorption (HITRAN) database from Gordon et al. (2017) were used. To select the lines visible for the stellar parameters of 10 Leo, we used the 'extract stellar' option from VALD3, which returns a list of lines with various spectral parameters such as wavenumber, log \(gf\) value, and damping constants for a particular model atmosphere. The chosen model atmosphere (castelli_ap00k2_T04750G30) for 10 Leo is based on T\({}_{\rm eff}\)=4800 K, log \(g\)=2.83, v\({}_{\rm{mic}}\)=2 km/s, and solar abundances. The minor difference to the literature value for v\({}_{\rm{mic}}\) presented in Table 1 resulted from the availability of models.
Second, we used the Arcturus atlas by Hinkle et al. (1995b). This atlas has been serving as a benchmark spectrum for high-resolution infrared spectroscopy of cool giants for decades. It was obtained with the FTS at Kitt Peak National Observatory and has a spectral resolution around 100 000. The line list for Arcturus comprises a total of 6658 absorption lines from 22 elements or ions and seven molecular species within the 1 to 5 \(\mu\)m range. All line identifications in the Arcturus atlas were based on laboratory spectra only. The data files for the IR spectrum of Arcturus (Hinkle et al. 1995a)1 cover two observation periods in 1993 and 1994.
Footnote 1: downloaded from [ftp://ftp.noao.edu/catalogs/arcturusatlas/ir/](ftp://ftp.noao.edu/catalogs/arcturusatlas/ir/)
Lines in the 10 Leo spectrum were detected by an automatic search for flux values more than 0.5% below the local pseudo-continuum level. Most of the bad pixels in the 10 Leo spectrum had already been removed by Nicholls et al. (2017), but to make sure we avoid incorrect line detections, the selection criterion was set to a minimum of three neighbouring pixels being below the continuum. The corresponding vacuum wavelengths of the detected minima were then cross-correlated with the line lists from Arcturus, VALD3, and HITRAN.
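A sketch of this detection step, assuming a spectrum already normalised to a unit pseudo-continuum; the 0.5% depth and three-pixel criteria follow the text:

```python
import numpy as np

def detect_lines(wave, flux, depth=0.005, min_pixels=3):
    """Return wavelengths of line minima: runs of at least `min_pixels`
    consecutive pixels below the (unit) pseudo-continuum whose deepest
    point lies more than `depth` below it."""
    below = flux < 1.0
    positions = []
    i, n = 0, len(flux)
    while i < n:
        if not below[i]:
            i += 1
            continue
        j = i
        while j < n and below[j]:
            j += 1                      # extend the run of low pixels
        if j - i >= min_pixels and flux[i:j].min() < 1.0 - depth:
            positions.append(wave[i:j][np.argmin(flux[i:j])])
        i = j
    return np.array(positions)

# Toy usage: one Gaussian absorption line, 2% deep, on a flat continuum.
w = np.linspace(1000.0, 1001.0, 500)
f = 1.0 - 0.02 * np.exp(-0.5 * ((w - 1000.5) / 0.01)**2)
print(detect_lines(w, f))               # -> [~1000.5]
```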
The CRIRES-POP 10 Leo spectrum is divided into five sections according to the atmospheric windows. We follow this division for the presentation of findings and line lists here. The 10 Leo spectrum comes with a number of gaps in wavelength that have no flux information; for example, the YJ bandpass has 226 gaps with sizes ranging from 0.04 nm to 2.1 nm, mostly due to strong telluric features. Details are described in Nicholls et al. (2017). We emphasise that our compendium does not include information from these gaps.
### Identified atomic lines
Our 10 Leo NIR spectrum features a complex mixture of lines with various line shapes, a wide range of line intensities, and some discontinuities. The removal of telluric lines with the Molecfit software (Kausch et al. 2015; Smette et al. 2015) and the applied corrections of various instrumental impacts (Nicholls et al. 2017) created some artefacts in the spectrum, mainly in the form of discontinuities of the continuum. This necessitated a visual inspection of the spectrum to properly identify the spectral lines.
Table 2 provides a summary of the number of lines identified within the 10 Leo NIR spectrum sorted by bandpass. We note that these numbers include all components provided by the above described query of the VALD3 database. A significant fraction of these lines are contributions to blends. Our approach was that whenever the spectral feature showed signs of blending, we added all lines expected within the width of the blend to our line list. As is subsequently described in more detail in Sect. 3, our line compendium takes care of this issue by adding information on the complexity of the line profile to each spectral feature.
Table 3 lists the number of atomic lines identified, sorted by element and bandpass. Equivalent widths (EWs) of the strongest and weakest unblended lines are given as well. Values in parentheses refer to blended lines and should be regarded as upper limits of detection. The observed elements are grouped into alpha process elements (C, O, N, Ne, Mg, Si, Ar, and Ca), odd-Z elements (Na, Al, P, K, and Sc), iron peak elements (Ti, V, Cr, Mn, Fe, Co, and Ni), and s-process elements (Cu, Zn, Rb, Sr, Y, Zr, Ce, and Yb). In addition, lines from H and He have been identified. This distinction is ambiguous in some cases, for example, Ti, Co, and Fe can be formed by the alpha process as well. Ca, Fe, Ti, Mg, and Y are also present in ionised form, and all others are neutral except for Sr, Ce, Dy, and Yb, which are found in ionised form only.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Property & 10 Leo & Ref. & Arcturus & Ref. \\ \hline SpT & K1 IIIVAR & 1 & K0 III & 7 \\ T\({}_{\rm eff}\) & 4801\(\pm\)89 K & 2 & 4286\(\pm\)30 K & 8 \\ L & 59.35 L\({}_{\odot}\) & 2 & 170 \(\pm\) 8 L\({}_{\odot}\) & 9 \\ R & 14 R\({}_{\odot}\) & 2 & 25.4 \(\pm\)0.2 R\({}_{\odot}\) & 8 \\ M & 2 M\({}_{\odot}\) & 3 & 1.08 \(\pm\)0.06 M\({}_{\odot}\) & 8 \\ log \(g\) & 2.83 \(\pm\)0.23 & 2 & 1.66 \(\pm\)0.05 & 8 \\ Distance & 77.3 \(\pm\)1.5 pc & 4 & 11.26\(\pm\)0.07 pc & 10 \\ Age & 4.51 \(\pm\)1.8 Gyr & 5 & 7.1\({}^{+1.5}_{-1.5}\) Gyr & 8 \\ \([Fe/H]\) & -0.03 \(\pm\)0.08 & 2 & -0.52\(\pm\)0.04 & 8 \\ v\({}_{\rm{mic}}\) & 1.3 km s\({}^{-1}\) & 6 & 1.2 \(\pm\)0.11 km s\({}^{-1}\) & 11 \\ \hline \end{tabular}
\end{table}
Table 1: Properties of 10 Leonis and Arcturus
### Molecular lines and bandheads
Many molecular species form band structures in the infrared with distinct patterns for the line positions and strengths. While this in principle eases the identification of molecular lines, some bands are blended with others, complicating the analysis. The abundant molecular lines in the 10 Leo spectrum mainly stem from CO, CN, and OH molecules. Lines from the isotopologues \({}^{13}\)C\({}^{16}\)O, \({}^{12}\)C\({}^{17}\)O, and \({}^{12}\)C\({}^{18}\)O are easily detectable throughout the spectrum. However, no lines of C\({}^{15}\)N could be detected based on a search using the catalogue of CN lines from Brooke et al. (2014).
\begin{table}
\begin{tabular}{c c c c} \hline \hline wavelength & listed lines & \multicolumn{2}{c}{observed lines\({}^{\rm b}\)} \\ range & VALD3\({}^{\rm a}\) & identified & unidentified \\ \hline YJ & 4321 & 3664 & 433 \\ H & 6586 & 4438 & 303 \\ K & 4166 & 3018 & 196 \\ L & 1900 & 1304 & 415 \\ M & 3080 & 2577 & 84 \\ \hline SUM: & 20053 & 15001 & 1431 \\ \hline \end{tabular} \({}^{\rm a}\) From VALD3 stellar extraction corrected for gaps
\({}^{\rm b}\) Includes all observed lines of the 10 Leo NIR spectrum
\end{table}
Table 2: Overview of observed lines
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Element & YJ & H & K & L & M & total & strongest & EW & weakest & EW \\ & & & & & & line [nm] & [mÅ] & line [nm] & [mÅ] \\ \hline H i & 3 & 6 & 1 & 2 & 1 & 13 & 1681.11 & 232 & 1556.07 & (6) \\ He i & 1 & 0 & 0 & 0 & 0 & 1 & 1083,33 & 70 & & \\ \hline C i & 40 & 83 & 19 & 43 & 0 & 185 & 1069.42 & 117 & 3777.32 & 2 \\ Mg i & 41 & 85 & 76 & 75 & 10 & 287 & 3677.19 & 271 & 1030.21 & 12 \\ Mg ii & 2 & 0 & 0 & 0 & 0 & 2 & 1095.48 & 11 & 1091.84 & (3) \\ Si i & 135 & 126 & 140 & 132 & 16 & 549 & 2206.87 & 305 & 1080.01 & 2 \\ S i & 11 & 21 & 19 & 2 & 0 & 53 & 1046.22 & 58 & 2289.35 & 8 \\ Ca i & 54 & 33 & 48 & 21 & 6 & 162 & 2263.10 & 278 & 2194.87 & 3 \\ Ca ii & 5 & 2 & 4 & 0 & 0 & 11 & 1184.22 & 114 & 985.75 & 7 \\ \hline \multicolumn{8}{c}{odd-Z} \\ \hline Na i & 21 & 7 & 17 & 16 & 3 & 64 & 2208.97 & 331 & 991.87 & (3) \\ Al i & 10 & 10 & 7 & 9 & 1 & 37 & 1312.70 & 479 & 3601.13 & 6 \\ P i & 8 & 8 & 0 & 0 & 0 & 16 & 1058.45 & 16 & 1068.43 & 6 \\ K i & 7 & 3 & 2 & 7 & 0 & 19 & 1177.61 & 158 & 2194.87 & 3 \\ Sc i & 0 & 0 & 9 & 0 & 0 & 9 & 2207.13 & 34 & 2173.64 & 2 \\ \hline \multicolumn{8}{c}{Iron-peak} \\ \hline Ti i & 106 & 74 & 29 & 9 & 3 & 221 & 964.09 & 192 & 998.40 & 2 \\ Ti ii & 3 & 7 & 0 & 0 & 0 & 10 & 1587.82 & 68 & 969.39 & (8) \\ V i & 10 & 16 & 8 & 0 & 0 & 34 & 1290.47 & 21 & 1318.86 & 2 \\ Cr i & 51 & 20 & 10 & 10 & 2 & 93 & 1101.85 & 77 & 1295.06 & 4 \\ Mn i & 13 & 36 & 6 & 6 & 0 & 61 & 1332.26 & 322 & 3473.88 & 6 \\ Fe i & 440 & 1205 & 372 & 393 & 9 & 2419 & 1188.61 & 325 & 1217.44 & 2 \\ Fe ii & 4 & 5 & 1 & 0 & 0 & 10 & 1086.56 & 13 & 1112.85 & (3) \\ Co i & 14 & 26 & 4 & 0 & 0 & 44 & 1676.22 & 82 & 2207.29 & 3 \\ Ni i & 43 & 93 & 31 & 41 & 1 & 209 & 1631.50 & 144 & 1240.51 & 3 \\ \hline \multicolumn{8}{c}{s-process} \\ \hline Cu i & 0 & 4 & 0 & 0 & 0 & 4 & 1601.00 & (27) & 1601.09 & (4) \\ Zn i & 3 & 0 & 0 & 0 & 0 & 3 & 1305.71 & 17 & 1105.73 & (9) \\ Rb i & 0 & 2 & 0 & 0 & 0 & 2 & 1529.37 & (6) & \\ Sr ii & 3 & 0 & 0 & 0 & 0 & 3 & 1033.01 & 232 & 1003.94 & 119 \\ Y i & 0 & 1 & 5 & 0 & 0 & 6 & 1766.80 & 17 & 2255.00 & 5 \\ Y ii & 5 & 0 & 0 & 0 & 0 & 5 & 1024.80 & 11 & 1018.93 & 6 \\ Zr i & 3 & 0 & 0 & 0 & 0 & 3 & 982.53 & 3 & 1166.11 & 2 \\ Ce ii & 36 & 15 & 3 & 0 & 0 & 54 & 1708.20 & 56 & 1258.57 & 5 \\ Dy ii & 1 & 0 & 0 & 0 & 0 & 1 & 1052.63 & 7 & & \\ Er ii & 1 & 0 & 0 & 0 & 0 & 1 & 1106.26 & (11) & & \\ Yb ii & 0 & 1 & 0 & 0 & 0 & 1 & 1650.28 & (10) & & \\ \hline \end{tabular}
\end{table}
Table 3: 10 Leo: Identified lines per element
Each band was fitted with a fourth-order polynomial to locate the individual transition lines and to ensure consistency in transition assignment. Fundamental bands (\(\Delta\nu=1\)) of CO are observed in the M band, first overtone bands (\(\Delta\nu=2\)) in the K band, and second overtone bands (\(\Delta\nu=3\)) in the H band. A total of 105 CO bands (64 in M, 23 in K, and 18 in H) were evaluated, comprising about 4500 CO lines (almost twice the number of CO lines identified in the Arcturus atlas). Table 4 summarises all fundamental bands from which lines could be identified in the 10 Leo spectrum. The wavelengths given for the R0 and P1 lines of each band were computed from the polynomials deduced for each band. Central depths (cdepth) and EWs are given for the strongest detected line in each band.
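The band fits can be reproduced with an ordinary least-squares polynomial in a running index \(m\) (here \(m=J+1\) for the R branch and \(m=-J\) for the P branch, a common convention that we assume; the numbers below are synthetic):

```python
import numpy as np

# Wavenumbers (cm^-1) of lines assigned to one band versus the running
# index m; synthetic values mimicking a CO fundamental band near 4.7 um.
m = np.array([-15, -11, -8, -4, -2, 1, 4, 7, 10, 14])
nu = 2143.27 + 3.813*m - 0.0175*m**2 - 1.0e-5*m**3

band = np.poly1d(np.polyfit(m, nu, deg=4))   # fourth-order polynomial fit

# R0 (m = 1) and P1 (m = -1) positions, converted from the fitted
# wavenumber to vacuum wavelength in nm:
for label, mi in (("R0", 1), ("P1", -1)):
    print(label, 1e7 / band(mi), "nm")
```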
Table 5 lists all of the identified band heads for the CO molecule in the studied spectral range, including molecular band data. The band heads occur in the R branches as a result of the increase in the distortion term with higher J values. Their intensity decreases steeply from the fundamentals to the second overtones. However, CO bandheads for the fundamentals appear at higher J values (J\(\approx\)88) than for the first overtones (J\(\approx\)51) and second overtones (J\(\approx\)34), which enhances the possibility of detecting overtone bands again. We note that we could not detect any bandheads for the two isotopologues C\({}^{17}\)O and C\({}^{18}\)O in the spectrum of 10 Leo.
Fundamental bands from electronic transitions for CN are located in the YJ and H regions, whereas the first overtone bands from the electronic ground state are found in the K band. For CN, 138 bands (61 in YJ, 41 in H, and 36 in K) have been evaluated resulting in the assignment of about 5158 individual CN lines matched with the VALD3 stellar extraction option (see 2.1) and the Arcturus atlas. The maximum central depth for all observed molecular lines within a band was recorded at J \(\approx\)31 with the exception of the satellite bands of CN, which have the maximum at J \(\approx\)10. For the OH molecule, 32 bands have been identified and evaluated (16 in H and 16 in L).
Weak signatures of CH and SiO have been found. The NH molecule had been detected in the Arcturus spectrum in a region from 2891.00 nm to 3397.94 nm. This region is not covered by the 10 Leo spectrum except for a small part starting at 3367.02 nm. Within this region, two lines at 3373.06 nm and 3397.94 nm could be attributed to NH. In the L band, we also found seven lines that could be assigned to the HCl molecule. For HF, two lines (1-0 R3 and 1-0 R9) could be detected.
### Unidentified lines
After carefully attributing known atomic and molecular transitions to the detected spectral features in 10 Leo, about 1400 lines or, in the case of line blends, line components were left unidentified. We mark a line or line component as unidentified if we could not find any candidate transition within 0.1 nm of the observed wavelength position that was not already attributed to a line or line component in the 10 Leo spectrum. The fraction of unidentified lines is \(\approx\) 9% of all observed lines at the 0.5% sensitivity level in the 10 Leo spectrum. Throughout the spectrum, we found more than 350 unidentified lines that are isolated lines with no obvious traces of blending components. Most of the unidentified lines, however, are components within line blends.
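The matching criterion reduces to a nearest-neighbour search over the merged candidate list; a sketch assuming both inputs are vacuum wavelengths in nm:

```python
import numpy as np

def flag_unidentified(observed, catalogue, tol=0.1):
    """Find the nearest catalogue transition for each observed position;
    a line is flagged unidentified if none lies within `tol` nm."""
    cat = np.sort(np.asarray(catalogue))
    idx = np.searchsorted(cat, observed)
    left = cat[np.clip(idx - 1, 0, cat.size - 1)]
    right = cat[np.clip(idx, 0, cat.size - 1)]
    nearest = np.where(np.abs(observed - left) <= np.abs(observed - right),
                       left, right)
    return nearest, np.abs(observed - nearest) > tol

obs = np.array([1069.42, 1070.00])       # observed line positions (nm)
cat = np.array([1069.40, 1083.33])       # candidate transitions (nm)
print(flag_unidentified(obs, cat))       # second line remains unidentified
```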
Table 6 shows the number of such lines for each band sorted in intervals of EWs of \(<\) 10 mÅ, \(\geq\)10 mÅ and \(<\) 50 mÅ, \(\geq\)50 mÅ and \(<\) 100 mÅ, and \(\geq\)100 mÅ, respectively. The numbers in the last category clearly show that there are several strong lines in the NIR range which have not been correctly identified yet. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) project has also published a list of unidentified lines (Smith et al., 2021), which is limited to the range between 1547 and 1693 nm. We find 33 unidentified lines in common with their list.
Very recently, Peterson & Kurucz (2022) published a list of new Fe i line identifications which included the infrared range. We cross-matched their line positions with our list of unidentified lines and found 162 lines to agree within \(\pm\)0.05 nm. Since many of these lines are weak, with EWs \(<\)10 mÅ in the spectrum of 10 Leo, it is difficult to determine their correctness without including them in spectral synthesis computations, which is beyond the scope of this paper. For the quite strong line at 4574.46 nm (EW = 644 mÅ), we suspect an accidental agreement, because other lines in the list of Peterson & Kurucz (2022) with similar line parameters do not show a strong line in 10 Leo, and because these authors note that the new lines identified in their paper are typically weak. However, a tentative identification of most lines in common seems reasonable.
Table 7 lists the most intense unidentified lines (EW \(>\) 50 mÅ) for the YJ band along with possible blending transitions. The rightmost column gives the reference suggesting the blending components. Similar lists for the other bands with an EW \(>\) 100 mÅ are provided in the Appendix (Tables A.1 and A.2). For the YJ band, there are no unidentified lines or line components with an EW exceeding 100 mÅ.
### Incorrect transitions in VALD3
A total of 5718 lines predicted by the 'extract stellar' mode from VALD3 (see Sect. 2.1 for the stellar parameters of 10 Leo) could not be observed in the CRIRES-POP spectrum. The most prominent ones (predicted central depth exceeding 10% of the continuum level) missing in the observed spectrum are listed in Table 8 for the YJ band. This list includes lines from elements of high astrophysical importance such as Mg i and Fe i. Assuming wrong line parameters in these cases, these transitions may correspond to some of the unidentified lines mentioned above. Similar tables for the remaining bands can be found in the Appendix, Tables A.3, A.4, and A.5.
### Comparison of the 10 Leo and Arcturus spectra
A first comparison of the high-resolution spectra of the two stars was presented in Nicholls et al. (2017). 10 Leo and Arcturus are both K stars, both with luminosity and radius values that place them on the red giant branch. Arcturus is \(\approx\) 500 K cooler than 10 Leo and has a lower metallicity. It is much older than 10 Leo and has only half its mass. Accordingly, they also differ in log \(g\) value, radius, and luminosity. Blum et al. (2003) and Schultheis et al. (2016) have shown that the molecular bands of CO increase in strength in K and early M giants with decreasing temperature, while there is no obvious dependence on metallicity. Model calculations by Aringer et al. (2016) suggest that OH lines in the NIR show a similar behaviour. Rich & Origlia (2005) found an increase in CO band strengths with decreasing log g, which is not seen in individual OH lines they investigated. The low sensitivity of the OH lines to changes in the surface gravity was confirmed by Lebzelter et al. (2008) for the H band. Based on these findings and the differences in the stellar parameters between the two stars, we tentatively expect the Arcturus spectrum to feature stronger and more pronounced molecular lines than that of 10 Leo. For atomic lines, the differences in metallicity and stellar parameters between the two stars do not allow for a general prediction.
Looking at the number of identified lines, there are 5171 lines in common between the Arcturus atlas and our 10 Leo line compendium which also have an assigned transition from VALD3. Among these are 1084 (21%) isolated lines, that is, lines with no obvious blends; a similar number of lines are part of blends, that is, showing two clearly distinguished line minima with no other obvious blending components; the remaining lines are more strongly affected by line blends.
In addition, we identified 9183 lines in 10 Leo with an assigned transition from VALD3 that were not identified in the Arcturus atlas, with 9% of them being isolated lines, meaning that most of these additional identifications are components of line blends. Figure 1 shows an example section with lines identified in our compendium marked. Interestingly, there are 251 lines in 10 Leo with an assignment from the Arcturus atlas that do not show up in the VALD3 database. Furthermore, 292 lines have an identification in the Arcturus atlas, but are missing from the 10 Leo line list. The latter differences are partly due to slight deviations in the covered wavelength range, defective or saturated spectral regions in either of the two spectra, and partly due to different line strengths excluding some lines from being selected.
\begin{table}
\begin{tabular}{l l r r r r r r} \hline \hline & \multicolumn{4}{c}{R-branch} & \multicolumn{4}{c}{P-branch} \\ \cline{2-9} Band-ID & \(\lambda\) (R0)\({}^{*}\) & Transition\({}^{*}\) & (1-F) & cdepth\({}^{*}\) & \(\lambda\) (P1)\({}^{*}\) & Transition\({}^{*}\) & (1-F) & cdepth\({}^{*}\) \\ & nm & & & nm & & & & \\ \hline CO 1-0 & 4657.49d & R6 & 0.587 & 0.482 & 4674.15d & P11 & 0.517 & 0.487 \\ CO 2-1 & 4715.72 & R14 & 0.506 & 0.492 & 4732.65 & P23 & 0.513 & 0.477 \\ CO 3-2 & 4775.28 & R22 & 0.472 & 0.474 & 4792.48 & P30 & 0.424 & 0.443 \\ CO 4-3 & 4836.21 & R29 & 0.419 & 0.441 & 4853.69 & P17 & 0.397 & 0.416 \\ CO 5-4 & 4898.54d & R34 & 0.373 & 0.400 & 4916.30d & P7 & 0.336 & 0.351 \\ CO 6-5 & 4962.33 & R31 & 0.332 & 0.365 & 4980.38d & P20 & 0.317 & 0.342 \\ CO 7-6 & 5027.62d & R13 & 0.264 & 0.318 & 5045.98 & P4 & 0.232 & 0.262 \\ CO 8-7 & 5094.45 & R25 & 0.253 & 0.301 & 5113.12d & P8 & 0.248 & 0.256 \\ CO 9-8 & 5162.90d & R24 & 0.219 & 0.271 & & & \\ CO 10-9 & 5232.99e & R19 & 0.186 & 0.235 & & & & \\ CO 11-10 & 5304.80e & R27 & 0.168 & 0.210 & & & & \\ CO 12-11 & 5378.53e & R45 & 0.121 & 0.151 & & & & \\ CO 13-12 & 5454.06e & R35 & 0.122 & 0.086 & & & & \\ CO 14-13 & 5531.54e & R55 & 0.043 & 0.051 & & & & \\ \({}^{13}\)CO 1-0 & 4762.56 & R18 & 0.367 & 0.285 & 4779.22 & P25 & 0.307 & 0.266 \\ \({}^{13}\)CO 2-1 & 4820.76d & R30 & 0.344 & 0.280 & 4837.67 & P18 & 0.317 & 0.248 \\ \({}^{13}\)CO 3-2 & 4880.25d & R36 & 0.265 & 0.248 & 4897.42 & P29 & 0.247 & 0.221 \\ \({}^{13}\)CO 4-3 & 4941.07 & R37 & 0.248 & 0.212 & 4958.52d & P25 & 0.231 & 0.189 \\ \({}^{13}\)CO 5-4 & 5003.26d & R10 & 0.177 & 0.130 & 5020.99d & P17 & 0.235 & 0.139 \\ \({}^{13}\)CO 6-5 & 5006.87 & R51 & 0.141 & 0.085 & 5084.89d & P5 & 0.103 & 0.040 \\ \({}^{13}\)CO 7-6 & 5131.95 & R33 & 0.145 & 0.078 & & & & \\ \({}^{13}\)CO 8-7 & 5198.49d & R32 & 0.109 & 0.048 & & & & \\ \({}^{13}\)CO 9-8 & 5266.64e & R26 & 0.079 & 0.029 & & & & \\ \({}^{13}\)CO 10-9 & 5336.37e & R36 & 0.064 & 0.016 & & & & \\ \({}^{13}\)CO 11-10 & 5407.79e & R28 & 0.053 & 0.013 & & & & \\ C\({}^{18}\)O 1-0 & 4771.56 & R22 & 0.108 & 0.158 & 4788.22 & P17 & 0.096 & 0.125 \\ C\({}^{18}\)O 2-1 & 4829.92 & R19 & 0.103 & 0.140 & 4846.67 & P17 & 0.049 & 0.114 \\ C\({}^{18}\)O 3-2 & 4889.23 & R29 & 0.071 & 0.114 & 4906.41 & P15 & 0.069 & 0.079 \\ C\({}^{18}\)O 4-3 & 4950.04 & R45 & 0.035 & 0.062 & 4967.50 & P26 & 0.046 & 0.062 \\ C\({}^{18}\)O 5-4 & 5012.22 & R6 & 0.024 & 0.020 & 5029.94d & P6 & 0.167 & 0.016 \\ C\({}^{18}\)O 6-5 & 5075.83 & R4 & 0.017 & 0.008 & & & & \\ C\({}^{18}\)O 7-6 & 5140.88d & R29 & 0.037 & 0.018 & & & & \\ C\({}^{18}\)O 8-7 & 5207.42 & R17 & 0.011 & 0.008 & & & & \\ C\({}^{17}\)O 1-0 & 4716.96 & R6 & 0.086 & 0.017 & 4733.62 & P21 & 0.111 & 0.033 \\ C\({}^{17}\)O 2-1 & 4775.17 & R11 & 0.103 & 0.025 & 4792.09 & P23 & 0.088 & 0.030 \\ C\({}^{17}\)O 3-2 & 4834.69 & R22 & 0.070 & 0.026 & 4851.88 & P20 & 0.
We compared EWs for all lines identified in both spectra. The largest differences are noted for Ti i, Sc i, V i, OH, and CO, which all show a lower EW in 10 Leo. Some differences between the isotopologues of CO are observed: in 10 Leo, we see stronger lines of \({}^{12}\)C\({}^{17}\)O compared to Arcturus, whereas \({}^{13}\)C\({}^{16}\)O bands are weaker in 10 Leo than in Arcturus. For Fe i, Si i, C i, and CN, lines are stronger in 10 Leo. We come back to this point in the context of abundance determination in Sect. 4.
Figure 2 shows a typical spectral section with six major lines at 2205.82 nm (Sc i), 2206.24 nm (Na i), 2206.87 nm (Si i), 2207.13 nm (Sc i), 2207.86 nm (Si i), and 2208.97 nm (Na i) as an overlay between 10 Leo and Arcturus. The most obvious difference between the two spectra is the Sc i lines, which show a significantly enhanced line depth in Arcturus relative to 10 Leo. Minor differences can be seen for the Na i and Si i lines. Table 9 shows the ratios of EWs between the two stars for various metals and molecules.
## 3 The line compendium
With this paper we publish a complete list of 16 431 lines detected in the CRIRES-POP spectrum of 10 Leo, 91% of which have the line-producing element or molecular species assigned.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline & \multicolumn{6}{c}{Band} \\ \cline{2-7} EW-Interval & YJ & H & K & L & M & All \\ \hline \(<\)10 mÅ & 248 & 106 & 96 & 135 & 1 & 586 \\ \(\geq\)10 mÅ, \(<\)50 mÅ & 165 & 129 & 79 & 229 & 32 & 634 \\ \(\geq\)50 mÅ, \(<\)100 mÅ & 20 & 35 & 12 & 40 & 23 & 130 \\ \(\geq\)100 mÅ & 0 & 33 & 9 & 11 & 28 & 81 \\ \hline \hline Total & 433 & 303 & 196 & 415 & 84 & 1431 \\ \hline \end{tabular}
\end{table}
Table 6: Number of unidentified lines
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline & \multicolumn{4}{c}{\({}^{12}\)CO} & \multicolumn{4}{c}{\({}^{13}\)CO} \\ \cline{2-7} Transition & \(\lambda\) (nm) & Literature\({}^{\rm a}\) & difference (nm) & \(\lambda\)(nm) & Literature\({}^{\rm b}\) & difference (nm) \\ \hline fundamental & & & & & & \\
4-3 & outside & 4465.21 & & 4561.66 & 4561.67 & -0.01 \\
5-4 & outside & 4524.63 & & 4620.95 & 4620.95 & -0.01 \\
6-5 & 4585.45 & 4585.45 & 0.00 & 4681.60 & 4681.60 & 0.00 \\
7-6 & 4647.71 & 4647.72 & -0.01 & 4743.64 & 4743.66 & -0.02 \\
8-7 & 4711.47 & 4711.47 & 0.00 & 4807.13 & 4807.16 & -0.03 \\
9-8 & 4776.76 & 4776.75 & 0.01 & 4872.11 & 4872.14 & -0.03 \\
10-9 & gap & 4843.63 & & 4938.64 & 4938.68 & -0.04 \\
11-10 & 4912.14 & 4912.14 & 0.00 & & & \\
12-11 & 4982.35 & 4982.36 & -0.01 & & & \\
13-12 & 5054.33 & 5054.33 & 0.00 & & & \\ \hline
1st overtone & & & & & & \\
2-0 & 2293.52 & 2293.52 & 0.00 & 2344.85 & 2344.80 & 0.05 \\
3-1 & 2322.65 & 2322.66 & -0.01 & 2373.94 & 2373.90 & 0.04 \\
4-2 & 2352.46 & 2352.46 & 0.00 & 2403.69 & 2403.70 & -0.01 \\
5-3 & 2382.94 & 2382.95 & -0.01 & 2434.12 & 2434.10 & 0.02 \\
6-4 & 2414.14 & 2414.14 & 0.00 & 2465.24 & 2465.20 & 0.04 \\
7-5 & 2446.06 & 2446.08 & -0.02 & 2497.08 & 2497.10 & -0.02 \\
8-6 & 2478.76 & 2478.76 & 0.00 & & & \\
9-7 & 2512.22 & 2512.23 & -0.01 & & & \\
10-8 & & & & & & \\ \hline
2nd overtone & & & & & & \\
3-0 & 1558.16 & 1558.16 & 0.00 & & & \\
4-1 & 1577.95 & 1577.95 & 0.00 & & & \\
5-2 & 1598.18 & 1598.19 & -0.01 & & & \\
6-3 & 1618.89 & 1618.89 & 0.00 & & & \\
7-4 & 1640.08 & 1640.08 & 0.00 & & & \\
8-5 & 1661.77 & 1661.77 & 0.00 & & & \\
9-6 & 1683.97 & 1683.97 & 0.00 & & & \\
10-7 & 1706.71 & 1706.71 & 0.00 & & & \\
11-8 & 1729.99 & 1729.99 & 0.00 & & & \\
12-9 & 1753.84 & 1753.84 & 0.00 & & & \\
13-10 & 1778.28 & 1778.28 & 0.00 & & & \\ \hline \end{tabular} \({}^{\rm a}\)Goorvitch (1994)
\({}^{\rm b}\)Geballe (2020)
\end{table}
Table 5: Identified CO bandheads
Combining this list with the fully reduced spectrum from Nicholls et al. (2017) available online, we have provided everything necessary for the production of a comprehensive infrared spectral atlas of 10 Leo. The line lists are made available via machine-readable tables.
\begin{table}
\begin{tabular}{l l l r} \hline \hline wavelength & EW & possible blends & reference \\ (nm) & mÅ & & \\ \hline
1113.69 & 79 & Co i 1113.68, Fe i 1113.69, V i 1113.70, Ca i 1113.70 & (1) \\
1120.30 & 64 & & \\
1121.40 & 55 & Fe i 1121.39, Fe ii 1121.41 & (1) \\
1127.28 & 67 & Fe i 1127.28, N i 1127.28? & (1),(2) \\
1131.31 & 89 & Cr i 1131.38 & (1),(2) \\
1131.51 & 97 & (CN 0-0Q(1)38.5 at 1131.47 subtracted) & (1) \\
1132.25 & 58 & doublet Fe ii 1132.24, 1132.25 & (1) \\
1132.40 & 66 & Fe ii 1132.39, CN 1-1R(2)25.5 & (1) \\
1139.26 & 56 & Co i 1139.27, Cr ii 1139.28, (CN 1-1Q(2)21.5 at 1139.27 subtracted) & (1) \\
1141.54 & 81 & N i 1141.54?, Cr ii 1141.52 & (1) \\
1144.07 & 54 & Ti i 1144.05, Mn i 1144.04 & (1) \\
1146.27 & 69 & N i 1146.27?, (CN 1-1R(1)37.5 at 1146.27 subtracted) & (1) \\
1150.19 & 78 & & \\
1165.18 & 61 & Fe i 1165.18, Al i 1165.22, Mn i 1165.24, (CN 1-1P(2)30.5 at 1165.20 subtracted) & (1) \\
1198.77 & 60 & strong shoulder of Si i 1198.75 & (1) \\
1261.72 & 53 & Fe i 1261.69, C i 1261.75 & (1) \\
1263.17 & 53 & Ti i 1263.17, Cr ii 1263.18 & (1) \\
1308.06 & 86 & Fe i 1308.09, Fe i 1308.09, Ti i 1308.08 (cdepth 0.253) subtracted & (1),(2) \\ \hline \end{tabular} References. (1) Ryabchikova et al. (2015), (2) Hinkle et al. (1995a)
\end{table}
Table 7: Most intense unidentified lines with an EW\(>\)50 mÅ in the YJ band and possible contributors
Figure 1: Example section from the 10 Leo spectrum. Identified lines are marked by vertical strokes. The top row (red) corresponds to line identifications in common with the Arcturus atlas. The second row (blue) are identifications attributed based on our model calculations. The single magenta line in the third row indicates an as yet unidentified feature.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Wavelength & & Absorptance & Synthetic \\ (nm) & species & 1-F & cdepth \\ \hline
964.20 & Fe i & 0.016 & 0.176 \\
969.90 & Ca i & -0.022 & 0.148 \\
974.16 & Fe i & 0.018 & 0.107 \\
977.10 & Fe i & 0.136 & 0.113 \\
982.43 & Ca i & -0.009 & 0.213 \\
983.08 & Ca i & 0.005 & 0.102 \\
1038.78 & Si i & 0.007 & 0.102 \\
1062.08 & Fe i & 0.012 & 0.107 \\
1083.49 & Ca i & -0.005 & 0.152 \\
1096.03 & Mg d & 0.415 & 0.323 \\
1096.03 & Mg d & 0.321 & 0.416 \\
1098.75 & Si i & 0.316 & 0.281 \\
1166.59 & Fe i & -0.015 & 0.141 \\
1181.48 & Ca i & 0.017 & 0.325 \\
1182.42 & Ca i & 0.002 & 0.212 \\
1187.74 & La ii & 0.007 & 0.110 \\
1274.37 & Ca i & -0.003 & 0.160 \\
1312.07 & Ca i & 0 & 0.106 \\ \hline \end{tabular}
\end{table}
Table 8: Lines with cdepth\(>\)0.10 not present in observed YJ band
Figure 2: Overlay of the spectra of 10 Leo (red) and Arcturus (blue). See text for details.
The basic structure of this first table is shown in Table 10. A single-letter code in the rightmost column of the table distinguishes various levels of agreement with the VALD3 database and the Arcturus atlas, respectively. The letter code is explained in Table 11. For instance, while lines in category A are found in the spectra of 10 Leo and Arcturus and in the VALD3 database, lines in category B are present in the 10 Leo spectrum and in VALD3, but are not listed in the Arcturus atlas. In addition, we provide an indicator for the level of blending and quality of each line, with the following meaning: S = single component, D = two components, T = three components, Q4, Q5, ... = 4, 5, ... components, and X = uncertain continuum level affecting the shape and depth of the line. The latter category accounts for about one-third of all lines. In these cases, the EW given is affected by a higher uncertainty.
Categories C, D, and G indicate lines that are expected from the VALD3-based model or that have been identified in the Arcturus atlas, but do not have a counterpart in the 10 Leo spectrum. We provide separate online tables for each of these cases using a structure similar to Table 10. The lines can be broken down into cases where the line is clearly not visible in 10 Leo ('absent'), and cases where a final decision on the presence is not possible due to noise or missing parts of the 10 Leo spectrum ('inconclusive'). Finally, category G also includes 28 lines where the identification in the Arcturus atlas and from VALD3 is not the same ('IDconflict'). The last row in Table 11 indicates the number of lines in each band without a known identification (category U). These are a subset of the total number of lines in each wavelength band given in the next-to-last row of the table, and are also a subset of the lines in category F. The difference in number between F and U corresponds to the 396 lines we were able to newly assign to molecular and atomic species based on our computations of the expected locations of various, previously unidentified CO lines, and lines from the Peterson & Kurucz (2022) collection of Fe lines.
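For users of the machine-readable tables, the category and quality codes described above make programmatic filtering straightforward. The following is a minimal sketch in Python; the column labels and toy rows are illustrative placeholders echoing the excerpt in Table 10, not the actual headers defined in the CDS tables.

```python
import pandas as pd

# Toy stand-in for one of the machine-readable tables; the column labels
# ("category", "quality") are illustrative placeholders, not the CDS headers.
lines = pd.DataFrame({
    "wl_obs_nm": [962.67, 962.73, 962.91, 963.12],
    "species":   ["CN", "CN", "Fe i", "CN"],
    "category":  ["B", "A", "B", "A"],
    "quality":   ["D", "Q4", "D", "X"],
})

# Category A: lines identified in 10 Leo and the Arcturus atlas with a
# VALD3 transition assigned.
cat_a = lines[lines["category"] == "A"]

# Exclude lines flagged "X" (uncertain continuum level), whose EWs carry
# a higher uncertainty, before using them for abundance work.
usable = cat_a[cat_a["quality"] != "X"]
print(usable)
```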
High-quality spectra of stars can be used to determine various spectral parameters for the CO molecule such as the rotational constants B, the ro-vibrational coupling constant \(\alpha_{e}\), the centrifugal distortion constant, and the transition energies of the vibrational states of \({}^{12}\)CO and \({}^{13}\)CO. Such results are complementary to laboratory measurements that are limited in, for example, temperature. Farrenq et al. (1991) used this approach to derive parameters of the CO molecule from the infrared solar spectrum. The CRIRES-POP 10 Leo spectrum is of sufficient quality to perform a similar study. Resulting values are presented in Table 12 together with the corresponding values from the literature. While there is a good agreement for most constants, we note some differences, most remarkably for \(\omega_{e}\) for \({}^{13}\)CO.
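To illustrate how such constants follow from measured line positions, the sketch below recovers \(\nu_{0}\), B\(_{0}\), and B\(_{1}\) of the 1-0 band by linear least squares, using the standard combined-branch expression \(\nu(m)=\nu_{0}+(B_{1}+B_{0})m+(B_{1}-B_{0})m^{2}\) with \(m=J+1\) for R(\(J\)) and \(m=-J\) for P(\(J\)), neglecting centrifugal distortion. The input positions are generated synthetically from the literature constants of Table 12 purely for demonstration; in practice they would be the measured wavenumbers from the spectrum.

```python
import numpy as np

# Literature constants for 12CO (cm^-1), cf. Table 12; used here only to
# generate synthetic 1-0 band line positions for the demonstration.
nu0, B0, B1 = 2143.27, 1.9225, 1.9050

J = np.arange(1, 40)
m = np.concatenate([J + 1, -J]).astype(float)  # m = J+1 (R branch), m = -J (P branch)
nu = nu0 + (B1 + B0) * m + (B1 - B0) * m**2    # combined-branch expression

# Linear least squares for the coefficients of nu = c0 + c1*m + c2*m^2.
A = np.vstack([np.ones_like(m), m, m**2]).T
c0, c1, c2 = np.linalg.lstsq(A, nu, rcond=None)[0]

print(f"nu0 = {c0:.2f} cm^-1, "
      f"B0 = {(c1 - c2) / 2:.4f} cm^-1, B1 = {(c1 + c2) / 2:.4f} cm^-1")
```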
## 4 Element abundances for 10 Leo and Arcturus
A set of synthetic spectra has been compiled to derive elemental abundances for 10 Leo and Arcturus from the observed high-resolution NIR spectra. In a first step, a line catalogue for the respective temperature, log \(g\), and microturbulence was requested from VALD3, based on solar abundances. We then computed a synthetic spectrum on the basis of the Model Atmospheres with Radiative and Convective Scheme (MARCS) structure using the tool Synth3 (Kochukhov 2012). Differences between the observed and the computed spectrum were analysed for individual elements. Based on this, element abundances in the VALD3 line catalogue file were modified for a new computation of the synthetic spectrum. This iterative process was repeated until a consistent set of abundances for the synthetic spectrum was obtained. To find the best fit, we both minimised the differences in EWs and carried out a visual inspection of an overlay of the observed and synthetic spectrum. In unclear cases, the computed and observed EWs of selected lines were additionally compared individually.
Fig. 3 shows a comparison of the observed and computed central depths for about 340 isolated Fe i lines selected from across the range of the 10 Leo spectrum. Each observed iron line was fitted by a Gaussian curve to derive the respective position, central depth, full width at half maximum (FWHM), and EW. The central depth from this fit is compared with the observed value of 1-F, where F is the normalised flux at the line centre \(\lambda_{0}\) (filled circles). The second dataset in this plot (crosses) refers to the same lines, with the central depth now derived from the VALD3 stellar extraction model for solar abundances (see Sect. 2.1) and compared with the observed value of 1-F. The Gaussian model provides a good fit to the observed line profiles over the whole range of line strengths studied here. The comparison between the observed and the modelled central depths shows a systematic difference in line depth and considerable scatter around the relation. The systematic difference indicates that 10 Leo has a lower iron abundance than the Sun. The high S/N of the spectrum and the good fit of the observed lines by a Gaussian profile, which indicates a low effect of line blending, suggest that the scatter we see between observed and modelled line depths is only partly due to observational noise, but significantly due to uncertainties in the theoretical line data. Indeed, a number of Fe i lines predicted by the synthetic spectrum could not be observed in the 10 Leo spectrum, as discussed in Sect. 2.4 (Tables 8, A.3, A.4, and A.5).
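A minimal sketch of such a single-line fit is given below, using `scipy.optimize.curve_fit` on a synthetic Gaussian absorption line; the wavelength window, noise level, and starting values are illustrative assumptions, and a real application would instead cut the window from the normalised 10 Leo spectrum around the line of interest.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(wl, depth, wl0, sigma):
    """Normalised-flux model of a single absorption line on a unit continuum."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - wl0) / sigma) ** 2)

# Synthetic example line with noise; wavelengths in nm.
wl = np.linspace(1500.0, 1500.4, 200)
flux = gaussian_line(wl, 0.30, 1500.2, 0.02)
flux += np.random.default_rng(0).normal(0.0, 0.002, wl.size)

(depth, wl0, sigma), _ = curve_fit(gaussian_line, wl, flux, p0=[0.2, 1500.2, 0.03])

fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma     # nm
ew_mA = depth * sigma * np.sqrt(2.0 * np.pi) * 1e4  # EW = depth*sigma*sqrt(2pi); nm -> milli-Angstrom

print(f"cdepth = {depth:.3f}, lambda0 = {wl0:.3f} nm, "
      f"FWHM = {fwhm:.4f} nm, EW = {ew_mA:.0f} mA")
```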
In total, abundances for 17 key elements were determined from the 10 Leo and Arcturus spectra. The third and fourth columns of Table 13 give the set of abundances for the best fit for 10 Leo and Arcturus, respectively, whereas the second column lists the solar reference values used as the starting point. Abundances for C, N, and O were derived from molecular lines using the approach described in Hinkle et al. (2020). In principle, the abundances of these three elements were altered in each fitting step until a simultaneous fit of the lines of CO, CN, and OH was achieved.
\begin{table}
\begin{tabular}{c c c} \hline \hline species & Ratio & \# evaluated \\ & Arcturus/10 Leo & lines \\ \hline CO & \(1.73\pm 0.23\) & 35 \\ CH & \(1.37\pm 0.31\) & 10 \\ C i & \(0.76\pm 0.15\) & 9 \\ CN & \(0.83\pm 0.13\) & 171 \\ OH & \(3.26\pm 0.43\) & 37 \\ Na i & \(0.94\pm 0.03\) & 3 \\ Mg i & \(1.10\pm 0.14\) & 20 \\ Al i & \(1.12\pm 0.02\) & 10 \\ Si i & \(0.94\pm 0.10\) & 51 \\ K i & \(1.13\pm 0.13\) & 4 \\ Ca i & \(1.01\pm 0.11\) & 22 \\ Sc i & \(1.90\pm 0.68\) & 3 \\ Ti i & \(1.54\pm 0.12\) & 27 \\ V i & \(1.69\pm 0.30\) & 4 \\ Cr i & \(0.95\pm 0.10\) & 17 \\ Mn i & \(1.03\pm 0.01\) & 7 \\ Fe i & \(0.96\pm 0.11\) & 164 \\ Ni i & \(0.97\pm 0.10\) & 10 \\ Sr ii & \(1.04\pm 0.06\) & 3 \\ \hline \end{tabular}
\end{table}
Table 9: Ratios of EWs (Arcturus and 10 Leo)
The fifth column shows the abundance differences between the two K giants. Abundance uncertainties were derived from the standard deviation of the ratios in EWs between the observed and the synthetic spectrum, and are given in column three of Table 13 for 10 Leo. For Arcturus, the listed abundances come with similar uncertainties. As a K giant, 10 Leo should show an abundance pattern typical for a post first dredge-up object. According to evolutionary models (Karakas & Lattanzio 2014; Cristallo et al. 2015), we thus expect a reduction of \({}^{12}\)C and an enhancement of \({}^{14}\)N on the stellar surface. Both trends are clearly visible in Table 13. The C and O abundances are in excellent agreement with the values derived by Luck (2015) from optical spectra. The same author gives abundances for various other elements in 10 Leo, most of them indicating an overabundance compared to the Sun. From our analysis, however, we find most elements to be slightly underabundant compared to the Sun. Solar-like abundances for 10 Leo were reported by da Silva et al. (2015). While error bars both in our analysis and for the literature
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Wave & Absorptance & Equivalent & \multicolumn{2}{c}{Wavelength} & Species & Transition & Line & Catalogue \\ number & & width & observed & VALD3 & & & quality and & Match \\ \(\nu\) (cm\({}^{-1}\)) & 1-F & EW\({}_{\rm obs}\) (mÅ) & WL\({}_{\rm obs}\) (nm) & WL (nm) & ID & & complexity & \\ \hline
10387.73 & 0.196 & 43 & 962.67 & 962.68 & CN & CN 1-0Q[1]50.5 & D & B \\
10387.21 & 0.075 & 4 & 962.72 & 962.72 & CN & CN 2-1Q[12]25.5 & Q4 & B \\
10387.15 & 0.096 & 20 & 962.73 & 962.73 & CN & CN 2-1P[1]26.5 & Q4 & A \\
10386.81 & 0.103 & 22 & 962.76 & 962.76 & CN & CN 1-0R[1]59.5 & Q4 & A \\
10386.12 & 0.127 & 19 & 962.82 & 962.83 & Fe i & & D & B \\
10386.06 & 0.107 & 2 & 962.83 & 962.84 & Fe i & & D & B \\
10385.71 & 0.208 & 33 & 962.86 & 962.87 & CN & CN 1-0P[2]42.5 & X & A \\
10385.19 & 0.290 & 79 & 962.91 & 962.91 & Fe i & & D & B \\
10384.85 & 0.193 & 41 & 962.94 & 962.95 & Fe i & & D & A \\
10382.94 & 0.102 & 21 & 963.12 & 963.12 & CN & CN 2-1R[1]43.5 & X & A \\ \hline \end{tabular} The full table is available at the CDS.
\end{table}
Table 10: Excerpt from line list for the YJ band
Figure 3: cdepth deduced from the Gaussian fit versus the observed line depth (1-F) for Fe i lines. The dots are from individual Gaussian fits to isolated Fe lines in the observed 10 Leo spectrum. The crosses refer to depths from the VALD3 extracted stellar calculation for 10 Leo based on solar Fe abundances.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Category & 10 Leo & VALD3 & Arcturus & YJ & H & K & L & M & sum \\ \hline A & y & y & y & 1367 & 1375 & 971 & 358 & 1078 & 5149 \\ B & y & y & n & 1941 & 2958 & 1984 & 824 & 1498 & 9205 \\ C & n & y & n & 1003 & 2104 & 1184 & 702 & 491 & 5484 \\ D & n & y & y & 9 & 113 & 13 & 3 & 7 & 145 \\ E & y & n & y & 60 & 84 & 13 & 93 & 1 & 251 \\ F & y & n & n & 729 & 324 & 245 & 444 & 84 & 1826 \\ G & n & n & y & 4 & 82 & 15 & 67 & 7 & 175 \\ \hline sum & & & & 5113 & 7040 & 4424 & 2491 & 3166 & 22235 \\ U & & & & 436 & 303 & 196 & 415 & 84 & 1434 \\ \hline \end{tabular}
\end{table}
Table 11: Number of line matches between 10 Leo (observed), VALD3 (calculated), and Arcturus (observed, Hinkle atlas)
values do not allow a conclusive comparison, we note that our analysis tends to give a lower mean metallicity for 10 Leo. Tan et al. (2016) independently derived Si abundances from the CRIRES-POP spectrum of 10 Leo and obtained [Si/Fe]=-0.04\(\pm\)0.03, which is in satisfactory agreement with our findings.
Using second-overtone lines of \({}^{12}\)CO and \({}^{13}\)CO, we derived a \({}^{12}\)C/\({}^{13}\)C ratio of 22\(\pm\)8 from a curve-of-growth analysis (Hinkle et al., 2016). To our knowledge, the carbon isotopic ratio had not been measured for this star before. To derive ratios for the main isotopes of oxygen, we used fundamental lines of C\({}^{16}\)O, C\({}^{17}\)O, and C\({}^{18}\)O in the M band. We find \({}^{16}\)O/\({}^{17}\)O = 299\({}^{+241}_{-107}\) and \({}^{16}\)O/\({}^{18}\)O = 345\({}^{+276}_{-125}\). These values agree with model expectations for a 2 M\({}_{\odot}\) red giant after the first dredge-up and with observations of similar stars (Lebzelter et al., 2015; Cristallo et al., 2015).
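In the optically thin limit of the curve of growth, the EW of a weak line scales linearly with the abundance of the absorbing species, so matched pairs of \({}^{12}\)CO and \({}^{13}\)CO lines of similar excitation give direct ratio estimates. The snippet below illustrates only this linear limit, with invented EW values; the actual analysis uses the full curve-of-growth treatment of Hinkle et al. (2016).

```python
import numpy as np

# Invented EWs (milli-Angstrom) of matched 12CO/13CO second-overtone line
# pairs of similar excitation; real values come from the line lists.
ew_12 = np.array([120.0, 95.0, 140.0, 110.0])
ew_13 = np.array([5.2, 4.5, 6.8, 4.9])

# In the linear (optically thin) part of the curve of growth, EW scales
# with column density, so each pair directly estimates 12C/13C.
ratios = ew_12 / ew_13
print(f"12C/13C = {ratios.mean():.0f} +/- {ratios.std(ddof=1):.0f}")
```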
To estimate the reliability of our abundance determinations for 10 Leo in the absence of a larger set of literature data, we can compare our results for Arcturus, which were derived in an identical manner, with literature results. We thereby focus on the most recent papers. Recently measured abundances of Na, Mg, and Al by Lind et al. (2022) agree within the error bars with our findings. That paper also gives abundance corrections for non-local thermodynamic equilibrium (NLTE), which have not been taken into account in our analysis. Fukue et al. (2021) present abundances for C, N, O, Mg, Si, Ca, Ti, Cr, Fe, and Ni, all of which coincide with our findings within 0.1 dex, except for Cr, where Fukue et al. (2021) found an abundance 0.33 dex lower than our result. Further abundances of key elements were presented by Jonsson et al. (2017) and Lomaeva et al. (2019), partly based on the spectrum from the Arcturus atlas. Their results for Mg, Ca, Ti, Sc, V, and Ni agree within our error bars. Their O abundance is slightly lower than our value. The abundances from the two mentioned papers coincide, with minor differences, with the major study by Jofre et al. (2015). The Cr abundance from Lomaeva et al. (2019) agrees with the results from Fukue et al. (2021) and thus significantly differs from our findings. The reason for this difference is not clear. In addition, our Mn abundance is much larger than the values presented in Lomaeva et al.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Element & \multicolumn{4}{c}{Abundances} \\ & Solar scaled & 10 Leo & Arcturus & Leo-Arc \\ \hline C & -3.52 & -3.85\(\pm\)0.05 & -4.14 & 0.29 \\ N & -4.12 & -4.05\(\pm\)0.09 & -4.37 & 0.32 \\ O & -3.21 & -3.35\(\pm\)0.09 & -3.27 & -0.08 \\ Na & -5.71 & -5.80\(\pm\)0.02 & -5.89 & 0.09 \\ Mg & -4.46 & -4.46\(\pm\)0.17 & -4.48 & 0.02 \\ Al & -5.57 & -5.40\(\pm\)0.08 & -5.41 & 0.01 \\ Si & -4.49 & -4.80\(\pm\)0.07 & -4.83 & 0.03 \\ K & -6.92 & -7.00\(\pm\)0.09 & -7.09 & 0.09 \\ Ca & -5.68 & -6.00\(\pm\)0.06 & -6.08 & 0.08 \\ Sc & -8.87 & -9.00\(\pm\)0.52 & -9.10 & 0.10 \\ Ti & -7.02 & -7.30\(\pm\)0.13 & -7.39 & 0.09 \\ V & -8.04 & -8.30\(\pm\)0.31 & -8.74 & 0.44 \\ Cr & -6.37 & -6.55\(\pm\)0.07 & -6.67 & 0.12 \\ Mn & -6.65 & -6.00\(\pm\)0.10 & -6.18 & 0.18 \\ Fe & -4.54 & -4.80\(\pm\)0.17 & -4.96 & 0.16 \\ Ni & -5.79 & -6.25\(\pm\)0.08 & -6.40 & 0.15 \\ Sr & -9.07 & -9.00\(\pm\)0.05 & -9.09 & 0.09 \\ \hline \end{tabular}
\end{table}
Table 13: Abundances for 10 Leo and Arcturus derived from spectral synthesis. Column 2: solar abundances scaled to the metallicity of 10 Leo. Column 3: abundances for 10 Leo derived in this paper by spectral synthesis. Column 4: abundances for Arcturus derived in this paper. Column 5: differences between 10 Leo and Arcturus abundances.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{\({}^{12}\)CO} & \multicolumn{2}{c}{\({}^{13}\)CO} \\ Parameter & This work & Literature & This work & Literature \\ \hline \(\omega_{\rm e}\) & 2169.76 & 2169.756 [1] & 2121.37 & 2141.41 [2] \\ \(\omega_{0}\)(1-0) & 2143.27 & 2143.27 [3] & 2096.07 & 2096.07 [3] \\ \(\omega_{0}\)(2-0) & 4260.07 & 4260.06 [3] & 4166.83 & 4166.82 [3] \\ \(\omega_{0}\)(3-0) & 6350.43 & 6350.44 [3] & & 6212.32 [3] \\ \(\omega_{0}\)(3-1) & 4207.17 & 4207.17 [3] & 4116.25 & 4116.25 [3] \\ \(\omega_{0}\)(4-2) & 4154.41 & 4154.41 [3] & 4065.81 & 4065.81 [3] \\ B\({}_{\rm e}\) & 1.9309 & 1.9316 [1] & 1.8461 & \\ B\({}_{0}\) & 1.9222 & 1.9225 [3] & 1.8372 & 1.8380 [3] \\ B\({}_{1}\) & 1.9049 & 1.9050 [3] & 1.8215 & 1.8216 [3] \\ B\({}_{2}\) & 1.8872 & 1.8875 [3] & 1.8055 & 1.8053 [3] \\ B\({}_{3}\) & 1.8700 & 1.8700 [3] & 1.7890 & 1.7890 [3] \\ \(\omega_{\rm e}\chi_{\rm e}\) & 13.236 & 13.288 [1] & 12.640 & 12.67 [2] \\ \(\alpha_{\rm e}\) & 1.7490E-02 & 1.7505E-02 [1] & 1.6351E-02 & \\ D\({}_{\rm e}\) & 6.120E-06 & 6.122E-06 [1] & 5.592E-06 & 5.593E-06 [3] \\ r\({}_{\rm e}\) (Å) & 1.1242 & 1.1282 [1] & & \\ \hline \end{tabular}
\end{table}
Table 12: Overview of derived molecular constants for \({}^{12}\)CO and \({}^{13}\)CO (all quantities are in units of cm\({}^{-1}\), except r\({}_{\rm e}\), which is given in Å)
(2019) and in Jofre et al. (2015). Since we also see an outstanding overabundance of this element in 10 Leo compared to the Sun, we have to assume that the Mn abundances derived in this paper are potentially erroneous. We aim to address this discrepancy in a separate study beyond the scope of this paper, which presents the line compendium in the CRIRES-POP spectrum of 10 Leo. Except for these two elements, the abundances derived for 10 Leo and Arcturus in this paper coincide well with other studies.
## 5 Summary
The main goal of this paper was to provide a compendium of line identifications for a K giant that is as complete as possible. This includes more than 16 000 lines from molecules and atoms, detected in the pipeline-reduced and fully corrected high-resolution NIR spectrum of the cool star 10 Leo, and their assignment to atomic and molecular species. These results are important for computing synthetic spectra and for abundance determinations of evolved stars in general. In addition, the search for lines from neutron-capture elements profits from a high degree of completeness and correctness of line lists for Fe and lighter elements, including molecular lines. This can assist in identifying interfering lines that may completely mask or mimic lines of possible neutron-capture elements.
The identification of absorption lines was done visually with the help of various line lists, the stellar extraction from VALD3, and other published sources. Lines of hydrogen and of various further elements were identified, including C, Na, Mg, Al, Si, P, S, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Rb, Sr, Y, Zr, Ce, and Dy. Some of these elements, for example Sr, Ce, and Dy, are only detectable in their ionised form.
Molecules such as CO, CN, and OH dominate the spectrum. Absorption lines from \({}^{13}\)CO, C\({}^{17}\)O, and C\({}^{18}\)O were easily identified in several bands and with sufficient strengths to determine isotopic ratios. In contrast, absorption lines from \({}^{13}\)CN are very weak and only identified for the 0-0 bands. Lines of C\({}^{15}\)N are not detectable in the spectrum.
The comparison between observed and synthetic spectra allowed us to identify a large number of lines that are present in the synthetic spectra but are not observed, and lines that are observed and identified but not present in the synthetic spectra. The high number of discrepancies reveals the need to improve the spectroscopic data for metals, for example for Mg and Fe. The \(L\) band shows the largest fraction of unidentified lines (29%), and the \(M\) band the lowest (6%). Lists of unidentified and missing lines are also part of our line compendium. The quality of the spectrum allows various spectral parameters for CO to be tested, such as the rotational constants, the ro-vibrational coupling constant, the centrifugal distortion constant, and the transition energies of the vibrational states. We find good agreement with literature values for \({}^{12}\)C\({}^{16}\)O, but some discrepancies for \({}^{13}\)C\({}^{16}\)O, which require further investigation.
Finally, we have presented abundances for 17 key elements from the CRIRES-POP spectrum of 10 Leo and the FTS spectrum of Arcturus, which are in good agreement with literature values in most cases. This paper complements the CRIRES-POP spectrum of 10 Leo published by Nicholls et al. (2017) providing a complete NIR atlas of this K giant. In comparing it with the Arcturus atlas, metallicity and temperature effects can be studied for individual sections of the 1 to 5 \(\mu\)m range.
###### Acknowledgements.
This work has made use of the VALD3 database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. The authors wish to thank Bernhard Aringer and Kenneth Hinkle for valuable suggestions and discussions on the paper.
|
2309.04205 | Symmetry-Enriched Criticality in a Coupled Spin-Ladder | We study a one-dimensional ladder of two coupled XXZ spin chains and identify
several distinct gapless symmetry-enriched critical phases. These have the same
unbroken symmetries and long-wavelength description, but cannot be connected
without encountering either a phase transition or other intermediate phases.
Using bosonizaion, we analyze the nature of their distinction by determining
how microscopic symmetries are manifested in the long-wavelength fields, the
behavior of charged local and nonlocal operators, and identify the universality
class of all direct continuous phase transitions between them. One of these
phases is a gapless topological phase with protected edge modes. We
characterize its precise nature and place it within the broader classification.
We also find the occurrence of `multiversality' in the phase diagram, wherein
two fixed phases are separated by continuous transitions with different
universality classes in different parameter regimes. We determine the phase
diagram and all its aspects, as well as verify our predictions numerically
using density matrix renormalization group and a mapping onto an effective
spin-1 model. | Suman Mondal, Adhip Agarwala, Tapan Mishra, Abhishodh Prakash | 2023-09-08T08:31:45Z | http://arxiv.org/abs/2309.04205v3 | # Symmetry-Enriched Criticality in a Coupled Spin-Ladder
###### Abstract
We study a one-dimensional ladder of two coupled XXZ spin chains and identify several distinct gapless symmetry-enriched critical phases. These have the same unbroken symmetries and long-wavelength description, but cannot be connected without encountering either a phase transition or other intermediate phases. Using bosonization, we analyze the nature of their distinction by determining how microscopic symmetries are manifested in the long-wavelength fields, the behavior of charged local and nonlocal operators, and identify the universality class of all direct continuous phase transitions between them. One of these phases is a gapless topological phase with protected edge modes. We characterize its precise nature and place it within the broader classification. We also find the occurrence of 'multiversality' in the phase diagram, wherein two fixed phases are separated by continuous transitions with different universality classes in different parameter regimes. We determine the phase diagram and all its aspects, as well as verify our predictions numerically using density matrix renormalization group and a mapping onto an effective spin-1 model.
###### Contents
* I Introduction
* II Model Hamiltonian and phase diagram
* II.1 Two presentations of the model
* II.2 Symmetries
* II.3 Phases and transitions
* III Bosonization analysis I: characterizing the gapless phases
* III.1 Bosonization formulas for small and large \(J\) and conventional description of phases
* III.2 Multiversality along the \(t=0\) surface
* III.3 Distinguishing gapless phases through effective symmetries
* III.4 Local and non-local observables
* IV Bosonization analysis II: the topological nature of \(\mathrm{XY}_{2}^{*}\)
* IV.1 Edge modes
* IV.2 Why \(\mathrm{XY}_{2}^{*}\) is _not_ an intrinsically gapless topological phase?
* IV.3 A related model where \(\mathrm{XY}_{2}^{*}\)_is_ an intrinsically gapless topological phase
* V Numerical Analysis
* V.1 Diagnostics, Phases and Phase transitions
* V.2 Characterising gapless phases
* VI Mapping to effective spin-1 models
* VII Summary and Outlook
* A Additional bosonization details
* B Phase diagrams from bosonization
* B.1 The small-\(J\) phase diagram
* B.2 The large-\(J\) phase diagram
* C Bosonizing string operators
* C.1 Bosonizing \(C(x,y)\) for small \(J\)
* C.2 Bosonizing \(C(x,y)\) for large \(J\)
* C.3 Bosonizing \(U(\pi)\)
## I Introduction
One of the most remarkable characteristics of quantum and classical many-body physical systems is the emergence of distinct, stable _phases_ that are divided by sharp _phase transitions_. There is tremendous theoretical and experimental interest in enumerating all possible phases and transitions and characterizing their properties. Symmetries have provided a guiding principle to facilitate this. It was realized that distinct phases of matter occur when microscopic symmetries are spontaneously broken at long distances [1]. The knowledge of microscopic symmetries allows us to enumerate the different ways it can be spontaneously broken, the properties of the resulting long-range order, and sometimes even the nature of the phase transition. The concept of 'topological' ordering that falls outside the symmetry-breaking framework [2] following the discovery of the quantum Hall effect [3] has expanded the mechanisms by which distinct phases can
arise. This has spurred a flurry of intense research activity over the past decades in classifying and characterizing gapped phases of matter [4]. These new phases represent multiple ways in which symmetries can be unbroken and yet result in different phases. The distinguishing features are detectable in subtle signatures present in entanglement patterns and boundary/ topology effects.
Gapless phases, on the other hand, have been left by the wayside in these recent developments. Despite their ubiquity in nature and their frequent appearance in the phase diagrams of many known physical systems, the mechanisms by which gapless phases arise and are stabilized are relatively unclear, although various descriptive frameworks have been successfully devised to understand them. For example, when noninteracting bands of fermions are partially filled, they lead to the formation of Fermi liquids [5] and Dirac [6]/Weyl [7] semimetals. Using partons and emergent gauge fields to describe systems has also been useful in accessing non-Fermi-liquid phases [8; 9]. The most systematic known mechanism is arguably the spontaneous breaking of continuous symmetries, which, e.g., results in the formation of superfluids. The program of classifying gapless states of matter with unbroken symmetries is still in its early stages.
Examples of gapless states hosting edge modes have been reported in various works [10; 11; 12; 13; 14; 15; 16; 17; 18; 19], and this was developed into the notion of gapless symmetry protected topological (SPT) phases in refs. [15; 16]. This was generalized in ref. [18] to the concept of 'symmetry-enriched criticality', where the authors ask the following question: given a critical state corresponding to a fixed universality class, how many ways can an unbroken symmetry _enrich_ it? In other words, can microscopic symmetries manifest themselves in inequivalent ways at long distances when the physics is described by a conformal field theory (CFT)? The authors demonstrate that the answer is yes and that distinct symmetry-enriched critical states exist that cannot be connected without encountering an abrupt change in universality class or intermediate phases. These critical states may be topological and host edge modes, or may not.
It is desirable to study models and phase diagrams which demonstrate the existence of symmetry-enriched critical phases and transitions between them. The most common critical phases are the so-called 'Luttinger liquids' [20], which are described by the compact-boson CFT [21] and arise as the long-wavelength description of many one-dimensional interacting systems of bosons or fermions. Coupled Luttinger liquids, which naturally arise in spin-ladder models, provide a much richer playground and will be used in this work to investigate subtle symmetry and topological properties of gapless phases. In this paper, we study the phase diagram of a microscopic one-dimensional spin ladder that stabilizes multiple symmetry-enriched Luttinger liquid phases protected by the symmetries of the model. One of these, dubbed XY\({}_{2}^{*}\), is topological, i.e. it has stable symmetry-protected edge modes. Using Abelian bosonization, we give a comprehensive treatment of their symmetry distinction and features, as well as describe local and nonlocal observables that can differentiate between them. We also study this rich variety of phases and phase transitions numerically using density matrix renormalization group (DMRG) as well as an effective low-energy mapping to spin-1 Hamiltonians. We also discuss additional interesting features of the phase diagram, such as the presence of 'multiversality' [22; 23], wherein the same two phases (Haldane and trivial) are separated by different stable universality classes in different parameter regimes.
The paper is organized as follows -- in Section II, we introduce our model, list its symmetries, summarize the phase diagram and its important elements. We use Abelian bosonization in Section III to establish the symmetry distinction between various gapless phases and in Section IV to analyze the topological Luttinger liquid phase XY\({}_{2}^{*}\). We numerically analyze our model in Section V and reproduce aspects of our phase diagram using an effective spin -1 model in Section VI. Various additional details are relegated to Appendices A to C.
## II Model Hamiltonian and phase diagram
### Two presentations of the model
We study a one-dimensional chain of qubits (spin halves). There are two ways to view the system. The first, shown in the top panel of Fig. 1, is to regard the system as a single chain where the Hamiltonian can be written as an XXZ chain with alternating bond strength and next-nearest-neighbor coupling as follows (the \(S^{z}S^{z}\) coupling constants \(\lambda\) and \(\Delta\) are reversed in sign compared to the usual convention for convenience)
\[H=\sum_{j}\left(1+(-1)^{j}t\right)\left(S^{x}_{j}S^{x}_{j+1}+S^{y}_{j}S^{y}_{j+1}-\lambda S^{z}_{j}S^{z}_{j+1}\right)\\ +J\sum_{j}\left(S^{x}_{j}S^{x}_{j+2}+S^{y}_{j}S^{y}_{j+2}-\Delta S^{z}_{j}S^{z}_{j+2}\right), \tag{1}\]
\(\vec{S}_{j}\) are spin \(\frac{1}{2}\) operators, defined as usual in terms of Pauli matrices: \(\vec{S}_{j}=\frac{1}{2}\vec{\sigma}_{j}\). The model has four parameters: \(\{J,\Delta,\lambda\}\in\mathbb{R}\) and \(t\in[-1,1]\). We will be interested in
Figure 1: Schematic representation of the Hamiltonian in the small-\(J\) limit shown in Eq. (1) (top) and the large-\(J\) limit shown in Eq. (2) (bottom). The solid and broken lines represent the various two-spin interaction terms.
two-dimensional phase diagrams varying \(\lambda\) and \(t\) with \(J\) and \(\Delta\) fixed. The representation in Eq. (1) is appropriate in the limit of small \(J\), when the next-nearest-neighbor (nnn) term can be regarded as a perturbation of the bond-dimerized XXZ spin chain. The phase diagram in this limit is well known [23; 24], and is schematically shown in Fig. 2. We are interested in the gapless Luttinger liquid phase labeled XY\({}_{0}\), which can be adiabatically connected to the one found in the phase diagram of the XXZ model (i.e. \(1/\sqrt{2}<\lambda<1\) for \(t=J=0\)).
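As a concrete check of the conventions (in particular the reversed signs of \(\lambda\) and \(\Delta\)), a minimal exact-diagonalization sketch of Eq. (1) on a small open chain is given below; the parameter values and the 0-based indexing of the bond alternation are illustrative choices, and the check of the \(U(1)\) symmetry simply verifies \([H,S^{z}_{\rm tot}]=0\).

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(o, j, N):
    """Embed the single-site operator o at site j of an N-site chain."""
    mats = [I2] * N
    mats[j] = o
    return reduce(np.kron, mats)

def hamiltonian(N, t, lam, J, Delta):
    """Eq. (1) on an open chain of N spins (note the minus signs on lam, Delta)."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N - 1):  # alternating nearest-neighbour bonds
        w = 1 + (-1) ** j * t
        H += w * (op(sx, j, N) @ op(sx, j + 1, N)
                  + op(sy, j, N) @ op(sy, j + 1, N)
                  - lam * op(sz, j, N) @ op(sz, j + 1, N))
    for j in range(N - 2):  # next-nearest-neighbour coupling
        H += J * (op(sx, j, N) @ op(sx, j + 2, N)
                  + op(sy, j, N) @ op(sy, j + 2, N)
                  - Delta * op(sz, j, N) @ op(sz, j + 2, N))
    return H

N = 8
H = hamiltonian(N, t=0.2, lam=0.8, J=0.1, Delta=1.0)
Sz_tot = sum(op(sz, j, N) for j in range(N))
print("ground-state energy:", np.linalg.eigvalsh(H)[0])
print("[H, Sz_tot] = 0:", np.allclose(H @ Sz_tot - Sz_tot @ H, 0))
```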
For large \(J\), the Hamiltonian is appropriately visualized as a two-rung spin ladder as shown in the bottom panel of Fig. 1, with the following presentation:
\[H =H_{1}+H_{2}+H_{\perp}+H^{\prime}_{\perp},\text{ where,} \tag{2}\] \[H_{\alpha} =J\sum_{j}\left(S^{x}_{\alpha j}S^{x}_{\alpha j+1}+S^{y}_{\alpha j }S^{y}_{\alpha j+1}-\Delta S^{z}_{\alpha j}S^{z}_{\alpha j+1}\right),\] \[H_{\perp} =(1-t)\sum_{j}\left(S^{x}_{1j}S^{x}_{2j}+S^{y}_{1j}S^{y}_{2j}- \lambda S^{z}_{1j}S^{z}_{2j}\right),\] \[H^{\prime}_{\perp} =(1+t)\sum_{j}\left(S^{x}_{2j}S^{x}_{1j+1}+S^{y}_{2j}S^{y}_{1j+1}- \lambda S^{z}_{2j}S^{z}_{1j+1}\right).\]
\(\alpha=1,2\) labels the rungs of the ladder, which contain, respectively, the even and odd lattice spins of Eq. (1). \(H_{\alpha}\) represents the intra-rung couplings, and \(H_{\perp},\ H^{\prime}_{\perp}\) represent the inter-rung XXZ couplings. In this limit, it is appropriate to treat \(H_{\perp}\) and \(H^{\prime}_{\perp}\) as perturbations to \(H_{\alpha}\). The schematic phase diagram we find in this limit is shown in Fig. 2. Our prime interest in this phase diagram is the four Luttinger liquid phases labelled XY\({}_{1}\), XY\({}_{1}^{*}\), XY\({}_{2}\) and XY\({}_{2}^{*}\). We will show that all five gapless phases found in the large- and small-\(J\) phase diagrams are distinct from each other, meaning they cannot be connected without encountering a phase transition. Furthermore, we will also show that one of these, XY\({}_{2}^{*}\), is a topological Luttinger liquid containing stable edge modes [14; 18; 19]. A positive finite \(\Delta\) introduces intra-chain ferromagnetic correlations, which is crucial for opening up the various gapless phases, as will be discussed in detail.
Parts of the large-\(J\) phase diagram have appeared in previous studies [25; 26; 27; 28; 29; 30; 31; 32]. However, the complete set of gapless phases, their symmetry distinction and topological properties have not been identified to the best of our knowledge. This will be the focus of our work. We will understand these (a) using bosonization in Sections III and IV, (b)numerically, using density matrix renormalization group (DMRG) in Section V and (c) by mapping Eq. (2) to effective spin-1 models in Section VI.
### Symmetries
Global symmetries of the system will play an important role. Four symmetries are sufficient to characterize all phases and transitions: (i) an on-site \(U(1)\) symmetry that corresponds to spin rotations, (ii) on-site \(\mathbb{Z}_{2}^{R}\) spin reflections, (iii) a \(\mathbb{Z}_{2}^{P}\) lattice symmetry that corresponds to a bond-centred reflection in the small-\(J\) version and a site-centred reflection followed by layer exchange in the large-\(J\) version, and (iv) \(\mathbb{Z}\) lattice translations. The symmetry action on spin operators is shown in Table 1. Altogether, the full symmetry group is [33] \(G\cong O(2)\times\mathbb{Z}_{2}^{P}\times\mathbb{Z}\). Additional symmetries are present in the model (eg: time-reversal) but are not needed for our purposes. In other words, they can be explicitly broken without changing the nature of the phase diagram.
### Phases and transitions
The main focus of our work is the five symmetry-enriched Luttinger liquid phases XY\({}_{0}\), XY\({}_{1,2}\) and XY\({}_{1,2}^{*}\)
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Symmetry** & **Small \(J\)** & **Large \(J\)** \\ \hline \hline \multirow{2}{*}{\(U(1)\) spin rotations} & \(S^{\pm}_{j}\mapsto e^{\pm i\chi}S^{\pm}_{j}\) & \(S^{\pm}_{\alpha j}\mapsto e^{\pm i\chi}S^{\pm}_{\alpha j}\) \\ & \(S^{z}_{j}\mapsto S^{z}_{j}\) & \(S^{z}_{\alpha j}\mapsto S^{z}_{\alpha j}\) \\ \hline \multirow{2}{*}{\(\mathbb{Z}_{2}^{R}\) spin reflection} & \(S^{z}_{j}\mapsto-S^{z}_{j}\) & \(S^{z}_{\alpha j}\mapsto-S^{z}_{\alpha j}\) \\ & \(S^{\pm}_{j}\mapsto-S^{\mp}_{j}\) & \(S^{\pm}_{\alpha j}\mapsto-S^{\mp}_{\alpha j}\) \\ \hline \(\mathbb{Z}_{2}^{P}\) lattice parity & \(\vec{S}_{j}\mapsto\vec{S}_{-j+1}\) & \(\vec{S}_{1,j}\leftrightarrow\vec{S}_{2,-j}\) \\ \hline \(\mathbb{Z}\) lattice translation & \(\vec{S}_{j}\mapsto\vec{S}_{j+2}\) & \(\vec{S}_{\alpha,j}\mapsto\vec{S}_{\alpha,j+1}\) \\ \hline \end{tabular}
Table 1: Symmetries of the model in both the small \(J\) and large \(J\) representations of local operators of the Hamiltonian shown in Eqs. (1) and (2).
Figure 2: The schematic phase diagram for the small-\(J\) Hamiltonian shown in Eq. (1) (top) and the large-\(J\) Hamiltonian shown in Eq. (2) (bottom). Continuous lines indicate second-order phase transitions and broken lines indicate first-order transitions. Cartoon ground states are shown for the gapped phases.
shown in Fig. 2. At long distances, all five of these are described by a compact boson conformal field theory with central charge \(c=1\). However, the presence of global symmetries results in distinctions between them. The microscopic symmetries shown in Section II.2 are imprinted on the long-wavelength degrees of freedom in different ways in each of the five phases, and as a consequence, they cannot be connected without encountering a phase transition or an intermediate phase. Conversely, the distinction between the phases can be eliminated, and they can be connected, by explicitly breaking appropriate symmetries. This will be explained in detail using bosonization analysis in Section III.
More operationally, we will show that the distinction between these phases can be demonstrated using appropriate local and string operators. While XY\({}_{0}\), XY\({}_{1}\) and XY\({}_{1}^{*}\) can be distinguished by local operators only, XY\({}_{2}\) is distinguished from XY\({}_{2}^{*}\) using string operators. This is comparable to the situation with gapped phases, where symmetry protected topological (SPT) phases [34] are distinguished by string operators. The phase diagrams shown in Fig. 2 contain a non-trivial SPT phase, the Haldane phase, which is distinguished from the trivial paramagnet using an appropriate string operator. We will see that the same string operator can be used to distinguish between XY\({}_{2}\) and XY\({}_{2}^{*}\). Furthermore, like the Haldane phase, the XY\({}_{2}^{*}\) phase will also contain protected edge modes but with reduced degeneracy. This will also be explained in Section IV using bosonization and confirmed numerically in Section V.
We are also interested in the phase transitions between the gapless phases shown in Fig. 2. These are summarized below along with the universality class.
* XY\({}_{1}\) to XY\({}_{1}^{*}\): c = 2 theory of two compact bosons.
* XY\({}_{1}\) to XY\({}_{2}\) and XY\({}_{1}^{*}\) to XY\({}_{2}^{*}\): c\(=\frac{3}{2}\) theory of a \(c=1\) compact boson CFT combined with a \(c=\frac{1}{2}\) Ising CFT.
The second-order transitions out of the gapless phases to either the Haldane or trivial phase in Fig. 2 are of the BKT type, occurring when the value of the Luttinger parameter is such that the perturbation that drives the gapped phase becomes relevant. We will also understand these using bosonization in Appendix B and confirm them numerically in Section V.
Finally, the gapped phases present in Fig. 2 (the Haldane SPT, the trivial paramagnet, and the symmetry-breaking Neel and ferromagnetic phases), as well as the transitions between them, are well understood. We mention them for completeness: the Haldane and trivial phases are separated by a compact boson CFT for small \(J\) and by a first-order transition for large \(J\). The Neel phase is separated from the trivial and Haldane phases by an Ising CFT and its symmetry-enriched variant, respectively [18], for both small and large \(J\). Finally, the FM is separated from the Haldane and trivial phases through a first-order transition for small \(J\).
## III Bosonization analysis I: characterizing the gapless phases
In this section, we will study the properties of various gapless phases and transitions between them using abelian bosonization. We begin by reviewing the framework applicable to the parameter regimes for small and large \(J\) and then proceed to understand the various gapless phases in two ways: (i) by using the effective action of microscopic symmetries on the CFT and (ii) the behavior of local and non-local operators carrying appropriate symmetry charges. We delay a thorough analysis of the topological aspects of the XY\({}_{2}^{*}\) phase to Section IV.
### Bosonization formulas for small and large \(J\) and conventional description of phases
For small \(J\), the Hamiltonian shown in Eq. (1), can be treated as a single XXZ spin chain with perturbations. In the regime of our interest, it can be bosonized using standard arguments [35; 20] as follows (see Appendix A for more details)
\[H \approx\frac{v}{2\pi}\int dx\left[\frac{1}{4K}\left(\partial_{x} \phi\right)^{2}+K\left(\partial_{x}\theta\right)^{2}\right]\] \[\quad+2\mathcal{A}\mathcal{C}t\int dx\ \cos\phi-\frac{\mathcal{B}^{2} \lambda}{2}\int dx\ \cos 2\phi+\ldots \tag{3}\]
\(\phi\cong\phi+2\pi\) and \(\theta\cong\theta+2\pi\) are canonically conjugate compact boson fields with unit radii satisfying the algebra [21]
\[\left[\partial_{x}\phi(x),\theta(x^{\prime})\right]=2\pi i\delta(x-x^{\prime}), \tag{4}\]
and \(\mathcal{A},\mathcal{B},\mathcal{C}\) etc are bosonization prefactors whose precise values are not important. The Luttinger parameter \(K\) and the velocity \(v\) are related to the Hamiltonian parameters [36] (see Appendix A). The bosonized forms of the spin operators are
\[S_{j}^{\pm} \approx\exp\left(\pm i\theta(x)\right)\left((-1)^{j}\mathcal{A}\ + \mathcal{C}\cos\phi(x)+\ldots\right),\] \[S_{j}^{z} \approx\frac{1}{2\pi}\partial_{x}\phi(x)+(-1)^{j}\mathcal{B}\sin \phi(x)+\ldots \tag{5}\]
Equation (3) is a compact boson conformal field theory (CFT) with central charge \(c=1\) perturbed by vertex operators \(\mathcal{U}_{p}\equiv\cos p\phi\) with scaling dimensions [37; 21]
\[\left[\mathcal{U}_{p}\right]=\left[\cos p\phi\right]=p^{2}K. \tag{6}\]
Note that we have only shown the most relevant operators, i.e., those with the smallest scaling dimensions, in Eq. (3). The ellipses \(\ldots\) represent other operators that are not important for our purposes. The small-\(J\) phase diagram shown in Fig. 2 can be qualitatively reproduced from Eq. (5) by tracking the relevance [38] of \(\mathcal{U}_{p}\) (see Appendix B for a detailed discussion). Bond dimerization \(t\) introduces the vertex operator \(\mathcal{U}_{1}\) and the interaction
\(S^{z}S^{z}\) while \(\lambda\) introduces \(\mathcal{U}_{2}\). For now, we note that in the regime when \(K>2\), _all_ perturbations are irrelevant, which corresponds to the XY\({}_{0}\) gapless phase.
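To make the bookkeeping behind this statement explicit: a perturbation in this 1+1-dimensional setting is relevant when its scaling dimension is below 2, and the toy snippet below simply evaluates Eq. (6) for the two vertex operators at a few illustrative values of \(K\).

```python
def dim_U(p, K):
    """Scaling dimension of the vertex operator cos(p*phi), Eq. (6)."""
    return p ** 2 * K

for K in (0.4, 1.0, 2.5):
    relevant = [p for p in (1, 2) if dim_U(p, K) < 2]  # relevant if dimension < 2
    print(f"K = {K}: relevant U_p for p = {relevant or 'none'}")
```

For \(K=2.5\) neither \(\mathcal{U}_{1}\) nor \(\mathcal{U}_{2}\) is relevant, matching the statement that all perturbations are irrelevant for \(K>2\).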
A different starting point is useful in the large \(J\) limit. We now interpret the Hamiltonian in Eq. (2) as two XXZ spin chains with intra- and inter-rung perturbations. Each leg can be bosonized appropriately to obtain the following two-component compact-boson theory [35]
\[H \approx\frac{v}{2\pi}\sum_{\alpha=1,2}\int dx\left(\frac{1}{4K}(\partial_{x}\phi_{\alpha})^{2}+K(\partial_{x}\theta_{\alpha})^{2}\right)\] \[\quad-\frac{\lambda}{2\pi^{2}}\int dx\ \partial_{x}\phi_{1}\partial_{x}\phi_{2}-4\mathcal{A}^{2}t\int dx\ \cos\left(\theta_{1}-\theta_{2}\right)\] \[\quad-\mathcal{B}^{2}\lambda t\int dx\ \left(\cos\left(\phi_{1}+\phi_{2}\right)-\cos\left(\phi_{1}-\phi_{2}\right)\right)\] \[\quad+2\mathcal{C}^{2}\int dx\ \cos\left(\theta_{1}-\theta_{2}\right)\cos\left(\phi_{1}+\phi_{2}\right)+\ldots \tag{7}\]
where \(\phi_{\alpha}\cong\phi_{\alpha}+2\pi\) and \(\theta_{\alpha}\cong\theta_{\alpha}+2\pi\) are compact boson fields satisfying
\[[\partial_{x}\phi_{\alpha}(x),\theta_{\beta}(x^{\prime})]=2\pi i\delta_{ \alpha\beta}\delta(x-x^{\prime}), \tag{8}\]
\(\mathcal{A}\) and \(\mathcal{B}\) are again unimportant bosonization prefactors and we have only shown the most important operators. The bosonized forms of the spin operators are
\[S^{\pm}_{\alpha j} \approx\exp\left(\pm i\theta_{\alpha}(x)\right)\left((-1)^{j} \mathcal{A}\ +\mathcal{C}\cos\phi_{\alpha}(x)+\ldots\right)\] \[S^{z}_{\alpha j} \approx\frac{1}{2\pi}\partial_{x}\phi_{\alpha}+(-1)^{j}\mathcal{ B}\sin\phi_{\alpha}+\ldots \tag{9}\]
The above theory represents a \(c=2\) CFT with perturbations. We have only retained primary [37; 21] scaling operators in Eq. (9). This is sufficient to determine the structure of the phases and transitions, which is our focus. However, it is known [39] that descendant operators must be considered to understand certain _incommensurability_ aspects of correlations. The large-\(J\) phase diagram can be qualitatively reproduced using Eq. (9) by carefully tracking the relevance of the operators \(\mathcal{V}_{\pm}\equiv\cos\left(\phi_{1}\pm\phi_{2}\right)\), \(\mathcal{W}_{-}\equiv\cos\left(\theta_{1}-\theta_{2}\right)\) and \(\mathcal{W}_{-}\mathcal{V}_{+}\equiv\cos\left(\theta_{1}-\theta_{2}\right)\cos\left(\phi_{1}+\phi_{2}\right)\) (details of this can be found in Appendix B). Here, we again focus only on how the four gapless phases can emerge. An important fact is that the scaling dimensions of the various operators listed above are not all independent. In particular, we have \([\mathcal{V}_{-}]=([\mathcal{W}_{-}])^{-1}\). Therefore, it is impossible for both \(\mathcal{V}_{-}\) and \(\mathcal{W}_{-}\) to be irrelevant at the same time, and for any \(t\neq 0\), the \(c=2\) theory is unstable and flows to a Luttinger liquid phase with \(c=1\) or to a gapped phase [26; 27; 35], as seen in Fig. 2. The first, which is of our main interest, occurs when all other operators, especially \(\mathcal{V}_{+}\), are irrelevant. The nature of the resulting gapless phase depends on: (i) which among \([\mathcal{V}_{-}]\) and \([\mathcal{W}_{-}]\) has the smaller scaling dimension at \(t=0\); this dominates the long-distance physics for \(t\neq 0\), resulting in the pinning of either \(\langle\phi_{1}-\phi_{2}\rangle\) or \(\langle\theta_{1}-\theta_{2}\rangle\); and (ii) the value \(0/\pi\) to which \(\langle\phi_{1}-\phi_{2}\rangle\) or \(\langle\theta_{1}-\theta_{2}\rangle\) is pinned, depending on the sign of \(t\) and \(\lambda\). The four possibilities result in the four gapless phases shown in the large-\(J\) phase diagram of Fig. 2 as follows.
1. XY\({}_{1}\): \([\mathcal{V}_{-}]>[\mathcal{W}_{-}]\), \(\langle\theta_{1}-\theta_{2}\rangle=\pi\)
2. XY\({}_{1}^{*}\): \([\mathcal{V}_{-}]>[\mathcal{W}_{-}]\), \(\langle\theta_{1}-\theta_{2}\rangle=0\)
3. XY\({}_{2}\): \([\mathcal{V}_{-}]<[\mathcal{W}_{-}]\), \(\langle\phi_{1}-\phi_{2}\rangle=0\)
4. XY\({}_{2}^{*}\): \([\mathcal{V}_{-}]<[\mathcal{W}_{-}]\), \(\langle\phi_{1}-\phi_{2}\rangle=\pi\).
There are two critical scenarios, which we now discuss. When \([\mathcal{V}_{-}]=[\mathcal{W}_{-}]\), the theory flows to a \(c=\frac{3}{2}\) theory corresponding to a compact boson combined with an Ising CFT [40]. For \([\mathcal{V}_{-}]\neq[\mathcal{W}_{-}]\), \(t=0\) corresponds to a phase transition described by the parent \(c=2\) two-component compact boson theory, across which the pinned value of the appropriate field changes. At this stage, let us point out elements of the discussion above that already exist in the literature. The competition between \([\mathcal{V}_{-}]\) and \([\mathcal{W}_{-}]\) leading to different phases was discussed in refs. [27; 35; 40]. The importance of the precise values to which the fields are pinned was appreciated relatively recently [14; 18; 19], where it was shown that \(\langle\phi_{1}-\phi_{2}\rangle=\pi\) produces a gapless phase with edge modes.
However, we must be careful in using these pieces of information to conclude that we have distinct phases of matter. It was recently pointed out that this kind of distinction can disappear suddenly [41; 22; 23]. A more robust characterization arises out of symmetry considerations which we now turn to. We do this in two complementary ways. First, we establish the fate of the microscopic symmetries shown in Tables 1 and 2 in the deep IR for each of the gapless phases. The effective theory for all of them is that of a single compact boson. We show that in each of the five phases, the microscopic symmetries act in inequivalent ways that cannot be deformed into each other. Second, we study how appropriately charged local and nonlocal operators behave in the different phases and show a complete list of operators with distinct charges that can serve as order parameters to distinguish the different gapless phases. Our work therefore characterizes a rather subtle interplay of symmetries and topology leading to the emergence of novel gapless phases.
### Multiversality along the \(t=0\) surface
An interesting feature of the phase diagrams shown in Fig. 2 is the nature of the transition separating the Haldane and trivial phases along parts of the \(t=0\) surface. In the small-\(J\) limit, we see from Eq. (3) that the critical theory corresponds to a compact boson CFT with central charge \(c=1\). In the large-\(J\) diagram, the situation is different. Consider the effective theory in Eq. (7) and set \(t=0\). This is a \(c=2\) CFT with perturbations and describes various transitions and phases along the \(t=0\) surface. In particular, the transition between XY\({}_{1}\) and XY\({}_{1}^{*}\) corresponds to the \(c=2\) theory when all perturbations are irrelevant or tuned away. As we move along this surface, the operator \(\mathcal{W}_{-}\mathcal{V}_{+}\) becomes relevant and
gives us a gapped theory with two ground states which precisely correspond to those of the Haldane and trivial phases and therefore represent a first-order transition between them (see Appendix B for a detailed discussion). Now consider the transition between XY\({}_{1}\) and the trivial phase. This is driven by the operator \(\mathcal{V}_{+}\) becoming relevant. Since \(\mathcal{V}_{+}\) has a smaller scaling dimension than \(\mathcal{W}_{-}\mathcal{V}_{+}\), the XX\({}_{1}\) -to- trivial \(c=1\) critical line strikes the \(t=0\) line well before the first-order transition sets in. The same is true for the XY\({}_{1}^{*}\) -to- Haldane transition. Consequently, we expect that a segment of the transition (close to the gapless phases) between the Haldane and the trivial phase will also be described by the \(c=2\) CFT before becoming first-order as shown in Fig. 2. This situation is unusual because it is a different universality class (with a different central charge) compared to the small-\(J\) transition between the same phases. Furthermore, in both cases, the transitions are reached by tuning only a single parameter, without additional fine-tuning.
The presence of multiple stable universality classes that separate the same two phases has been termed 'multiversality' [22; 23]. Although there are no physical reasons forbidding multiversality, models that exhibit it are surprisingly rare. The spin-ladder model considered in this work exhibits the phenomenon under relatively generic conditions and symmetries (compare this to the example in Ref. [23], where multiversality was observed under more restrictive symmetries and destroyed when the symmetries were reduced).
### Distinguishing gapless phases through effective symmetries
We begin this subsection by recording the action of the symmetries of Table 1 on the compact boson fields in both the small- and large-\(J\) limits [42; 35]. This is shown in Table 2 and is obtained by comparing the action on the lattice operators shown in Table 1 with the dictionaries in Eqs. (5) and (9) (see Appendix A for more details). We want to understand the fate of these symmetries in the various gapless phases. The long-wavelength physics of each of these gapless phases is identical and corresponds to that of a single compact boson with a Hamiltonian of the form
\[H=\frac{v_{\rm eff}}{2\pi}\int dx\left[\frac{1}{4K_{\rm eff}}\left(\partial_{x} \phi\right)^{2}+K_{\rm eff}\left(\partial_{x}\theta\right)^{2}\right]. \tag{10}\]
How do the microscopic symmetries act on the long wavelength effective fields? Observe that the compact boson theory itself has various symmetries such as
\[U(1)_{\theta}:\theta \mapsto\theta+\chi,\ \mathbb{Z}_{2}^{\theta}:\theta\mapsto-\theta,\] \[U(1)_{\phi}:\phi \mapsto\phi+\xi,\ \mathbb{Z}_{2}^{\phi}:\phi\mapsto-\phi, \tag{11}\]
which form the group \(G_{IR}\cong O(2)_{\theta}\times O(2)_{\phi}\)[43].
The action of symmetries can also be studied in the spectrum of local scaling operators \(\mathcal{X}_{m,n}\equiv\exp\left(i\left(m\theta+n\phi\right)\right)\) with scaling dimensions \([\mathcal{X}_{m,n}]=m^{2}K_{\rm eff}+\frac{n^{2}}{4K_{\rm eff}}\) where \(m\) and \(n\) are integers. These read as follows
\[U(1)_{\theta}:\mathcal{X}_{m,n} \mapsto e^{im\chi}\mathcal{X}_{m,n},\ \ \mathbb{Z}_{2}^{\theta}:\mathcal{X}_{m,n}\mapsto\mathcal{X}_{-m,n},\] \[U(1)_{\phi}:\mathcal{X}_{m,n} \mapsto e^{in\xi}\mathcal{X}_{m,n},\ \ \ \mathbb{Z}_{2}^{\phi}:\mathcal{X}_{m,n}\mapsto\mathcal{X}_{m,-n}. \tag{12}\]
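For bookkeeping purposes, the scaling dimensions and discrete charge assignments above admit a direct transcription; a minimal Python sketch (the function names are ours):

```python
def scaling_dimension(m, n, K_eff):
    """[X_{m,n}] = m^2 K_eff + n^2/(4 K_eff) for X_{m,n} = exp(i(m theta + n phi))."""
    return m**2 * K_eff + n**2 / (4.0 * K_eff)

def z2_image(m, n, which):
    """Charge labels (m, n) after the Z2 actions of Eq. (12); the U(1)_theta and
    U(1)_phi actions multiply X_{m,n} by the phases e^{i m chi} and e^{i n xi}."""
    return {"theta": (-m, n), "phi": (m, -n)}[which]
```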
The question we are interested in is how the microscopic symmetries of the spins, \(G_{UV}\), listed in Table 1 attach themselves to those of the compact boson degrees of freedom, \(G_{IR}\). In other words, we are interested in the _homomorphisms_ \(G_{UV}\to G_{IR}\). Distinct homomorphisms lead to inequivalent symmetry-enriched Luttinger liquids that cannot be adiabatically connected. We now determine the homomorphism for each phase one by one to confirm this.
#### iii.3.1 Effective symmetries of XY\({}_{0}\)
Let us begin with the gapless phase seen in the small-\(J\) limit, XY\({}_{0}\). The effective action of the symmetries was already obtained using the bosonization formulas, as listed in Table 2. This can also be used to determine the action on the various scaling operators, as shown in Table 3. We see that the microscopic \(U(1)\) attaches itself to \(U(1)_{\theta}\), \(\mathbb{Z}_{2}^{R}\) to a simultaneous action of \(\mathbb{Z}_{2}^{\theta}\) and \(\mathbb{Z}_{2}^{\phi}\), and \(\mathbb{Z}_{2}^{P}\) to a composite action of a simultaneous \(\pi\) rotation of \(U(1)_{\theta}\) and a \(\mathbb{Z}_{2}^{\phi}\) action, while the UV lattice translations \(\mathbb{Z}\) have no effect in the IR.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Small \(J\)** & **Large \(J\)** \\ \hline \hline \(U(1)\) & \(\theta(x)\mapsto\theta(x)+\chi\) & \(\theta_{\alpha}(x)\mapsto\theta_{\alpha}(x)+\chi\) \\ & \(\phi(x)\mapsto\phi(x)\) & \(\phi_{\alpha}(x)\mapsto\phi_{\alpha}(x)\) \\ \hline \(\mathbb{Z}_{2}^{R}\) & \(\theta(x)\mapsto-\theta(x)\) & \(\theta_{\alpha}(x)\mapsto-\theta_{\alpha}(x)\) \\ & \(\phi(x)\mapsto-\phi(x)\) & \(\phi_{\alpha}(x)\mapsto-\phi_{\alpha}(x)\) \\ \hline \(\mathbb{Z}_{2}^{P}\) & \(\theta(x)\mapsto\theta(-x)+\pi\) & \(\theta_{\alpha}(x)\mapsto\tau_{\alpha\beta}^{x}\theta_{\beta}(-x)\) \\ & \(\phi(x)\mapsto-\phi(-x)\) & \(\phi_{\alpha}(x)\mapsto\pi-\tau_{\alpha\beta}^{x}\phi_{\beta}(-x)\) \\ \hline \(\mathbb{Z}\) & \(\theta(x)\mapsto\theta(x)\) & \(\theta_{\alpha}(x)\mapsto\theta_{\alpha}(x)+\pi\) \\ & \(\phi(x)\mapsto\phi(x)\) & \(\phi_{\alpha}(x)\mapsto\phi_{\alpha}(x)+\pi\) \\ \hline \end{tabular}
\end{table}
Table 2: Representation of symmetries (see Table 1) on the boson fields applicable in the small \(J\) and large \(J\) limits of the Hamiltonian shown in Eqs. (1) and (2). \(\tau^{x}\) is the Pauli X matrix.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(U(1)\) & \(\mathbb{Z}_{2}^{R}\) & \(\mathbb{Z}\) & \(\mathbb{Z}_{2}^{P}\) \\ \hline \hline \(\begin{pmatrix}\theta(x)+\chi\\ \phi(x)\end{pmatrix}\) & \(\begin{pmatrix}-\theta(x)\\ -\phi(x)\end{pmatrix}\) & \(\begin{pmatrix}\theta(x)\\ \phi(x)\end{pmatrix}\) & \(\begin{pmatrix}\pi+\theta(-x)\\ -\phi(-x)\end{pmatrix}\) \\ \hline \(e^{im\chi}\,\mathcal{X}_{m,n}(x)\) & \(\mathcal{X}_{-m,-n}(x)\) & \(\mathcal{X}_{m,n}(x)\) & \(e^{im\pi}\,\mathcal{X}_{m,-n}(-x)\) \\ \hline \end{tabular}
\end{table}
Table 3: Effective action of symmetries in the XY\({}_{0}\) phase.
#### iii.3.2 Effective symmetries of XY\({}_{1}\) and XY\({}_{1}^{*}\)
We now consider the gapless phases in the large-\(J\) limit obtained when \(\mathcal{W}_{-}\equiv\cos\left(\theta_{1}-\theta_{2}\right)\) dominates at long distances pinning \(\vartheta\equiv\theta_{1}-\theta_{2}\). To determine the nature of the resulting compact boson CFT the system flows to, we perform the following \(SL(2,\mathbb{Z})\) transformation which preserves the unit compactification radius of the fields as well as the canonical commutation relation Eq. (8)
\[\begin{pmatrix}\vartheta\\ \theta\end{pmatrix}\equiv\begin{pmatrix}\theta_{1}-\theta_{2}\\ \theta_{2}\end{pmatrix},\quad\begin{pmatrix}\varphi\\ \phi\end{pmatrix}\equiv\begin{pmatrix}\phi_{1}\\ \phi_{1}+\phi_{2}\end{pmatrix}. \tag{13}\]
When \(\vartheta\equiv\theta_{1}-\theta_{2}\) is pinned, its conjugate \(\varphi\) is disordered and we obtain physics at long distances by setting
\[e^{im\vartheta}\approx\langle e^{im\vartheta}\rangle\approx e^{im\langle \vartheta\rangle}\text{ and }e^{in\varphi}\approx\langle e^{in\varphi}\rangle\approx 0. \tag{14}\]
The effective theory is simply that of the unpinned canonically conjugate pair of fields, \(\theta=\theta_{2}\) and \(\phi=\phi_{1}+\phi_{2}\) with a Hamiltonian of the form shown in Eq. (10).
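Such field redefinitions are easy to check mechanically. The following minimal Python sketch (our own naming; it assumes, as is standard for canonically paired fields, that preserving the commutators of Eq. (8) requires the \(\phi\)-sector matrix to be the inverse transpose of the \(\theta\)-sector matrix) verifies that the transformations in Eqs. (13) and (15) are in \(SL(2,\mathbb{Z})\) and preserve the canonical pairing:

```python
import numpy as np

# Eq. (13): (vartheta, theta) = A1 (theta1, theta2), (varphi, phi) = B1 (phi1, phi2)
A1 = np.array([[1, -1], [0, 1]])
B1 = np.array([[1, 0], [1, 1]])
# Eq. (15): (vartheta, theta) = A2 (theta1, theta2), (varphi, phi) = B2 (phi1, phi2)
A2 = np.array([[1, 0], [1, 1]])
B2 = np.array([[1, -1], [0, 1]])

for A, B in [(A1, B1), (A2, B2)]:
    # integer entries with unit determinant: SL(2, Z), unit compactification kept
    assert round(np.linalg.det(A)) == 1 and round(np.linalg.det(B)) == 1
    # the canonical pairing is preserved iff B = (A^T)^{-1}, i.e. B A^T = identity
    assert np.allclose(B @ A.T, np.eye(2))
print("Eqs. (13) and (15) are SL(2,Z) and preserve the canonical pairing")
```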
Using Eq. (14) and the action of the symmetries on the compact bosons obtained from bosonization at large \(J\) (Table 2), we can read off the effective symmetry action on \(\theta\) and \(\phi\), as well as on the spectrum of scaling operators, as shown in Table 4. First, compare these with Table 3. We see that the actions of \(U(1)\) and \(\mathbb{Z}_{2}^{R}\) are identical in all three phases. However, the action of \(\mathbb{Z}\) distinguishes \(\text{XY}_{0}\) from the other two. Finally, the symmetry action of \(\mathbb{Z}_{2}^{P}\) depends on the value of \(\langle\vartheta\rangle\) and distinguishes between \(\text{XY}_{1}\) (\(\langle\vartheta\rangle=\pi\)) and \(\text{XY}_{1}^{*}\) (\(\langle\vartheta\rangle=0\)). Observe that the _electric_ scaling operators (i.e., those carrying \(U(1)\) charge) with the smallest scaling dimensions, \(\cos\theta\) and \(\sin\theta\), are both pseudo-scalars in \(\text{XY}_{1}\) and both scalars in \(\text{XY}_{1}^{*}\). Thus, we have succeeded in establishing that \(\text{XY}_{0}\), \(\text{XY}_{1}\) and \(\text{XY}_{1}^{*}\) are distinct from each other.
#### iii.3.3 Effective symmetries of XY\({}_{2}\) and XY\({}_{2}^{*}\)
Finally, we turn to the large-\(J\) gapless phases obtained when \(\mathcal{V}_{-}\) dominates at long distances and pins \(\phi_{1}-\phi_{2}\). To get the effective symmetries of the resulting compact boson CFT the system flows to, we perform a different \(SL(2,\mathbb{Z})\) transformation from Eq. (13)
\[\begin{pmatrix}\vartheta\\ \theta\end{pmatrix}\equiv\begin{pmatrix}\theta_{1}\\ \theta_{1}+\theta_{2}\end{pmatrix},\quad\begin{pmatrix}\varphi\\ \phi\end{pmatrix}\equiv\begin{pmatrix}\phi_{1}-\phi_{2}\\ \phi_{2}\end{pmatrix}. \tag{15}\]
When \(\varphi\equiv\phi_{1}-\phi_{2}\) is pinned, its conjugate \(\vartheta\) is disordered and we obtain the long distance physics by setting
\[e^{im\vartheta}\approx\langle e^{im\vartheta}\rangle\approx 0,\text{ and }e^{in\varphi}\approx\langle e^{in\varphi}\rangle\approx e^{in\langle\varphi\rangle}. \tag{16}\]
The effective theory is simply that of the unpinned fields \(\theta\) and \(\phi\) with an effective Hamiltonian of the form shown in Eq. (10).
Using Eq. (16), the symmetry action on the effective low-energy fields and on the spectrum of operators can be read off from Table 2 and is summarized in Table 5. The most striking feature is that the \(\theta\) field is a charge-2 operator under the \(U(1)\) symmetry. Consequently, the smallest \(U(1)\) charge carried by the spectrum of scaling operators is 2. This immediately shows that \(\text{XY}_{2}\) and \(\text{XY}_{2}^{*}\) are distinct from \(\text{XY}_{0}\), \(\text{XY}_{1}\) and \(\text{XY}_{1}^{*}\). Let us now focus on the effective action of \(\mathbb{Z}_{2}^{P}\), which seemingly depends on the value of \(\langle\varphi\rangle\) and distinguishes \(\text{XY}_{2}\) from \(\text{XY}_{2}^{*}\). This is _not_ true: the symmetry actions are merely related by a change of basis. However, keeping track of other symmetry charges exposes the distinction. Consider the magnetic scaling operators (those without any \(U(1)\) charge) with the smallest scaling dimensions, \(\cos\phi\) and \(\sin\phi\). We see that in \(\text{XY}_{2}\), the operator with \(\mathbb{Z}_{2}^{R}\) charge (\(\sin\phi\)) transforms as a scalar under \(\mathbb{Z}_{2}^{P}\), whereas the operator without \(\mathbb{Z}_{2}^{R}\) charge (\(\cos\phi\)) transforms as a pseudoscalar. This situation is precisely reversed in \(\text{XY}_{2}^{*}\), where the \(\mathbb{Z}_{2}^{R}\)-charged operator is a \(\mathbb{Z}_{2}^{P}\) pseudoscalar and the \(\mathbb{Z}_{2}^{R}\)-neutral operator is a \(\mathbb{Z}_{2}^{P}\) scalar. This completes the proof that the five gapless phases are distinct.
#### iii.3.4 Explicit symmetry breaking
Observe that all four microscopic symmetries were important in establishing these distinctions. Explicitly
breaking certain symmetries eliminates the distinction between certain phases and opens a potential path to connect them without phase transitions or intermediate phases. Let us look at a few instances.
1. If we break \(\mathbb{Z}_{2}^{R}\), the distinction between XY\({}_{2}\) and XY\({}_{2}^{*}\) is eliminated, reducing the five phases to four: XY\({}_{0}\), XY\({}_{1}\), XY\({}_{1}^{*}\) and (XY\({}_{2}=\) XY\({}_{2}^{*}\)).
2. If we break \(\mathbb{Z}_{2}^{P}\), the distinctions between XY\({}_{1}\) and XY\({}_{1}^{*}\) and between XY\({}_{2}\) and XY\({}_{2}^{*}\) are eliminated, reducing the five phases to three: XY\({}_{0}\), (XY\({}_{1}=\) XY\({}_{1}^{*}\)) and (XY\({}_{2}=\) XY\({}_{2}^{*}\)).
3. If we preserve only \(U(1)\) and break all other symmetries, the five phases reduce to two: (XY\({}_{0}=\) XY\({}_{1}=\) XY\({}_{1}^{*}\)) and (XY\({}_{2}=\) XY\({}_{2}^{*}\)).
### Local and non-local observables
We now turn to how we can physically characterize the various gapless phases using local and non-local observables. We will use the previously determined effective symmetry actions listed in Tables 3 to 5 to guide us. We focus on two local operators, \(O_{s}^{\pm}\) and \(O_{a}^{\pm}\), and a non-local string operator \(C(x,y)\), defined as follows (in both the small-\(J\) (\(J_{<}\)) and large-\(J\) (\(J_{>}\)) representations)
\[O_{s}^{\pm}(j) \equiv\begin{cases}S_{2j-1}^{\pm}+S_{2j}^{\pm}&J_{<}\\ S_{1,j}^{\pm}+S_{2,j}^{\pm}&J_{>}\end{cases}, \tag{17}\] \[O_{a}^{\pm}(j) \equiv\begin{cases}S_{2j-1}^{\pm}-S_{2j}^{\pm}&J_{<}\\ S_{1,j}^{\pm}-S_{2,j}^{\pm}&J_{>}\end{cases},\] (18) \[C(j,k) \equiv\begin{cases}\sigma_{2j-1}^{z}\ \left(\prod_{l=2j}^{2k}\sigma_{l}^{z}\right)\ \sigma_{2k+1}^{z}&J_{<}\\ \sigma_{2,j}^{z}\ \left(\prod_{l=j+1}^{k-1}\sigma_{1,l}^{z}\sigma_{2,l}^{z}\right)\ \sigma_{1,k}^{z}&J_{>}\end{cases}. \tag{19}\]
The nature of two-point correlation functions of the local operators and the expectation value of the string operator are summarized in Table 6 and completely characterize the phases. We see in Table 6 that local operators uniquely identify the XY\({}_{0}\), XY\({}_{1}\) and XY\({}_{2}\) phases but cannot distinguish between the XY\({}_{2}\) and XY\({}_{2}^{*}\) phases, which the nonlocal operator can. In this section, we will see how this behavior can be determined using the bosonization formulas as well as using the effective symmetry action shown in Tables 3 to 5. These predictions will also be confirmed numerically in Section V.
#### iii.4.1 Local operator behavior from bosonization
Let us begin with XY\({}_{0}\) where, using Eq. (5) the local operators can be bosonized as
\[O_{s}^{\pm}(x)\sim e^{\pm i\theta(x)}\cos\phi(x),\ O_{a}^{\pm}(x)\sim e^{\pm i \theta(x)}. \tag{20}\]
In Eq. (20), we have suppressed the bosonization prefactors and retained only the most relevant scaling operators with which the lattice operators have an overlap. Clearly, the two-point functions of \(O_{s}^{\pm}\) and \(O_{a}^{\pm}\) are expected to show algebraic decay governed by the parameters of the effective compact-boson CFT that describes the phase at long distances. Recall that for a CFT, the correlation functions of scaling operators \(\mathcal{X}(x)\) with scaling dimension \(\Delta_{\mathcal{X}}\) decay as
\[\langle\mathcal{X}(x)\mathcal{X}^{\dagger}(y)\rangle\sim|x-y|^{-2\Delta_{\mathcal{X}}}. \tag{21}\]
Thus, at long distances \(|x-y|\), we expect
\[|\langle O_{s}^{+}(x)O_{s}^{-}(y)\rangle| \sim|x-y|^{-\left(2K+\frac{1}{2K}\right)},\] \[|\langle O_{a}^{+}(x)O_{a}^{-}(y)\rangle| \sim|x-y|^{-\frac{1}{2K}}. \tag{22}\]
Let us now consider the large-\(J\) phases where, using Eq. (9), we get
\[O_{s}^{\pm}(x) \sim\left(e^{\pm i\theta_{1}(x)}+e^{\pm i\theta_{2}(x)}\right),\] \[O_{a}^{\pm}(x) \sim\left(e^{\pm i\theta_{1}(x)}-e^{\pm i\theta_{2}(x)}\right). \tag{23}\]
We have again suppressed bosonization prefactors and retained only the most relevant scaling operators. When we have the full \(c=2\) theory along the \(t=0\) line shown in Fig. 15, both local operators have algebraic correlations. However, for \(t\neq 0\), when \(\mathcal{V}_{-}\) or \(\mathcal{W}_{-}\) is relevant, resulting in the different gapless phases, this changes. Consider the case where \(\mathcal{W}_{-}\) is the most relevant operator and pins \(\vartheta\equiv\theta_{1}-\theta_{2}\). We can use the \(SL(2,\mathbb{Z})\) transformation shown in Eq. (13) together with Eq. (14) to obtain the following.
\[O_{s}^{\pm}(x) \sim\left(e^{\pm i\theta_{1}}+e^{\pm i\theta_{2}}\right)\approx e^{ \pm i\theta}\left(1+e^{\pm i\langle\vartheta\rangle}\right)\] \[\approx\begin{cases}0&\text{for }\langle\vartheta\rangle=\pi\ (\text{XY} _{1})\\ e^{\pm i\theta}&\text{for }\langle\vartheta\rangle=0\ (\text{XY}_{1}^{*})\end{cases},\] \[O_{a}^{\pm}(x) \sim\left(e^{\pm i\theta_{1}}-e^{\pm i\theta_{2}}\right)\approx e ^{\pm i\theta}\left(1-e^{\pm i\langle\vartheta\rangle}\right)\] \[\approx\begin{cases}e^{\pm i\theta}&\text{for }\langle\vartheta\rangle=\pi\ (\text{XY} _{1})\\ 0&\text{for }\langle\vartheta\rangle=0\ (\text{XY}_{1}^{*})\end{cases}. \tag{24}\]
We see that for each case, \(\langle\vartheta\rangle=\pi\) or \(0\), only one of the two operators \(O_{s}^{\pm}(x)\), \(O_{a}^{\pm}(x)\) has a nonvanishing overlap with scaling operators and hence algebraic correlations, whereas
\begin{table}
\begin{tabular}{|c||c|c|c||c|c|} \hline & XY\({}_{0}\) & XY\({}_{1}\) & XY\({}_{1}^{*}\) & XY\({}_{2}\) & XY\({}_{2}^{*}\) \\ \hline \hline \(\langle O_{s}^{+}(x)\ O_{s}^{-}(y)\rangle\) & alg & exp & alg & exp & exp \\ \hline \(\langle O_{a}^{+}(x)\ O_{a}^{-}(y)\rangle\) & alg & alg & exp & exp & exp \\ \hline \hline \(\langle C(x,y)\rangle\) & 0 & 0 & 0 & 0 & \(\neq 0\) \\ \hline \end{tabular}
\end{table}
Table 6: Local and nonlocal order observables for large \(|x-y|\). We denote algebraic and exponential decay by ‘alg’ and ‘exp’.
the other has exponential correlations:
\[|\langle O^{+}_{s}(x)O^{-}_{s}(y)\rangle|\sim\begin{cases}e^{-\frac{|x-y|}{\xi}}&(\text{XY}_{1})\\ |x-y|^{-\frac{1}{2K_{\text{eff}}}}&(\text{XY}^{*}_{1})\end{cases},\] \[|\langle O^{+}_{a}(x)O^{-}_{a}(y)\rangle|\sim\begin{cases}|x-y|^{-\frac{1}{2K_{\text{eff}}}}&(\text{XY}_{1})\\ e^{-\frac{|x-y|}{\xi}}&(\text{XY}^{*}_{1})\end{cases}. \tag{25}\]
\(K_{\text{eff}}\) is the effective Luttinger parameter of Eq. (10) that characterizes the effective compact boson CFT at long distances, and \(\xi\) is a correlation length. We may wonder whether the calculations above are modified if we include the corrections to the bosonization formulas represented by ellipses in Eq. (9). The answer is no, as can be verified by including all higher terms explicitly. A more powerful argument uses symmetries, as discussed in the next subsection.
We now turn to the phases obtained when \(\mathcal{V}_{-}\) is dominant and pins \(\varphi\equiv\phi_{1}-\phi_{2}\). Using the \(SL(2,\mathbb{Z})\) transformation shown in Eq. (15) as well as Eq. (16), we get
\[O^{\pm}_{s}\sim e^{\pm i\theta_{1}}+e^{\pm i\theta_{2}}\approx \left(\langle e^{\pm i\vartheta}\rangle+e^{\pm i\theta}\ \langle e^{\mp i\vartheta}\rangle\right)\approx 0,\] \[O^{\pm}_{a}\sim e^{\pm i\theta_{1}}-e^{\pm i\theta_{2}}\approx \left(\langle e^{\pm i\vartheta}\rangle-e^{\pm i\theta}\ \langle e^{\mp i\vartheta}\rangle\right)\approx 0. \tag{26}\]
We see that both \(O^{\pm}_{s}(x)\) and \(O^{\pm}_{a}(x)\) have no overlap with any scaling operators, and therefore their correlation functions decay exponentially
\[|\langle O^{+}_{s}(x)O^{-}_{s}(y)\rangle|\sim|\langle O^{+}_{a}(x)O^{-}_{a}(y) \rangle|\sim e^{-\frac{|x-y|}{\xi}}. \tag{27}\]
We can check that this behaviour does not change even when corrections represented by ellipses in Eq. (9) are included. This can also be justified using symmetry arguments as we will now see.
#### iii.4.2 Local operator behaviour from effective symmetry action
The correlations of local operators shown in Table 6 can also be understood directly by using symmetries. Let us begin by noting the transformations of the local operators under the \(U(1)\), \(\mathbb{Z}_{2}^{R}\) and \(\mathbb{Z}_{2}^{P}\) symmetries, shown in Table 7. At this point, let us remark that all local operators are charged under various internal symmetries. The non-local operator, on the other hand, although neutral overall, has end points that carry \(\mathbb{Z}_{2}^{R}\) charge. This is important for establishing the topological nature of the phases and will be discussed in Section IV. Now, we can ask whether the transformations shown in Table 7 can be reproduced in each of the five gapless phases using combinations of the scaling operators \(\mathcal{X}_{mn}(x)\), whose transformations are shown in Tables 3 to 5. If the answer is yes, the local operator will have algebraic correlations at long distances, with the exponent determined by the contributing operators with the smallest scaling dimensions. If not, the operator will have exponentially decaying correlations.
\(XY_{0}\): Comparing the \(U(1)\) transformations shown in Table 7 and Table 3 tells us that \(O^{\pm}_{s}\) and \(O^{\pm}_{a}\) can have overlap with \(\mathcal{X}_{\pm 1,n}\). Comparing the \(\mathbb{Z}_{2}^{P}\) action tells us that the smallest operators that transform correctly are
\[O^{\pm}_{s}(x)\sim\mathcal{X}_{\pm 1,1}+\mathcal{X}_{\pm 1,-1}\sim e ^{\pm i\theta}\cos\phi, \tag{28}\] \[O^{\pm}_{a}(x)\sim\mathcal{X}_{\pm 1,0}\sim e^{\pm i\theta}, \tag{29}\]
which is precisely what was obtained from the bosonization formulas in Eq. (20) and Eq. (22). This combination also transforms correctly under \(\mathbb{Z}_{2}^{R}\).
\(XY_{1}\) _and \(XY_{1}^{*}\)_: Comparing the \(U(1)\) transformations shown in Table 7 and Table 4 again tells us that \(O^{\pm}_{s}\) and \(O^{\pm}_{a}\) can overlap with \(\mathcal{X}_{\pm 1,n}\). It is easy to check that no combination of the scaling operators \(\mathcal{X}_{\pm 1,n}\) can simultaneously reproduce the \(\mathbb{Z}_{2}^{P}\) and \(\mathbb{Z}_{2}^{R}\) transformations of \(O^{\pm}_{s}(x)\) for \(\langle\vartheta\rangle=\pi\) (that is, \(\text{XY}_{1}\)), so its correlations decay exponentially there; by the same argument, the correlations of \(O^{\pm}_{a}(x)\) decay exponentially in \(\text{XY}_{1}^{*}\). On the other hand, \(\mathcal{X}_{\pm 1,0}\sim e^{\pm i\theta}\) has the same transformation properties as \(O^{\pm}_{a}(x)\) for \(\langle\vartheta\rangle=\pi\) (i.e. \(\text{XY}_{1}\)) and as \(O^{\pm}_{s}(x)\) for \(\langle\vartheta\rangle=0\) (i.e. \(\text{XY}_{1}^{*}\)). This reproduces Eqs. (24) and (25).
\(XY_{2}\) _and \(XY_{2}^{*}\)_: The effective \(U(1)\) transformations in Table 5 tell us that all scaling operators have a minimum \(U(1)\) charge of 2 and therefore there are no combinations of scaling operators that have the transformation properties of \(O^{\pm}_{s}\) and \(O^{\pm}_{a}\) and that have a unit \(U(1)\) charge as seen in Table 7. Consequently, the correlations of \(O^{\pm}_{s}\) and \(O^{\pm}_{a}\) have exponential decay in both \(\text{XY}_{2}\) and \(\text{XY}_{2}^{*}\) phases [44]. This reproduces Eqs. (26) and (27).
#### iii.4.3 Behaviour of the non-local operator
We now turn to the nonlocal string operator \(C(x,y)\) defined in Eq. (19) which can be bosonized in both the small-\(J\) (\(J_{<}\)) and large-\(J\) (\(J_{>}\)) limits as follows (see Appendix C for details)
\[C(x,y)\sim C_{L}(x)\ C_{R}(y) \tag{30}\] \[C_{L/R}\approx\begin{cases}\gamma\ \sin\left(\frac{\phi}{2} \right)&(J_{<})\\ \alpha\sin\left(\frac{\phi_{1}+\phi_{2}}{2}\right)+\beta\sin\left(\frac{\phi_{1} -\phi_{2}}{2}\right)&(J_{>})\end{cases} \tag{31}\]
where we have only shown operators with the smallest scaling dimensions and \(\alpha,\beta,\gamma\) are non-zero coefficients whose values we do not fix. It is now easy to verify how \(\langle C(x,y)\rangle\) behaves at large \(|x-y|\). From Eq. (30), we have
\[\langle C(x,y)\rangle\sim\langle C_{L}(x)\rangle\ \langle C_{R}(y)\rangle. \tag{32}\]
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline & \(O^{\pm}_{s}(x)\mapsto\) & \(O^{\pm}_{a}(x)\mapsto\) & \(C(x,y)\mapsto\) \\ \hline \hline \(U(1)\) & \(e^{\pm i\chi}\ O^{\pm}_{s}(x)\) & \(e^{\pm i\chi}\ O^{\pm}_{a}(x)\) & \(C(x,y)\) \\ \hline \(\mathbb{Z}_{2}^{R}\) & \(O^{\mp}_{s}(x)\) & \(O^{\pm}_{a}(x)\) & \(C(x,y)\) \\ \hline \(\mathbb{Z}_{2}^{P}\) & \(O^{\pm}_{s}(-x)\) & \(-O^{\pm}_{a}(-x)\) & \(C(-y,-x)\) \\ \hline \end{tabular}
\end{table}
Table 7: Symmetry transformations of the local and non-local operators defined in Eqs. (17) to (19).
Therefore, when \(\langle C_{L/R}\rangle\neq 0\), we have \(\langle C(x,y)\rangle\neq 0\). Among the phases without spontaneous symmetry breaking, this happens when \(\langle\phi\rangle=\pi\) for small \(J\) and \(\langle\phi_{1}\pm\phi_{2}\rangle=\pi\) for large \(J\). From Figs. 14 and 15, we see that \(\langle C(x,y)\rangle\neq 0\) in the Haldane and XY\({}_{2}^{*}\) phases whereas in the trivial gapped phase and other gapless phases XY\({}_{0}\), XY\({}_{1}\) and XY\({}_{1}^{*}\), \(C(x,y)\to 0\) for sufficiently large \(|x-y|\). This confirms the remaining entries of Table 6.
## IV Bosonization analysis II: the topological nature of XY\({}_{2}^{*}\)
We now focus on the XY\({}_{2}^{*}\) gapless phase and study its topological nature. First, we show that it has protected edge modes and then discuss the nature of the topological phase. In particular, we show that the gapless topological phase is not 'intrinsically gapless' and briefly discuss a related model where it is.
### Edge modes
A hallmark of gapped symmetry protected topological phases such as topological insulators and superconductors is the presence of protected edge modes degenerate with the ground state, which have exponentially small splitting at finite system sizes. Gapless topological phases are defined as those that have edge modes protected by symmetries; these can be sharply identified at finite volumes by an exponential or algebraic splitting with coefficients different from those of bulk states [14; 15; 16; 19; 20; 21].
### Why XY\({}_{2}^{*}\) is _not_ an intrinsically gapless topological phase?
In the taxonomy of gapless topological phases [14; 15; 16; 17; 19], a special role is played by so-called intrinsically gapless topological phases [19; 45; 46]. These are gapless phases with stable, symmetry-protected edge modes for symmetries that do not allow any gapped topological phases; in this sense, their topological nature is intrinsically gapless. Phase diagrams in which intrinsically gapless topological phases can be found cannot, by definition, contain gapped topological phases. Therefore, the phase diagrams shown in Fig. 2, which contain the Haldane phase, a gapped topological phase, make it clear that the XY\({}_{2}^{*}\) phase is not intrinsically gapless. This is because the symmetries of the model, \(G\cong O(2)\times\mathbb{Z}_{2}^{P}\times\mathbb{Z}\), protect both gapless and gapped topological phases. We can ask whether we can break certain symmetries so as to preserve only the gapless topological phase while eliminating the gapped one. We now show, using bosonization, that this too is not possible.
Let us focus on the large-\(J\) limit where XY\({}_{2}^{*}\) is present. From Fig. 15, we see that the gapped Haldane phase is obtained when \(\langle\phi_{1}+\phi_{2}\rangle=\pi\), whereas XY\({}_{2}^{*}\) is obtained when \(\langle\phi_{1}-\phi_{2}\rangle=\pi\). Let us consider the possibility of eliminating the gapped Haldane phase while preserving XY\({}_{2}^{*}\) by adding an operator that allows \(\langle\phi_{1}+\phi_{2}\rangle\) to be tuned smoothly to zero, while \(\langle\phi_{1}-\phi_{2}\rangle\) can still only be pinned to \(0\) or \(\pi\). The operator that achieves this is
\[\delta H\sim\int dx\sin\left(\phi_{1}+\phi_{2}\right). \tag{42}\]
However, note that the addition of Eq. (42) simultaneously breaks both \(\mathbb{Z}_{2}^{R}\) and \(\mathbb{Z}_{2}^{P}\) symmetries. Therefore, any lattice operator that produces Eq. (42) also generically produces an operator of the form
\[\delta H^{\prime}\sim\int dx\sin\left(\phi_{1}-\phi_{2}\right). \tag{43}\]
which smoothly tunes the pinned value of \(\langle\phi_{1}-\phi_{2}\rangle\) to zero and therefore eliminates XY\({}_{2}^{*}\) [47].
### A related model where XY\({}_{2}^{*}\)_is_ an intrinsically gapless topological phase
We now present a model where XY\({}_{2}^{*}\) is an intrinsically gapless topological phase. We work in the large-\(J\) limit and modify the Hamiltonian in Eq. (2) as follows
\[H =H_{1}+H_{2}+H_{\perp}+H_{\perp}^{\prime}+H_{\perp}^{\prime\prime},\text{ where}, \tag{44}\] \[H_{\alpha} =J\sum_{j}\left(S_{\alpha j}^{x}S_{\alpha j+1}^{x}+S_{\alpha j}^{ y}S_{\alpha j+1}^{y}-\Delta S_{\alpha j}^{z}S_{\alpha j+1}^{z}\right),\] \[H_{\perp} =(1-t)\sum_{j}\left(S_{1j}^{x}S_{2j}^{x}+S_{1j}^{y}S_{2j}^{y}- \lambda S_{1j}^{z}S_{2j}^{z}\right),\] \[H_{\perp}^{\prime} =\frac{(1+t)}{2}\sum_{j}\left(S_{2j}^{x}S_{1j+1}^{x}+S_{2j}^{y}S_ {1j+1}^{y}-\lambda S_{2j}^{z}S_{1j+1}^{z}\right),\] \[H_{\perp}^{\prime\prime} =\frac{(1+t)}{2}\sum_{j}\left(S_{1j}^{x}S_{2j+1}^{x}+S_{1j}^{y}S_ {2j+1}^{y}-\lambda S_{1j}^{z}S_{2j+1}^{z}\right).\]
The new term \(H_{\perp}^{\prime\prime}\) preserves all the original symmetries shown in Table 1 but, importantly, introduces a new on-site symmetry that exchanges the two legs. Its action on the spin operators and the large-\(J\) bosonized variables is as follows.
\[\mathbb{Z}_{2}^{L}:\ \vec{S}_{1,j}\leftrightarrow\vec{S}_{2,j},\ \phi_{1}\leftrightarrow\phi_{2},\ \theta_{1}\leftrightarrow\theta_{2}. \tag{45}\]
Remarkably, the bosonized version of Eq. (44) is identical to Eq. (9) and therefore should contain the same phases although in different parameter regimes. Let us now consider including lattice operators that explicitly break the \(\mathbb{Z}_{2}^{R}\) and \(\mathbb{Z}_{2}^{P}\) symmetries but preserve the new \(\mathbb{Z}_{2}^{L}\) symmetry shown in Eq. (45). In the continuum limit, this introduces only the perturbation shown in Eq. (42) but not Eq. (43) since the latter breaks \(\mathbb{Z}_{2}^{L}\). As explained above, this eliminates the Haldane phase. The equivalent of XY\({}_{2}^{*}\) phase in this model is an intrinsically gapless topological phase. Indeed, the residual on-site unitary symmetry \(U(1)\times\mathbb{Z}_{2}^{L}\) is known to not host any gapped symmetry protected topological phases in one dimension [48]. We leave the numerical study of the model in Eq. (44) to future work.
## V Numerical analysis
In this section, we numerically analyze the system at hand and validate the analytical predictions made above. We map the spin system to hard-core bosons, where the on-site occupancy is restricted to \(n=0\) or \(1\). The Hamiltonian in terms of hard-core bosons (see Eq. (2)) becomes
\[H_{\alpha}= J\Big{[}\sum_{j}\frac{1}{2}\left(b_{\alpha,j}^{\dagger}b_{\alpha,j+ 1}+\text{H.c.}\right)\] \[-\Delta\left(\tilde{n}_{\alpha,j}\tilde{n}_{\alpha,j+1}\right) \Big{]},\ \ \alpha=1,2\] \[H_{\perp}= (1-t)\Big{[}\sum_{j}\frac{1}{2}\left(b_{1,j}^{\dagger}b_{2,j}+ \text{H.c.}\right)-\lambda\left(\tilde{n}_{1,j}\tilde{n}_{2,j}\right)\Big{]}\] \[H_{\perp}^{{}^{\prime}}= (1+t)\Big{[}\sum_{j}\frac{1}{2}\left(b_{2,j}^{\dagger}b_{1,j+1} +\text{H.c.}\right)-\lambda\left(\tilde{n}_{2,j}\tilde{n}_{1,j+1}\right)\Big{]} \tag{46}\]
Figure 3: Schematic representation of the Hamiltonian in Eq. (44) which can host an intrinsically gapless SPT.
where \(b_{j}\) (\(b_{j}^{\dagger}\)) are the annihilation (creation) operators and \(\tilde{n}_{j}=\left(n_{j}-\frac{1}{2}\right)\), with \(n_{j}\) the number operator for site \(j\). The ground state of the model Hamiltonian is computed using the Density Matrix Renormalization Group (DMRG) method [49; 50; 51]. The bond dimension is taken to be \(\sim 500\), which is sufficient for convergence for typical system sizes \(L=200\), where \(L\) is the total number of sites in the system. Unless otherwise stated, sites are labeled using the single-site labeling convention of Eq. (1).
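For small systems, the Hamiltonian of Eq. (46) can be cross-checked by exact diagonalization. The following self-contained Python sketch (our own helper names and site-labeling convention; open boundary conditions; this is not the DMRG code used for the results below) builds the hard-core boson ladder and computes its ground-state energy:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Single-site hard-core boson operators in the occupation basis (|0>, |1>)
b = sp.csr_matrix(np.array([[0.0, 1.0], [0.0, 0.0]]))  # annihilation
bd = b.T.tocsr()                                       # creation
num = bd @ b                                           # number operator n
ntil = num - 0.5 * sp.identity(2, format="csr")        # n~ = n - 1/2
I2 = sp.identity(2, format="csr")

def op_at(op, site, nsites):
    """Embed a single-site operator at `site` in a chain of `nsites` sites."""
    out = sp.identity(1, format="csr")
    for s in range(nsites):
        out = sp.kron(out, op if s == site else I2, format="csr")
    return out

def hop(sa, sb, nsites):
    """0.5 * (b_sa^dag b_sb + h.c.) in the full Hilbert space."""
    term = op_at(bd, sa, nsites) @ op_at(b, sb, nsites)
    return 0.5 * (term + term.T)

def ising(sa, sb, nsites):
    """ntil_sa * ntil_sb in the full Hilbert space."""
    return op_at(ntil, sa, nsites) @ op_at(ntil, sb, nsites)

def ladder_hamiltonian(Lr, J, Delta, lam, t):
    """Eq. (46) on Lr rungs; site index = 2*j + (alpha - 1) for leg alpha, rung j."""
    ns = 2 * Lr
    idx = lambda alpha, j: 2 * j + (alpha - 1)
    H = sp.csr_matrix((2**ns, 2**ns))
    for j in range(Lr):                    # vertical rungs, H_perp
        H = H + (1 - t) * (hop(idx(1, j), idx(2, j), ns)
                           - lam * ising(idx(1, j), idx(2, j), ns))
    for j in range(Lr - 1):
        for alpha in (1, 2):               # legs, H_alpha
            H = H + J * (hop(idx(alpha, j), idx(alpha, j + 1), ns)
                         - Delta * ising(idx(alpha, j), idx(alpha, j + 1), ns))
        # diagonal bonds, H_perp'
        H = H + (1 + t) * (hop(idx(2, j), idx(1, j + 1), ns)
                           - lam * ising(idx(2, j), idx(1, j + 1), ns))
    return H

H = ladder_hamiltonian(Lr=5, J=2.5, Delta=0.1, lam=5.0, t=1.0)
print("E0 =", eigsh(H, k=1, which="SA", return_eigenvectors=False)[0])
```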
### Diagnostics, Phases and Phase transitions
We explore the parameter space in the \(\lambda-t\) plane at fixed \(J\) and identify the phases and their transitions. The most illustrative limit to investigate first is \(J=0\) [52], where the system, in the absence of any dimerization (\(t=0\)), undergoes a first-order phase transition at \(\lambda=1\) (see Fig. 4(a)). A nonzero \(t\) engineers gapped phases for \(0<\lambda<\frac{1}{\sqrt{2}}\); however, \(t<0\) is trivial and \(t>0\) is topological (Haldane phase) in nature. A gapless phase (XY\({}_{0}\)) opens for \(\frac{1}{\sqrt{2}}<\lambda<1\), where both perturbations \(\lambda\) and \(t\) are irrelevant. Introducing a small finite \(J\), not unexpectedly, only renormalizes the phase boundaries (see the \(J=0.1\) \(\lambda-t\) phase diagram in Fig. 4(b)), reducing the size of the gapless XY\({}_{0}\) phase. A further increase in \(J\) leads to the emergence of two new gapless phases (XY\({}_{1}\) and XY\({}_{1}^{*}\)) as XY\({}_{0}\) disappears (see Fig. 4(c)), and we recover the large-\(J\) picture.
To explore the large-\(J\) phase diagram shown schematically in Fig. 2, a particularly illustrative choice is to fix \(J=2.5\), as shown in Fig. 5 (\(\Delta=0.1\)). Four _distinct_ symmetry-enriched critical phases are clearly obtained. To conclusively characterize the phase boundaries and the nature of their transitions, we use a host of diagnostics, which we now discuss.
#### v.1.1 BKT Transitions
Transitions from the trivial gapped phase to XY\({}_{1}\) and from the Haldane phase to XY\({}_{1}^{*}\) belong to the BKT universality class. To characterize these transitions, it is useful to note that in the (hard-core) bosonic language, XY\({}_{1}\) and XY\({}_{1}^{*}\) are \(\pi/2\)-superfluid (SF(\(\pi/2\))) phases [53; 54]. In such systems, the momentum distribution is given by
\[N(k)=\frac{1}{L}\sum_{i,j}e^{ik|i-j|}\Gamma_{i,j} \tag{47}\]
(where \(\Gamma_{i,j}=\langle b_{i}^{\dagger}b_{j}\rangle\)) and is expected to show a sharp peak at \(k=\pi/2\). At the transition itself, the finite-size scaling of \(N(\pi/2)\) carries the signature of an underlying BKT transition: at the critical point, the BKT ansatz predicts \(N(\pi/2)\propto L^{1-\frac{1}{2K}}\), which can be used to extract the value of \(K\) [55; 56]. The perfect crossing of the \(N(k=\pi/2)\) data for different lengths at \(t=-0.8\), as shown in Fig. 6(a), indicates a BKT transition with Luttinger parameter \(K=2\), as expected from
Figure 4: Phase diagrams for small \(J\), evaluated using DMRG for the Hamiltonian of Eq. (46), for (a) \(J=0.0\), (b) \(J=0.1\) and (c) \(J=0.5\), with \(\Delta=\lambda\) in (a-c). Phase boundaries represent the second- and first-order transitions calculated using the various diagnostics mentioned in the text. The transitions between the gapped phases (Trivial, Haldane) and the gapless phases (XY\({}_{0}\), XY\({}_{1}\) and XY\({}_{1}^{*}\)), when they exist, are BKT transitions and correspond to a single-component compact boson theory with central charge \(c=1\) and Luttinger parameter \(K=2\), whereas the transition between the Haldane and Trivial phases is described by a single-component compact boson theory with central charge \(c=1\) and varying \(K\). All transitions to the FM phase are first order. Symbols are the only calculated points; lines connect the points for clarity.
Section III (see also Appendix A). This is found to be true for all values of \(t\), and it is used to obtain the phase boundaries of the SF(\(\pi/2\)) (XY\({}_{1}\)) phase in Fig. 5.
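A minimal sketch of this diagnostic in Python (our own function names; the scaled-peak form assumes the BKT ansatz \(N(\pi/2)\propto L^{1-\frac{1}{2K}}\) quoted above):

```python
import numpy as np

def momentum_distribution(Gamma, k):
    """N(k) of Eq. (47) from the one-body correlation matrix Gamma_ij = <b_i^dag b_j>."""
    L = Gamma.shape[0]
    i, j = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    return np.sum(np.exp(1j * k * np.abs(i - j)) * Gamma).real / L

def scaled_bkt_peak(Gamma, K=2.0):
    """N(pi/2) L^{1/(2K) - 1}: curves for different L cross at the BKT point."""
    L = Gamma.shape[0]
    return momentum_distribution(Gamma, np.pi / 2) * L ** (1.0 / (2.0 * K) - 1.0)
```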
#### v.1.2 Ising transitions
The transitions from the Néel phase to the trivial and Haldane phases (circles), from XY\({}_{1}\) to XY\({}_{2}\), and from XY\({}_{1}^{*}\) to XY\({}_{2}^{*}\) are found to be of Ising type (see Fig. 5). Such Ising transitions can be characterized by analysing the finite-size scaling of the structure factor \(S(k)\), defined by
\[S(k)=\frac{1}{L^{2}}\sum_{l,m}e^{ik(l-m)}\left(\langle n_{l}n_{m}\rangle- \langle n_{l}\rangle\langle n_{m}\rangle\right) \tag{48}\]
In the Néel state, \(S(k)\) shows a peak at \(k=\pi\), signalling antiferromagnetic correlations. At the Ising transitions, it is known that \(S(k=\pi)\) follows the scaling ansatz \(S(\pi)\propto L^{-\frac{2\beta}{\nu}}\), such that at the critical point \(S(\pi)L^{\frac{2\beta}{\nu}}\) is invariant for different \(L\), with exponents \(\nu=1\) and \(\beta=1/8\) [57, 58]. The perfect crossing of \(S(\pi)L^{\frac{1}{4}}\), as shown in Fig. 6(b), and the eventual collapse of all the data points for different \(L\) in the \(S(\pi)L^{\frac{2\beta}{\nu}}\) vs. \((\lambda-\lambda_{c})L^{\nu}\) plane near the transition point, shown in Fig. 6(c), imply an Ising phase transition at \(t=-0.4\) with a critical point \(\lambda_{c}\sim-5.41\). We use the same approach to calculate the Ising phase boundaries in the phase diagram (Fig. 5).
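The same analysis admits a compact transcription (our own function names; the collapse variables follow the conventions quoted above):

```python
import numpy as np

def structure_factor(nn_corr, n_avg, k):
    """S(k) of Eq. (48) from the matrix <n_l n_m> and the vector <n_l>."""
    L = len(n_avg)
    l, m = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    connected = nn_corr - np.outer(n_avg, n_avg)
    return np.sum(np.exp(1j * k * (l - m)) * connected).real / L**2

def ising_collapse(S_pi, L, lam, lam_c, beta=0.125, nu=1.0):
    """Collapse coordinates ((lam - lam_c) L^nu, S(pi) L^{2 beta/nu}), cf. Fig. 6(c)."""
    return (lam - lam_c) * L**nu, S_pi * L ** (2.0 * beta / nu)
```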
#### v.1.3 \(c=\frac{3}{2}\) transition between gapless phases
Unlike the previous Ising transitions, where the system passes from a gapless to a gapped phase, the Ising transitions between XY\({}_{1}\) and XY\({}_{2}\) and between XY\({}_{1}^{*}\) and XY\({}_{2}^{*}\) are gapless-to-gapless transitions. Since the Ising CFT appears in addition to the existing compact boson, the total central charge of the transition is expected to be \(c=\frac{3}{2}\). The transition points are located by analyzing the fidelity susceptibility \(\chi\), defined by
\[\chi=\lim_{(\lambda-\lambda^{\prime})\to 0}\frac{-2\,\mathrm{ln}\,|\langle\psi(\lambda)|\psi(\lambda^{\prime})\rangle|}{(\lambda-\lambda^{\prime})^{2}} \tag{49}\]
where \(|\psi(\lambda)\rangle\) is the ground state at \(\lambda\). At the phase transition point, \(\chi/L\) develops a peak, and the height of the peak diverges linearly with \(L\) for the Ising transition [59, 60, 61]. In Fig. 6(d), we plot \(\chi/L\) for different system sizes, which shows an increase in the peak height with \(L\). The inset of Fig. 6(d) shows the linear divergence of the peak height, implying the Ising transition. The critical point of the transition is determined by extrapolating the position of the peak to the thermodynamic limit, which is marked by the dashed line in Fig. 6(d).
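In practice, \(\chi\) is evaluated by a finite-difference version of Eq. (49) from two nearby ground states; a minimal sketch (our own naming):

```python
import numpy as np

def fidelity_susceptibility(psi_a, psi_b, dlam):
    """chi = -2 ln|<psi(lambda)|psi(lambda + dlam)>| / dlam^2, cf. Eq. (49)."""
    overlap = abs(np.vdot(psi_a, psi_b))
    return -2.0 * np.log(overlap) / dlam**2
```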
#### v.1.4 Multiversality along the \(t=0\) line
Along the \(t=0\) line, the gapless phase with \(c=2\) starts at \(\lambda_{c}\sim-0.01\), which is a BKT transition point that can be calculated using the finite-size scaling of the single-particle excitation gap [57]. The excitation gap at half-filling can be defined as
\[\Delta E_{L}=(E_{N-\Delta n}+E_{N+\Delta n}-2E_{N})/\Delta n, \tag{50}\]
where \(N=L/2\) and \(\Delta n\) is the number of particles in an excitation. The invariance of \(L\Delta E_{L}^{\prime}\) with \(\Delta n=1\) at the critical point and the collapse of all the data in \(L\Delta E_{L}^{\prime}\) vs. \(x_{\lambda,L}\) plane, where
\[\Delta E_{L}^{\prime} =\Delta E_{L}\left[1+1/(2\mathrm{ln}L+C)\right]\] \[x_{\lambda,L} =\mathrm{ln}L-a/\sqrt{\lambda-\lambda_{c}}, \tag{51}\]
at and near the critical point with a suitable choice of constants \(C\) and \(a\) predicts the BKT transition point \(\lambda_{c}\sim-0.01\) (see Fig. 8).
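A minimal transcription of the scaling variables of Eq. (51) used for the collapse in Fig. 8 (our own function name; \(\lambda>\lambda_{c}\) is assumed so the square root is real):

```python
import numpy as np

def bkt_gap_scaling(dE_L, L, lam, lam_c, C, a):
    """Return (x_{lambda,L}, L * dE'_L) of Eq. (51) for the BKT data collapse."""
    dE_prime = dE_L * (1.0 + 1.0 / (2.0 * np.log(L) + C))
    x = np.log(L) - a / np.sqrt(lam - lam_c)
    return x, L * dE_prime
```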
From Fig. 5, we see that the \(c=2\) line separates the gapless XY\({}_{1}\) and XY\({}_{1}^{*}\) phases as well as the trivial and Haldane gapped phases. The latter are separated by a different universality class, with \(c=1\), for small \(J\). This is a numerical confirmation of the 'multiversality' [22; 23] phenomenon discussed in Section III and Appendix A.
Figure 5: Phase diagram for large \(J\) in the \(\lambda-t\) plane with fixed \(J=2.5\) and \(\Delta=0.1\). Bold circles mark the Ising transitions, with central charge \(c=\frac{1}{2}\), between the Néel and trivial phases and between the Néel and Haldane phases. The transitions from Trivial to XY\({}_{1}\) and from Haldane to XY\({}_{1}^{*}\) (up triangles) are BKT transitions, described by a single-component compact boson theory with \(c=1\) and Luttinger parameter \(K=2\). The transition from XY\({}_{1}\) to XY\({}_{1}^{*}\) is described by a two-component compact boson theory with \(c=2\) and varying Luttinger parameters. The Trivial-to-Haldane phase transition through the MG points is first order, changing to \(c=2\) for larger \(\lambda\) at \(\lambda_{c}\) (pentagonal point). The transitions from XY\({}_{1}\) to XY\({}_{2}\) and from XY\({}_{1}^{*}\) to XY\({}_{2}^{*}\) (down triangles) belong to the Ising universality class stacked on top of a compact boson, with \(c=\frac{3}{2}\). Finally, the transition from any phase to the FM phase is first order (squares). Note that symbols are the only calculated points, and lines connect the points for clarity.
#### v.1.5 First order transitions
Finally, the transition between the trivial and Haldane gapped phases for negative values of \(\lambda\) at large \(J\), as well as that between any of the phases and the FM phase, is first order in nature. These transitions can be characterized by analyzing level crossings between eigenstate energies. For instance, for the transitions to the FM phase, we plot in Fig. 6(e) the ground-state energy at boson half-filling (\(E_{L/2}\)), which corresponds to the zero-magnetization sector, and at complete filling (\(E_{L}\)), which is equivalent to the fully magnetized case (FM phase), across the phase boundary. The crossing between \(E_{L/2}\) and \(E_{L}\) determines the first-order transition points, which are marked by squares in the phase diagram (Fig. 5). On the other hand, the sharp jump in the single-particle excitation gap \(\Delta E_{L}\) (Eq. (50) with \(\Delta n=1\)) at \(t=0\), shown in Fig. 6(f), signifies a first-order transition between the trivial and Haldane gapped phases (see also Fig. 5).
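A minimal sketch of the level-crossing diagnostic (our own function name; it linearly interpolates between the two sweep points that bracket the crossing):

```python
import numpy as np

def level_crossing(params, E_half, E_full):
    """Locate the first crossing of the E_{L/2} and E_L curves along a sweep."""
    d = np.asarray(E_half) - np.asarray(E_full)
    k = np.nonzero(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]
    return params[k] - d[k] * (params[k + 1] - params[k]) / (d[k + 1] - d[k])
```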
Figure 6: The first row demonstrates the finite-size scaling used to determine (a) the BKT transition between the Trivial and XY\({}_{1}\) phases and (b-c) the Ising transition between the Néel and Trivial phases. The perfect crossing of the \(N(\pi/2)L^{\frac{1}{2K}-1}\) curves in (a), with \(t=-0.8\), for different \(L\) locates the transition point, with Luttinger parameter \(K=2\) for the BKT transition. The crossing of the \(S(\pi)L^{\frac{2\beta}{\nu}}\) curves with \(t=-0.4\) for different \(L\) in (b) reveals the transition point, with exponents \(\nu=1\) and \(\beta=1/8\). The collapse of all the data points for different \(L\) in the \(S(\pi)L^{\frac{2\beta}{\nu}}\) vs. \((\lambda-\lambda_{c})L^{\nu}\) plane, shown in (c), further confirms the Ising transition point at \(\lambda_{c}\sim-5.34\). (d) The Ising transition between two gapless phases, from XY\({}_{1}\) to XY\({}_{2}\), at \(t=-1\), analyzed using the finite-size scaling of the fidelity susceptibility \(\chi\). The quantity \(\chi/L\) peaks at the transition point, and for an Ising transition the peak height diverges linearly with \(L\) (inset). The transition point in the thermodynamic limit (dotted line) is calculated by extrapolating the peak positions for different \(L\). Eigenvalues are plotted to determine the first-order transitions: (e) the level crossing of the ground-state energies \(E_{N}\) at \(t=-0.7\), with \(N=L/2\) and \(N=L\), implies a first-order transition between the XY\({}_{2}\) and FM phases; (f) the sharp jump in the single-particle excitation gap (\(\Delta E_{L}\) with \(\Delta n=1\)) at \(t=0\) for \(\lambda=-2.0\) signifies the first-order transition between the trivial and Haldane phases. See the phase diagram in Fig. 5. In (e) and (f), we consider \(L=200\).
Figure 7: The central charge \(c\) is plotted (a) for cuts along \(t=-1.0\) that pass through the XY\({}_{1}\) and XY\({}_{2}\) phases, for different \(L\), corresponding to Fig. 5 with \(J=2.5\) and \(\Delta=0.1\). In (b), we plot \(c\) for a system of \(L=200\) along a cut that goes through the interface between the XY\({}_{1}\) and XY\({}_{1}^{*}\) phases (\(t=0\)) (see Fig. 5).
#### v.1.6 Central charge
We now give numerical evidence for the central charge predicted by the bosonization analysis. We find the central charge \(c\) by fitting the bipartite von Neumann entanglement entropy \(S_{vN}\) to its conformal expression [62]
\[S_{vN}=\frac{c}{6}\text{ln}\left[\frac{L}{\pi}\text{sin}\frac{\pi l}{L}\right]+g. \tag{52}\]
Figure 7(a) shows how \(c\) changes at the transition between the XY\({}_{1}\) and XY\({}_{2}\) phases in Fig. 5. In Fig. 7(b), we show \(c\) along the \(t=0\) line that cuts through the interface between the XY\({}_{1}\) and XY\({}_{1}^{*}\) phases. Consistent with the analytical analysis, we find numerically, although not exactly, that \(c\) is close to 2 at the XY\({}_{1}\)-XY\({}_{1}^{*}\) transition, and that at the Ising transition point between the XY\({}_{1}\) and XY\({}_{2}\) phases, \(c\) is close to 1.5 (up to finite-size effects).
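The fit of Eq. (52) reduces to linear least squares in the chord variable; a minimal sketch (our own function name, assuming \(S_{vN}\) is sampled at all cuts \(l=1,\dots,L-1\)):

```python
import numpy as np

def fit_central_charge(S_vN, L):
    """Fit S_vN(l) = (c/6) ln[(L/pi) sin(pi l / L)] + g of Eq. (52); returns (c, g)."""
    l = np.arange(1, L)
    x = np.log((L / np.pi) * np.sin(np.pi * l / L))
    slope, g = np.polyfit(x, S_vN, 1)
    return 6.0 * slope, g
```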
### Characterising gapless phases
Since the gapped phases, and in particular the ordered phases, are well understood and easily characterized by conventional order parameters, we focus our discussion here on the gapless phases and their characterization.
#### v.2.1 String Order Parameter
A particularly useful tool, which also helps distill the topological features of the gapless phases, is the string order parameter \(C_{i,j}\) (equivalent to Eq. (19) up to a phase), where
\[C_{i,j}=-\langle z_{i}e^{i\frac{\pi}{2}\sum_{k=i+1}^{j-1}z_{k}}z_{j}\rangle, \tag{53}\]
and \(z_{i}=1-2b_{i}^{\dagger}b_{i}\). The string order parameter \(|C_{2,L-1}|\), not unexpectedly, takes a finite value in the Haldane (gapped) phases of the system (not shown) [63, 58]. Interestingly, the same order parameter also takes nontrivial values in XY\({}_{2}^{*}\), proving that it is a _gapless topological_ phase. In Fig. 9(a), the behavior of \(|C_{i,j}|\) is shown for both XY\({}_{2}\) (circles) and XY\({}_{2}^{*}\) (squares): one finds that, unlike in XY\({}_{2}\), in XY\({}_{2}^{*}\) \(|C_{i,j}|\) takes a finite value that does not decay with \(|i-j|\). Similarly, in Fig. 9(b) we plot \(|C_{i,j}|\) within the XY\({}_{1}\) (circles) and XY\({}_{1}^{*}\) (squares) phases, and in Fig. 9(c) we plot it in the XY\({}_{0}\) phase. In both plots, the string order parameter vanishes, showing that these phases are topologically trivial.
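Eq. (53) is straightforward to evaluate on an exact-diagonalization ground state, since \(z\) and \(e^{i\frac{\pi}{2}z}\) are diagonal single-site operators; a self-contained sketch (our own helper names):

```python
import numpy as np
import scipy.sparse as sp

def op_at(op, site, nsites):
    """Embed a 2x2 operator at `site` in a chain of `nsites` hard-core boson sites."""
    out = sp.identity(1, format="csr")
    for s in range(nsites):
        out = sp.kron(out, op if s == site else sp.identity(2, format="csr"),
                      format="csr")
    return out

def string_order(psi, i, j, nsites):
    """C_{i,j} of Eq. (53): -<z_i exp(i(pi/2) sum_{k=i+1}^{j-1} z_k) z_j>, z = 1 - 2n."""
    z = sp.diags([1.0, -1.0]).tocsr()   # z = 1 - 2 b^dag b, diagonal in (|0>, |1>)
    ez = sp.diags([1j, -1j]).tocsr()    # exp(i (pi/2) z) = diag(i, -i)
    op = op_at(z, i, nsites)
    for k in range(i + 1, j):
        op = op @ op_at(ez, k, nsites)
    op = op @ op_at(z, j, nsites)
    return -np.vdot(psi, op @ psi)
```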
#### v.2.2 Local Order Parameters
The nature of the long-range correlations can also distinguish between the different phases, as shown in Table 6. To this end, we calculate \(|\langle O_{i}^{s+}O_{j}^{s-}\rangle|\) and \(|\langle O_{i}^{a+}O_{j}^{a-}\rangle|\), where \(O_{i}^{s+}=b_{1,i}^{\dagger}+b_{2,i}^{\dagger}\) and \(O_{i}^{a+}=b_{1,i}^{\dagger}-b_{2,i}^{\dagger}\), to distinguish between the trivial, XY\({}_{1}\), XY\({}_{1}^{*}\) and XY\({}_{0}\) phases. The results are shown in Fig. 9(d-g) for all the gapless phases, and the contrast between the phases is clear: \(|\langle O_{i}^{s+}O_{j}^{s-}\rangle|\) (\(|\langle O_{i}^{a+}O_{j}^{a-}\rangle|\)) falls exponentially (algebraically) with distance \(|i-j|\) in the XY\({}_{1}\) phase, whereas in the XY\({}_{1}^{*}\) phase this behavior flips. In XY\({}_{0}\) both correlation functions are algebraic, while in XY\({}_{2}\) both are exponential, as shown in Figs. 9(f) and (g), respectively.
#### v.2.3 Edge states
The topological XY\({}_{2}^{*}\) phase exhibits edge states, a hallmark of such topological phases. In Figs. 10(a) and 10(b), we plot the number of particles \(n_{r,i}=\langle n_{1,i}+n_{2,i}\rangle\) on the strong rungs, i.e. the rungs (even or odd, according to the construction of the system) on which the hopping and interaction couplings are large, for the XY\({}_{2}\) and XY\({}_{2}^{*}\) phases, respectively. For XY\({}_{2}^{*}\), the two edge sites do not belong to any strong bond, and there we plot \(\langle n_{i}\rangle\) instead. We see that only in the XY\({}_{2}^{*}\) phase (Fig. 10(b)) does the system exhibit exponentially localized, occupied edge states.
The edge states manifest as gapless excitations at the edges of the system. To confirm this property, we plot the excitation energy gap (Eq. (50)) at half-filling. In Fig. 11 we plot \(\Delta E_{L}\) for \(\Delta n=1\) (circles) and \(\Delta n=2\) (triangles) in the XY\({}_{2}\) (solid symbols) and XY\({}_{2}^{*}\) (empty symbols) phases. In both phases, the elementary excitation in the bulk is \(\Delta n=2\) (a pair of particles on the strong rungs), which is gapless, as confirmed by the algebraic decay of \(\Delta E_{L}\). In the XY\({}_{2}^{*}\) phase, due to the presence of edge states, which can be occupied by a
Figure 8: Finite-size scaling of \(\Delta E_{L}\), used to find the BKT transition point along the \(t=0\) line for large \(J\), where the transition changes from first order to a continuous one with \(c=2\). The crossing of all curves captures the critical point, marked by the arrow. (Inset) The collapse of all the data for different \(L\) confirms the BKT transition with \(\lambda_{c}=-0.01\).
single particle, we see algebraic decay of \(\Delta E_{L}\) even for \(\Delta n=1\). In the XY\({}_{2}\) phase, however, the single-particle excitation is gapped and \(\Delta E_{L}\) saturates to a finite value.
Figure 11: The energy gap \(\Delta E_{L}\) is plotted for the XY\({}_{2}\) (solid symbols) and XY\({}_{2}^{*}\) (empty symbols) phases of Fig. 5 as a function of \(L\). Circles and triangles represent the single-particle (\(\Delta n=1\)) and two-particle (\(\Delta n=2\)) excitation gaps, respectively. Here, \(J=2.5\) and \(\Delta=0.1\).
Figure 10: Evidence for edge states in the topological gapless phase XY\({}_{2}^{*}\) of the phase diagram in Fig. 5. Here, \(J=2.5\) and \(\Delta=0.1\). (a) The particle density \(n_{r,i}/2\) on the strongly coupled rungs \(i\), corresponding to the XY\({}_{2}\) phase. (b) The same for the XY\({}_{2}^{*}\) phase, except at the edge sites, where \(\langle n_{i}\rangle\) is plotted.
Figure 9: The string order parameter \(|C_{i,j}|\) is shown in (a-c) for different gapless phases to distinguish the trivial and non-trivial topological phases. (a) The parameters \((\lambda,t)=(5,-1.0)\) and \((5,1.0)\) belong to the XY\({}_{2}\) and XY\({}_{2}^{*}\) phases, respectively (see Fig. 5). The presence (absence) of long-range \(|C_{i,j}|\) signifies the non-trivial (trivial) topological nature of the XY\({}_{2}^{*}\) (XY\({}_{2}\)) phase. (b) The parameter \((2.5,-0.5)\) belongs to the XY\({}_{1}^{*}\) phase (see Fig. 5). The absence of long-range \(|C_{i,j}|\) signifies the trivial nature of the XY\({}_{1}\) and XY\({}_{1}^{*}\) phases. (c) \(|C_{i,j}|\) for the XY\({}_{0}\) phase (\(J=0\)) (see Fig. 4(a)). In all cases, we calculate \(C_{i,j}\) with \(i=L/4+1\) and \(j\) running from \(i+2\) to \(L\) with \(j-i\) odd, for a system of \(L=200\). (d-g) The correlation functions \(|\langle O_{i}^{s+}O_{j}^{s-}\rangle|\) (circles) and \(|\langle O_{i}^{a+}O_{j}^{a-}\rangle|\) (squares) as functions of distance \(|i-j|\) for the four gapless phases XY\({}_{1}\), XY\({}_{1}^{*}\), XY\({}_{0}\) and XY\({}_{2}\), respectively; (g) also applies to XY\({}_{2}^{*}\). We see different behaviors of these correlations in the different gapless phases (see the main text). Note that in (d), (e), and (g), the parameters correspond to the phases in Fig. 5, and the parameters in (f) correspond to the phase in Fig. 4(a). Here, we also use a system of \(L=200\) and calculate the correlations at the center of the system, where \(i=L/8\).
## VI Mapping to effective spin-1 models
The connection between the phase diagrams of spin-\(\frac{1}{2}\) ladders and higher-spin chains is rather well known [40]. Certain parts of the phase diagram shown in Fig. 2 can likewise be determined using a mapping to an effective spin-1 model, which provides both consistency checks and physical insight into the phases. To do this, let us begin with the Hamiltonian in Eq. (2) and perform the change of basis
\[\begin{pmatrix}S_{1j}^{x}\\ S_{1j}^{y}\\ S_{1j}^{z}\end{pmatrix}\mapsto\begin{pmatrix}-S_{1j}^{x}\\ -S_{1j}^{y}\\ S_{1j}^{z}\end{pmatrix},\begin{pmatrix}S_{2j}^{x}\\ S_{2j}^{y}\\ S_{2j}^{z}\end{pmatrix}\mapsto\begin{pmatrix}S_{2j}^{x}\\ S_{2j}^{y}\\ S_{2j}^{z}\end{pmatrix}. \tag{54}\]
which results in the following change in \(H_{\perp}\) and \(H_{\perp}^{\prime}\)
\[H_{\perp}^{\prime} \mapsto-(1+t)\sum_{j}\left(S_{2j}^{x}S_{1j+1}^{x}+S_{2j}^{y}S_{1j +1}^{y}+\lambda S_{2j}^{z}S_{1j+1}^{z}\right),\] \[H_{\perp} \mapsto-(1-t)\sum_{j}\left(S_{1j}^{x}S_{2j}^{x}+S_{1j}^{y}S_{2j}^ {y}+\lambda S_{1j}^{z}S_{2j}^{z}\right). \tag{55}\]
Let us first consider the parameter regime in which \(H_{\perp}\) is dominant, i.e. \(t\approx-1\). \(H_{\perp}\) decouples into disjoint pieces, each of which has support on the two spins of a vertical bond, as shown in Fig. 12, and takes the form
\[h_{\perp}=-(1-t)\left(S_{1j}^{x}S_{2j}^{x}+S_{1j}^{y}S_{2j}^{y}+\lambda S_{1j} ^{z}S_{2j}^{z}\right). \tag{56}\]
It can be easily diagonalized as follows (suppressing site labels for clarity)
\[h_{\perp}=(1-t)\Big{[}(\lambda+2)|s\rangle\langle s|+(\lambda- 2)|0\rangle\langle 0|\\ -\lambda\left(|+1\rangle\langle+1|+|-1\rangle\langle-1|\right) \Big{]}\text{ where }\\ |+1\rangle\equiv|\uparrow_{1}\uparrow_{2}\rangle,\ |-1\rangle\equiv| \downarrow_{1}\downarrow_{2}\rangle,\\ |0\rangle\equiv\frac{|\uparrow_{1}\downarrow_{2}\rangle+|\downarrow_{1} \uparrow_{2}\rangle}{\sqrt{2}},\ |s\rangle\equiv\frac{|\uparrow_{1}\downarrow_{2}\rangle-|\downarrow_{1} \uparrow_{2}\rangle}{\sqrt{2}}. \tag{57}\]
and \(|\uparrow\rangle,|\downarrow\rangle\) represent the eigenstates of \(S^{z}\) with eigenvalues \(\pm\frac{1}{2}\), respectively. We see that for all values of \(\lambda>-1\), the states \(|\pm 1\rangle\) and \(|0\rangle\) have the lowest energies. We can project the two-spin Hilbert space on each vertical bond onto this three-dimensional subspace using the following projection operator
\[\mathbb{P}=\prod_{j}\left(|0\rangle\langle 0|+|+1\rangle\langle+1|+|-1 \rangle\langle-1|\right) \tag{58}\]
as schematically shown in the top figure of Fig. 12 to get an effective spin-1 chain with Hamiltonian
\[H_{eff}=\mathbb{P}H\mathbb{P}^{\dagger}=J_{xy}\sum_{j}\left(L_{j }^{x}L_{j+1}^{x}+L_{j}^{y}L_{j+1}^{y}\right)\\ +J_{z}\sum_{j}L_{j}^{z}L_{j+1}^{z}+D\sum_{j}\left(L_{j}^{z} \right)^{2} \tag{59}\]
where
\[J_{xy}=\left(\frac{J}{2}-\frac{(1+t)}{4}\right),J_{z}=-\left( \frac{J\Delta}{2}+\frac{\lambda(1+t)}{4}\right),\\ \text{ and }D=2(1-t)(1-\lambda). \tag{60}\]
\(L^{x},L^{y},L^{z}\) are the spin-1 generators of the angular momentum algebra, represented by the matrices
\[\frac{1}{\sqrt{2}}\begin{pmatrix}0&1&0\\ 1&0&1\\ 0&1&0\end{pmatrix},\ \frac{1}{\sqrt{2}}\begin{pmatrix}0&-i&0\\ i&0&-i\\ 0&i&0\end{pmatrix},\ \begin{pmatrix}1&0&0\\ 0&0&0\\ 0&0&-1\end{pmatrix}.\]
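As a quick numerical sanity check of the statements above, one can verify on a single bond that the triplet indeed spans the low-energy subspace for \(\lambda>-1\), and that the matrices quoted above satisfy the spin-1 algebra. A minimal, self-contained sketch (the parameter values are illustrative):

```python
import numpy as np

# Spin-1/2 operators on one site
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

t, lam = -1.0, 0.5  # illustrative point with H_perp dominant and lam > -1

# Bond Hamiltonian h_perp of Eq. (56) on the two-spin Hilbert space
h = -(1 - t) * (np.kron(sx, sx) + np.kron(sy, sy) + lam * np.kron(sz, sz))
vals = np.sort(np.linalg.eigvalsh(h))
# The three lowest levels form the triplet kept by the projector P;
# the singlet |s> lies strictly above them for lam > -1.
assert vals[3] - vals[2] > 0

# The spin-1 matrices satisfy the angular momentum algebra [Lx, Ly] = i Lz
Lx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Ly = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Lz = np.diag([1, 0, -1]).astype(complex)
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)
```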
The Hamiltonian in Eq. (59) is the familiar spin-1 XXZ model with uniaxial single-ion-type anisotropy, whose phase diagram is known [64] and is schematically reproduced in Fig. 13. For the parameter regime close to \(t=-1\), the phases and transitions of the Hamiltonian in Eq. (2) are qualitatively reproduced by those of Eq. (59). For example, consider the limit \(t\rightarrow-1\), when Eq. (59) reduces to
\[H_{eff}\rightarrow\frac{J}{2}\sum_{j}\left(L_{j}^{x}L_{j+1}^{x} +L_{j}^{y}L_{j+1}^{y}-\Delta L_{j}^{z}L_{j+1}^{z}\right)\\ +4(1-\lambda)\sum_{j}\left(L_{j}^{z}\right)^{2}. \tag{61}\]
If \(\Delta\) is fixed to a small value, as \(\lambda\) is tuned, we see from Fig. 13 that Eq. (61) passes through the large-D (trivial), \(\text{XY}_{1}\), \(\text{XY}_{2}\) and the Ferromagnetic phases, the same sequence as seen in Fig. 2. It is worth emphasizing the crucial role of \(\Delta\), which builds residual ferromagnetic correlations between effective spin-1s, thus leading to the realization of interesting gapless phases. Through the spin-1 mapping we are able to see that, in order to access the \(\text{XY}_{2}\) phase, we _need_ to fix \(\Delta\) to be small, as was done in our numerical investigations.
Let us now consider the limit when the Hamiltonian Eq. (2) is dominated by \(H_{\perp}^{\prime}\). First, let us observe that
Figure 12: Mapping to an effective spin-1 chain in the regime \(t\approx-1\) (top) and \(t\approx+1\) (bottom). Circles represent the qubits from the original Hilbert space, and the boxes enclosing circles indicate which pairs of qubits are mapped to effective spin-1 entities (squares). Boundary effects are seen in the latter case, where the mapping leaves behind a qubit on each end.
with periodic boundary conditions, \(t\mapsto-t\) is induced by a unitary transformation generated by a single-site translation on one of the legs of the ladder, \(\vec{S}_{1j}\mapsto\vec{S}_{1j+1}\). As a result, the phase diagram for Eq. (2) is perfectly symmetric under \(t\mapsto-t\). The identity of the phases, however, can change under this map. In particular, the unitary transformation is ill-defined with open boundary conditions, and therefore it is conceivable that the distinction between the regions related by \(t\mapsto-t\) is topological in nature. We will now map the \(H^{\prime}_{\perp}\)-dominant Hamiltonian to a spin-1 chain. To do this, we repeat the steps above and observe that with periodic boundary conditions, \(H^{\prime}_{\perp}\) decouples into disjoint pieces, each of which has support on two spins, this time living on the diagonal bonds as schematically shown in the bottom figure of Fig. 12. We again perform a convenient change of basis similar to Eq. (55) to get the following local term
\[h^{\prime}_{\perp}=-(1+t)\left(S^{x}_{2j}S^{x}_{1j+1}+S^{y}_{2j}S^{y}_{1j+1}+ \lambda S^{z}_{2j}S^{z}_{1j+1}\right).\]
This is easily diagonalized as
\[h^{\prime}_{\perp}=(1+t)\Big{[}(\lambda+2)|s\rangle\langle s|+( \lambda-2)|0\rangle\langle 0|\\ -\lambda\left(|+1\rangle\langle+1|+|-1\rangle\langle-1|\right) \Big{]} \tag{62}\]
where \(|\pm 1\rangle,|0\rangle\) and \(|s\rangle\) are as defined as in Eq. (57). Projecting onto the low-energy Hilbert space spanned by \(|\pm 1\rangle,|0\rangle\) on each diagonal bond, we again get an effective spin-1 chain with the following Hamiltonian
\[H^{\prime}_{eff}=J^{\prime}_{xy}\sum_{\tilde{j}}\left(L^{x}_{ \tilde{j}}L^{x}_{\tilde{j}+1}+L^{y}_{\tilde{j}}L^{y}_{\tilde{j}+1}\right)\\ +J^{\prime}_{z}\sum_{\tilde{j}}L^{z}_{\tilde{j}}L^{z}_{\tilde{j}+ 1}+D^{\prime}\sum_{\tilde{j}}\left(L^{z}_{\tilde{j}}\right)^{2} \tag{63}\]
with
\[J^{\prime}_{xy}=\left(\frac{J}{2}-\frac{(1-t)}{4}\right),\ J^{ \prime}_{z}=-\Big{(}\frac{J\Delta}{2}+\frac{\lambda(1-t)}{4}\Big{)},\\ D^{\prime}=2(1+t)(1-\lambda). \tag{64}\]
We have denoted the bond between spins \((2,j)\) and \((1,j+1)\) by \(\tilde{j}\). So far, Eq. (63) looks identical to Eq. (59) with the replacement \(t\mapsto-t\). However, a change occurs with open boundary conditions. There is no natural association of the boundary qubits with any diagonal bond. As a result, they survive the projection and remain as qubits on the two ends of the chain. The effective Hamiltonian with open boundary conditions is thus
\[H^{\prime}_{eff}=J^{\prime}_{xy}\sum_{\tilde{j}=1}^{L-1}\left(L^ {x}_{\tilde{j}}L^{x}_{\tilde{j}+1}+L^{y}_{\tilde{j}}L^{y}_{\tilde{j}+1}\right) \\ +J^{\prime}_{z}\sum_{\tilde{j}=1}^{L-1}L^{z}_{\tilde{j}}L^{z}_{ \tilde{j}+1}+D^{\prime}\sum_{\tilde{j}=1}^{L}\left(L^{z}_{\tilde{j}}\right)^{ 2}+H^{\partial}. \tag{65}\]
Here, \(J^{\prime}_{xy},J^{\prime}_{z}\) and \(D^{\prime}\) are the same as in Eq. (64), and \(H^{\partial}\) is the effective boundary Hamiltonian,
\[H^{\partial}=J^{\partial}_{xy}\left(S^{x}_{11}L^{x}_{\tilde{1}}+ S^{y}_{11}L^{y}_{\tilde{1}}+L^{x}_{\tilde{L}}S^{x}_{2L+1}+L^{y}_{\tilde{L}}S^{y}_{2L +1}\right)\\ +J^{\partial}_{z}\left(S^{z}_{11}L^{z}_{\tilde{1}}+L^{z}_{\tilde {L}}S^{z}_{2L+1}\right) \tag{66}\]
where the coupling constants to the boundary qubits \(\vec{S}_{11}\) and \(\vec{S}_{2L+1}\) are
\[J^{\partial}_{xy}\equiv\left(\frac{J}{2}-\frac{(1-t)}{2}\right),\ J^{\partial} _{z}=-\left(\frac{J\Delta}{2}+\frac{\lambda(1-t)}{2}\right).\]
The picture above suggests an interesting alternative method of analysis to the abelian bosonization of Section III, namely treating the boundary spin \(1/2\) as a quantum impurity [65]; however, we will not pursue this route here and leave it for future work.
Let us make a few comments on the limitations and utility of the mapping to a spin-1 chain before we proceed to a discussion of the phases of the effective Hamiltonian in the \(t\sim+1\) limit. Recall that for the \(t\sim-1\) limit, the phase diagram of the spin-1 XXZ chain accurately reproduces the phases of the spin ladder. To identify the phases of the spin-1 XXZ chain with those of Eq. (2) in the \(t\sim+1\) limit, we need additional tools, although plausible arguments can be made, especially for the gapped phases. For instance, it is clear that the identity of the Ferromagnet obtained for large \(\lambda\) remains the same in Eqs. (59) and (65), as can be easily seen by taking \(\lambda\) to a large value in Eq. (2). The identities of the large-D and Haldane phases in Eq. (59) are reversed in Eq. (65)
Figure 13: Schematic phase diagrams of the spin 1 XXZ chain Hamiltonians shown in Eqs. (59) and (65) applicable to the limits \(t\sim-1\) (top) and \(t\sim+1\) (bottom) of Eq. (2) whose phase diagram is shown in Fig. 2.
and this can be understood from the effect of the additional end qubits appearing with open boundary conditions. On the one hand, a qubit hybridizes with the edge mode of the Haldane phase and gaps out the edge degeneracy, rendering it a trivial phase. On the other hand, the same qubits contribute an edge degeneracy to the large-D phase, where the gapped bulk suppresses the hybridization between qubits on opposite ends of the chain, thus converting it to a topological phase. The effect of the qubits on the gapless phases is not straightforward to determine. One could extend the previous argument to justify the mapping of the XY\({}_{2}\) phase to the topological XY\({}_{2}^{*}\) phase, which has edge modes, but the absence of a bulk gap makes it heuristic at best. Indeed, the mapping of XY\({}_{1}\) to a different gapless phase XY\({}_{1}^{*}\), which does not have edge modes, is not easily explained within the spin-1 mapping. We need more sophisticated tools, such as bosonization and numerical analysis, to nail down the precise identity and nature of the gapless phases, as has been achieved in the previous sections.
In summary, the spin-1 mapping provides an independent confirmation of the distinct phases in the limits \(t\sim\pm 1\). It also guides us in fixing the parameters so as to open up the various gapless phases, especially XY\({}_{2}\). It further confirms that the topology of the \(t\sim-1\) phase diagram is identical to that of \(t\sim+1\). However, additional analysis, as presented in the previous sections, is needed to determine the identity of the phases in the latter limit, although heuristic arguments are consistent with the detailed analysis.
## VII Summary and Outlook
In this work we have studied a coupled spin model hosting several symmetry-enriched gapless phases that exhibit an intricate interplay of symmetries, strong correlations, and topological features. Our multipronged approach, which includes bosonization (Sections III and IV), DMRG studies (Section V) and effective low-energy modelling (Section VI), provides a comprehensive understanding of all aspects of the phase diagram. Our study points out that even the well-known Luttinger liquid state can appear in the form of distinct phases based on how the microscopic UV symmetries inherited from the underlying spin model get reflected in the low-energy IR (see Section III.3). Among these phases is an interesting _gapless_ topological phase XY\({}_{2}^{*}\) that hosts symmetry-protected edge modes. Finally, our mapping to a spin-1 XXZ chain (Section VI) provides an alternative viewpoint to understand the nature of the gapless phases and their transitions. We also find the presence of multiple stable universality classes ('multiversality') along the critical surface separating the gapped trivial and Haldane phases.
There are many generalizations that can follow from our work. First, it would be useful to use more sophisticated tools of boundary CFT [16; 66] to gain insight into the gapless phases seen in this work. Second, although in this work we have focused on a two-chain ladder, we believe that as the number of chains increases, a much wider variety of symmetry-enriched criticality may be realizable in such systems, leading to a host of unique gapless phases and transitions [67; 68]. Another interesting direction is to couple such one-dimensional chains to realize possibly novel two-dimensional gapless states [69; 70; 71; 72] mimicking the success of gapped topological phases [73; 74; 75; 76]. Finally, it would be interesting to see if the symmetry enriched gapless phenomena investigated in this work can be observed in Rydberg simulators [77] where other gapless phenomena have been postulated to exist [78; 79; 80; 81]. We leave these and other questions to future work.
## Acknowledgements
We thank Masaki Oshikawa, Siddharth Parameswaran, Nick Bultinck, Sounak Biswas, Michele Fava, Nick Jones, Yuchi He and Diptarka Das for useful discussions. We are especially grateful to Fabian Essler for collaboration in the early stages of this work. During the initial stages of this work, A.P. was supported by a grant from the Simons Foundation (677895, R.G.) through the ICTS-Simons postdoctoral fellowship. He is currently funded by the European Research Council under the European Union Horizon 2020 Research and Innovation Programme, Grant Agreement No. 804213-TMCS and the Engineering and Physical Sciences Research Council, Grant number EP/S020527/1. SM acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 436382789, 493420525 via large-equipment grants (GOE-Grid cluster). AA acknowledges support from IITK Initiation Grant (IITK/PHY/2022010). TM acknowledges support from DST-SERB, India through Project No. MTR/2022/000382. The authors acknowledge the hospitality of ICTS-TIFR, Bangalore, and ICTP, Trieste, where discussions and parts of this work were carried out.
## Appendix A Additional bosonization details
The subject of bosonization has been extensively discussed in several excellent books and reviews. In this appendix, we review a few details that are subtle and easy to miss. The CFT term in Eqs. (3) and (7) is determined using standard techniques [35] from the XXZ Hamiltonian. The various perturbations can be determined from the bosonized form of the spin operators shown in Eqs. (5) and (9) in a straightforward manner for the most part. Cases involving coincident field operators should be treated with care, employing a 'point-splitting' device to determine how coincident vertex operators are multiplied. Let us review this in the single-component/small-\(J\) limit:
\[e^{im\phi(x)}e^{in\theta(x)}=\lim_{\epsilon\to 0}e^{im\phi(x+ \epsilon)}e^{in\theta(x-\epsilon)}\] \[\qquad=\lim_{\epsilon\to 0}e^{i(m\phi(x+\epsilon)+n\theta(x- \epsilon))}e^{-\frac{mn}{2}[\phi(x+\epsilon),\theta(x-\epsilon)]}\] \[\qquad=\lim_{\epsilon\to 0}e^{in\theta(x-\epsilon)}e^{im\phi(x+ \epsilon)}e^{-mn[\phi(x+\epsilon),\theta(x-\epsilon)]}. \tag{11}\]
This is determined using an integrated version of Eqs. (4) and (8)
\[[\phi_{\alpha}(x),\theta_{\beta}(x^{\prime})] =i\pi\delta_{\alpha\beta}\text{sgn}(x-x^{\prime}),\] \[[\phi(x),\theta(x^{\prime})] =i\pi\text{sgn}(x-x^{\prime}). \tag{12}\]
using which we get
\[e^{im\phi(x)}e^{in\theta(x)}=(-1)^{mn}e^{in\theta(x)}e^{im\phi(x )}. \tag{13}\]
Equation (13) is needed to obtain the correct bosonized form for operators involving products of \(S^{\pm}\) such as the bond-dimerization term \(\propto\sum_{j}\left((-1)^{j}S^{+}_{j}S^{-}_{j+1}+h.c.\right)\) in Eq. (1). Another important place where point splitting is needed is in determining the correct symmetry action. The \(U(1),\mathbb{Z}_{2}^{R}\) and \(\mathbb{Z}\) actions are easy to read off by directly comparing the action on the lattice operators shown in Table 1 with Eqs. (5) and (9). The action of lattice parity \(\mathbb{Z}_{2}^{P}\) on the bosonized variables, on the other hand, needs some care. Let us review this again in the small-\(J\), single-component version. Recall that the action of \(\mathbb{Z}_{2}^{P}\) is bond inversion, which can be thought of as a composite of site inversion and single-site translation. Since translation is straightforward by direct comparison, let us focus on site inversion \(\vec{S}_{j}\mapsto\vec{S}_{-j}\). On the continuum operators and simple vertex operators, this naively acts as
\[\phi(x)\mapsto\phi(-x),\theta(x)\mapsto\theta(-x). \tag{14}\]
Let us look at how this naive action is reflected on products of non-commuting operators,
\[e^{im\theta(x)}e^{in\phi(x)}=\lim_{\epsilon\to 0}e^{i\frac{mn\pi}{2} \text{sgn}(\epsilon)}e^{i(m\theta(x-\epsilon)+n\phi(x+\epsilon))}\\ \mapsto\lim_{\epsilon\to 0}e^{i\frac{mn\pi}{2}\text{sgn}( \epsilon)}e^{i(m\theta(-x+\epsilon)+n\phi(-x-\epsilon))}\\ =\lim_{\epsilon\to 0}e^{imn\pi\text{sgn}(\epsilon)}e^{im \theta(-x+\epsilon)}e^{in\phi(-x-\epsilon)}\\ =(-1)^{mn}e^{im\theta(-x)}e^{in\phi(-x)}. \tag{15}\]
Using Eqs. (14) and (15) we get
\[S^{\pm}_{-j} \approx\exp\left(\pm i\theta(-x)\right)\left((-1)^{j}\mathcal{A} \right.\left.-\mathcal{C}\cos\phi(x)+\dots\right),\] \[S^{z}_{-j} \approx\frac{1}{2\pi}\partial_{x}\phi(-x)+(-1)^{j}\mathcal{B}\sin \phi(-x)+\dots \tag{16}\]
We can now read off the symmetry action corresponding to site reflection from Eq. (16) as
\[\phi(x)\mapsto\pi-\phi(x),\ \theta(x)\mapsto\theta(-x). \tag{17}\]
Combining Eq. (17) with the action of translation shown in Table 2, we get the final effective action of \(\mathbb{Z}_{2}^{P}\) shown in Table 2.
## Appendix B Phase diagrams from bosonization
In this appendix, we use bosonization to obtain the qualitative details of the phase diagrams shown in the main text in both the small and large \(J\) limits.
### The small-\(J\) phase diagram
Let us write down the form of the Hamiltonian at small \(J\) shown in Eq. (1)
\[H=\sum_{j}\left(1+(-1)^{j}t\right)\left(S^{x}_{j}S^{x}_{j+1}+S^ {y}_{j}S^{y}_{j+1}-\lambda S^{z}_{j}S^{z}_{j+1}\right)\\ +J\sum_{j}\left(S^{x}_{j}S^{x}_{j+2}+S^{y}_{j}S^{y}_{j+2}-\Delta S ^{z}_{j}S^{z}_{j+2}\right), \tag{18}\]
and its bosonized version shown in Eq. (3),
\[H\approx\frac{v}{2\pi}\int dx\left[\frac{1}{4K}\left(\partial_{ x}\phi\right)^{2}+K\left(\partial_{x}\theta\right)^{2}\right]\\ +2\mathcal{A}\mathcal{C}t\int dx\ \cos\phi-\frac{\mathcal{B}^{2} \lambda}{2}\int dx\ \cos 2\phi+\dots \tag{19}\]
The Luttinger parameter \(K\) and velocity \(v\) depend on Hamiltonian parameters and can be determined from the Bethe ansatz solution of the XXZ spin chain [36]
\[K=\frac{\pi}{2\arccos\lambda},\ v=\frac{K}{(2K-1)}\sin\left(\frac{\pi}{2K} \right). \tag{20}\]
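For later reference, the boundary values of \(\lambda\) quoted below follow directly from inverting Eq. (20):

\[K=2\;\Leftrightarrow\;\arccos\lambda=\frac{\pi}{4}\;\Leftrightarrow\;\lambda=\frac{1}{\sqrt{2}},\qquad K=\frac{1}{2}\;\Leftrightarrow\;\arccos\lambda=\pi\;\Leftrightarrow\;\lambda=-1.\]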
Let us comment on a few limits of Eq. (18). If we switch off both the nnn coupling \(J\) and the dimerization \(t\), we have the XXZ model, which can be solved by Bethe ansatz [82; 83; 84] with the phases shown along the \(t=0\) line of Fig. 2. The phase diagram with \(t\neq 0\) and \(J\neq 0\) can be easily understood as a perturbation of the XXZ spin chain [24] using the bosonized Hamiltonian shown in Eq. (19). This is done by tracking the relevance (in the RG sense) of the two vertex operators \(\cos\phi\) and \(\cos 2\phi\), which have scaling dimensions \(K\) and \(4K\), respectively (an operator being relevant when its scaling dimension is less than 2), as follows:
Figure 14: The small-\(J\) phase diagram as determined from bosonization.
_The \(XY_{0}\) phase:_ In the regime when \(K>2\), which corresponds to \(\frac{1}{\sqrt{2}}<\lambda<1\) from the formula in Eq. (20), both \(\cos\phi\) and \(\cos 2\phi\) are irrelevant, and we get a gapless phase, \(\mathrm{XY}_{0}\).
_The Haldane and Trivial phases:_ When \(\frac{1}{2}<K<2\) which corresponds to \(-1<\lambda<\frac{1}{\sqrt{2}}\), \(\cos\phi\) is relevant while \(\cos 2\phi\) is irrelevant. Therefore, we get gapped phases for \(t\neq 0\) where \(\langle\phi\rangle\rightarrow\pi\) for \(t>0\) corresponds to the Haldane phase and \(\langle\phi\rangle\to 0\) for \(t<0\) corresponds to the trivial phase.
_The Neel phase:_ When \(K<\frac{1}{2}\), which corresponds to \(\lambda<-1\), both \(\cos\phi\) and \(\cos 2\phi\) are relevant. When \(\cos 2\phi\) is dominant (e.g., when \(t=0\)), we get a Neel phase with \(\langle\phi\rangle\rightarrow\pm\frac{\pi}{2}\). The transition between the Haldane/trivial phase and the Neel phase is second-order and corresponds to the Ising universality class. See [85] for an explanation of this.
_The Ferromagnet:_ As \(\lambda\to 1\), we get \(K\rightarrow\infty\) and \(v\to 0\) and the Luttinger liquid description becomes invalid as the system transitions to a ferromagnet through a first-order transition.
Putting these various pieces together, we reproduce the topology of the small-\(J\) phase diagram seen for small \(t\). This is shown in Fig. 14.
### The large-\(J\) phase diagram
Let us now write down the Hamiltonian form appropriate for large-\(J\)
\[H =H_{1}+H_{2}+H_{\perp}+H^{\prime}_{\perp},\text{ where}, \tag{102}\] \[H_{\alpha} =J\sum_{j}\left(S^{x}_{\alpha j}S^{x}_{\alpha j+1}+S^{y}_{\alpha j }S^{y}_{\alpha j+1}-\Delta S^{z}_{\alpha j}S^{z}_{\alpha j+1}\right),\] \[H_{\perp} =(1-t)\sum_{j}\left(S^{x}_{1j}S^{x}_{2j}+S^{y}_{1j}S^{y}_{2j}- \lambda S^{z}_{1j}S^{z}_{2j}\right),\] \[H^{\prime}_{\perp} =(1+t)\sum_{j}\left(S^{x}_{2j}S^{x}_{1j+1}+S^{y}_{2j}S^{y}_{1j+1} -\lambda S^{z}_{2j}S^{z}_{1j+1}\right),\]
and its bosonized form
\[H \approx\frac{v}{2\pi}\sum_{\alpha=1,2}\int dx\left(\frac{1}{4K}( \partial_{x}\phi_{\alpha})^{2}+K(\partial_{x}\theta_{\alpha})^{2}\right)\] \[-\frac{\lambda}{2\pi^{2}}\int dx\ \partial_{x}\phi_{1}\partial_{x}\phi_{2}-4\mathcal{A}^{2}t\int dx \ \ \cos\left(\theta_{1}-\theta_{2}\right)\] \[\quad-\mathcal{B}^{2}t\int dx\ \lambda\ \left(\cos\left(\phi_{1}+\phi_{2} \right)-\cos\left(\phi_{1}-\phi_{2}\right)\right)\] \[\quad+2\mathcal{C}^{2}\int dx\cos\left(\theta_{1}-\theta_{2} \right)\cos\left(\phi_{1}+\phi_{2}\right)+\ldots \tag{103}\]
We now reproduce the qualitative features of the phase diagram shown in Fig. 2. We will focus on the phases surrounding the \(c=2\) line, over which we have good analytical control. The leading term in Eq. (103) is a \(c=2\) CFT of two identical compact bosons. The operator \(\partial_{x}\phi_{1}\partial_{x}\phi_{2}\) has scaling dimension 2 and is therefore exactly marginal. It generates motion in the space of \(c=2\) CFTs where the compact bosons are no longer identical and have different compactification radii. We also have the operators \(\mathcal{V}_{\pm}\equiv\cos\left(\phi_{1}\pm\phi_{2}\right)\), \(\mathcal{W}_{-}\equiv\cos\left(\theta_{1}-\theta_{2}\right)\) and \(\mathcal{W}_{-}\mathcal{V}_{+}\equiv\cos\left(\theta_{1}-\theta_{2}\right)\cos\left(\phi_{1}+\phi_{2}\right)\), whose scaling dimensions can be obtained perturbatively to the leading order in \(\lambda\) as [35]
\[\left[\mathcal{V}_{\pm}\right] =K_{\pm}\approx 2K\ \left(1\pm\frac{\lambda K}{\pi v}\right),\] \[\left[\mathcal{W}_{-}\right] =\frac{1}{K_{-}}\approx\frac{1}{2K}\ \left(1+\frac{\lambda K}{\pi v}\right)\text{ and}\] \[\left[\mathcal{W}_{-}\mathcal{V}_{+}\right] =\frac{1}{K_{-}}+K_{+}\approx\left(\frac{1}{2K}+2K\right)\left(1+ \frac{\lambda K}{\pi v}\right) \tag{104}\]
where, again, relationship of the Luttinger parameter \(K\) and velocity \(v\) with the parameters in the Hamiltonian is determined from the Bethe ansatz solution of the XXZ spin chain [36] as
\[K=\frac{\pi}{2\arccos\Delta},\ v=\frac{JK}{(2K-1)}\sin\left(\frac{\pi}{2K} \right). \tag{105}\]
Note that we have \(\left[\mathcal{V}_{-}\right]\left[\mathcal{W}_{-}\right]=1\). As a result, it is impossible for both \(\mathcal{V}_{-}\) and \(\mathcal{W}_{-}\) to be irrelevant at the same time. Consequently, for any \(t\neq 0\), the \(c=2\) theory is unstable and flows to a gapless phase with \(c<2\) or to a gapped phase [26; 27; 35], as seen in Fig. 2.
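To make this explicit, the exact relation between the two scaling dimensions in Eq. (104) gives

\[[\mathcal{V}_{-}]\,[\mathcal{W}_{-}]=K_{-}\cdot\frac{1}{K_{-}}=1,\]

so if \([\mathcal{V}_{-}]\geq 2\) then necessarily \([\mathcal{W}_{-}]\leq\frac{1}{2}<2\): at least one of the two operators is always relevant.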
#### B.2.1 The phases and transitions
Let us begin in the limit \(t\to 0\) in Eq. (103) when \(\mathcal{V}_{+}\mathcal{W}_{-}\) is irrelevant, giving us a \(c=2\) theory. Recall that one of the two operators \(\mathcal{W}_{-}\equiv\cos\left(\theta_{1}-\theta_{2}\right)\) or \(\mathcal{V}_{-}\equiv\cos\left(\phi_{1}-\phi_{2}\right)\) is always relevant and, therefore, for \(t\neq 0\), the theory flows to either a gapless state with \(c<2\) or gaps out completely. We are interested in the case where the system does not gap out completely which occurs when \(\mathcal{V}_{+}\equiv\cos\left(\phi_{1}+\phi_{2}\right)\) is irrelevant and
Figure 15: The large-\(J\) phase diagram as determined from bosonization.
the theory flows to effective single-component Luttinger liquid gapless phases. The nature of the phase depends on (i) which among \(\mathcal{W}_{-}\) and \(\mathcal{V}_{-}\) dominates at large distances, pinning \(\theta_{1}-\theta_{2}\) or \(\phi_{1}-\phi_{2}\), and (ii) the sign of \(t\), which determines the value to which the fields are pinned, \(\langle\theta_{1}-\theta_{2}\rangle=0/\pi\) or \(\langle\phi_{1}-\phi_{2}\rangle=0/\pi\). We label these four cases XY\({}_{1,2}\) and XY\({}_{1,2}^{*}\) as shown in Fig. 15. All four are distinct phases. The universality class of a direct continuous transition between XY\({}_{1/2}\) and XY\({}_{1/2}^{*}\) is the parent \(c=2\) theory obtained by tuning \(t\to 0\). The transition between XY\({}_{1}\) and XY\({}_{2}\) or between XY\({}_{1}^{*}\) and XY\({}_{2}^{*}\) corresponds to a compact boson plus Ising CFT with central charge \(c=\frac{3}{2}\) [23; 40; 86]. In the parameter regime where we study the model numerically, a direct transition between XY\({}_{2}\) and XY\({}_{2}^{*}\) is not observed.
When we are in the XY\({}_{1}\) or XY\({}_{1}^{*}\) phases where \(\mathcal{W}_{-}\) pins the value of \(\theta_{1}-\theta_{2}\), a transition to a gapped phase can occur when \(\mathcal{V}_{+}\) also becomes relevant. The gapped phases resulting when \(\theta_{1}-\theta_{2}\) and \(\phi_{1}+\phi_{2}\) are pinned correspond to the Haldane or trivial phase [27] as shown in Fig. 15. A different transition can occur when we are in any of the four gapless phases, XY\({}_{1,2}\) and XY\({}_{1,2}^{*}\) and the Luttinger velocity vanishes, resulting in a first-order transition to a FM similar to the single-component small-\(J\) case.
#### B.2.2 The \(t=0\) line and its proximate phases
We now analyze the \(t=0\) line and its proximity in detail. First, let us analyze which gapless phase results when \(t\neq 0\) is switched on. This is determined by which operator, \(\mathcal{W}_{-}\) or \(\mathcal{V}_{-}\), has the smaller scaling dimension. In the parameter regime we studied numerically, we only find the former situation, as shown in Fig. 15. When \(\mathcal{V}_{+}\) becomes relevant along with \(\mathcal{W}_{-}\), we see that \(t\neq 0\) results in gapped phases. Let us denote by \(\lambda_{2}^{c}\) the location along the \(t=0\) line at which \(\mathcal{V}_{+}\) is marginal, i.e. \([\mathcal{V}_{+}]=2\), where the XY\({}_{1}\)-to-trivial phase boundary and the XY\({}_{1}^{*}\)-to-Haldane phase boundary meet the \(c=2\) line at \(t=0\).
Now, as seen in Eq. (103), the \(c=2\) theory is destroyed when either (i) the composite operator \(\mathcal{W}_{-}\mathcal{V}_{+}\) becomes relevant, leading to a gapped state with two degenerate vacua \(\langle\phi_{1}+\phi_{2}\rangle=\pi-\langle\theta_{1}-\theta_{2}\rangle=0/\pi\), or (ii) the Luttinger velocity for one of the sectors vanishes, rendering the continuum description invalid, and we get a first-order transition to a FM. Let us denote the critical values of \(\lambda\) that result in each of these as \(\lambda_{1}^{c}\) and \(\lambda_{3}^{c}\), respectively. From the perturbative result shown in Eq. (104), we can get rough estimates for \(\lambda_{1}^{c}\) through \(\lambda_{3}^{c}\), although these estimates are not very reliable when they result in large values of \(|\lambda_{k}^{c}|\), for which the validity of perturbation theory no longer holds.
The nature of the phase transition between the trivial and Haldane phases that occurs at \(t=0\) depends on whether we are at \(\lambda<\lambda_{1}^{c}\) or \(\lambda_{1}^{c}<\lambda<\lambda_{2}^{c}\). As shown in Fig. 15, the latter results in a first-order phase transition in which the vacua of the Haldane and the trivial phase are degenerate whereas the former results in a second-order transition with \(c=2\). Putting all this together, we get the form shown in Fig. 15.
#### B.2.3 Multiversality
A curious observation is that although the small-\(J\) and large-\(J\) gapped Haldane and trivial phases are adiabatically connected, the nature of the second-order transitions between them is different at small \(J\) and large \(J\). For small \(J\), it is a \(c=1\) critical theory, whereas for large \(J\) it is \(c=2\). Both are obtained by tuning a single parameter and are therefore generic. This phenomenon, called multiversality, has received attention in recent studies [22; 23], although microscopic models that exhibit it are rare.
#### B.2.4 A nice possible proximate phase diagram
In the parameter regime when \(\mathcal{V}_{+}\) is irrelevant, we previously argued that close to the \(t=0\) line, \(t\neq 0\) results in the gapless XY\({}_{1}\) (\(t<0\)) or XY\({}_{1}^{*}\) (\(t>0\)) phases if \([\mathcal{W}_{-}]<[\mathcal{V}_{-}]\), and in the XY\({}_{2}\) (\(t<0\)) or XY\({}_{2}^{*}\) (\(t>0\)) phases if \([\mathcal{W}_{-}]>[\mathcal{V}_{-}]\). If the \(c=2\) theory survived to the point where \([\mathcal{W}_{-}]=[\mathcal{V}_{-}]\) (at some putative value \(\lambda_{4}^{c}\), say), then it would open up a direct transition between the phases XY\({}_{2}\) and XY\({}_{2}^{*}\). The \(c=\frac{3}{2}\) lines discussed previously, which separate the phases XY\({}_{1}\) and XY\({}_{2}\) (\(t<0\)) and XY\({}_{1}^{*}\) and XY\({}_{2}^{*}\) (\(t>0\)), would meet the line \(t=0\) at this point \(\lambda_{4}^{c}\). Alternatively, the gapless theory becomes unstable before this can happen, giving us the situation shown in Fig. 15, which we observe in our numerical investigation. We postulate that there is some proximate parameter regime of our microscopic Hamiltonian where \(\lambda_{3}^{c}>\lambda_{4}^{c}\) can be realized. In this case, we should see a phase diagram as shown in Fig. 16, which contains all the same phases as in Fig. 15 but also a direct transition between XY\({}_{2}\) and XY\({}_{2}^{*}\).
Figure 16: A nice proximate phase diagram at large \(J\) suggested by bosonization.
## Appendix C Bosonizing string operators
### Bosonizing \(C(x,y)\) for small \(J\)
Bosonizing string order parameters is known to be tricky and rife with ambiguities [87; 88]. Let us try to naively apply Eq. (5) to bosonize the string operator in Eq. (19) in the small-\(J\) limit.
\[C(x,y)\propto e^{\pm i\pi\sum_{l=x}^{y}S_{l}^{z}}\sim e^{\pm\frac{i}{2}\int_{x}^{y}ds\ \partial_{s}\phi(s)}\\ \sim e^{\pm\frac{i}{2}(\phi(x)-\phi(y))}. \tag{111}\]
Equation (111) leads to the conclusion that \(\langle C(x,y)\rangle\neq 0\) whenever \(\phi\) is pinned, in particular both in the Haldane and in the trivial phases. This is incorrect. We now use symmetries to identify the correct bosonized form of \(C(x,y)\). We begin by postulating the following general bosonized form for \(C(x,y)\)
\[C(x,y) \sim C_{L}(x)\ C_{R}(y)\ \text{where}\, \tag{112}\] \[C_{L/R}(x) \sim\sum_{m\in\mathcal{Z}}A_{m}^{L/R}\ e^{\frac{i}{2}m\phi(x)}. \tag{113}\]
While the form in Eq. (112) appears as though the string operator \(C(x,y)\) has been written in terms of local operators with support at \(x\) and \(y\), this is not so. The half-integer prefactors \(\frac{m}{2}\) multiplying the fields ensure that the operators in \(C_{L/R}\) are not part of the spectrum of local operators \(\mathcal{X}_{m,n}\equiv\exp\left(i\left(m\theta+n\phi\right)\right)\) and are therefore nonlocal. Furthermore, we have used the fact that \(C(x,y)^{2}=1\) to restrict the prefactors to multiples of \(\frac{1}{2}\). We now impose constraints on \(A_{m}^{L/R}\) using symmetry. First, observe that the end-points of \(C(x,y)\), defined in terms of spin operators as shown in Eq. (19), are charged under \(\mathbb{Z}_{2}^{R}\) (\(S_{j}^{z}\mapsto-S_{j}^{z}\)). Using the action of \(\mathbb{Z}_{2}^{R}\) on the boson fields shown in Table 2, we obtain a constraint on \(A_{m}^{L/R}\) as
\[\mathbb{Z}_{2}^{R}:C_{L/R}\xrightarrow{\phi\mapsto-\phi}-C_{L/R}\implies A_{m}^{L/R}=-A_{-m}^{L/R}. \tag{114}\]
We now impose the action of \(\mathbb{Z}_{2}^{P}\) shown in Table 7 on the bosonized form of \(C(x,y)\) using Table 2 which gives a relationship between \(A_{m}^{L}\) and \(A_{m}^{R}\) as
\[\mathbb{Z}_{2}^{P}:C_{L}(x)\xrightarrow{\phi(x)\mapsto-\phi(-x) }C_{R}(-x)\\ \implies A_{m}^{R}=\ A_{-m}^{L}=-A_{m}^{L}. \tag{115}\]
Using Eqs. (114) and (115) in Eq. (113), we get the final bosonized form for \(C(x,y)\sim C_{L}(x)C_{R}(y)\) with \(C_{L}(x)=-C_{R}(x)\) and
\[C_{L}\sim\sum_{m\in\mathcal{Z}^{+}}\alpha_{m}\sin\left(\frac{m \phi}{2}\right)\approx\alpha_{1}\sin\left(\frac{\phi}{2}\right). \tag{116}\]
where the coefficients \(\alpha_{m}\) are linear combinations of \(A_{m}^{L/R}\). This correctly reproduces the numerically observed behaviour of \(\langle C(x,y)\rangle\), which is nonzero when \(\langle\phi\rangle=\pi\) such as in the Haldane phase but not when \(\langle\phi\rangle=0\) such as in the trivial phase.
### Bosonizing \(C(x,y)\) for large \(J\)
We now bosonize the string operator in the large-\(J\) version. We follow the same line of reasoning as shown previously for the small \(J\) version. Let us begin by attempting to bosonize \(C(x,y)\) using the formulas shown in Eq. (9):
\[C(x,y) \propto e^{i\pi\left(\sum_{l=x}^{y-1}S_{1,l}^{z}+\sum_{l=x+1}^{y}S_{2,l}^{z}\right)}\sim e^{\pm\frac{i}{2}\int_{x}^{y}ds\ \partial_{s}(\phi_{1}+\phi_{2})}\] \[\sim e^{\frac{i}{2}(\phi_{1}(x)+\phi_{2}(x))}\ e^{-\frac{i}{2}(\phi_{1}(y)+\phi_{2}(y))}. \tag{117}\]
We may just as well have gone a different route to get
\[C(x,y) \propto e^{i\pi\left(\sum_{l=x}^{y-1}S_{1,l}^{z}-\sum_{l=x+1}^{y}S_{2,l}^{z}\right)}\sim e^{\frac{i}{2}\int_{x}^{y}ds\ \partial_{s}(\phi_{1}-\phi_{2})}\] \[\sim e^{\frac{i}{2}\left(\phi_{1}(x)-\phi_{2}(x)\right)}\ e^{-\frac{i}{2}\left(\phi_{1}(y)-\phi_{2}(y)\right)}. \tag{118}\]
The bosonized expressions in Eqs. (117) and (118) lead to very different physics. We have \(\langle C(x,y)\rangle\neq 0\) when \(\langle\phi_{1}+\phi_{2}\rangle\neq 0\) according to Eq. (117) and when \(\langle\phi_{1}-\phi_{2}\rangle\neq 0\) according to Eq. (118) which corresponds to very different phases as seen in Fig. 15. Now we use symmetries to write down the correct bosonized form of \(C(x,y)\). We again begin by postulating the following form for \(C(x,y)\)
\[C(x,y) \sim C_{L}(x)\ C_{R}(y)\ \text{where}\, \tag{119}\] \[C_{L/R}(x) \sim\sum_{m,n\in\mathcal{Z}}A_{m,n}^{L/R}\ e^{\frac{i}{2}(m\phi_{ 1}(x)+n\phi_{2}(x))}. \tag{120}\]
We now impose constraints on \(A_{m,n}^{L/R}\) using symmetry. First, we use the fact that the end-points of \(C(x,y)\) are charged under \(\mathbb{Z}_{2}^{R}\) (\(S_{\alpha j}^{z}\mapsto-S_{\alpha j}^{z}\)). Using the action of \(\mathbb{Z}_{2}^{R}\) on the boson fields shown in Table 2, we get
\[\mathbb{Z}_{2}^{R}:C_{L/R}(x)\xrightarrow{\phi_{\alpha}\mapsto- \phi_{\alpha}}- C_{L/R}(x)\\ \implies A_{m,n}^{L/R}=-A_{-m,-n}^{L/R}. \tag{121}\]
We now impose the action of \(\mathbb{Z}_{2}^{P}\) shown in Table 7 on the bosonized form of \(C(x,y)\) using Table 2 which gives a relationship between \(A_{mn}^{L}\) and \(A_{mn}^{R}\) as
\[\mathbb{Z}_{2}^{P}:C_{L}(x)\xrightarrow{\substack{\phi_{1}(x)\mapsto\pm\pi-\phi_{2}(-x)\\ \phi_{2}(x)\mapsto\pi-\phi_{1}(-x)}}C_{R}(-x)\implies\\ A_{m,n}^{R}=\pm(i)^{m+n}\ A_{-n,-m}^{L}=\mp(i)^{m+n}\ A_{n,m}^{L}. \tag{122}\]
Equations (121) and (122) are mutually compatible for non-zero \(A\) iff \((m+n)\) is even. Note that we have allowed a sign ambiguity in the action of \(\mathbb{Z}_{2}^{P}\), \(\phi_{1}\mapsto\pm\pi-\phi_{2}\), which results in a harmless overall multiplicative sign factor in the final answer. Using these in Eq. (120), we obtain the final bosonized form of \(C(x,y)\sim C_{L}(x)C_{R}(y)\) with \(C_{L}(x)=\pm C_{R}(x)\) and
\[C_{L} \approx \alpha\sin\left(\frac{\phi_{1}+\phi_{2}}{2}\right)\,+\,\beta\sin \left(\frac{\phi_{1}-\phi_{2}}{2}\right). \tag{123}\]
where we have only shown the operators with the smallest scaling dimensions, and the coefficients \(\alpha,\beta\) are linear combinations of the \(A_{m,n}^{L/R}\). This reproduces the observations in Section V that \(\langle C(x,y)\rangle\neq 0\) when \(\langle\phi_{1}\pm\phi_{2}\rangle=\pi\), i.e., in the Haldane and XY\({}_{2}^{*}\) phases.
### Bosonizing \(U(\pi)\)
We can obtain the bosonized form of the symmetry operator \(U(\pi)\), defined on a finite interval \(x\in[0,L]\) and used in the main text, by arguments similar to those above, treating it as a string operator defined for an arbitrary interval. In the small-\(J\) limit, we can postulate the following form
\[U(\pi) \sim U_{L}\ U_{R}, \tag{101}\] \[U_{L/R} \sim\sum_{m}B_{m}^{L/R}e^{\frac{i}{2}m\phi}. \tag{102}\]
Unlike \(C(x,y)\) which has \(\mathbb{Z}_{2}^{R}\) charged end-points, \(U_{L/R}\) do not carry any charge. Thus, we have
\[\mathbb{Z}_{2}^{R}:U_{L/R}\xrightarrow{\phi\mapsto-\phi}U_{L/R}\implies B_{m}^{L/R}=B_{-m}^{L/R}. \tag{103}\]
Imposing the action under \(\mathbb{Z}_{2}^{P}\), we get
\[\mathbb{Z}_{2}^{P}:U_{L}(x)\xrightarrow{\phi(x)\mapsto-\phi(-x)} U_{R}(-x)\\ \implies B_{m}^{R}=\ B_{-m}^{L}=B_{m}^{L}. \tag{104}\]
Using Eqs. (103) and (104), we get
\[U_{L/R}\sim\beta\cos\frac{\phi}{2}+\ldots \tag{105}\]
where we have shown only the operator with the smallest scaling dimension, and \(\beta\) is some combination of the \(B_{m}^{L/R}\). In the large-\(J\) limit, we can postulate the form
\[U(\pi) \sim U_{L}\ U_{R}, \tag{106}\] \[U_{L/R} \sim\sum_{m,n}B_{m,n}^{L/R}\ e^{\frac{i}{2}(m\phi_{1}(x)+n\phi_{ 2}(x))}. \tag{107}\]
Again, imposing \(\mathbb{Z}_{2}^{R}\) invariance of the endpoints, we get
\[\mathbb{Z}_{2}^{R}:U_{L/R}(x)\xrightarrow{\phi_{\alpha}\mapsto- \phi_{\alpha}}U_{L/R}(x)\\ \implies B_{m,n}^{L/R}=B_{-m,-n}^{L/R}. \tag{108}\]
The action of \(\mathbb{Z}_{2}^{P}\) further gives us
\[\mathbb{Z}_{2}^{P}:U_{L}(x)\xrightarrow{\substack{\phi_{1}(x)\mapsto\pm\pi-\phi_{2}(-x)\\ \phi_{2}(x)\mapsto\pi-\phi_{1}(-x)}}U_{R}(-x)\implies\\ B_{m,n}^{R}=\pm(i)^{m+n}\ B_{n,m}^{L}. \tag{109}\]
Again, Eqs. (108) and (109) are mutually compatible for non-zero \(B\) iff \((m+n)\) is even, and we have retained the sign ambiguity in the action of \(\mathbb{Z}_{2}^{P}\), as before when we bosonized \(C(x,y)\). Using these in Eq. (107), we get the final form \(U(\pi)\sim U_{L}U_{R}\) with \(U_{L}=\pm U_{R}\) and
\[U_{L}\approx\gamma\ \cos\left(\frac{\phi_{1}+\phi_{2}}{2}\right)+\delta\ \cos\left(\frac{\phi_{1}-\phi_{2}}{2}\right). \tag{110}\]
where we have only shown operators with the smallest scaling dimensions, and the coefficients \(\gamma,\delta\) are linear combinations of \(B_{m,n}^{L/R}\).
|
2309.11608 | Dataset Factory: A Toolchain For Generative Computer Vision Datasets | Generative AI workflows heavily rely on data-centric tasks - such as
filtering samples by annotation fields, vector distances, or scores produced by
custom classifiers. At the same time, computer vision datasets are quickly
approaching petabyte volumes, rendering data wrangling difficult. In addition,
the iterative nature of data preparation necessitates robust dataset sharing
and versioning mechanisms, both of which are hard to implement ad-hoc. To solve
these challenges, we propose a "dataset factory" approach that separates the
storage and processing of samples from metadata and enables data-centric
operations at scale for machine learning teams and individual researchers. | Daniel Kharitonov, Ryan Turner | 2023-09-20T19:43:37Z | http://arxiv.org/abs/2309.11608v1 | # Dataset Factory: A Toolchain For Generative Computer Vision Datasets
###### Abstract
Generative AI workflows heavily rely on data-centric tasks--such as filtering samples by annotation fields, vector distances, or scores produced by custom classifiers. At the same time, computer vision datasets are quickly approaching petabyte volumes, rendering data wrangling difficult. In addition, the iterative nature of data preparation necessitates robust dataset sharing and versioning mechanisms, both of which are hard to implement ad-hoc. To solve these challenges, we propose a "dataset factory" approach that separates the storage and processing of samples from metadata and enables data-centric operations at scale for machine learning teams and individual researchers.
## 1 Introduction
The proliferation of visual foundation models like Stable Diffusion [9], Florence [12], and SwinTransformer [5] is rapidly shifting the research focus in computer vision towards data selection and data curation tasks. This movement to Data-Centric AI (DCAI) assumes that downstream task accuracy can be improved by feeding expressive models with more data and refined metadata (e.g., as demonstrated by Zhai et al. [13]). While research in DCAI remains a new and exciting field, some early conclusions point towards the importance of continuous data training, actionable data monitoring, end-to-end versioning, and tighter control for metadata [7]. Meanwhile, researchers aiming to work with large-scale datasets in computer vision experience significant hurdles in implementing these "best practices" at scale. As an illustrative example, the prestigious DataComp competition built around the Common Crawl data features four scale tracks (12.8M, 128M, 1.28B, and 12.8B samples [3]), but in the first few months, the leaderboard for the top two tracks has only attracted submissions from Meta and the DataComp team itself [2].
We will briefly highlight some of the practical difficulties of doing DCAI on generative datasets below.
### Storage and access to data
Storage is the most immediate issue when working with large-scale computer vision tasks. For example, the full LAION-5B dataset takes up to 0.5PB in volume for data plus metadata [10]. Therefore, working with such a dataset is feasible only using cloud storage or a networked-attached storage cluster, preferably with minimal ETL (extract, transform, and load) operations. Most inexpensive networked storage solutions, however, feature bottlenecks in both download speed and object GET request rates. The latter limitation is especially problematic for large datasets because each sample is typically accompanied by multiple metadata files (JSON, vector embeddings, etc.). As a result, even a simple task of downloading 5.8 billion image-text pairs from the cloud may take many days to complete.
The rate-limit bottleneck in the cloud can be mitigated by grouping samples and metadata into archives (tar, parquet, or compressed.npz formats) to reduce the total number of files. The trade-off in this method is losing random access to particular samples or features inside the archives. This limitation breaks most solutions that rely on constructing a URI to pinpoint the location of every sample or attribute in the storage.
### Dataset sharing and versioning
Curation of a generative dataset is a multi-step process by design and typically involves stages specialized for the removal of NSFW images (i.e., adult content), detection of near-duplicates, digital privacy preservation, image clustering, and more. Each stage is usually driven by an auxiliary ML model and produces a subset (or a superset in the case of data augmentation) of the original data sources. The necessity of experimenting on these curation tasks creates the dual problems of dataset versioning (keeping track of the code versions plus the experiment outcomes) and dataset sharing (passing results of each stage to a different machine or another research team).
However, most computer vision datasets are not stored in a format that allows for easy versioning and sharing. For instance, the LAION-5B dataset is often shipped as a directory tree of images and their metadata in the Apache Parquet files, so sharing a subset of LAION requires repackaging and duplicating the original images and their metadata. Some older computer vision datasets (e.g., COCO [4]) appear more flexible because they offer a common JSON-based index of references that can be amended to create the derivatives; yet such formats do not support storing code alongside data, and (de-)serialization of large generative datasets into a JSON file can be a daunting task.
### Persistent auxiliary features
Generative computer vision datasets are often distributed with some pre-computed image attributes (e.g., NSFW scores, vector embeddings, or cosine similarity between the images and their captions). However, downstream filtering stages will almost certainly require more features: for example, aesthetic scores, clustering model coordinates, chromatic identifiers, etc. In many cases, it makes sense to save these (newly computed) auxiliary attributes for later reuse. Unfortunately, storage-bound dataset formats are often ill-suited for this task because the new attributes must be repackaged back into the original containers (such as JSON files or Apache Parquet tables)--which is inconvenient and slow.
### Data provenance and incremental updates
In its simplest form, a DCAI workflow is a linear pipeline of the processing stages that starts in storage and terminates in a data loader. A more complex DCAI workflow is best represented as a graph where multiple data sources are linked together to form a final dataset. One good example of this arrangement is the "bring your own data" (BYOD) track in the DataComp competition.
Since many DCAI stages are computationally intensive, any efficient data management system must track the _provenance_ [8, 11] of every sample within the DCAI graph. This serves two separate objectives: (1) re-run downstream processing stages only for new (previously unseen) samples, and (2) when changing intermediate-stage code, a model, or a data source, re-run only the affected downstream stages. A minimal sketch of the bookkeeping this requires is shown below.
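The following toy snippet illustrates fingerprint-based invalidation; the function names and the in-memory cache are illustrative assumptions, not part of any particular tool's API:

```python
import hashlib

cache = {}  # stage name -> fingerprint of the last successful run

def fingerprint(code, input_ids):
    # A stage is identified by its code and the identities of its inputs.
    return hashlib.sha256((code + "|".join(input_ids)).encode()).hexdigest()

def run_stage(name, code, input_ids, fn):
    key = fingerprint(code, input_ids)
    if cache.get(name) == key:
        return "skipped (up to date)"   # nothing changed upstream
    result = fn()                       # expensive recomputation
    cache[name] = key
    return result

# Changing either the code or an input id triggers a re-run:
print(run_stage("nsfw-filter", "v1", ["laion5b.v1"], lambda: "ran"))
print(run_stage("nsfw-filter", "v1", ["laion5b.v1"], lambda: "ran"))
print(run_stage("nsfw-filter", "v2", ["laion5b.v1"], lambda: "ran"))
```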
## 2 The "Dataset Factory" approach
The key idea behind the proposed _dataset factory_ (DF) architecture, intended to solve the aforementioned problems, is to exploit a fundamental difference between metadata and raw data: Metadata forms the basis for data curation and is queried frequently; meanwhile, the actual samples are only read when computing new signals or when instantiating a dataset for the ultimate model training. Moreover, while one sample may be associated with many attributes or features, the cumulative storage requirements for metadata to store them are typically an order of magnitude less than for storage of samples.1
Footnote 1: As a practical example, the DataComp dataset at the xlarge scale stores 450TB of data in tar files, but only 3TB of parquet metadata.
This asymmetry suggests data and metadata must be treated differently. We find that a good model for working with large generative datasets is to represent them as tables with rows representing samples as pointers into storage, and with columns containing the sample attributes.
Simultaneously, the requirement to handle very large datasets dictates a switch from in-memory solutions such as pandas (where memory limitations have been clearly documented [6]) to the use of database technologies for the back-end dataset representation.
The initial formation of a database table representing a dataset can be done with a hybrid-ETL process that allows a sample to be sourced directly from storage (via appropriate cloud and archive format adapters), while the metadata is read, parsed, and stored in the columnar format in the same way the traditional ETL process works.
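As a purely illustrative sketch of one such hybrid-ETL row, assuming samples are addressed by an archive URI plus byte offset and that metadata arrives as JSON (the field names are assumptions, not a fixed schema):

```python
import json

def etl_row(archive_uri, offset, raw_json):
    # The sample itself stays in cloud storage and is referenced by a
    # (archive, offset) pointer; only the parsed metadata becomes columns.
    meta = json.loads(raw_json)
    return {"source": archive_uri, "offset": offset,
            "caption": meta.get("caption"), "nsfw": meta.get("nsfw")}

print(etl_row("s3://laion/part-00000.tar", 1536,
              '{"caption": "a cat", "nsfw": 0.01}'))
```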
Since the bulk of data under this approach is not moved, it allows for efficient sharing and versioning where DCAI stages can pass the (relatively compact) tables to each other. Accessing a working dataset from any stage then equates to pulling a named table. Assuming that all team members have credentials to the underlying storage systems, this approach also solves the problem of efficient collaboration within the research group. This simple idea forms the basis for the bulk of the dataset factory features we highlight in the following.
### Data selection and signal processing
As described above, the datasets in a dataset factory are internally represented as database tables. As a result, they accept the typical parallelized filtering and analytical expressions in Python or SQL syntax. Creating a new signal (for instance, running a new embedding model) in this paradigm is equivalent to writing a user-defined function (UDF) to operate on data samples and produce new features. Some UDFs (including parsing tuples of samples and their JSON metadata, or computing embedding vectors) are fairly standard and are bundled with the factory. Other UDFs are user-supplied and can be completely custom.
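A self-contained toy version of this row-plus-UDF idea, independent of the actual DF implementation (all names below are illustrative):

```python
rows = [
    {"source": "s3://bucket/shard-0.tar", "offset": 0,    "size": 1200},
    {"source": "s3://bucket/shard-0.tar", "offset": 4096, "size": 800},
]

def add_signal(rows, udf):
    # A signal UDF maps one row to a dict of new columns.
    return [{**row, **udf(row)} for row in rows]

def is_large(row):
    # Stand-in for a model-based signal such as an aesthetic score.
    return {"is_large": row["size"] > 1000}

print(add_signal(rows, is_large))
```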
### Dataset production and provenance
A dataset factory can use any suitable analytical or vector database according to performance requirements. However, unlike a typical database which treats tables as mutable entities, DF assumes datasets to be immutable. It also associates the dataset with the data sources and the code that was used to construct it. This ensures experiment reproducibility2 and provenance tracking. Changing any dataset in the
DF workflow always creates a new dataset version, and may trigger the execution of the dependent downstream stages. Operation in this manner is similar to a factory pipeline (hence the name "dataset factory"), where each DCAI stage produces a new named dataset, and any upstream changes (in source data, metadata, or the handling instructions of models) result in a new dataset version.
## 3 Workflow Example
For a practical example of a dataset factory workflow, we consider the LAION-5B dataset. Let us assume the data is stored in the cloud with images in tar files and metadata in the compressed .npz format, and we intend to perform the usual DCAI steps: filter based on some existing (pre-computed) attributes, filter based on the output of a custom ML model, and serve the final dataset for training.
### ETL stage
The first stage of the dataset factory working on LAION is to extract the attributes, match them with the respective samples, and construct the schema to access the samples on demand. Any attributes stored in separate collections are also joined in this stage. As a result, after the ETL the LAION dataset is represented as a named table laion5b.v1, seen in Figure 1. Note that references to data samples point to the tar file offsets in their respective cloud storage locations, while signals originally stored in the cloud alongside samples (not shown) were unpacked and copied into the table columns.
### Running Queries
Assuming we saved our table representation of LAION-5B as a named dataset, filtering by an existing attribute becomes as simple as applying a simple Boolean expression to all samples:
data_uri - "lain5b.v1" large_images - DatasetQuery(data_uri). \ filter(size > 1000)
In the next stage, we might want to employ a custom ML model. For example, we might be interested in enriching the dataset using fashion-CLIP [1] representations. The dataset factory API allows for a seamless addition of extra signals from a UDF purely within a Python interface that is very similar to the familiar AI coding paradigm:
fclip - FashionCLIP("fashion-clip") large_images. \ add_signals(fclip.encode_images) Once the embedding signal is in place, we can easily filter it down to, for example, the five hundred samples most similar to an exemplar image:
```python
target, = fclip.encode_images([exemplar])
similar_images = large_images. \
    mutate(dist=cos_dist(fclip_embed, target)). \
    order_by(dist).limit(500)
```

While the syntax for computing a new attribute remains as simple as an expression for filtering the pre-existing signal, under the hood the sequence of operations is different. An embedding ML model requires access to the original samples, and the dataset factory takes care of extracting the data from storage, passing it to the embedder, and saving the results back into the dataset:
```python
ds_uri = similar_images.save("most-similar")
```

After this last operation, the backend database will feature a table with an additional column denoting the fashion-CLIP vector (Figure 2). Saving the dataset object as a new database table enables persistent and shared access to the intermediate results of any processing stage:
```python
very_large_images = DatasetQuery(ds_uri). \
    filter(size > 5000)
```
## 4 Passing data to the training stage
So far we have described the dataset factory operation as a hierarchy of named datasets separated by the code executions: ETL \(\rightarrow\) filter 1 \(\rightarrow\) enrichment 2 \(\rightarrow\) filter 3, and so on. The final stage of the dataset factory terminates in a data loader object that can be passed directly to popular AI frameworks and supports distributed
Figure 1: DF table representing the LAION dataset after the ETL.
Figure 2: Dataset columns after adding a signal (fragment).
training. As usual, the dataset factory data loader hides the fact that the actual samples reside in the cloud and performs the necessary data extraction steps. To speed up repeated access to samples, the data loader also employs a local cache and (optionally) an intermediate cache (Figure 3).
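A toy sketch of the tiered lookup such a loader performs (dictionaries stand in for the cache tiers and cloud storage; the names are illustrative):

```python
def fetch(key, local, intermediate, cloud):
    # Try the cache tiers in order; fall back to cloud storage and
    # populate the local cache on a miss.
    for tier in (local, intermediate):
        if key in tier:
            return tier[key]
    blob = cloud[key]
    local[key] = blob
    return blob

local, intermediate = {}, {}
cloud = {("shard-0.tar", 4096): b"...image bytes..."}
print(fetch(("shard-0.tar", 4096), local, intermediate, cloud))  # cloud hit
print(fetch(("shard-0.tar", 4096), local, intermediate, cloud))  # local hit
```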
## 5 Conclusions
The dataset factory is a data representation and access approach designed for DCAI and architected to address the issues arising from curating large generative datasets. The approach is built on separating data from metadata and employs the "dataset as a table" abstraction to reference data samples along with their associated features. Using this abstraction, the dataset factory effectively solves the dataset versioning and sharing problem, which enables collaborative data wrangling at a very large scale.
We are providing our implementation of the dataset factory as an open-source project and hope it will become a useful tool for AI teams in both the industry and academia.
|
2309.10516 | Evaluating the Benefits: Quantifying the Effects of TCP Options, QUIC,
and CDNs on Throughput | To keep up with increasing demands on quality of experience, assessing and
understanding the performance of network connections is crucial for web service
providers. While different measures, like TCP options, alternative transport
layer protocols like QUIC, or the hosting of services in CDNs, are expected to
improve connection performance, no studies are quantifying such impacts on
connections on the Internet.
This paper introduces an active Internet measurement approach to assess the
impacts of mentioned measures on connection performance. We conduct downloads
from public web servers considering different vantage points, extract
performance indicators like throughput, RTT, and retransmission rate, and
survey speed-ups due to TCP option usage. Further, we compare the performance
of QUIC-based downloads to TCP-based downloads considering different option
configurations.
Next to significant throughput improvements due to TCP option usage, in
particular TCP window scaling, and QUIC, our study shows significantly
increased performance for connections to domains hosted by different giant
CDNs. | Simon Bauer, Patrick Sattler, Johannes Zirngibl, Christoph Schwarzenberg, Georg Carle | 2023-09-19T10:55:04Z | http://arxiv.org/abs/2309.10516v1 | # Evaluating the Benefits: Quantifying the Effects of TCP Options, QUIC, and CDNs on Throughput
###### Abstract.
To keep up with increasing demands on quality of experience, assessing and understanding the performance of network connections is crucial for web service providers. While different measures, like TCP options, alternative transport layer protocols like QUIC, or the hosting of services in CDNs, are expected to improve connection performance, no studies are quantifying such impacts on connections on the Internet.
This paper introduces an active Internet measurement approach to assess the impacts of mentioned measures on connection performance. We conduct downloads from public web servers considering different vantage points, extract performance indicators like throughput, RTT, and retransmission rate, and survey speed-ups due to TCP option usage. Further, we compare the performance of QUIC-based downloads to TCP-based downloads considering different option configurations.
Next to significant throughput improvements due to TCP option usage, in particular TCP window scaling, and QUIC, our study shows significantly increased performance for connections to domains hosted by different giant CDNs.
TCP options, QUIC, CDNs, performance measurements, Internet measurements

Footnote †: (c) 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0274-7/23/07...$15.00 [https://doi.org/10.1145/30/60464.306474](https://doi.org/10.1145/30/60464.306474)
## 1. Introduction
Due to its impact on user satisfaction, understanding the performance of connections and the impact of potential performance improvements is crucial for service and infrastructure providers. The same applies from a research perspective to assess the effectiveness of arising or widely deployed measures to improve connection performance.
The transmission control protocol (TCP), responsible for the majority of Internet traffic (Bahani et al., 2017), has been extended with several options proposed to address the performance shortcomings of its original design. Further, QUIC and HTTP3 represent an emerging alternative to the commonly used TCP/HTTPS stack (Bahani et al., 2017). This development also motivates a closer look at performance differences between QUIC and TCP connections targeting the same resources in productive deployments, i.e., on the Internet. Next to protocol usage and configuration, the hosting of service infrastructure is crucial to ensure availability and performance. This led to the trend towards a more centralized Internet, i.e., more services being hosted in large-scale content delivery networks (CDNs) (Bahani et al., 2017; Bahani et al., 2017; Bahani et al., 2017; Bahani et al., 2017).
However, while there are publications surveying deployments and usage of different TCP options (Bahani et al., 2017; Bahani et al., 2017; Bahani et al., 2017), comparing the performance of TCP and QUIC connections in controlled test environments (Bahani et al., 2017; Bahani et al., 2017), or focusing on optimizations of both protocol stacks (Bahani et al., 2017), there are no insights into the impact of TCP option usage, QUIC usage, or CDN hosting on the performance of Internet connections.
This paper assesses the impacts of the named measures on connection performance by conducting active measurements with public Internet servers. Thereby, we exploit the capability of active Internet measurements to control client configurations and target selection.
Our contributions in this work are:
1. We introduce an active measurement approach for public Internet web servers covering crawling of suitable measurement targets, conducting downloads with different client configurations, and analyzing the performance of connections by extracting different performance indicators.
2. We apply the introduced approach to measurements from different vantage points on a set of publicly available web servers chosen from Internet top lists and discuss corresponding measurement results in this paper.
3. We publish the implemented measurement pipeline.
## 2. Background
_Performance-related TCP options._ In this paper, we consider three options addressing the performance of TCP connections: window scaling (WS), selective acknowledgments (SACK), and explicit congestion notifications (ECN). TCP window scaling (Kulner et al., 2017) raises the limit on the number of bytes in flight implied by the 16-bit window field in the TCP header from 64 KB up to 1 GB. To this end, TCP WS exchanges a scaling factor during connection establishment by which all following values of the window field are shifted.
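As a minimal sketch (our illustration, not part of the measurement pipeline), the negotiated factor expands the effective window like so:

```python
# Effective receive window under TCP window scaling (RFC 7323):
# the 16-bit window field is left-shifted by the negotiated factor.

def effective_window(window_field: int, scale: int) -> int:
    assert 0 <= window_field <= 0xFFFF and 0 <= scale <= 14
    return window_field << scale

print(effective_window(0xFFFF, 0))   # 65535 bytes (~64 KB), no scaling
print(effective_window(0xFFFF, 14))  # 1073725440 bytes (~1 GB), maximum factor
```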
Selective acknowledgments (Kulner et al., 2017) aim to avoid unnecessary retransmissions by specifying lost packets. This is achieved by exchanging ranges of sequence numbers of successfully received packets. Today, WS and SACK are enabled on all major operating systems like Linux, macOS, or Windows (Bauer et al., 2019).
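As a hypothetical illustration of the mechanism (not the paper's code), a receiver can summarize out-of-order data above the cumulative ACK point as ranges, with TCP capping the number of SACK blocks per header at three:

```python
# Derive SACK-style (start, end) ranges from segment numbers received
# above the cumulative ACK point; TCP carries at most three such blocks.

def sack_blocks(received, cum_ack):
    blocks, start, prev = [], None, None
    for seq in sorted(s for s in received if s >= cum_ack):
        if start is not None and seq != prev + 1:
            blocks.append((start, prev + 1))
            start = None
        if start is None:
            start = seq
        prev = seq
    if start is not None:
        blocks.append((start, prev + 1))
    return blocks[:3]

# Segment 101 was lost; 102 and 103 arrived out of order:
print(sack_blocks({100, 102, 103}, cum_ack=101))  # [(102, 104)]
```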
Explicit congestion notifications (ECN) (Kulner et al., 2017) are a measure to avoid overload on the network and, accordingly, lost packets by enabling routers to actively signal congestion. As routers do not access Layer 4 headers, ECN relies on two bits each in a packet's IP and TCP header. Permutations of these flags are then used to signal ECN support, detected congestion, and the reaction to observed congestion. In addition to the endpoints, all routers on a network path have to support ECN. Typical operating systems support the usage of ECN for at least incoming connections (Bauer et al., 2019).
Note that we do not consider TCP Fast Open (TFO) (Kulner et al., 2017) for this study. TFO is particularly effective if a client requests several resources from a server, which would otherwise result in several TCP handshakes and corresponding overhead. This use case does not match our approach of explicitly downloading single files from a target, as described in Section 4.
QUIC is a transport layer protocol specified in 2021 (Kulner et al., 2017; Kulner et al., 2017; Kulner et al., 2017), providing properties like reliable data transmission, connection migration, and encryption while relying only on UDP. QUIC implements selective acknowledgments by acknowledging ranges of packet numbers, indicating lost packets. In contrast to TCP, which limits selective acknowledgments to a maximum of three ranges of sequence numbers, QUIC supports up to 256 ranges. Further, QUIC supports ECN usage. Upper bounds for receiver window sizes differ between QUIC implementations.
## 3. Related Work
TCP options and their deployment are frequently addressed topics. Studies of TCP option deployments conducted in the early 2000s observed the evolution of option deployment, starting from only small adoption of TCP options by servers (Kulner et al., 2017; Kulner et al., 2017; Kulner et al., 2017). A study conducted in 2013 by Kuhlewind et al. (Kulner et al., 2017) reports widespread deployment of WS and SACK and observes a slower spread of ECN usage. This observation is confirmed by Murray et al. (Murray et al., 2017) in 2017, who observed only small usage of ECN in Internet traffic captured from a university network. More recent studies observe that ECN is used by the majority of domains listed in the Alexa Top 1M list (Kulner et al., 2017) and in passively captured university network traffic, respectively (Bauer et al., 2019). The interference of middleboxes with TCP options is surveyed by Honda et al. (Honda et al., 2018). Edeline and Donnet (Edeline and Donnet, 2018) survey the impact of TCP option usage in controlled test environments, showing the beneficial effects of TCP options.
According to W3Techs (Kulner et al., 2017) QUIC accounted for 8% of the total global Internet traffic in 2022. Shreedhar et al. (Shreedhar et al., 2020) compare QUIC to the TCP/TLS stack and observe significantly smaller connection duration for web workloads on the Internet. However, TCP option usage is not considered by the study. Further publications show that QUIC outperforms TCP in different controlled test environments (Kulner et al., 2017; Kulner et al., 2017; Kulner et al., 2017).
Additional studies survey quality of experience (QoE) metrics of different web applications based on passive data sets (Kulner et al., 2017; Kulner et al., 2017; Kulner et al., 2017). In contrast, our work analyzes transport layer performance based on active measurements. The reproduction of realistic web applications and web pages for performance measurements was studied by Jun et al. (Jun et al., 2019) and Zilberman et al. (Zilberman et al., 2019).
Considering the above state of the art, there are only limited insights into the impact of TCP option usage on the performance of Internet connections, while TCP options are commonly deployed. The same applies to the implications of QUIC usage and the impacts of CDN hosting on connection performance.
## 4. Approach
For our study, we download files provided by public web servers taken from Internet top lists with varying TCP options and QUIC. This section describes the different steps of our active measurement approach, like determining and selecting suitable measurement targets and conducting downloads with controlled client configurations. Considered performance indicators and other extracted metrics are introduced in Section 5.
### Determining Measurement Targets
Conducting active measurements with public and uncontrolled targets on the Internet requires crawling domains for suitable files to be downloaded. We consider a file suitable if it satisfies a specific minimum file size, intended to provide comparable results between different domains. For our study, we choose a minimum file size of 1 MB. While this is a relatively large file size considering the distribution of flow sizes observed in passive data sets (Bauer et al., 2019), the same study emphasizes the relevance of connections of this size in terms of the total observed bytes.
Based on suitable target domains and corresponding files, we compose a target set considering six different groups regarding CDN hosting. First, we consider domains hosted by different giant CDNs, i.e., Akamai, Amazon, Cloudflare, Google, and Microsoft (200 domains for each). Second, a sixth target group consists of targets not hosted by the listed organizations (1000 domains). We map domains to the selected CDN providers based on the IP addresses observed for a domain and their mapping to the announcing autonomous system (AS) based on BGP dumps from Route Views.
For each target domain, we first conduct a warm-up download (with no option enabled) to avoid bias between downloads of the same measurement run due to edge caching by CDNs. After conducting TCP downloads, we run downloads of the same file with different QUIC clients as elaborated in Section 5. Afterward, we continue with downloads from the following domain in the target list.
We examine a specific collection of TCP option configurations, which includes: (i) a baseline configuration (BL) that does not utilize any options, (ii) configurations supporting only one of the considered options (ECN, SACK, WS), and (iii) a configuration that enables all options (ALL). We use the maximum window scaling factor of 14 for all listed configurations supporting the option. To allow a fair comparison between TCP and QUIC, we only conduct TCP downloads via HTTPS, implying that both TCP and QUIC connections provide encrypted data transmission.
By design, this measurement approach is limited to controlling configurations and conditions at the client. This implies that the server and its characteristics, like the operating system, server implementation, or used congestion control algorithm, are not known. The same applies to load conditions on the Internet paths and at the target server. We conduct downloads with different configurations back to back for one domain, referred to as one measurement run. This procedure ensures that conditions in the network and on the server side are as similar as possible for all downloads of one run, e.g., regarding daytime patterns of service usage and corresponding load. Further, measured performance indicators are compared within one measurement run, for instance, to calculate speed-ups by a configuration.
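A minimal sketch of this per-run comparison (throughput values are illustrative, not measured data):

```python
# Speed-ups relate each configuration's mean throughput to the baseline
# (BL) download of the same measurement run, keeping conditions comparable.

run = {"BL": 12.1, "ECN": 12.4, "SACK": 12.3, "WS": 25.8, "ALL": 26.3}  # Mbit/s

speedups = {cfg: round(tput / run["BL"], 2)
            for cfg, tput in run.items() if cfg != "BL"}
print(speedups)  # {'ECN': 1.02, 'SACK': 1.02, 'WS': 2.13, 'ALL': 2.17}
```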
### Vantage Points
As connection performance also depends on the location of the vantage point (VP), considering the distance to target servers and last-mile network conditions, we use three different vantage points for our measurements: first, a physical server located in a campus data center in Munich (MUC) and, second, two virtual machines hosted by the cloud provider DigitalOcean in data centers located in San Francisco (SFO) and Singapore (SGP). The physical server hosted in Munich is connected with a 1 Gb/s up- and downlink to the German science network (DFN) that connects to the Internet via a major Tier 1 provider. The measurement host is equipped with an Intel Xeon E5-2630 CPU providing six physical cores at a clock frequency of 2.6 GHz, 32 GB memory, and a Broadcom NetXtreme BCM5719 Gigabit NIC. The virtual machines hosted in SFO and SGP are equipped with two virtual CPU cores and 4 GB memory.
### Ethical Considerations
Active measurements on public infrastructure like the Internet require responsible measurement practices. We followed a set of ethical measures, i.e., informed consent (Srivastava et al., 2017) and community best practices (Srivastava et al., 2017) during all our scans. Our measurement hosts' IP addresses can be identified via reverse DNS or WHOIS information, while the measurement host operates an explanatory website. We maintain an abuse contact email and react quickly to all requests, including the option to exclude a domain or IP range from further measurements. We use a custom HTTP user agent to be identifiable as a research group and follow crawling instructions in the robots.txt according to the Robots Exclusion protocol (Srivastava et al., 2017).
## 5. Implementation
This section describes the implementation of the different measurement pipeline components. The implemented pipeline is publicly available (Srivastava et al., 2017).
### Crawling and Conducting Downloads
Based on a set of domains, crawling aims to identify web servers providing suitable files, as described in Section 4. In order to determine files for downloads, the crawler recursively follows links found on a crawled website if the links explicitly point to the same domain and can be reached by the same IP address as the initially crawled website. Using the Python crawling library _Scrapy_, the crawling component sends HTTP HEAD requests and extracts the optional HTTP Content-Length field (Krishnan et al., 2017) to determine the size of a crawled file. First tests showed that many targets do not provide Content-Length information. Accordingly, we implemented a fallback that starts downloading a file and stops once the minimum file size was successfully downloaded.
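A simplified sketch of this size check using the _Requests_ library (names and structure are our own, not the _Scrapy_-based crawler itself):

```python
# Check whether a crawled file is suitable: prefer the Content-Length
# header from a HEAD request, otherwise stream until 1 MB has arrived.
import requests

MIN_SIZE = 1_000_000  # 1 MB

def is_suitable(url: str) -> bool:
    head = requests.head(url, timeout=10, allow_redirects=True)
    length = head.headers.get("Content-Length")
    if length is not None:
        return int(length) >= MIN_SIZE
    seen = 0
    with requests.get(url, stream=True, timeout=10) as resp:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            seen += len(chunk)
            if seen >= MIN_SIZE:
                return True
    return False
```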
Crawling targets results in a list of domains and corresponding files suitable for our downloads. The download component iterates over the list of determined files and conducts a download with each specified TCP configuration, respectively QUIC. Before a download is established, the downloader resolves the target domain's IP address to ensure an up-to-date resolution. Afterward, an HTTP GET request is sent to the freshly resolved target IP with the Python _Requests_ library. We use the _ForcedIPHTTPSAdapter_ (Krishnan et al., 2017) to enable the use of specific IP addresses while establishing a TLS/SSL connection. Download traffic is captured with _tcpdump_.
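A minimal sketch of such a pinned download, assuming the _ForcedIPHTTPSAdapter_ package's session-mount interface (traffic capture with _tcpdump_ is omitted here):

```python
# Download a file while forcing the TCP connection to a freshly resolved
# IP address; TLS certificate validation still uses the original hostname.
import socket

import requests
from forcediphttpsadapter.adapters import ForcedIPHTTPSAdapter

def download(domain: str, path: str) -> bytes:
    ip = socket.gethostbyname(domain)  # up-to-date resolution per download
    url = f"https://{domain}{path}"
    session = requests.Session()
    session.mount(url, ForcedIPHTTPSAdapter(dest_ip=ip))
    return session.get(url, timeout=60).content
```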
The downloader relies on the corresponding settings of the Linux kernel to configure TCP option usage. In particular, the downloader sets flags indicating ECN, SACK, and WS usage and sets the kernel's TCP receive memory size to enforce the use of a specific window scaling factor. To conduct QUIC downloads, we rely on the QUIC implementations _aioquic_ (Krishnan et al., 2017) and _quiche_ (Krishnan et al., 2017). Considering different QUIC implementations is motivated by a recently conducted study comparing the performance of QUIC implementations in controlled test environments (Krishnan et al., 2017). The corresponding measurement results show that _quiche_ outperforms _aioquic_ in high-bandwidth scenarios. This observation motivates surveying whether such performance differences also show up in downloads conducted on the Internet. Integrating additional QUIC implementations is a planned extension of our measurement pipeline.
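A minimal sketch of these kernel settings (assuming Linux and root privileges; the tcp_rmem values are illustrative and sized so that the kernel advertises the maximum scaling factor of 14):

```python
# Toggle the kernel knobs the downloader relies on. The maximum of
# tcp_rmem bounds the advertised receive window and thereby the window
# scaling factor the kernel chooses during connection establishment.
from pathlib import Path

SYSCTL = Path("/proc/sys/net/ipv4")

def configure(ecn: bool, sack: bool, ws: bool) -> None:
    (SYSCTL / "tcp_ecn").write_text("1" if ecn else "0")
    (SYSCTL / "tcp_sack").write_text("1" if sack else "0")
    (SYSCTL / "tcp_window_scaling").write_text("1" if ws else "0")
    if ws:
        # min, default, max receive memory; max ~1 GB -> scale factor 14
        (SYSCTL / "tcp_rmem").write_text("4096 87380 1073741824")
```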
### Traffic Analysis
For our study, we primarily examine the throughput of connections. We calculate the average throughput, in the following referred to as mean throughput, as the ratio of the amount of transmitted data to the duration of a connection. The amount of transferred data is determined by the sum of the packet sizes of a connection, as specified by the _total length_ field in the IP packet headers.
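A minimal sketch of this computation with _scapy_ (our helper, not the pipeline's actual implementation):

```python
# Mean throughput of a captured connection: transferred bytes (summed
# from the IP total length field) divided by the connection duration.
from scapy.all import IP, rdpcap

def mean_throughput(pcap_file: str) -> float:
    pkts = [p for p in rdpcap(pcap_file) if IP in p]
    total_bytes = sum(p[IP].len for p in pkts)
    duration = float(pkts[-1].time - pkts[0].time)
    return 8 * total_bytes / duration  # bit/s
```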
In addition, the traffic analysis component also extracts performance indicators like mean round trip time (RTT), total retransmission rate (RR), or goodput to further survey the performance characteristics of analyzed connections. We calculate the RTT based on the TCP timestamp option, which enables matching tuples of corresponding packets based on _TSval_ and _TSecr_ values. The difference of the corresponding packet timestamps then determines an RTT sample. We classify packets as retransmissions according to the Wireshark documentation, considering retransmissions, fast retransmissions, and spurious retransmissions (Wang et al., 2018). The retransmission rate is then calculated as the ratio of data transported by retransmitted packets to the total amount of observed data. Goodput is calculated as the ratio of the sum of packet sizes, excluding retransmissions, to the duration of a connection. Further, we extract different IP and TCP header fields to survey the effective use of TCP options, like the ECN echo and CWR flags or SACK blocks from the TCP header. For QUIC connections, we only consider mean throughput as a performance indicator, calculated in the same manner as for TCP.
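A simplified sketch of the timestamp-based RTT matching (one direction only, ignoring timestamp wrap-around; not the pipeline's implementation):

```python
# RTT samples from the TCP timestamp option: remember when the client
# first sent each TSval, then take the echoing TSecr on a returning
# packet to close one RTT measurement.
from scapy.all import IP, TCP, rdpcap

def rtt_samples(pcap_file: str, client_ip: str) -> list:
    sent, samples = {}, []
    for p in rdpcap(pcap_file):
        if IP not in p or TCP not in p:
            continue
        opts = dict(o for o in p[TCP].options if isinstance(o, tuple))
        if "Timestamp" not in opts:
            continue
        tsval, tsecr = opts["Timestamp"]
        if p[IP].src == client_ip:
            sent.setdefault(tsval, p.time)  # first packet carrying TSval
        elif tsecr in sent:
            samples.append(float(p.time - sent.pop(tsecr)))
    return samples
```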
## 6. Internet Measurements
This section evaluates performance indicators extracted during conducted downloads. In addition to connection performance, this section surveys the option deployment across domains in the Alexa Top 1M list.
### Option Deployment
To provide a recent view on option deployment on the Internet, we conduct active measurements to all domains included in the Alexa Top 1M list (the selected list contained 1M domains). Measurements are conducted by establishing TCP handshakes with the web server behind each domain. After establishing the handshake, we immediately terminate the TCP connection, as the handshake is sufficient to extract the supported options. 5.3 % of domains do not support a single option, while 81.0 % support all three considered options. ECN is supported by 85.8 %, SACK by 91.4 %, and WS by 91.1 % of the domains.
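A minimal sketch of such a handshake probe with _scapy_ (requires raw-socket privileges; the scanner we used may differ in detail):

```python
# Offer WS and SACK in a SYN, signal ECN via the ECE+CWR flags, and read
# the supported options back from the SYN-ACK. The connection is not
# completed, so no payload is ever requested.
from scapy.all import IP, TCP, sr1

def probe(ip: str, port: int = 443) -> dict:
    syn = IP(dst=ip) / TCP(dport=port, flags="SEC",
                           options=[("WScale", 14), ("SAckOK", b"")])
    synack = sr1(syn, timeout=3, verbose=False)
    if synack is None or TCP not in synack:
        return {}
    opts = [name for name, _ in synack[TCP].options]
    return {"WS": "WScale" in opts,
            "SACK": "SAckOK" in opts,
            "ECN": "E" in synack[TCP].flags}
```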
### Targets for Performance Measurements
To compose our target set, we crawl the top 100K entries of the Alexa Top 1M list. Crawling results in over 22K measurement targets providing a file of at least 1 MB. We select 2000 domains according to the organization maintaining the AS number of a domain, as described in Section 4, referred to as the TCP target set. We observe that not all downloads from successfully crawled domains succeed. This observation can be explained by crawled files no longer being available. Further, we observed that a significant share of domains hosted by Cloudflare did not result in successful downloads for the vantage points in SFO and SGP during previous measurements, while downloads succeeded from the vantage point in MUC. This observation indicates that Cloudflare blocked some of our download attempts, e.g., through human verification for selected IP ranges (Wang et al., 2018). However, we do not observe such an impact during the final measurements.
We find only negligible shares of domains resulting in successful QUIC downloads in the TCP target set. Therefore, we compose a second target set, referred to as the QUIC target set. As the Alexa Top 1M list was retrieved in February 2023, the QUIC target set comprises domains taken from the top 100K entries of Google's CrUX dataset (Wang et al., 2018). We determine domains supporting QUIC with the _QScanner_ introduced by Zirngibl et al. (Zirngibl et al., 2018). Based on the list of domains supporting QUIC, we choose targets providing a suitable file and supporting all considered TCP options. Finally, we merge QUIC-supporting targets from the Alexa-based TCP target set with the domains taken from the CrUX dataset. This procedure results in 558 suitable measurement targets. In the future, scanning for QUIC targets might be replaced by analyzing HTTPS DNS resource records, which, among other things, provide information on whether a domain supports QUIC. However, Zirngibl et al. have shown that the record is currently mainly used by Cloudflare (Zirngibl et al., 2018).
We run three measurement iterations for both target sets, where one iteration consists of one measurement run per domain. Note that measurements based on the QUIC target set only consider downloads with _aioquic_, _quiche_, TCP-BL, TCP-ALL, and a warm-up run. Table 1 lists the number of domains resulting in successful downloads during the first measurement iteration; the numbers of
**QUIC**

| VP | Total | Akamai | Amazon | Cloudflare | Google | Microsoft | Other |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MUC_Q | 511 | 3 | 15 | 289 | 2 | 0 | 202 |
| SFO_Q | 506 | 3 | 14 | 285 | 2 | 0 | 202 |
| SGP_Q | 495 | 3 | 13 | 276 | 2 | 0 | 201 |

Table 1. Number of domains resulting in successful downloads.
Figure 1. Measured KPIs per VP.
successful downloads for subsequent iterations vary only slightly, within a deviation smaller than 3 %.
### Comparison of Performance Indicators per Vantage Point
Conducting measurements from different vantage points reveals varying impacts on the observed performance, for instance, due to different traversed Internet paths and distances between vantage points and measurement targets. Therefore, we survey performance indicators independent of the client configuration for the used vantage points. Figure 1 shows the cumulative distribution function (CDF) of mean throughput and mean RTT for the three vantage points for all considered option configurations except warm-up runs. The VP in MUC results in a larger mean throughput up to about the 75th percentile of samples. Beyond that, the VPs in SFO and SGP show larger shares of downloads with significantly increased mean throughput compared to the VP in MUC. This observation correlates with the distribution of mean RTTs. In particular, we find that the VPs in SFO and SGP result in a significant share of mean RTTs smaller than 5 ms, which is not observed for the VP in MUC. However, the VP in MUC shows smaller mean RTTs for most of the remaining samples. We find that measurements conducted from SFO and SGP with mean RTTs smaller than 5 ms are associated with measurement targets hosted by Akamai, Cloudflare, and partly Amazon.
### Impact of TCP Options
To assess the impact of TCP option configuration on performance, we survey the CDF of mean throughput for each configuration, as shown in Figure 2. For the VP in MUC, we observe two groups of distributions: _(i)_ configurations without WS enabled and _(ii)_ configurations with WS enabled, where the latter show significantly larger mean throughput. The same observation applies to measurements conducted from the VPs in SFO and SGP, which additionally show slightly increased mean throughput for downloads conducted with ECN or SACK compared to the baseline configuration.
As the distribution of observed mean throughput does not directly show the difference between two downloads of the same measurement run, we calculate the share of measurements in which an option configuration results in a specific speed-up compared to the baseline configuration. Table 2 shows the shares of downloads resulting in a specific speed-up, gathered across all measurement iterations and vantage points. As also observed for the CDFs of mean throughput,
**TCP**

| Config. | vs. | + | - | 0.7 - 0.8 | 0.8 - 0.9 | 0.9 - 1.0 | 1.0 - 1.1 | 1.1 - 1.2 | 1.2 - 1.3 | 1.3 - 1.5 | 1.5 - 2 | >2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Warm-up | BL | 35.4% | 64.6% | 3.8% | 8.3% | 37.1% | 25.9% | 3.8% | 1.5% | 1.6% | 1.2% | 1.5% |
| ECN | BL | 53.3% | 46.7% | 2.1% | 5.3% | 34.0% | 35.0% | 6.1% | 2.3% | 2.6% | 2.2% | 5.1% |
| SACK | BL | 54.2% | 45.8% | 2.1% | 5.3% | 33.2% | 34.7% | 6.3% | 2.8% | 2.6% | 2.4% | 5.5% |
| WS | BL | 90.3% | 9.7% | 0.9% | 1.6% | 3.5% | 5.7% | 6.8% | 6.1% | 10.8% | 22.9% | 38.0% |
| ALL | BL | 91.4% | 8.6% | 1.0% | 1.3% | 3.2% | 5.6% | 6.8% | 5.7% | 10.2% | 22.8% | 40.2% |

**QUIC and TCP**

| Config. | vs. | + | - | 0.7 - 0.8 | 0.8 - 0.9 | 0.9 - 1.0 | 1.0 - 1.1 | 1.1 - 1.2 | 1.2 - 1.3 | 1.3 - 1.5 | 1.5 - 2 | >2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| quiche | aioquic | 70.0% | 30.0% | 4.0% | 2.8% | 2.9% | 4.0% | 4.6% | 5.9% | 7.7% | 8.9% | 38.9% |
| aioquic | TCP-BL | 59.7% | 40.3% | 2.4% | 2.1% | 2.5% | 3.4% | 2.4% | 2.2% | 6.4% | 8.5% | 36.7% |
| aioquic | TCP-ALL | 44.5% | 55.5% | 5.1% | 7.1% | 4.7% | 3.3% | 1.6% | 1.1% | 1.6% | 4.6% | 32.4% |
| quiche | TCP-BL | 82.9% | 17.1% | 2.1% | 2.8% | 3.9% | 5.2% | 3.9% | 2.9% | 4.7% | 14.6% | 51.5% |
| quiche | TCP-ALL | 71.9% | 28.1% | 4.3% | 4.3% | 5.5% | 9.1% | 7.5% | 4.3% | 4.3% | 5.9% | 40.7% |

Table 2. Shares of downloads of a configuration (Config.) resulting in a certain speed-up compared to another configuration (vs.), aggregated for all VPs.
Figure 2. Distribution of mean throughput measured with different TCP option configurations.
we find that measurements with TCP window scaling outperform measurements without window scaling. In particular, window scaling results in a speed-up larger than 1.5 for over 60 % of conducted measurements. Measurements with only ECN or SACK enabled mostly show performance comparable to the baseline measurements: nearly 70 % of such measurements result in a speed-up within +/- 10 %.
### TCP vs. QUIC
To survey performance differences between TCP- and QUIC-based downloads, Table 2 shows the speed-ups of downloads conducted with _aioquic_ and _quiche_ compared to the throughput observed for TCP-BL and TCP-ALL. Further, Table 2 compares the mean throughput observed for downloads conducted with _quiche_ to the throughput observed for downloads conducted with _aioquic_. We find that the mean throughput of downloads with _quiche_ exceeds the throughput of the corresponding _aioquic_ downloads for 70 % of samples, while the mean throughput more than doubles for over 35 % of downloads conducted with _quiche_.
Comparing the mean throughput of _quiche_ to TCP-BL downloads results in positive speed-ups for over 80 % of measurements, while over 65 % of measurements show a speed-up of more than 50 %. In comparison to TCP downloads with all options enabled, _quiche_ shows increased mean throughput for about 70 % of samples, while the shares of larger speed-ups are significantly smaller than those observed for the comparison of _quiche_ and TCP-BL.
### CDN Impact
To survey the impact of CDN hosting, we group measurement results according to the five considered giant CDNs and a sixth group, which includes all remaining domains. Figure 3 shows the CDF of mean throughput per domain for the considered CDNs. The distribution of mean throughput shows that domains hosted by Cloudflare and Akamai provide the largest shares of high mean throughput. Domains hosted by Google and Microsoft show the smallest improvement in mean throughput compared to domains not hosted by one of the five giant CDNs.
As mentioned in Section 4, we conduct a warm-up download before all remaining option configurations to remove bias caused by potential caching of downloaded files. To further survey performance impacts of CDN hosting, we compare the distribution of mean throughput of warm-up runs to downloads conducted with the TCP baseline configuration, as shown in Figure 2. We observe that potential caching impacts are significantly larger for the VPs in SFO and SGP, and such performance gains can mainly be traced back to targets hosted by Akamai, Amazon, and Cloudflare, which all provide edge caching services. This observation is confirmed by assessing throughput differences between the warm-up run and the baseline download, as shown in Table 2. While the majority of samples show comparable performance between the warm-up and the baseline, over 15 % of warm-up samples result in a throughput decrease larger than 30 %.
In general, our measurements confirm the expectation that CDN hosting increases performance. The degree of performance gain for each CDN varies between the three vantage points, which is reasonable since the vantage point location determines the nearest point of presence (PoP) of a CDN.
## 7. Conclusion
In this study, we conducted active Internet measurements with public web servers to assess the impact of TCP option usage, QUIC, and CDN hosting on connection performance.
Our measurements show that TCP window scaling is crucial for increasing throughput. Replacing TCP (using all options) with the QUIC implementation _quiche_ implies a performance gain for over 70 % of samples, while the used QUIC implementation also impacts measured performance. CDN hosting increases throughput for most considered CDNs compared to domains not hosted by one of the considered giant CDNs, while we observe varying performance depending on the vantage point.
For future work, we consider extending the introduced measurement pipeline to support additional protocol parameters, performance indicators, and analysis approaches like root cause analysis to determine throughput limitations.
###### Acknowledgements.
The authors would like to thank the anonymous reviewers for their valuable feedback. This work was partially funded by the German Federal Ministry of Education and Research under the project PRIMEnet (16KIS1370), 6G-life (16KISK002), and 6G-ANNA (16KISK107) as well as the German Research Foundation (HyperNIC, grant no. CA595/13-1). Additionally, we received funding by the Bavarian Ministry of Economic Affairs, Regional Development, and Energy as part of the project 6G Future Lab Bavaria and the European Union's Horizon 2020 research and innovation program (grant agreement no. 101008468 and 101079774).
Figure 3. Distribution of mean throughput measured for domains grouped by their CDN affiliation.
2309.17279 | Electronic structure and magnetic properties of La$_{3}$Ni$_{2}$O$_{7}$
under pressure: active role of the Ni-$d_{x^2-y^2}$ orbitals | Following the recent report of superconductivity in the bilayer nickelate
La$_{3}$Ni$_{2}$O$_{7}$ under pressure, we present an analysis of the
electronic and magnetic properties of La$_{3}$Ni$_{2}$O$_{7}$ as a function of
pressure using correlated density functional theory methods (DFT+$U$). At the
bare DFT level, the electronic structure of the ambient and high-pressure
phases of La$_{3}$Ni$_{2}$O$_{7}$ are qualitatively similar. Upon including
local correlation effects within DFT+$U$ and allowing for magnetic ordering, we
find a delicate interplay between pressure and electronic correlations. Within
the pressure-correlations phase space, we identify a region (at $U$ values
consistent with constrained RPA) characterized by a high spin to low spin
transition with increasing pressure. In contrast to previous theoretical work
that only highlights the crucial role of the Ni-$d_{z^2}$ orbitals in this
material, we find that the Ni-$d_{x^{2}-y^{2}}$ orbitals are active upon
pressure and drive this rich magnetic landscape. This picture is preserved in
the presence of oxygen deficiencies. | Harrison LaBollita, Victor Pardo, Michael R. Norman, Antia S. Botana | 2023-09-29T14:30:31Z | http://arxiv.org/abs/2309.17279v3 | # Electronic structure and magnetic properties of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure
###### Abstract
Following the recent report of superconductivity in the bilayer nickelate La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure, we present an analysis of the electronic and magnetic properties of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) as a function of pressure using correlated density functional theory methods (DFT+\(U\)). At the bare DFT level, the electronic structure of the ambient and high-pressure phase of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) is qualitatively similar. Upon including local correlation effects within DFT+\(U\) and allowing for magnetic ordering, we find a delicate interplay between pressure and electronic correlations. Within the pressure-correlations phase space, we identify a region (at \(U\) values consistent with constrained RPA calculations) characterized by a spin-state transition with increasing pressure, where the ground state of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) changes from a high-spin A-type antiferromagnet to a low-spin G-type antiferromagnet with contrasting electronic structures. While the energy landscape of this material is rich, with many competing magnetic states, we find a common thread to be that all of these different states are driven by the Ni-\(d_{x^{2}-y^{2}}\) orbitals.
## I Introduction
The recent observation of superconductivity in low-valence layered nickelates has produced tremendous excitement in the community in the last few years, first in the infinite-layer compounds _R_NiO\({}_{2}\) (\(R=\) rare-earth) [1, 2, 3, 4], and more recently in the quintuple-layer compound Nd\({}_{6}\)Ni\({}_{5}\)O\({}_{12}\)[5]. Structurally, these materials possess quasi-two-dimensional NiO\({}_{2}\) planes (analogous to the CuO\({}_{2}\) planes of the cuprates) and belong to a larger family represented by the general chemical formula \(R_{n+1}\)Ni\({}_{n}\)O\({}_{2n+2}\) where \(n\) denotes the number of NiO\({}_{2}\) planes per formula unit along the \(c\) axis. The discovery of superconductivity in this family of nickel oxide compounds completed a long quest to find materials that can serve as proxies for cuprate physics.
Despite many structural, chemical, and electronic similarities to the cuprates [6], the superconducting layered nickelates show some relevant differences, the most obvious being their superconducting critical temperatures [2, 3, 4, 7], which are much lower than those obtained in the cuprates. Improved crystalline quality samples of the infinite-layer nickelate [8], as well as the application of hydrostatic pressure [9] have shown small incremental increases in T\({}_{c}\), but still remain far away from typical cuprate values of T\({}_{c}\sim 80-160\) K.
Very recently, a breakthrough T\({}_{c}\) near 80 K has been reported [10, 11] in the bilayer Ruddlesden-Popper (RP) nickelate La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at modest pressures (P \(\sim 14-42\) GPa). Bilayer La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) differs from the previously reported superconducting nickelates in that it belongs to the parent \(R_{n+1}\)Ni\({}_{n}\)O\({}_{3n+1}\) series. As such, it possesses NiO\({}_{6}\) layers (retaining the apical oxygens, rather than having a square planar environment for the Ni atoms). Also, the oxidation state of the Ni is 2.5+, corresponding to an average 3\(d^{7.5}\) filling, far from the hard-to-stabilize \(\sim\)3\(d^{8.8}\) filling that all other previously observed superconducting layered nickelates present.
Work on La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) prior to the discovery of superconductivity had focused on analyzing some of its structural [12, 13] and electronic structure [14] characteristics, suggesting that the emergence of a cuprate-like electronic structure could be possible in this compound. More recent theoretical studies [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] aimed at exploring the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) with pressure in relation to superconductivity (using a variety of techniques from DFT+\(U\), to DFT+DMFT, GW+DMFT, and model Hamiltonians) point instead to the active role of the Ni-\(d_{z^{2}}\) states in the vicinity of the Fermi level, in contrast to the cuprates.
Here, we study the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under the influence of pressure using DFT and DFT+\(U\) methods with different double-counting corrections. Within our methodology, we find a delicate interplay between pressure and electronic correlations with different phases closely competing in energy. Within the pressure-correlations phase diagram, we find that the region of \(U\)'s consistent with constrained RPA (cRPA) calculations [18] support a spin-state transition with pressure. Specifically, we find a high-spin A-type antiferromagnetic state (with Ni moments coupled ferromagnetically in-plane and antiferromagnetically out-of-plane) at low pressures that transforms to a low-spin G-type antiferromagnetic state (with antiferromagnetic planes coupled antiferromagnetically out-of-plane) at high pressures (\(>10\) GPa). Remarkably, we find that the ground states derived for all \(U\)s are controlled by the in-plane Ni-\(d_{x^{2}-y^{2}}\) states.
## II Methods
We start by carrying out structural optimizations under pressure for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) using DFT [31; 32]. The experimental lattice parameters were adopted at the following pressures: P = 0, 5, 10, 15, 20, and 29.5 GPa [10; 13]. With these lattice constants, the crystal structure for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) was constructed in both the low-symmetry _Amam_ and high-symmetry _Fmmm_ space groups. The internal coordinates of the atomic positions were optimized using the plane-wave pseudopotential DFT code VASP [33; 34; 35] within the generalized gradient approximation (GGA) [36]. The number of plane waves in the basis was set by an energy cutoff of 500 eV. The integration in reciprocal space was carried out on an \(8\times 8\times 4\) grid. The internal forces on each atom were converged to \(10^{-6}\) eV/Å.
The electronic structure for each structure was then calculated within DFT as implemented in the all-electron, full-potential code wien2k [37]. The local-density approximation (LDA) [38] was adopted for the exchange-correlation functional, and correlation effects were included using DFT + Hubbard \(U\) (DFT+\(U\)), which allows for the incorporation of local Coulomb interactions for the localized Ni(\(3d\)) states. Within the DFT+\(U\) scheme, the double-counting term plays a central role in determining the underlying low-energy physics of the material [39]. We have adopted two choices that are commonly used for the double-counting correction: the fully-localized limit (FLL) [40] and the around mean-field (AMF) [41]. We note that a careful choice of the DFT+\(U\) implementation is important to capture the physics of these nickelates [42]; for instance, when two spin states are nearly degenerate, it becomes essential when applying DFT+\(U\) to understand the tendencies of the various choices of functional: the around mean-field (AMF) scheme is known to favor the stabilization of low-spin configurations, whereas the fully localized limit (FLL) tends to favor a high-spin configuration [39]. The results shown in the main text adopt the AMF double-counting term, which will be further motivated below. We use a range of \(U\) values from 1 to 5 eV. The Hund's coupling \(J_{\rm H}\) is fixed to a typical value of 0.7 eV for transition-metal \(3d\) electrons. A 10\(\times\)10\(\times\)9 and a 10\(\times\)10\(\times\)10 \(k\)-point mesh were used for Brillouin zone integration for the _Amam_ phase and for the _Fmmm_ phase, respectively. The basis set size is determined by \(RK_{\rm max}=7\) and muffin-tin radii (in atomic units) set to 2.30, 1.86, and 1.65 for La, Ni, and O, respectively.
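For orientation, the FLL correction referred to above takes the standard form (a textbook sketch in our notation, with \(N=N_{\uparrow}+N_{\downarrow}\) the total occupation of the correlated Ni(\(3d\)) shell):

\[E_{\rm dc}^{\rm FLL}=\frac{U}{2}\,N(N-1)-\frac{J_{\rm H}}{2}\sum_{\sigma}N_{\sigma}\left(N_{\sigma}-1\right),\]

whereas AMF instead subtracts the interaction energy of uniform, spin-averaged orbital occupations, penalizing polarized solutions and hence favoring low-spin configurations, consistent with the tendencies noted above.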
## III Crystal structure
In addition to the report of superconductivity, the experiments performed in Ref. [10] reveal a structural transition in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) from a low-pressure _Amam_ phase to a high-pressure _Fmmm_ phase. In order to analyze the implications of the change in space group, we start by
Figure 1: Crystal structures and structural data from DFT. (a) Crystal structure for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) in the low-symmetry _Amam_ (left) and high-symmetry _Fmmm_ (right) phases. Small red spheres represent oxygen atoms, forming octahedral cages around the Ni (gray) atoms. The green spheres are the La atoms. Structural data for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) as a function of pressure: (b) enthalpy (\(H=E+PV\)), (c) lattice constants extracted from the experimental data in [10], (d) relaxed apical and planar Ni-Ni bond lengths, and (e) relaxed Ni-O-Ni interplanar bond angles. The shaded, hatched area denotes the region where the structural transition occurs experimentally.
looking at the evolution of the _Amam_ and _Fmmm_ phases under pressure, as summarized in Figure 1. We find that, from DFT calculations, the _Amam_ phase naturally evolves to the _Fmmm_ phase when using the experimental lattice constants and relaxing the internal atomic coordinates. The _Fmmm_ phase becomes energetically more favorable than the _Amam_ phase at around \(\sim\)10 GPa, in agreement with experiments [10]. Coinciding with this pressure, the octahedral NiO\({}_{6}\) tilting of the ambient pressure structure is (nearly) suppressed, as shown by the Ni-O-Ni interplanar bond angle (see Fig. 1). Interestingly, if the crystal lattice is also allowed to relax, we find that, contrary to the experiments, the material has a tendency to "tetragonalize" with the \(a\) and \(b\) lattice constants collapsing to the same value (see Appendix A). This may point to the presence of oxygen deficiencies in the experiments, consistent with the transport data reporting a metal-to-insulator transition at low pressure [10; 12]. Note that the suppression of the NiO\({}_{6}\) octahedra tilts still takes place in the relaxations allowing for both lattice constants and internal coordinates to be optimized. We present a discussion of the potential role that variations in the oxygen stoichiometry may play in the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) in Appendix B.
## IV Electronic structure and magnetism
### Electronic structure at the LDA level
A summary of the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at ambient and high (P = 29.5 GPa) pressure within LDA is presented in Fig. 2. The electronic structure at ambient pressure (for the _Amam_ structure) is characterized near the Fermi level by Ni-\(e_{g}\) (\(d_{z^{2}}\), \(d_{x^{2}-y^{2}}\)) states hybridized with O(2\(p\)) states, which is consistent with previous works [15; 17; 43; 10]. The Ni-\(t_{2g}\) states are completely occupied and centered around \(-2\,\mathrm{eV}\), just above the top of the O(2\(p\)) bands. The rare-earth La(5\(d\)) states are completely removed from the low-energy physics of this material and are unoccupied, unlike the superconducting
Figure 2: LDA non-magnetic electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) in the ambient pressure _Amam_ phase (top row) and in the _Fmmm_ phase at P = 29.5 GPa (bottom row). (a,d) Orbital resolved density of states (DOS) for the La, Ni, and O atoms (upper panels) and partial density of states (PDOS) for the Ni-\(d_{z^{2}}\), Ni-\(d_{x^{2}-y^{2}}\), and Ni-\(t_{2g}\) states (lower panels). Insets are a zoom-in around the Fermi level. (b,e) Band structure along high-symmetry lines at ambient pressure (_Amam_ structure) and at P = 29.5 GPa (_Fmmm_ phase), where the corresponding \(k\)-path is the same in both zones (see more details in Appendix C). Colors denote the orbital character for the Ni-\(d_{z^{2}}\) and Ni-\(d_{x^{2}-y^{2}}\) orbitals. (c,f) Respective Fermi surfaces in the \(k_{z}=0\) plane. Note the zone folding of _Amam_ relative to _Fmmm_.
infinite-layer and quintuple-layer nickelates, where the role of the rare-earth band degree of freedom is still being highly contested [44, 45, 46, 47, 48, 49, 50, 51]. The removal of the La(5\(d\)) states from the vicinity of the Fermi level is expected given the change in nominal valence for the Ni ions from 1.2+ to 2.5+. The derived charge-transfer energy (\(\Delta=\varepsilon_{d}-\varepsilon_{p}\)) is \(\Delta=3.2\) eV, much reduced from that in layered nickelates [52, 6, 53] and closer to a typical cuprate value.
Focusing on the Ni-\(e_{g}\) states, the Ni-\(d_{z^{2}}\) states are split by \(\sim 1\) eV into an occupied bonding and an unoccupied antibonding combination due to the quantum confinement of the nickel-oxygen bilayers in the structure [54, 55, 56]. The presence of apical oxygens broadens the Ni-\(d_{z^{2}}\) bands with respect to the low-valence layered compounds such as La\({}_{3}\)Ni\({}_{2}\)O\({}_{6}\) or La\({}_{4}\)Ni\({}_{3}\)O\({}_{8}\), but the bonding-antibonding splitting caused by the blocking layers is still present in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\)[57]. The Ni-\(d_{x^{2}-y^{2}}\) dispersion is large, with a bandwidth of \(\sim\)2.5 eV, and this orbital remains only partially occupied. As mentioned above, nominally, the Ni valence would be Ni\({}^{2.5+}\) (3\(d^{7.5}\)). As the \(t_{2g}\) electronic states are completely occupied, this average filling means that 1.5 \(e_{g}\)-electrons per Ni need to be accommodated close to the Fermi level given that the O(2\(p\)) bands are (almost) completely filled in this material.
Turning to the electronic spectrum of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at high pressure (P = 29.5 GPa, \(Fmmm\) phase), we find that the overall electronic structure within LDA is qualitatively similar to the ambient pressure _Aman_ phase, with some quantitative differences. The Ni \(d_{x^{2}-y^{2}}\) dispersion increases to 4 eV, the bonding-antibonding Ni-\(d_{z^{2}}\) splitting increases to \(\sim\) 1.5 eV, and the charge-transfer energy \(\Delta\) value increases to 3.6 eV. Also, the Ni-\(t_{2g}\) bands are pushed farther away from the Fermi level. Similar to the ambient pressure case, the dominant DOS around the Fermi level (\(\varepsilon_{\rm F}\)) for the non-magnetic calculation at 29.5 GPa is that coming first from Ni-\(d_{z^{2}}\) orbitals followed by that of the Ni-\(d_{x^{2}-y^{2}}\) orbitals.
The corresponding LDA Fermi surfaces at ambient pressure and at P = 29.5 GPa are shown in Figs. 2(c,f), respectively. Both surfaces are comprised of two sheets from hybridized Ni-\(e_{g}\) bands (an electron sheet centered at \(\Gamma\) and a larger hole sheet centered around \(M\)) and small hole pockets at \(M\) coming from the flattened Ni-\(d_{z^{2}}\) bands. The zone folding between the _Amam_ and _Fmmm_ Brillouin zones is clearly seen from the Fermi surfaces (see Appendix C for more details on the corresponding Brillouin zones). We explore possible instabilities of the Fermi surface at the LDA level by calculating the static susceptibility from the near Fermi level bands in Appendix D.
### Interplay between pressure and electronic correlations
Experiments suggest a complicated electronic and magnetic structure for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). Transport and magnetic susceptibility measurements hint at charge and spin ordering similar to the trilayer La\({}_{4}\)Ni\({}_{3}\)O\({}_{10}\) Ruddlesden-Popper nickelate, and point to the presence of antiferromagnetic correlations [58].
We explore the pressure and correlation phase space to reveal magnetic tendencies in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) within an LDA+\(U\) framework. The choice of AMF double counting has been shown in the past to give a reliable comparison to experiments in other layered nickelates [54, 59, 60, 61], so we focus on this scheme in the main text and present LDA+\(U\)(FLL) results in Appendix E.1. For each double-counting scheme, we study three possible magnetic orderings within a range of \(U\) from 1 to 5 eV: ferromagnetic (FM), A-type antiferromagnetic (AFM-A), and G-type antiferromagnetic (AFM-G), where AFM-A (AFM-G) corresponds to Ni moments coupled FM (AFM) in-plane, with AFM coupling out-of-plane in both cases (see Appendix E.1 for more details). We note that the inclusion of a Hubbard \(U\) is necessary to initially converge the magnetically ordered states; otherwise, a non-magnetic solution is obtained.
Figure 3 summarizes two representative cuts from the pressure and correlation phase space within LDA+\(U\)(AMF). From the energy difference between the different magnetic configurations as a function of Hubbard \(U\), we find a transition from a high-spin (HS) AFM-A ordered phase to a low-spin (LS) AFM-G ordered phase for a range of \(U\) values between 2 and 4 eV. Above \(\sim 4\) eV, this transition is suppressed and the ground state remains in the HS AFM-A phase at all pressures. This competition of different magnetically ordered phases with different magnetic moments and exchange couplings suggests there is a rich energy landscape with many (nearly) degenerate ground states.
With these two distinct regimes identified, we now explain how the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) evolves as a function of pressure. In the high-\(U\) regime, the
Figure 3: Left: Energy difference between high-spin A-type AFM (AFM-A) and low-spin G-type AFM (AFM-G) phases at P = 0 (_Amam_ phase) and 29.5 GPa (_Fmmm_ phase) as a function of \(U\). Right: Energy difference between AFM-A and AFM-G phases as a function of pressure for \(U=3.5\) eV. Hund's coupling (\(J_{\rm H}\)) is fixed to 0.7 eV. The inset shows the evolution of the Ni magnetic moment (\(\mu_{\rm B}\)) as a function of pressure.
HS AFM-A ground state at all pressures portrays a van Hove singularity pinned at the Fermi level, which would make the AFM order unstable (see Appendix E.2). For this reason, we focus here on the region of the pressure-correlations phase space in which a spin-state transition takes place (for 2 eV \(<U<\) 4 eV), consistent with the \(U\) = 3.4 eV derived for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) from constrained RPA calculations [18]. As shown in Fig. 3, at this value of \(U\) the spin-state transition from a HS AFM-A phase to a LS AFM-G phase occurs around 10 GPa, which corresponds to the structural transition from the _Amam_ to the _Fmmm_ phase. In the LDA+\(U\)(FLL) scheme, this spin-state transition still occurs but within a narrower range of \(U\) values (see Appendix E.1).
The evolution of the band structure across the spin-state transition is shown in Fig. 4, where colors denote the weights of the Ni-\(d_{z^{2}}\) and Ni-\(d_{x^{2}-y^{2}}\) orbitals. At 0 GPa in the AFM-A HS ground state, we find that the near Fermi level bands correspond to hybridized Ni-\(e_{g}\) (\(d_{z^{2}}\) and \(d_{x^{2}-y^{2}}\)) states with a van Hove singularity at the \(X\) point essentially at the Fermi energy (the doubled bands are due to _Amam_ zone folding). The bonding Ni-\(d_{z^{2}}\) states are fully occupied. The Ni-\(d_{x^{2}-y^{2}}\) majority states are nearly full, with a \(\sim\) 0.75 filling obtained from an energy integration of the spectrum. The Ni magnetic moment is \(\sim\) 1.2\(\mu_{\rm B}\), slightly reduced from the HS ionic value due to hybridization. The overall weak metallic signature in this AFM-A phase at 0 GPa coincides with the poor conductivity evidenced by the experimental resistivity [10] at ambient pressure.
Near the experimentally reported structural transition (P = 10 GPa), we find that the electronic structure changes dramatically, with the material adopting a LS AFM-G ground state characterized by the dominant role of the \(d_{x^{2}-y^{2}}\) states around the Fermi level. We find that with pressure the Ni-\(d_{x^{2}-y^{2}}\) orbital becomes half-filled in the majority spin channel, leading to an overall quarter-filling, which further stabilizes the in-plane AFM coupling within this G-type AFM phase [14]. As shown in Fig. 1, applying pressure decreases the planar Ni-Ni distance faster than the apical Ni-Ni distance. Decreasing the planar Ni-Ni distance will favor the low-spin state to accommodate this reduction [62]. We note that a similar spin-state transition has been reported in the trilayer nickelate La\({}_{4}\)Ni\({}_{3}\)O\({}_{8}\) under pressure [54].
In Fig. 5 we compare the Ni-\(e_{g}\) partial density of states (PDOS) for the high-spin (AFM-A) and low-spin (AFM-G) solutions at the same three pressures analyzed above: 0 GPa (with an AFM-A HS ground state), 10 GPa (AFM-G LS ground state), and 29.5 GPa (AFM-G LS ground state). Overall, it can be clearly seen that the PDOS is dominated by Ni-\(d_{x^{2}-y^{2}}\) states around the Fermi level for all solutions. Analyzing the Ni-\(d_{z^{2}}\) DOS (left panels), we find that its spectrum remains inert between the LS and HS solutions at all pressures. For the Ni-\(d_{x^{2}-y^{2}}\) orbitals (right panels), we find that a large redistribution takes place when comparing the AFM-A HS solution (ground state at 0 GPa) and the AFM-G LS solution (ground state
Figure 4: LDA+\(U\)(AMF) (\(U\) = 3.5 eV, \(J_{\rm H}\) = 0.7 eV) band structures along high-symmetry lines in the Brillouin zone for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure. From left to right: P = 0 GPa _Amam_ structure in its HS AFM-A ground state, P = 10 GPa _Fmmm_ structure in its LS AFM-G ground state, and P = 29.5 GPa _Fmmm_ structure in its LS AFM-G ground state. The top (bottom) row shows the Ni-\(e_{g}\) orbital character in the majority (minority) channel, where blue (pink) corresponds to Ni-\(d_{x^{2}-y^{2}}\) (Ni-\(d_{z^{2}}\)) states. Note that for the _Fmmm_ structures (10 GPa and 29.5 GPa), supercells need to be constructed to allow for G-type ordering, which have _Amam_ symmetry.
above 10 GPa), indicating that these are the orbitals that drive the spin-state transition. This is also revealed in Appendix E.1 (Fig. 12) from the (nearly) constant Ni-\(d_{z^{2}}\) occupations under varying pressure and across different spin-state solutions, in contrast to the notable variations in Ni-\(d_{x^{2}-y^{2}}\) occupations between spin-state solutions. Additional systematic trends of the ground state electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) with pressure are shown in Fig. 12(c) (Appendix E.3).
Based on this derived spin-state transition with very clear changes in the electronic structure with pressure, we speculate that the HS solution is likely unfavorable for superconductivity in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). Importantly, the LS ground state dominated by Ni-\(d_{x^{2}-y^{2}}\) orbitals obtained at high pressure (where superconductivity arises) brings the physics of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) closer to a cuprate-like picture.
## V Summary and discussion
We have studied the evolution of the electronic structure of the bilayer RP nickelate La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) with pressure using a density-functional theory framework. We capture the experimentally observed structural transition from _Amam_ to _Fmmm_, which corresponds to the suppression of the tilts of the NiO\({}_{6}\) octahedra. At the LDA level, the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at ambient and high pressures is qualitatively similar: hybridized Ni-\(e_{g}\) states are dominant near the Fermi level with additional weight from the O(2\(p\)) orbitals, without the involvement of rare-earth bands in the low-energy physics, in contrast to the superconducting (infinite-)layered nickelates.
Using LDA+\(U\), we explore the pressure and correlation phase space and find crucial differences with respect to the uncorrelated electronic structure. At low \(U\) values (between 2 and 4 eV, consistent with cRPA calculations), a spin-state transition takes place with pressure, coinciding with the structural transition. For this range of \(U\)s, the ground state changes with increasing pressure from a HS AFM-A phase with FM in-plane coupling to a LS AFM-G phase with AFM in-plane coupling. The transition is driven by a redistribution of the (dominant) Ni-\(d_{x^{2}-y^{2}}\) orbitals. Based on our derived spin-state transition in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\), we hypothesize that the HS solution is unfavorable for superconductivity in this material while the LS solution may promote it instead.
Overall, we conclude that while there are many competing magnetic states in the bilayer RP nickelate, our DFT+\(U\) calculations indicate that a single-orbital (\(d_{x^{2}-y^{2}}\)-like) picture with low spin nickel might be the correct theoretical description for superconducting La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). This resemblance to a cuprate-like single-band picture seems to be a common feature of all superconducting nickelates.
###### Acknowledgements.
HL and ASB acknowledge the support from NSF-DMR 2045826 and from the ASU Research Computing Center for HPC resources. VP acknowledges support from the Ministry of Science of Spain through the Project No. PID2021-122609NB-C22. MN was supported by the Materials Sciences and Engineering Division, Basic Energy Sciences, Office of Science, U.S. Dept. of Energy.
## Appendix A "Tetragonalization" of crystal structure
In the main text we build the crystal structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) using the experimental lattice parameters at different pressures. Additionally, we investigated _ab-initio_ predictions of the crystal structure when pressure is applied. This is achieved by relaxing the crystal lattice and internal coordinates where an additional term is
Figure 5: Ni-\(e_{g}\) partial density of states (PDOS) as a function of pressure for the high-spin (AFM-A) and low-spin (AFM-G) solutions within LDA+\(U\) (\(U=3.5\) eV, \(J_{\rm H}=0.7\) eV). Positive (negative) denotes the majority (minority) channel. The ground state (GS) and crystal structure at each pressure is denoted within each panel.
added to the total energy calculation to incorporate the stress tensor modeling the external pressure. Structural optimizations are completed in the VASP code using the same computational settings described in Section II for the structural relaxations.
The structural data from our fully optimized crystal structures are summarized in Fig. 6. In contrast to the experimental data, we find that the in-plane lattice constants (\(a\), \(b\)) "tetragonalize" collapsing to the same value. We hypothesize that a possible explanation for the mismatch between the theory and experiment could be the presence of oxygen deficiencies in experiments, known to be a challenge in the bilayer Ruddlesden-Popper nickelate [12]. Importantly, even in this (quasi)-tetragonal structure, we find that the main signatures of the experimental structure under pressure remain with the Ni-Ni distances and Ni-O-Ni interplanar angle following the same trends (see Fig. 6(b,c)).
Figure 7 compares the LDA density of states (DOS) of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) obtained using the experimental lattice parameters at P = 29.5 GPa (_Fmmm_ phase) and our fully relaxed _ab-initio_-determined crystal structure at 30 GPa. The electronic structure for both settings is essentially identical and only small quantitative differences can be noticed.
## Appendix B Doping via oxygen vacancies
The experiments in Ref. [10] show a metal-to-insulator transition in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) when going from ambient pressure to 1 GPa. Previous work in the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7-\delta}\) series [12] showed a very similar trend when comparing stoichiometric La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) with oxygen-deficient samples. Thus, we analyze how the electronic structure is affected by electron doping induced by oxygen non-stoichiometries, in particular whether the dominance of \(d_{x^{2}-y^{2}}\) states around the Fermi level described in the main text remains in the oxygen-deficient compound.
Experimentally, the structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{6.94}\) has been previously resolved in the _Fmmm_ space group [63]. Using the experimental structural data, we have constructed a 4 \(\times\) 4 \(\times\) 1 supercell in which a single O atom can be removed to obtain the desired stoichiometry (La\({}_{3}\)Ni\({}_{2}\)O\({}_{6.9375}\)), corresponding to around 2% electron doping. As shown in Ref. [56], the removal of an apical oxygen is most probable based on the calculated energetics. We allow the internal atomic coordinates to relax to enable the lattice to respond to this local defect, using the same procedures detailed in Section II. Because this calculation is computationally intensive, we analyze only the non-magnetic state at the level of LDA. The averaged Ni-\(e_{g}\) partial density of states (PDOS) for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) and La\({}_{3}\)Ni\({}_{2}\)O\({}_{6.94}\) at ambient pressure are shown in Fig. 8. As described above, within LDA, the states populating the Fermi level at any pressure in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) are the strongly mixed Ni-\(e_{g}\) states, with the Ni-\(d_{z^{2}}\) showing a large peak. Interestingly, for the oxygen-deficient sample, we find that the states derived from the Ni-\(d_{x^{2}-y^{2}}\) orbitals become largely dominant at the Fermi level already in the LDA results at ambient pressure. This type of calculation suggests that oxygen vacancies should play an important role in the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) and will require further theoretical and experimental investigation.

Figure 6: Structural data from _ab-initio_ relaxations that allow for the lattice and internal coordinates to be optimized: (a) lattice constants, (b) Ni-Ni distances (apical and planar), and (c) Ni-O-Ni interplanar bond angle. Shaded regions denote the experimental region where a structural transition occurs.

Figure 7: Comparison of the non-magnetic LDA electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at 29.5 GPa in the _Fmmm_ phase using the experimental lattice parameters and the fully DFT-relaxed La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) structure at 30 GPa (_I4/mmm_). Total DOS (top), Ni-\(d_{z^{2}}\) PDOS (center), and Ni-\(d_{x^{2}-y^{2}}\) PDOS (bottom).
## Appendix C Brillouin zone coordinates
Throughout this work, we compare the electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) within two different space groups, _Amam_ and _Fmmm_. For all calculations the high-symmetry points are given by: \(\Gamma=(000)\), \(X=(110)\), \(M=(020)\), and \(Z=(002)\) in units of (\(\pi/a\), \(\pi/b\), \(\pi/c\)), assuming the long axis is along \(c\). Table 1 summarizes the different space groups of the structures used in this work.
## Appendix D Fermi surface and instabilities
To gain further insight into possible Fermi surface instabilities, we calculate the static susceptibility using the LDA non-magnetic band structure. We start by performing a paramagnetic calculation within La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) using _Fmmm_ coordinates at ambient pressure. The obtained LDA bands are shown in Fig. 9(a) where the Ni-\(e_{g}\) orbital characters are highlighted in color. The corresponding LDA Fermi surface in the \(k_{z}=0\) and \(k_{z}=\pi/c\) planes are shown in Fig. 9(b,c). The Fermi surface is comprised of three sheets: (1) small hole pockets at the zone corners from the bonding Ni-\(d_{z^{2}}\) band, (2) large hole pockets centered at the zone corners that extend almost to \(X\), and (3) an electron pocket centered around \(\Gamma\) with mostly Ni-\(d_{x^{2}-y^{2}}\) character. Comparing the two \(k_{z}\) cuts, we can see there is noticeable \(k_{z}\) dispersion.
To calculate the static susceptibility \(\chi_{nn^{\prime}}(\mathbf{q},0)\), from the LDA bands, we interpolate the near Fermi level bands using a Fourier spline series. Specifically, a Fourier series spline fit [64] to the DFT bands was made with 2813 face centered orthorhombic (_Fmmm_) Fourier functions fit to 511 \(k\)-points in the irreducible wedge of the Brillouin zone. Both the density of states and the susceptibility were calculated using a tetrahedron decomposition of the Brillouin zone (\(6\times 8^{n}\) tetrahedra in the irreducible wedge with \(n=6\) used for the density of states in order to obtain an accurate value for the Fermi energy and \(n=5\) for the susceptibility) [65]. The susceptibility was calculated using the three bands crossing the Fermi energy, which are shown in Fig. 9(b,c).
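For illustration, the band-resolved static susceptibility with constant matrix elements can be sketched in a few lines of Python. This is a minimal sketch, not the implementation used here: it assumes a regular \(k\)-grid with a Lorentzian broadening \(\eta\) in place of the tetrahedron decomposition described above, and the function and variable names (`static_chi`, `grid`, `q_idx`) are ours.

```python
import numpy as np

def static_chi(eps, grid, q_idx, ef, eta=1e-3):
    """Band-resolved static susceptibility chi_{nn'}(q, 0) with constant
    matrix elements on a regular k-grid (Lorentzian broadening eta is used
    here instead of the tetrahedron decomposition of the text).

    eps: (Nk, Nb) band energies, k-points flattened from shape `grid`.
    q_idx: (i, j, k) grid offsets of the transfer momentum q.
    """
    nk, nb = eps.shape
    e3d = eps.reshape(*grid, nb)
    # band energies at k+q via a periodic shift of the grid
    ekq = np.roll(e3d, shift=tuple(-s for s in q_idx),
                  axis=(0, 1, 2)).reshape(nk, nb)
    fk = (eps < ef).astype(float)    # T = 0 occupations f(eps)
    fkq = (ekq < ef).astype(float)
    chi = np.zeros((nb, nb))
    for n in range(nb):
        for m in range(nb):
            de = ekq[:, m] - eps[:, n]
            chi[n, m] = np.sum((fk[:, n] - fkq[:, m]) * de
                               / (de * de + eta * eta)) / nk
    return chi  # diagonal: intra-band; off-diagonal: inter-band
```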
Figures 9(d,e) show a decomposition of the susceptibility into total, intra-band (\(n=n^{\prime}\)) and inter-band (\(n\neq n^{\prime}\)) contributions in the \(q_{z}=0\) and \(q_{z}=\pi/c\) planes, respectively. Interestingly, we find there are _no_ obvious trends for nesting. This is in contrast to previous calculations using a tight-binding model, which find a van Hove singularity very near the Fermi level [25].
Figure 8: Ni-\(e_{g}\) density of states (DOS) for stoichiometric La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at ambient pressure (_Amam_) (top) and oxygen-deficient La\({}_{3}\)Ni\({}_{2}\)O\({}_{6.94}\), which was resolved in the _Fmmm_ space group at ambient pressure [63] (bottom). For La\({}_{3}\)Ni\({}_{2}\)O\({}_{6.9375}\) the Ni-\(e_{g}\) DOS has been averaged over all inequivalent Ni sites.
\begin{table}
\begin{tabular}{l c c c}
Calculation & Pressure (GPa) & Space group & Figure \\ \hline
NM & 0 & _Amam_ & 2 \\
AFM-A & 0 & _Amam_ & 11 \\
AFM-A (HS) & 10 & _Fmmm_ & 11 \\
AFM-G (LS) & 10 & _Amam_ & 4 \\
NM & 29.5 & _Fmmm_ & 2 \\
AFM-A (HS) & 29.5 & _Fmmm_ & 11 \\
AFM-G (LS) & 29.5 & _Amam_ & 4 \\
\end{tabular}
\end{table}
Table 1: Summary of space groups used in each of the calculations presented in the manuscript. Note that for the 10 GPa and 29.5 GPa AFM-G calculations we construct a supercell from the _Fmmm_ structures to allow for G-type ordering, and this cell has _Amam_ symmetry.
## Appendix E Additional DFT data
### Energetics and magnetic moments
As a best practice, DFT+\(U\) requires careful analysis of the selected double-counting correction. Here, we track the influence of the choice of double-counting correction (AMF or FLL) on the energetics and the magnetic moments of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at 0 GPa and 29.5 GPa.
Figure 10 summarizes the systematic change in the energetics and magnetic moments within LDA+\(U\) (AMF or FLL) for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at 0 GPa and 29.5 GPa for a range of \(U\) values. Note that for all calculations the Hund's coupling \(J_{\rm H}\) has been fixed to 0.7 eV. The LDA+\(U\)(AMF) results have been described in the main text. The data shown in Fig. 10 include the energetics of the ferromagnetic solution because, at higher values of \(U\), a crossover between the AFM-A and FM states occurs. Compared to the AMF results, we can see that the choice of FLL gives qualitatively similar energetics and moments. Importantly, a spin-state transition still occurs within LDA+\(U\)(FLL), but, as FLL strongly favors high-spin solutions, the range of \(U\) values for which the AFM-G LS state is stable is smaller relative to the LDA+\(U\)(AMF) scheme.
### High-spin AFM-A solutions
As described in the main text, for larger values of \(U\) we find that the ground state of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) does not change with pressure and remains in the AFM-A magnetically ordered phase. The electronic structure for the AFM-A (HS) solutions at \(U=4.5\) eV (\(J_{\rm H}=0.7\) eV) for three representative pressures is summarized in Fig. 11(a,b,c) within LDA+\(U\) (AMF). A common feature in all cases is the presence of a (quasi) van Hove singularity pinned near the Fermi level. This feature is also present within the FLL double-counting scheme (not shown). This suggests that these HS AFM-A solutions are unstable. Possible mechanisms to relieve this instability would require further investigation.

Figure 9: DFT static susceptibility \(\chi_{nn^{\prime}}({\bf q},0)\) for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). (a) Band structure along high-symmetry lines within the fat band representation (Ni-\(e_{g}\) orbitals highlighted) for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) using _Fmmm_ coordinates at ambient pressure. LDA Fermi surfaces in the (b) \(k_{z}=0\) and the (c) \(k_{z}=\pi/c\) planes. Colors correspond to band indices 1,2,3 used in the susceptibility calculation. Static susceptibility \(\chi_{nn^{\prime}}({\bf q},0)\) along high-symmetry directions for (d) \(q_{z}=0\) and (e) \(q_{z}=\pi/c\). High-symmetry coordinates are defined in Appendix C and the coordinates for the susceptibility plots are in \(\pi/\)a,b,c units.
### DFT occupations and general trends
Figure 12(a,b) shows the spin-resolved and orbital-resolved occupations (\(n^{\sigma}\)) and moments (\(n^{\uparrow}-n^{\downarrow}\)) for the HS (AFM-A) and LS (AFM-G) solutions as a function of pressure, respectively. For the Ni-\(d_{z^{2}}\) orbitals, we see that the occupations and moments are essentially identical at all pressures when comparing the HS and LS solutions. In contrast, the occupation of the Ni-\(d_{x^{2}-y^{2}}\) orbitals is very different when comparing the HS and LS solutions.
Figure 12(c) reveals generic trends in the ground-state electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) as a function of pressure. The Ni-\(d_{z^{2}}\) bonding-antibonding splitting increases systematically with pressure, as expected. From the Ni(\(3d\))+O(\(2p\)) DOS, we see that the centroid of the O(\(2p\)) bands is pushed to lower energy as pressure increases. This decreases the \(p\)-\(d\) hybridization, thus increasing the charge-transfer energy, similar to the trends described in the main text for the non-magnetic calculations.
Figure 11: AFM-A high-spin solutions at P = 0 GPa (_Amam_), 10 GPa (_Fmmm_), and 29.5 GPa (_Fmmm_) (left to right) with LDA+\(U\)(AMF) (\(U\) = 4.5 eV, \(J_{\rm H}\) = 0.7 eV). (a) Same as Fig. 3 with \(U\) = 4.5 eV data showing the lack of a spin-state transition at higher values of \(U\). (b) Ni-\(e_{g}\) PDOS at different pressures. (c) Band structures along high-symmetry lines with Ni-\(e_{g}\) orbital character highlighted. Ni-\(d_{x^{2}-y^{2}}\) (Ni-\(d_{z^{2}}\)) is shown in blue (pink).
Figure 10: (a) Schematic diagram of the different magnetic orderings considered. Energetics and magnetic moments obtained within LDA+\(U\) (AMF and FLL) for (b) P = 0 GPa and (c) P = 29.5 GPa.
Figure 12: (a) Spin-resolved and orbital-resolved occupations (\(n^{\sigma}\)) and (b) moments (\(n^{\uparrow}-n^{\downarrow}\)) for the HS (AFM-A) and LS (AFM-G) solutions as a function of pressure. (c) Systematic trends from the ground-state electronic structure of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) at various pressures. PDOS of the Ni-\(d_{z^{2}}\) orbitals (left) and Ni(\(3d\))+O(\(2p\)) DOS (right) shows that the Ni-\(d_{z^{2}}\) bonding-antibonding splitting and \(p\)-\(d\) splitting (proxy for charge-transfer energy) both increase with pressure. |
2309.14980 | Statistical Analysis of Quantum State Learning Process in Quantum Neural
Networks | Quantum neural networks (QNNs) have been a promising framework in pursuing
near-term quantum advantage in various fields, where many applications can be
viewed as learning a quantum state that encodes useful data. As a quantum
analog of probability distribution learning, quantum state learning is
theoretically and practically essential in quantum machine learning. In this
paper, we develop a no-go theorem for learning an unknown quantum state with
QNNs even starting from a high-fidelity initial state. We prove that when the
loss value is lower than a critical threshold, the probability of avoiding
local minima vanishes exponentially with the qubit count, while only grows
polynomially with the circuit depth. The curvature of local minima is
concentrated to the quantum Fisher information times a loss-dependent constant,
which characterizes the sensibility of the output state with respect to
parameters in QNNs. These results hold for any circuit structures,
initialization strategies, and work for both fixed ansatzes and adaptive
methods. Extensive numerical simulations are performed to validate our
theoretical results. Our findings place generic limits on good initial guesses
and adaptive methods for improving the learnability and scalability of QNNs,
and deepen the understanding of prior information's role in QNNs. | Hao-kai Zhang, Chenghong Zhu, Mingrui Jing, Xin Wang | 2023-09-26T14:54:50Z | http://arxiv.org/abs/2309.14980v1 | # Statistical Analysis of Quantum State Learning Process in Quantum Neural Networks
###### Abstract
Quantum neural networks (QNNs) have been a promising framework in pursuing near-term quantum advantage in various fields, where many applications can be viewed as learning a quantum state that encodes useful data. As a quantum analog of probability distribution learning, quantum state learning is theoretically and practically essential in quantum machine learning. In this paper, we develop a no-go theorem for learning an unknown quantum state with QNNs even starting from a high-fidelity initial state. We prove that when the loss value is lower than a critical threshold, the probability of avoiding local minima vanishes exponentially with the qubit count, while only grows polynomially with the circuit depth. The curvature of local minima is concentrated to the quantum Fisher information times a loss-dependent constant, which characterizes the sensibility of the output state with respect to parameters in QNNs. These results hold for any circuit structures, initialization strategies, and work for both fixed ansatzes and adaptive methods. Extensive numerical simulations are performed to validate our theoretical results. Our findings place generic limits on good initial guesses and adaptive methods for improving the learnability and scalability of QNNs, and deepen the understanding of prior information's role in QNNs.
## 1 Introduction
Recent experimental progress towards realizing quantum information processors [1; 2; 3] has fostered the thriving development of the emerging field of quantum machine learning (QML) [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], pursuing quantum advantages in artificial intelligence. Different QML algorithms have been proposed for various topics, e.g., quantum simulations [20; 21; 22; 23; 24], chemistry [25; 26; 27; 28; 29], quantum data compression [30; 31], generative learning [32; 33] and reinforcement learning [34], where quantum neural networks (QNNs) become a leading framework due to the hardware restriction from noisy intermediate scale quantum (NISQ) [35] devices. As quantum analogs of artificial neural networks, QNNs typically refer to parameterized quantum circuits which are trainable based on quantum measurement results.
However, QNNs face a severe scalability barrier which might prevent the realization of potential quantum advantages. A notorious example is the barren plateau phenomenon [36] which shows that the gradient of the loss function vanishes exponentially in the system size with a high probability for randomly initialized deep QNNs, giving rise to an exponential training cost. To address this issue, a variety of training strategies has been proposed, such as local loss functions [37], correlated parameters [38], structured architectures [39; 40; 41], good initial guesses [42; 43], initialization heuristics near the identity [44; 45; 46; 47], adaptive methods [28] and layerwise training [48], etc. Nevertheless, there
is a lack of scalability analyses for these strategies to guarantee their effectiveness. Especially, since the identity initialization and adaptive or layerwise training methods do not require uniformly random initialization of deep QNNs, they are out of the scope of barren plateaus and urgently need a comprehensive theoretical analysis to ascertain their performance under general conditions.
In this work, we analyze the learnability of QNNs from a statistical perspective considering the information of loss values. Specifically, given a certain loss value during the training process, we investigate the statistical properties of surrounding training landscapes. Here we mainly focus on quantum state learning tasks [49; 50], which can be seen as a quantum analog of probability distribution learning and play a central role in QML. To summarize, our contributions include:
* We prove a no-go theorem stating that during the process of learning an unknown quantum state with QNNs, the probability of avoiding local minima is of order \(\mathcal{O}(N^{2}2^{-N}D^{2}/\epsilon^{2})\) as long as the loss value is lower than a critical threshold (cf. Fig. 1). The bound vanishes exponentially in the qubit count \(N\) while only increases polynomially with the circuit depth \(D\). The curvature of local minima is concentrated to the quantum Fisher information times a loss-dependent constant. The proof is mainly based on the technique of "subspace Haar integration" we developed in Appendix A.1. A generalized version for the local loss function is provided in Appendix C.
* We conduct extensive numerical experiments to verify our theoretical findings. We first compare our bound with practical loss curves to show the prediction ability on the statistical behavior of the actual training process. Then we sample landscape profiles to visualize the existence of asymptotic local minima. Finally, we compute the gradients and diagonalize the Hessian matrices to directly verify the correctness of our bound.
* Our results place general limits on the learnability of QNNs, especially for the training strategies beyond the scope of barren plateaus, including high-fidelity initial guesses, initialization heuristics near the identity, and adaptive and layerwise training methods. Moreover, our results provide a theoretical basis for the necessity of introducing prior information into QNN designs and hence draw a guideline for future QNN developments.
Figure 1: Sketches of our work characterizing the statistical performance of QNNs on quantum state learning tasks. (a) indicates the existence of a critical loss value \(\mathcal{L}_{c}=1-2^{-N}\) below which the local minima start to become severe to trap the training process. (b) depicts the setup of quantum state learning tasks where a QNN is used to learn an unknown target state encoding practical data. All target states with a constant distance to the output state \(|\psi^{*}\rangle\) form our ensemble \(\mathbb{T}\), depicted by the contour on the Bloch sphere. (c) shows typical loss curves for different qubit counts of \(N=2,6,10\) and circuit depths of \(D=1,3,5\). The intensity of the background colors represents the magnitude of our theoretical bound on the probability of encountering a local minimum, which hence signifies the hardness of optimization. One can find that the theoretical bound appropriately reflects the hardness encountered by the practical loss curves.
### Related works
The barren plateau phenomenon was first discovered by [36], which proves that the variance of the gradient vanishes exponentially with the system size if the randomly initialized QNN forms a unitary \(2\)-design. Thereafter, [37] finds the dependence of barren plateaus on the circuit depth for loss functions with local observables. [51] proves that training QNNs is in general NP-hard. [52] introduces barren plateaus from uncertainty which precludes learning scramblers. [52; 53] establish connections among the expressibility, generalizability and trainability of QNNs. [54] and [55] show that apart from barren plateaus, QNNs also suffer from local minima in certain cases.
On the other hand, many training strategies have been proposed to address barren plateaus. Here we only list a small part relevant to our work. [44; 45; 46; 47] suggest that initializing the QNN near the identity could reduce the randomness and hence escape from barren plateaus. [28] and [48] propose adaptive and layerwise training methods which avoid using randomly initialized QNNs in order to avoid barren plateaus, whereas [56] finds counterexamples where the circuit training terminates close to the identity and remains near to the identity for subsequently added layers without effective progress.
## 2 Quantum computing basics and notations
We use \(\|\cdot\|_{p}\) to denote the \(l_{p}\)-norm for vectors and the Schatten-\(p\) norm for matrices. \(A^{\dagger}\) is the conjugate transpose of matrix \(A\). \(\mathrm{tr}\,A\) represents the trace of \(A\). The \(\mu\)-th component of the vector \(\mathbf{\theta}\) is denoted as \(\theta_{\mu}\) and the derivative with respect to \(\theta_{\mu}\) is simply denoted as \(\partial_{\mu}=\frac{\partial}{\partial\theta_{\mu}}\). We employ \(\mathcal{O}\) as the asymptotic notation for upper bounds.
In quantum computing, the basic unit of quantum information is a quantum bit or qubit. A single-qubit pure state is described by a unit vector in the Hilbert space \(\mathbb{C}^{2}\), which is commonly written in Dirac notation \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\), with \(|0\rangle=(1,0)^{T}\), \(|1\rangle=(0,1)^{T}\), \(\alpha,\beta\in\mathbb{C}\) subject to \(|\alpha|^{2}+|\beta|^{2}=1\). The complex conjugate of \(|\psi\rangle\) is denoted as \(\langle\psi|=|\psi\rangle^{\dagger}\). The Hilbert space of \(N\) qubits is formed by the tensor product "\(\otimes\)" of \(N\) single-qubit spaces with dimension \(d=2^{N}\). We denote the inner product of two states \(|\phi\rangle\) and \(|\psi\rangle\) as \(\langle\phi|\psi\rangle\) and the overlap is defined as \(|\langle\phi|\psi\rangle|\). General mixed quantum states are represented by the density matrix, which is a positive semidefinite matrix \(\rho\in\mathbb{C}^{d\times d}\) subject to \(\mathrm{tr}\,\rho=1\). Quantum gates are unitary matrices, which transform quantum states via the matrix-vector multiplication. Common single-qubit rotation gates include \(R_{x}(\theta)=e^{-i\theta X/2}\), \(R_{y}(\theta)=e^{-i\theta Y/2}\), \(R_{z}(\theta)=e^{-i\theta Z/2}\), which are in the matrix exponential form of Pauli matrices
\[X=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\qquad Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. \tag{1}\]
Common two-qubit gates include controlled-X gate \(\mathrm{CNOT}=I\oplus X\) (\(\oplus\) is the direct sum) and controlled-Z gate \(\mathrm{CZ}=I\oplus Z\), which can generate quantum entanglement among qubits.
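As a quick numerical illustration (a sketch of ours, not part of the original text), these gates can be realized directly with NumPy and SciPy; the closed form \(R_{P}(\theta)=\cos(\theta/2)I-i\sin(\theta/2)P\) is checked against the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices of Eq. (1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(P, theta):
    """Single-qubit rotation R_P(theta) = exp(-i theta P / 2)."""
    return expm(-0.5j * theta * P)

# equivalent closed form: R_P(theta) = cos(theta/2) I - i sin(theta/2) P
assert np.allclose(rot(X, 0.3), np.cos(0.15) * I2 - 1j * np.sin(0.15) * X)

# two-qubit gates as direct sums: CNOT = I (+) X, CZ = I (+) Z
O2 = np.zeros((2, 2), dtype=complex)
CNOT = np.block([[I2, O2], [O2, X]])
CZ = np.block([[I2, O2], [O2, Z]])
```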
### Framework of Quantum Neural Networks
Quantum neural networks (QNNs) typically refer to parameterized quantum circuits \(U(\mathbf{\theta})\) where the parameters \(\mathbf{\theta}\) are trainable based on the feedback from quantum measurement results using a classical optimizer. By assigning some loss function \(\mathcal{L}(\mathbf{\theta})\), QNNs can be used to accomplish various tasks just like artificial neural networks. A general form of QNNs reads \(U(\mathbf{\theta})=\prod_{\mu=1}^{M}U_{\mu}(\theta_{\mu})W_{\mu}\), where \(U_{\mu}(\theta_{\mu})=e^{-i\Omega_{\mu}\theta_{\mu}}\) is a parameterized gate such as single-qubit rotations with \(\Omega_{\mu}\) being a Hermitian generator. \(W_{\mu}\) is a non-parameterized gate such as \(\mathrm{CNOT}\) and \(\mathrm{CZ}\). The product \(\prod_{\mu=1}^{M}\) is by default in increasing order from right to left. \(M\) denotes the number of trainable parameters. Note that QNNs with intermediate classical controls [57] can also be included in this general form theoretically. Commonly used templates of QNNs include the hardware efficient ansatz [27], the alternating-layered ansatz (ALT) [58] and the tensor-network-based ansatz [37; 59], which are usually composed of repeated layers. The number of repeated layers is called the depth of the QNN, denoted as \(D\). Fig. 2 depicts an example of the ALT circuit. The gradients of loss functions of certain QNNs are evaluated by the parameter-shift rule [60; 61; 62] on real quantum devices. Hence we can train QNNs efficiently with gradient-based optimizers [63].
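A minimal sketch of the general form \(U(\mathbf{\theta})=\prod_{\mu=1}^{M}U_{\mu}(\theta_{\mu})W_{\mu}\) as a dense matrix product follows; it is only practical for small \(N\), and all function names (`embed`, `cz`, `qnn_unitary`) are ours rather than any library API.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

def embed(op, qubit, n):
    """Lift a single-qubit operator onto qubit `qubit` of an n-qubit register."""
    ops = [np.eye(2, dtype=complex)] * n
    ops[qubit] = op
    return reduce(np.kron, ops)

def cz(i, j, n):
    """Controlled-Z between qubits i and j: diagonal phase (-1)^{b_i b_j}."""
    b = np.arange(2 ** n)
    bi = (b >> (n - 1 - i)) & 1
    bj = (b >> (n - 1 - j)) & 1
    return np.diag((-1.0) ** (bi * bj)).astype(complex)

def qnn_unitary(thetas, generators, entanglers):
    """U(theta) = prod_{mu=1}^{M} U_mu(theta_mu) W_mu with
    U_mu = exp(-i Omega_mu theta_mu); mu = 1 acts first (rightmost factor)."""
    d = entanglers[0].shape[0]
    U = np.eye(d, dtype=complex)
    for th, Om, W in zip(thetas, generators, entanglers):
        U = expm(-1j * th * Om) @ W @ U
    return U

# Example: a 2-qubit block; Pauli-rotation generators are Omega = P / 2
Y = np.array([[0, -1j], [1j, 0]])
gens = [embed(Y, 0, 2) / 2, embed(Y, 1, 2) / 2]   # R_y on each qubit
ents = [cz(0, 1, 2), np.eye(4, dtype=complex)]    # one CZ entangler
psi = qnn_unitary([0.3, 0.7], gens, ents) @ np.eye(4)[:, 0]  # |00> input
```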
Here we focus on quantum state learning tasks, the objective of which is to learn a given target state \(|\phi\rangle\) encoding practical data via minimizing the distance between the target state \(|\phi\rangle\) and the output
state from the QNN \(|\psi(\mathbf{\theta})\rangle=U(\mathbf{\theta})|0\rangle^{\otimes N}\). The corresponding loss function is usually chosen as the fidelity distance
\[\mathcal{L}(\mathbf{\theta})=1-|\langle\phi|\psi(\mathbf{\theta})\rangle|^{2}\,, \tag{2}\]
which can be efficiently calculated on quantum computers using the swap test [64]. An important quantity used below, which characterizes the sensitivity of the QNN output state \(|\psi(\mathbf{\theta})\rangle\) with respect to its parameters \(\mathbf{\theta}\), is the quantum Fisher information (QFI) matrix \(\mathcal{F}_{\mu\nu}\) [65], defined as the Riemannian metric induced from the fidelity distance (cf. Appendix A.4)
\[\mathcal{F}_{\mu\nu}(\mathbf{\theta})=2\operatorname{Re}\left[\langle\partial_{ \mu}\psi|\partial_{\nu}\psi\rangle-\langle\partial_{\mu}\psi|\psi\rangle \langle\psi|\partial_{\nu}\psi\rangle\right]. \tag{3}\]
If the QFI is not full-rank, we say \(|\psi(\mathbf{\theta})\rangle\) is over-parameterized [66, 67] meaning that there are redundant degrees of freedom in the parameters over the manifold dimension of the state.
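Eqs. (2) and (3) can be evaluated numerically for small systems; the sketch below (our own helper names, central finite differences with step \(h\)) assumes `state_fn` returns a normalized state vector whose global phase varies smoothly with \(\mathbf{\theta}\), as is the case for circuit-built states.

```python
import numpy as np

def fidelity_loss(phi, psi):
    """Eq. (2): L = 1 - |<phi|psi>|^2."""
    return 1.0 - np.abs(np.vdot(phi, psi)) ** 2

def qfi(state_fn, theta, h=1e-5):
    """QFI matrix of Eq. (3) via central finite differences."""
    theta = np.asarray(theta, dtype=float)
    psi = state_fn(theta)
    m = theta.size
    dpsi = []
    for mu in range(m):
        e = np.zeros(m); e[mu] = h
        dpsi.append((state_fn(theta + e) - state_fn(theta - e)) / (2 * h))
    F = np.zeros((m, m))
    for mu in range(m):
        for nu in range(m):
            t = np.vdot(dpsi[mu], dpsi[nu]) \
                - np.vdot(dpsi[mu], psi) * np.vdot(psi, dpsi[nu])
            F[mu, nu] = 2.0 * t.real
    return F
```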
## 3 Statistical characterization of quantum state learning in QNNs
In this section, we develop a no-go theorem characterizing the limitation of quantum neural networks in state learning tasks from a statistical perspective. In short, we prove that the probability of avoiding local minima during the process of learning an unknown quantum state with a QNN is of order \(\mathcal{O}(2^{-N}M^{2}/\epsilon^{2})\), where \(N\) is the number of qubits, \(M\) is the number of trainable parameters and \(\epsilon\) represents the typical precision of measurements. The detailed upper bound also depends on the overlap between the value of the loss function and the QFI of the QNN. Our bounds significantly improve existing results of the trainability analysis of QNNs, which mainly focus on the randomness from the initialization and neglect the information of the loss function value. We will first introduce our ensemble setting in Section 3.1, then present our main theorem on local minima in Section 3.2 and finally show some results beyond local minima in Section 3.4.
### Ensemble of the unknown target state
We first introduce the probability measure used in this work. The randomness studied by most of the previous work on the trainability analyses of QNNs originates from the random initialization of trainable parameters [36], which usually depends on the circuit depth, specific choices of the QNN architecture and initialization strategies [45]. Meanwhile, the randomness can also come from the lack of prior information, such as learning an unknown quantum state or an unknown scrambler like a black hole [68]. We focus on the latter in the present work.
The usage of adaptive methods is usually not covered by common trainability analyses; nevertheless, the training process often tends to stagnate. With the aim of investigating the trainability at a specific loss function value, the ensemble is constructed as a uniform measure over all pure states that have the same overlap with the current output state of the QNN. Specifically, suppose \(\mathbf{\theta}^{*}\) is the current value of the trainable parameters. The overlap, or fidelity, between the output state \(|\psi^{*}\rangle=|\psi(\mathbf{\theta}^{*})\rangle\) and the target state \(|\phi\rangle\) equals \(|\langle\phi|\psi^{*}\rangle|=p\). Thus, the target state can be decomposed as
\[|\phi\rangle=p|\psi^{*}\rangle+\sqrt{1-p^{2}}|\psi^{\perp}\rangle, \tag{4}\]
where \(|\psi^{\perp}\rangle\) represents the unknown component in the target state \(|\phi\rangle\) orthogonal to the learnt component \(|\psi^{*}\rangle\). If no more prior information is known about the target state \(|\phi\rangle\) except for the overlap \(p\), in the spirit of Bayesian statistics, \(|\psi^{\perp}\rangle\) is supposed to be a random state uniformly distributed in the orthogonal complement of \(|\psi^{*}\rangle\), denoted as \(\mathcal{H}^{\perp}\). Such a Haar-random state can induce an ensemble of the unknown target state via Eq. (4), which we denote as \(\mathbb{T}=\{|\phi\rangle\mid|\psi^{\perp}\rangle\text{ is Haar-random in }\mathcal{H}^{\perp}\}\). Graphically, \(\mathbb{T}\) can be understood as a contour on the Bloch sphere of an \(N\)-qubit system with a constant distance to \(|\psi^{*}\rangle\), as shown in Fig. 1(b). We remark that \(\mathbf{\theta}^{*}\) can be interpreted as either an initial guess or an intermediate value during the training process, so that our following results can be applied to the entire process of learning a quantum state. See Appendix A.1 for more details on the ensemble setting.

Figure 2: The quantum circuit of the alternating-layered ansatz on \(4\) qubits. The circuit starts with a \(R_{y}\) layer and a \(R_{x}\) layer, followed by \(D\) repeated layers, where each layer contains alternating \(2\)-qubit unit blocks of a CZ gate, two \(R_{y}\) gates and two \(R_{z}\) gates.
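Sampling from \(\mathbb{T}\) is straightforward: a complex Gaussian vector projected off \(|\psi^{*}\rangle\) and normalized is Haar-random in \(\mathcal{H}^{\perp}\), and Eq. (4) then yields a target state. The sketch below is illustrative and the function name is ours.

```python
import numpy as np

def sample_target(psi_star, p, rng=None):
    """Draw |phi> = p|psi*> + sqrt(1-p^2)|psi_perp> as in Eq. (4), with
    |psi_perp> Haar-random in the orthogonal complement of |psi*>.
    psi_star: 1-D complex array, normalized."""
    rng = np.random.default_rng() if rng is None else rng
    d = psi_star.shape[0]
    # complex Gaussian -> project off |psi*> -> normalize: Haar in H_perp
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v = v - np.vdot(psi_star, v) * psi_star
    v = v / np.linalg.norm(v)
    return p * psi_star + np.sqrt(1.0 - p ** 2) * v
```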
### Exponentially likely local minima
We now investigate the statistical properties of the gradient \(\nabla\mathcal{L}\) and the Hessian matrix \(H_{\mathcal{L}}\) of the loss function \(\mathcal{L}(\mathbf{\theta})\) at the parameter point \(\mathbf{\theta}=\mathbf{\theta}^{*}\) regarding the ensemble \(\mathbb{T}\), and hence derive an upper bound on the probability that \(\mathbf{\theta}^{*}\) is not a local minimum. For simplicity of notation, we represent the value of a certain function at \(\mathbf{\theta}=\mathbf{\theta}^{*}\) by appending the superscript "\(*\)", e.g., \(\nabla\mathcal{L}|_{\mathbf{\theta}=\mathbf{\theta}^{*}}\) as \(\nabla\mathcal{L}^{*}\) and \(H_{\mathcal{L}}|_{\mathbf{\theta}=\mathbf{\theta}^{*}}\) as \(H_{\mathcal{L}}^{*}\). We define that \(\mathbf{\theta}^{*}\) is a local minimum up to a fixed precision \(\epsilon=(\epsilon_{1},\epsilon_{2})\) if and only if each of the gradient components is not larger than \(\epsilon_{1}\) in magnitude and the minimal eigenvalue of the Hessian matrix is not smaller than \(-\epsilon_{2}\), i.e.,
\[\mathrm{LocalMin}(\mathbf{\theta}^{*},\epsilon)=\bigcap_{\mu=1}^{M}\left\{|\partial_{\mu}\mathcal{L}^{*}|\leq\epsilon_{1}\right\}\ \cap\ \left\{H_{\mathcal{L}}^{*}\succeq-\epsilon_{2}I\right\}. \tag{5}\]
If \(\epsilon_{1}\) and \(\epsilon_{2}\) are both zero, Eq. (5) reduces back to the common exact definition of the local minimum. However, noise and uncertainty from measurements on real quantum devices give rise to a non-zero \(\epsilon\), where the estimation cost scales as \(\mathcal{O}(1/\epsilon^{\alpha})\) for some power \(\alpha\) [69]. In particular, if \(|\psi(\mathbf{\theta}^{*})\rangle\) approaches the true target state \(|\phi\rangle\) such that \(\mathcal{L}^{*}\to 0\), we say \(\mathbf{\theta}^{*}\) is a global minimum. That is to say, here "local minima" are claimed with respect to the entire Hilbert space instead of training landscapes created by different ansatzes. The expectation and variance of the first and second-order derivatives of the loss function are calculated and summarized in Lemma 1, with the detailed proof in Appendix B utilizing the technique we dubbed "subspace Haar integration" in Appendix A.1.
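The test in Eq. (5) amounts to thresholding the gradient components and the minimal Hessian eigenvalue; a direct sketch (our own helper, assuming the gradient and Hessian are available as arrays):

```python
import numpy as np

def is_local_min(grad, hess, eps1=0.05, eps2=0.05):
    """Precision-(eps1, eps2) local-minimum test of Eq. (5): every gradient
    component within eps1 and minimal Hessian eigenvalue at least -eps2."""
    if np.max(np.abs(grad)) > eps1:
        return False
    return np.linalg.eigvalsh(hess).min() >= -eps2
```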
**Lemma 1**: _The expectation and variance of the gradient \(\nabla\mathcal{L}\) and Hessian matrix \(H_{\mathcal{L}}\) of the fidelity loss function \(\mathcal{L}(\mathbf{\theta})=1-|\langle\phi|\psi(\mathbf{\theta})\rangle|^{2}\) at \(\mathbf{\theta}=\mathbf{\theta}^{*}\) with respect to the target state ensemble \(\mathbb{T}\) satisfy_
\[\mathbb{E}_{\mathbb{T}}\left[\nabla\mathcal{L}^{*}\right]=0, \quad\mathrm{Var}_{\mathbb{T}}[\partial_{\mu}\mathcal{L}^{*}]=f_{1}(p,d) \mathcal{F}_{\mu\mu}^{*}. \tag{6}\] \[\mathbb{E}_{\mathbb{T}}\left[H_{\mathcal{L}}^{*}\right]=\frac{dp^ {2}-1}{d-1}\mathcal{F}^{*},\quad\mathrm{Var}_{\mathbb{T}}[\partial_{\mu} \partial_{\nu}\mathcal{L}^{*}]\leq f_{2}(p,d)\|\Omega_{\mu}\|_{\infty}^{2}\| \Omega_{\nu}\|_{\infty}^{2}. \tag{7}\]
_where \(\mathcal{F}\) denotes the QFI matrix in Eq. (3) and \(\Omega_{\mu}\) is the generator of the gate \(U_{\mu}(\theta_{\mu})\). \(f_{1}\) and \(f_{2}\) are functions of the overlap \(p\) and the Hilbert space dimension \(d\), i.e.,_
\[f_{1}(p,d)=\frac{p^{2}(1-p^{2})}{d-1},\quad f_{2}(p,d)=\frac{32(1-p^{2})}{d-1} \left[p^{2}+\frac{2(1-p^{2})}{d}\right]. \tag{8}\]
The exponentially vanishing variances in Lemma 1 imply that the gradient and Hessian matrix concentrate to their expectations exponentially in the number of qubits \(N\) due to \(d=2^{N}\) for an \(N\)-qubit system. Thus the gradient concentrates to zero and the Hessian matrix concentrates to the QFI \(\mathcal{F}^{*}\) times a non-vanishing coefficient proportional to \((p^{2}-1/d)\). Since the QFI is always positive semidefinite, the expectation of the Hessian matrix is either positive semidefinite if \(\mathcal{L}^{*}=1-p^{2}<1-1/d\), or negative semidefinite if \(\mathcal{L}^{*}>1-1/d\), as illustrated in Fig. 1(a). The critical point \(\mathcal{L}_{c}=1-1/d\) coincides with the average fidelity distance of two Haar-random pure states, which means that as long as \(|\psi^{*}\rangle\) has a higher fidelity than the average level of all states, the expectation of the Hessian matrix will be positive semidefinite.
Using Lemma 1, we establish an exponentially small upper bound on the probability that \(\mathbf{\theta}^{*}\) is not a local minimum in the following Theorem 2, where the generator norm vector \(\mathbf{\omega}\) is defined as \(\mathbf{\omega}=(\|\Omega_{1}\|_{\infty},\|\Omega_{2}\|_{\infty},\ldots,\|\Omega_{ M}\|_{\infty})\).
**Theorem 2**: _If the fidelity loss function satisfies \(\mathcal{L}(\mathbf{\theta}^{*})<1-1/d\), the probability that \(\mathbf{\theta}^{*}\) is not a local minimum of \(\mathcal{L}\) up to a fixed precision \(\epsilon=(\epsilon_{1},\epsilon_{2})\) with respect to the target state ensemble \(\mathbb{T}\) is upper bounded by_
\[\Pr_{\mathbb{T}}\left[\neg\mathrm{LocalMin}(\mathbf{\theta}^{*}, \epsilon)\right]\leq\frac{2f_{1}(p,d)\|\mathbf{\omega}\|_{2}^{2}}{\epsilon_{1}^{2} }+\frac{f_{2}(p,d)\|\mathbf{\omega}\|_{2}^{4}}{\left(\frac{dp^{2}-1}{d-1}e^{*}+ \epsilon_{2}\right)^{2}}, \tag{9}\]
_where \(e^{*}\) denotes the minimal eigenvalue of the QFI matrix at \(\mathbf{\theta}=\mathbf{\theta}^{*}\). \(f_{1}\) and \(f_{2}\) are defined in Eq. (8) which vanish at least of order \(1/d\)._
A sketch version of the proof is as follows, with the details in Appendix B. By definition in Eq. (5), the left-hand side of Eq. (9) can be upper bounded by the sum of two terms: the probability that one gradient component is larger than \(\epsilon_{1}\), and the probability that the Hessian matrix is not positive definite up to \(\epsilon_{2}\). The first term can be bounded by Lemma 1 and Chebyshev's inequality, i.e.,
\[\Pr_{\mathbb{T}}\left[\bigcup_{\mu=1}^{M}\{|\partial_{\mu}\mathcal{L}^{*}|> \epsilon_{1}\}\right]\leq\sum_{\mu=1}^{M}\frac{\mathrm{Var}_{\mathbb{T}}[ \partial_{\mu}\mathcal{L}^{*}]}{\epsilon_{1}^{2}}=\frac{f_{1}(p,d)}{\epsilon_ {1}^{2}}\operatorname{tr}\mathcal{F}^{*}, \tag{10}\]
where the QFI diagonal element is bounded as \(\mathcal{F}_{\mu\mu}\leq 2\|\Omega_{\mu}\|_{\infty}^{2}\) by definition and thus \(\operatorname{tr}\mathcal{F}^{*}\leq 2\|\mathbf{\omega}\|_{2}^{2}\). After assuming \(p^{2}>1/d\), the second term can be upper bounded by perturbing \(\mathbb{E}_{\mathbb{T}}[H_{\mathcal{L}}^{*}]\) to obtain a sufficient condition of positive definiteness (see Appendix A.2), and then utilizing Lemma 1 with the generalized Chebyshev's inequality for matrices (see Appendix A.3), i.e.,
\[\Pr_{\mathbb{T}}\left[H_{\mathcal{L}}^{*}\not\succeq-\epsilon_{2}I\right]\leq\sum_{\mu,\nu=1}^{M}\frac{\mathrm{Var}_{\mathbb{T}}\left[\partial_{\mu}\partial_{\nu}\mathcal{L}^{*}\right]}{\left(\frac{dp^{2}-1}{d-1}e^{*}+\epsilon_{2}\right)^{2}}\leq\frac{f_{2}(p,d)\|\mathbf{\omega}\|_{2}^{4}}{\left(\frac{dp^{2}-1}{d-1}e^{*}+\epsilon_{2}\right)^{2}}. \tag{11}\]
Combining the bounds regarding the gradient and Hessian matrix, one arrives at Eq. (9).
Theorem 2 directly points out that if the loss function takes a value lower than the critical threshold \(\mathcal{L}_{c}=1-1/d\), then the surrounding landscape would be a local minimum for almost all of the target states, the proportion of which is exponentially close to \(1\) as the qubit count \(N=\log_{2}d\) grows. Note that \(\|\mathbf{\omega}\|_{2}^{2}=\sum_{\mu=1}^{M}\|\Omega_{\mu}\|_{\infty}^{2}\) scales linearly with the number of parameters \(M\) and at most polynomially with the qubit count \(N\), because in practice the operator norm \(\|\Omega_{\mu}\|_{\infty}\) is either constant, as for the generators of Pauli rotations (\(\|X\|_{\infty}=\|Y\|_{\infty}=\|Z\|_{\infty}=1\)), or grows polynomially with the system size, as for layers with globally correlated parameters [57] and the global evolution in analog quantum computing [70]. Here we focus on the former and conclude that the upper bound in Theorem 2 is of order \(\mathcal{O}(2^{-N}M^{2}/\epsilon^{2})\), implying an exponential training cost. The conclusion also holds for noisy quantum states (see Appendix B). A similar result for the so-called local loss function, like the energy expectation used in variational quantum eigensolvers, is provided in Appendix C.
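For concreteness, the right-hand side of Eq. (9) can be evaluated as below; this is a sketch of ours in which the caller supplies the generator norms \(\|\Omega_{\mu}\|_{\infty}\) and the QFI minimal eigenvalue \(e^{*}\).

```python
import numpy as np

def f1(p, d):
    """First function of Eq. (8)."""
    return p**2 * (1 - p**2) / (d - 1)

def f2(p, d):
    """Second function of Eq. (8)."""
    return 32 * (1 - p**2) / (d - 1) * (p**2 + 2 * (1 - p**2) / d)

def theorem2_bound(p, n_qubits, omega, e_star, eps1, eps2):
    """Right-hand side of Eq. (9); Theorem 2 assumes p^2 > 1/d.
    omega: vector of generator norms ||Omega_mu||_inf, one per parameter."""
    d = 2.0 ** n_qubits
    assert p**2 > 1.0 / d, "Theorem 2 assumes L* < 1 - 1/d"
    w2 = float(np.sum(np.asarray(omega) ** 2))      # ||omega||_2^2
    grad_term = 2 * f1(p, d) * w2 / eps1**2
    denom = (d * p**2 - 1) / (d - 1) * e_star + eps2
    hess_term = f2(p, d) * w2**2 / denom**2
    return grad_term + hess_term

# e.g., unit-norm generators and M parameters (illustrative values only):
# theorem2_bound(p=0.2, n_qubits=10, omega=np.ones(150),
#                e_star=0.1, eps1=0.05, eps2=0.05)
```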
In principle, if one could explore the whole Hilbert space with exponentially many parameters, \(\mathbf{\theta}^{*}\) would be at most a saddle point instead of a local minimum, since there must exist a unitary connecting the learnt state \(|\psi^{*}\rangle\) and the target state \(|\phi\rangle\). This is also consistent with our bound: taking \(M\in\Omega(2^{N/2})\) cancels the \(2^{-N}\) factor, so that the bound is no longer exponentially small. However, the number of parameters one can control always scales polynomially with the qubit count due to the memory constraint. This fact indicates that if the QNN is not designed specially for the target state using some prior knowledge, so that the "correct" direction towards the target state is contained in the accessible tangent space, the QNN will have the same complexity as standard quantum state tomography.
**Dependence on the loss value \(\mathcal{L}^{*}\).** The dependence of the bound in Theorem 2 on the overlap \(p=\sqrt{1-\mathcal{L}^{*}}\) shows that, as the loss function value becomes lower, the local minima become denser, so that training becomes harder. This agrees with the experience that loss curves usually decay fast at the beginning of a training process and slow down till convergence. If \(e^{*}\neq 0\), the second term in Eq. (9) becomes larger as \(p^{2}\to 1/d\), suggesting that the local minima away from the critical point \(\mathcal{L}_{c}\) are more severe than those near \(\mathcal{L}_{c}\). Moreover, if \(\epsilon_{2}=0\), the bound diverges as the QFI minimal eigenvalue \(e^{*}\) vanishes, which reflects the fact that over-parameterized QNNs have many equivalent local minima connected by the redundant degrees of freedom of parameters.
By contrast, if \(\mathcal{L}^{*}>\mathcal{L}_{c}\), the results could be established similarly by slightly modifying the proof, but with respect to local maxima, as depicted in Fig. 1(a). However, the critical point
moves to \(1\) exponentially fast as \(N\) increases, i.e., the range of \(\mathcal{L}^{*}\) without severe local minima shrinks exponentially. Hence for large-scale systems, even with a polynomially small fidelity, one would encounter a local minimum almost definitely if no more prior knowledge can be used.
### Implication on the learnability of QNNs
In practical cases, if the QNN is composed of \(D\) repeated layers with \(z\) trainable parameters for each layer and each qubit, then the total number of trainable parameters becomes \(M=NDz\), and hence the probability of avoiding local minima is of order \(\mathcal{O}(N^{2}2^{-N}D^{2}/\epsilon^{2})\), which increases quadratically as the QNN becomes deeper. This seems contrary to the conclusion from barren plateaus [36] where deep QNNs lead to poor trainability. But in fact, they are complementary to each other. The reason is that the ensemble here originates from the unknown target state instead of the random initialization. Similar to classical neural networks, a deeper QNN has stronger expressibility, which creates a larger accessible manifold to approach the unknown state and may turn a local minimum into a saddle point with the increased dimensions. But on the other hand, a deeper QNN with randomly initialized parameters leads to barren plateaus [37]. In short, the local minima here arise due to the limited expressibility together with a non-vanishing fidelity while barren plateaus stem from the strong expressibility together with the random initialization.
To solve this dilemma, our results suggest that a well-designed QNN structure taking advantage of prior knowledge of the target state is vitally necessary. Otherwise, a good initial guess (i.e., an initial state with high fidelity) alone can hardly play its role. An example of prior knowledge from quantum many-body physics is the tensor network states [71] satisfying the entanglement area law, which live only in a polynomially large space but generally cannot be solved in two and higher spatial dimensions by classical computers. Other examples include the UCCSD ansatz [72] in quantum chemistry and the QAOA ansatz [73] in combinatorial optimization, which all attempt to utilize the prior knowledge of the target states.
Finally, we remark that our results also place general theoretical limits on adaptive [28] and layer-wise training methods [48]. Relevant phenomena were previously observed in special examples [56]. Adaptive methods append new training layers incrementally during the optimization instead of placing a randomly initialized determinate ansatz at the beginning, which is hence beyond the scope of barren plateaus [36]. Nevertheless, our results imply that for moderately large systems with \(\mathcal{L}^{*}<\mathcal{L}_{c}\), every time a new training layer is appended, the learnt state \(|\psi^{*}\rangle\) would be a local minimum of the newly created landscape, so that the training process starting near \(|\psi^{*}\rangle\) would go back to the original state \(|\psi^{*}\rangle\) without making progress beyond applying an identity. Note that in adaptive methods, one usually initializes the newly appended layer near the identity to preserve the historical learnt outcomes. Similar phenomena are also expected to occur in the initialization strategies where the circuit begins near the identity [44; 45; 47]. We emphasize that our results do not imply the ineffectiveness of all adaptive methods. Instead, they only suggest that simplistic brute-force adaptive methods provide no significant benefit in terms of enhancing learnability on average.
### Concentration of training landscapes
Theorem 2 analyzes the statistical properties of the vicinity of a certain point \(\mathbf{\theta}^{*}\), i.e., the probability distributions of the gradient and Hessian matrix at \(\mathbf{\theta}^{*}\). To characterize the training landscape beyond the vicinity, a pointwise result is established in Proposition 3, with the proof in Appendix B.
**Proposition 3**: _The expectation and variance of the fidelity loss function \(\mathcal{L}\) with respect to the target state ensemble \(\mathbb{T}\) can be exactly calculated as_
\[\begin{split}&\mathbb{E}_{\mathbb{T}}\left[\mathcal{L}(\mathbf{ \theta})\right]=1-p^{2}+\frac{dp^{2}-1}{d-1}g(\mathbf{\theta}),\\ &\mathrm{Var}_{\mathbb{T}}\left[\mathcal{L}(\mathbf{\theta})\right] =\frac{1-p^{2}}{d-1}g(\mathbf{\theta})\left[4p^{2}-\left(2p^{2}-\frac{(d-2)(1-p^ {2})}{d(d-1)}\right)g(\mathbf{\theta})\right],\end{split} \tag{12}\]
_where \(g(\mathbf{\theta})=1-|\langle\psi^{*}|\psi(\mathbf{\theta})\rangle|^{2}\)._
Since the factor \(g(\mathbf{\theta})\) takes its global minimum at \(\mathbf{\theta}^{*}\) by definition, the exponentially small variance in Proposition 3 implies that the entire landscape concentrates exponentially in the qubit count to
the expectation \(\mathbb{E}_{\mathbb{T}}\left[\mathcal{L}(\mathbf{\theta})\right]\) with a pointwise convergence (not necessarily a uniform convergence), which takes its global minimum at \(\mathbf{\theta}^{*}\) with respect to the training landscape as long as \(p^{2}>1/d\). For QNNs satisfying the parameter-shift rule, the factor \(g(\mathbf{\theta})\) along the Cartesian axis corresponding to \(\theta_{\mu}\) passing through \(\mathbf{\theta}^{*}\) will take the form of a trigonometric function \(\frac{1}{2}\mathcal{F}_{\mu\mu}^{*}\sin^{2}(\theta_{\mu}-\theta_{\mu}^{*})\), which is elaborated in Appendix B. Other points apart from \(\mathbf{\theta}^{*}\) are allowed to have a non-vanishing gradient expectation in our setup, which leads to prominent local minima instead of plateaus [36].
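Eq. (12) is simple to evaluate pointwise; the helper below (names ours) returns the ensemble mean and variance given \(g(\mathbf{\theta})\), and can be checked against Monte Carlo estimates obtained with the `sample_target` sketch above.

```python
def landscape_mean_var(g, p, d):
    """Eq. (12): ensemble mean and variance of L(theta), with
    g = g(theta) = 1 - |<psi*|psi(theta)>|^2 supplied by the caller."""
    mean = 1.0 - p**2 + (d * p**2 - 1.0) / (d - 1.0) * g
    var = (1.0 - p**2) / (d - 1.0) * g * (
        4.0 * p**2
        - (2.0 * p**2 - (d - 2.0) * (1.0 - p**2) / (d * (d - 1.0))) * g)
    return mean, var
```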
## 4 Numerical experiments
Previous sections theoretically characterize the limitation of QNNs in state learning tasks considering the information of the loss value \(\mathcal{L}^{*}\). In this section, we verify these results by conducting numerical experiments on the platforms Paddle Quantum [74] and Tensorcircuit [75] from the following three perspectives. The codes for numerical experiments can be found in [76].
**Comparison with loss curves.** Firstly, we show the prediction ability of Theorem 2 by direct comparison with experimental loss curves in Fig. 1(c). We create \(9\) ALT circuits for qubit counts of \(2,6,10\) and circuit depths of \(1,3,5\) with randomly initialized parameters, denoted as \(\mathbf{\theta}^{*}\). For each circuit, we sample \(10\) target states from the ensemble \(\mathbb{T}\) with \(p=0.2\) and then generate \(10\) corresponding loss curves using the Adam optimizer with a learning rate of \(0.01\). We use the background color intensity to represent the corresponding bounds from Theorem 2 by assigning \(e^{*}=0.1\) and \(\epsilon_{1}=\epsilon_{2}=0.05\). One can find that the loss curves decay fast at the beginning and then slow down till convergence, in accordance with the conclusion that the probability of encountering local minima is larger near the bottom. The convergent loss value becomes higher as the qubit count grows and can be partially reduced by increasing the circuit depth, which is also consistent with Theorem 2.
**Landscape profile sampling.** We visualize the existence of asymptotic local minima by sampling training landscape profiles in Fig. 3. Similar to the setup above, we create an ALT circuit with randomly initialized parameters \(\mathbf{\theta}^{*}\), sample \(200\) target states from the ensemble \(\mathbb{T}\) and compute the loss values near \(\mathbf{\theta}^{*}\). Figs. 3(a) and (b) are obtained by randomly choosing a direction for each landscape sample, while Fig. 3(c) is obtained by randomly sampling \(200\) directions for one fixed landscape sample. There is no indication of a local minimum in the case of few qubits and small fidelity, as shown by the blue curves in Fig. 3(a). However, as the qubit number grows, the landscape profiles concentrate into a clear convex shape centered at \(\mathbf{\theta}^{*}\) for both \(p=0.2\) and \(p=0.8\), where the curvature for \(p=0.8\) is larger due to the factor \((p^{2}-1/d)\) in Eq. (7). Fig. 3(c) further demonstrates that beyond the convexity along a specific direction, \(\mathbf{\theta}^{*}\) is a highly probable local minimum in the case of large qubit counts.
Figure 3: Samples of training landscapes along randomly chosen directions as a function of the distance \(\|\mathbf{\theta}-\mathbf{\theta}^{*}\|\) for random target states from \(\mathbb{T}\) with the sample size \(200\), the qubit count \(N=4\) and \(N=10\) and the overlap (a) \(p=0.2\) and (b) \(p=0.8\), respectively. The setup in (c) is the same as in (b) but fixes the target state and only samples the directions. A clear convex shape is present in the samples from \(N=10\) but absent in the samples from \(N=4\). This phenomenon intuitively shows that the training landscapes from the ensemble \(\mathbb{T}\) are concentrated to a local minimum around \(\mathbf{\theta}^{*}\) as the qubit count increases.

**Probability evaluation.** Finally, we compute the gradients and diagonalize the Hessian matrices to directly verify the exponentially likely local minima proposed by Theorem 2 in Fig. 4. Similar to the setup above, we create ALT circuits for qubit counts from \(N=1\) to \(11\) with depth \(D=5\) and sample \(200\) target states from \(\mathbb{T}\) for each circuit. After specifying a certain subset of parameters to be differentiated, we estimate the probability that \(\mathbf{\theta}^{*}\) is a local minimum by the proportion of samples satisfying the condition \(\mathrm{LocalMin}(\mathbf{\theta}^{*},\epsilon)\), where we assign \(\epsilon_{1}=\epsilon_{2}=0.05\). One can find that the probability of encountering local minima saturates to \(1\) very fast as the qubit count increases for arbitrary given values of \(p\), and at the same time, it can be reduced by increasing the number of trainable parameters, which is consistent with the theoretical findings in Theorem 2.
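The probability-evaluation experiment can be reproduced in miniature by the following Monte Carlo sketch, which reuses `sample_target` and `is_local_min` from the earlier sketches and takes derivatives by central differences; it is meant for small circuits only and is not the released code [76].

```python
import numpy as np

def local_min_probability(state_fn, theta_star, p, n_samples=200,
                          eps=(0.05, 0.05), h=1e-3, seed=0):
    """Monte Carlo estimate of Pr_T[LocalMin(theta*, eps)]: sample targets
    from T, build L(theta) = 1 - |<phi|psi(theta)>|^2, and test Eq. (5)."""
    rng = np.random.default_rng(seed)
    theta_star = np.asarray(theta_star, dtype=float)
    psi_star = state_fn(theta_star)
    m = theta_star.size
    hits = 0
    for _ in range(n_samples):
        phi = sample_target(psi_star, p, rng)
        loss = lambda t: 1.0 - np.abs(np.vdot(phi, state_fn(t))) ** 2
        grad = np.zeros(m)
        hess = np.zeros((m, m))
        for mu in range(m):
            e_mu = np.zeros(m); e_mu[mu] = h
            grad[mu] = (loss(theta_star + e_mu)
                        - loss(theta_star - e_mu)) / (2 * h)
            for nu in range(mu, m):
                e_nu = np.zeros(m); e_nu[nu] = h
                val = (loss(theta_star + e_mu + e_nu)
                       - loss(theta_star + e_mu - e_nu)
                       - loss(theta_star - e_mu + e_nu)
                       + loss(theta_star - e_mu - e_nu)) / (4 * h * h)
                hess[mu, nu] = hess[nu, mu] = val
        hits += int(is_local_min(grad, hess, *eps))
    return hits / n_samples
```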
## 5 Conclusion and outlook
In this paper, we prove that during the process of learning an unknown quantum state with QNNs, the probability of avoiding local minima is of order \(\mathcal{O}(N^{2}2^{-N}D^{2}/\epsilon^{2})\), which is exponentially small in the qubit count \(N\) while increasing polynomially with the circuit depth \(D\). The curvature of local minima is concentrated to the QFI matrix times a fidelity-dependent constant which is positive at \(p^{2}>1/d\). In practice, our results can be regarded as a quantum version of the no-free-lunch (NFL) theorem, suggesting that no single QNN is universally the best-performing model for learning all target quantum states. We remark that compared to previous works, our findings are the first to establish quantitative limits on good initial guesses and adaptive training methods for improving the learnability and scalability of QNNs.
In the technical part of our work, our ensemble arises from the unknown target state. Alternatively, if the QNN is sufficiently deep to form a subspace \(2\)-design (cf. Appendix A.1) replacing the ensemble we used here, a different interpretation could be established with the same calculations: there is an exponentially large proportion of local minima on some cross sections of the training landscape with a constant loss function value. However, it remains an open question what the scaling of the QNN depth is to constitute such a subspace \(2\)-design, given that a local random quantum circuit of polynomial depth forms an approximate unitary \(t\)-design [77]. We would like to note that the case where the output and target states are mixed states is not covered, due to the difficulty of defining an orthogonal ensemble of mixed states in the quantum setting, which may be left for future research.
Future progress will necessitate more structured QNN architectures and optimization tools, where insights from the field of deep learning may prove beneficial. Our findings suggest that the unique characteristics and prior information of quantum systems must be thoughtfully encoded in the QNN in order to learn the state successfully, such as the low entanglement structure in the ground state [71], the local interactions in Hamiltonians [72; 78] and the adiabatic evolution from product states [73].
**Acknowledgement.** We would like to thank the helpful comments from the anonymous reviewers. Part of this work was done when H. Z., C. Z., M. J., and X. W. were at Baidu Research.
Figure 4: Numerical evaluation of the probability that \(\mathbf{\theta}^{*}\) is a local minimum up to a fixed precision \(\epsilon\), i.e., \(\mathrm{Pr}_{\mathbb{T}}\left[\mathrm{LocalMin}(\mathbf{\theta}^{*},\epsilon)\right]\), for different qubit counts, numbers of trainable parameters and overlaps \(p\), with the error bar representing the statistical uncertainty in experiments. (a) shows that the probability converges to \(1\) rapidly with increasing qubit count. (b) shows that the probability is reduced by increasing the number of parameters, implying the local minimum phenomenon is mitigated. (c) illustrates that the probability of encountering local minima always converges to \(1\) for any fixed overlap \(p\). \(p=0.8\) for both (a) and (b) and the number of parameters in (c) is \(6\).
2303.18139 | Efficient View Synthesis and 3D-based Multi-Frame Denoising with
Multiplane Feature Representations | While current multi-frame restoration methods combine information from
multiple input images using 2D alignment techniques, recent advances in novel
view synthesis are paving the way for a new paradigm relying on volumetric
scene representations. In this work, we introduce the first 3D-based
multi-frame denoising method that significantly outperforms its 2D-based
counterparts with lower computational requirements. Our method extends the
multiplane image (MPI) framework for novel view synthesis by introducing a
learnable encoder-renderer pair manipulating multiplane representations in
feature space. The encoder fuses information across views and operates in a
depth-wise manner while the renderer fuses information across depths and
operates in a view-wise manner. The two modules are trained end-to-end and
learn to separate depths in an unsupervised way, giving rise to Multiplane
Feature (MPF) representations. Experiments on the Spaces and Real
Forward-Facing datasets as well as on raw burst data validate our approach for
view synthesis, multi-frame denoising, and view synthesis under noisy
conditions. | Thomas Tanay, Aleš Leonardis, Matteo Maggioni | 2023-03-31T15:23:35Z | http://arxiv.org/abs/2303.18139v2 | # Efficient View Synthesis and 3D-based Multi-Frame Denoising
###### Abstract
While current multi-frame restoration methods combine information from multiple input images using 2D alignment techniques, recent advances in novel view synthesis are paving the way for a new paradigm relying on volumetric scene representations. In this work, we introduce the first 3D-based multi-frame denoising method that significantly outperforms its 2D-based counterparts with lower computational requirements. Our method extends the multiplane image (MPI) framework for novel view synthesis by introducing a learnable encoder-renderer pair manipulating multiplane representations in feature space. The encoder fuses information across views and operates in a depth-wise manner while the renderer fuses information across depths and operates in a view-wise manner. The two modules are trained end-to-end and learn to separate depths in an unsupervised way, giving rise to Multiplane Feature (MPF) representations. Experiments on the Spaces and Real Forward-Facing datasets as well as on raw burst data validate our approach for view synthesis, multi-frame denoising, and view synthesis under noisy conditions.
## 1 Introduction
Multi-frame denoising is a classical problem of computer vision where a noise process affecting a set of images must be inverted. The main challenge is to extract consistent information across images effectively and the current state of the art relies on optical flow-based 2D alignment [45, 3, 7]. Novel view synthesis, on the other hand, is a classical problem of computer graphics where a scene is viewed from one or more camera positions and the task is to predict novel views from target camera positions. This problem requires to reason about the 3D structure of the scene and is typically solved using some form of volumetric representation [55, 28, 32]. Although the two problems are traditionally considered distinct, some novel view synthesis approaches have recently been observed to handle noisy inputs well, and to have a denoising effect in synthesized views by discarding inconsistent information across input views [18, 26]. This observation opens the door to 3D-based multi-frame denoising, by recasting the problem as a special case of novel view synthesis where the input views are noisy and the target views are the clean input views [26, 31].
Recently, novel view synthesis has been approached as an encoding-rendering process where a scene representation is first _encoded_ from a set of input images and an arbitrary number of novel views are then _rendered_ from this scene representation. In the Neural Radiance Field (NeRF) framework for instance, the scene representation is a radiance field function encoded by training a neural network on the input views. Novel views are then rendered by querying and integrating this radiance field function over light rays originating from a target camera position [2, 28]. In the Multiplane Image (MPI) framework on the other hand, the scene representation is a stack of semi-transparent colored layers arranged at various depths, encoded by feeding the input views to a neural network trained on a large number of scenes. Novel views are then rendered by warping and overcompositing the semi-transparent layers [55].
Figure 1: _Top:_ Our Multiplane Features Encoder-Renderer (MPFER) reimagines the MPI pipeline by moving the multiplane representation to feature space. _Bottom:_ MPFER significantly outperforms existing methods in multiple challenging scenarios, including here, novel view synthesis from 8 highly degraded inputs.
In the present work, we adopt the MPI framework because it is much lighter than the NeRF framework computationally. The encoding stage only requires one inference pass on a network that generalizes to new scenes instead of training one neural network per scene, and the rendering stage is essentially free instead of requiring a large number of inference passes. However, the standard MPI pipeline struggles to predict multiplane representations that are self-consistent across depths from multiple viewpoints. This problem can lead to depth-discretization artifacts in synthesized views [40] and has previously been addressed at the encoding stage using computationally expensive mechanisms and a large number of depth planes [8, 11, 27, 40]. Here, we propose to enforce cross-depth consistency at the rendering stage by replacing the fixed overcompositing operator with a learnable renderer. This change of approach has three important implications. First, the encoder module can now process depths independently from each other and focus on fusing information across views. This significantly reduces the computational load of the encoding stage. Second, the scene representation changes from a static MPI to Multiplane Features (MPF) rendered dynamically. This significantly increases the expressive power of the scene encoding. Finally, the framework's overall performance is greatly improved, making it suitable for novel scenarios including multi-frame denoising where it outperforms standard 2D-based approaches at a fraction of their computational cost. Our main contributions are as follows:
* We solve the cross-depth consistency problem for multi-plane representations at the rendering stage, by introducing a learnable renderer.
* We introduce the Multiplane Feature (MPF) representation, a generalization of the multiplane image with higher representational power.
* We re-purpose the multiplane image framework originally developed for novel view synthesis to perform 3D-based multi-frame denoising.
* We validate the approach with experiments on 3 tasks and 3 datasets and significantly outperform existing 2D-based and 3D-based methods for multi-frame denoising.
## 2 Related work
**Multi-frame denoising.** Multi-frame restoration methods are frequently divided into two categories, depending on the type of image alignment employed. _Explicit alignment_ refers to the direct warping of images using optical flows predicted by a motion compensation module [4, 43, 44, 54]. In contrast, _implicit alignment_ refers to local, data-driven deformations implemented using dynamic upsampling filters [16, 17], deformable convolutions [45, 49], kernel prediction networks [25] or their extension, basis prediction networks [53]. Explicit alignment is better at dealing with large motion while implicit alignment is better at dealing with residual motion, and state-of-the-art performance can be achieved by combining both in the form of flow-guided deformable convolutions [6, 7]. Another distinction between multi-frame restoration methods is the type of processing used. A common approach is to concatenate the input frames together along the channel dimension [4, 43, 44, 54, 49] but recurrent processing is more efficient [9, 10, 34, 42], especially when implemented in a bidirectional way [5, 7, 14]. BasicVSR++ achieves state-of-the-art performance by combining flow-guided deformable alignment with bidirectional recurrent processing iterated multiple times [7]. In a different spirit, the recent DeepRep method [3] introduces a deep reparameterization in feature space of the maximum a posteriori formulation of multi-frame restoration. Similarly to the previous methods however, it still uses a form of explicit 2D alignment, and lacks any ability to reason about the 3D structure of the scene.
**View synthesis.** The idea to decompose a scene into a set of semi-transparent planes can be traced back to the use of mattes and blue screens in special effects film-making [47, 38]. This scene representation was first applied to view interpolation in [41], and recently gained popularity under the name of Multiplane Image (MPI) [55]. It is particularly powerful to generate novel views from a small set of forward facing views [8, 27, 55], and can even be used to generate novel views from a single image [11, 46, 20]. The rendering of view-dependent effects and non-Lambertian surfaces is challenging due to the use of a single set of RGB\(\alpha\) images, and can be improved by predicting multiple MPIs combined as a weighted average of the distance from the input views [27], or as a set of basis components [51]. The simplicity of this representation is appealing, but it can still be computationally heavy when the number of depth planes grows [8, 51], and the rendered views can suffer from depth discretization artifacts [40]. A number of alternative layered scene representations exist, including the Layered Depth Image (LDI) consisting of one RGB image with an extra depth channel [36], and variants of MPIs and LDIs [13, 19, 21, 39]. So far however, all these methods use a fixed overcompositing operator at the rendering stage. The idea to perform view synthesis by applying 3D operations in the feature space of an encoder-decoder architecture was explored on simple geometries in [52], and used with success on a point-cloud scene representation for view synthesis from a single image in [50]. Recently, Neural Radiance Fields (NeRFs) have become highly popular for their ability to produce high quality renderings of complex scenes from arbitrary viewpoints [24, 28, 29, 30, 2]. However, they tend to be very heavy computationally, require a large number of input views, and lack the ability to generalize to novel scenes. IBRNet [48] improves generalizability by learning a generic view interpolation function, but the approach remains computationally heavy. The application of view synthesis approaches to multi-frame restoration has been limited so far, and exclusively based on NeRF. RawNeRF [26] explores novel view synthesis in low-light conditions, and reports strong denoising effects in the synthesized views. Deblur-NeRF [22] augments the ability of NeRFs to deal with blur by including Deformable Sparse Kernels. Noise-aware-NeRFs [31] improves the IBRNet architecture to explicitly deal with noise. However, these different restoration approaches still suffer from the limitations affecting their underlying NeRF representations.
## 3 Method
We start by describing the standard MPI processing pipeline, before discussing the cross-depth consistency problem. We then introduce our MPF Encoder-Renderer and its adaptation to multi-frame denoising.
### Standard MPI processing pipeline
The standard MPI processing pipeline turns a set of input views into an arbitrary set of rendered novel views by applying 4 main transformations: forward-warping, MPI prediction, backward-warping, and overcompositing (see Figure 2(a)). We describe this pipeline in more detail below.
**Input views.** The inputs of the pipeline are a set of \(V\) views of a scene, consisting of images and camera parameters. The images are of height \(H\) and width \(W\), with red-green-blue color channels, and can be stacked into a 4D tensor \(\mathbf{I}=\{\{\{\{\mathbf{I}_{vchw}\}_{w=1}^{W}\}_{h=1}^{H}\}_{c=1}^{3}\}_{v=1}^{V}\). To simplify notations, we omit the dimensions \(c,h,w\) and refer to an individual image as \(\mathbf{I}_{v}\). The camera parameters consist of an intrinsic tensor \(\mathbf{K}\) of size \(V\!\times\!3\!\times\!3\) containing information about the focal lengths and principal point of the cameras, and an extrinsic tensor containing information about the camera orientations in the world coordinate system, that can be split into a rotation tensor \(\mathbf{R}\) of size \(V\!\times\!3\!\times\!3\) and a translation tensor \(\mathbf{t}\) of size \(V\!\times\!3\!\times\!1\). A reference view \(i\) is defined a priori, and the positions of all the cameras are assumed to be expressed relative to it. The intrinsic matrix \(\mathbf{K}_{i}\) of the reference camera is defined such that the corresponding field of view covers the region of space visible to all the input views. Finally, a set of \(D\) depth planes is distributed orthogonally to the reference viewing direction such that their normal is \(\mathbf{n}=(0,0,1)^{\top}\), and their distances \(\{a_{d}\}_{d=1}^{D}\) from the reference camera center are sampled uniformly in disparity. The camera parameters and the depth planes are used to define a set of \(D\!\times\!V\) homography projections, represented by a tensor \(\mathbf{H}\) of size \(D\!\times\!V\!\times\!3\!\times\!3\). Each homography is between one of the input views and the reference view, and is induced by one of the depth planes, such that its matrix is expressed as [12]:
\[\mathbf{H}_{dv}=\mathbf{K}_{v}\left(\mathbf{R}_{v}-\frac{\mathbf{t}_{v}\,\mathbf{n}^{\top}}{a_{d}} \right)\mathbf{K}_{i}^{-1} \tag{1}\]
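For illustration, Eq. (1) can be assembled directly from the camera tensors defined above; the NumPy sketch below follows those shapes, and the depth-plane sampling at the end mirrors the 0.5-100 m range used later in the experiments.

```python
import numpy as np

def homography_tensor(K, R, t, K_ref, depths, n=np.array([0.0, 0.0, 1.0])):
    """Build the D x V x 3 x 3 homography tensor of Eq. (1).

    K, R: (V, 3, 3) intrinsics and rotations; t: (V, 3, 1) translations,
    all expressed relative to the reference view with intrinsics K_ref.
    depths: (D,) distances a_d of the fronto-parallel planes.
    """
    V, D = K.shape[0], depths.shape[0]
    K_ref_inv = np.linalg.inv(K_ref)
    H = np.empty((D, V, 3, 3))
    for d in range(D):
        for v in range(V):
            # H_dv = K_v (R_v - t_v n^T / a_d) K_i^{-1}
            H[d, v] = K[v] @ (R[v] - (t[v] @ n[None, :]) / depths[d]) @ K_ref_inv
    return H

# Depth planes sampled uniformly in disparity, e.g. between 0.5 m and 100 m.
D = 64
depths = 1.0 / np.linspace(1.0 / 100.0, 1.0 / 0.5, D)
```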
**Plane Sweep Volumes.** The first transformation in the MPI pipeline is the computation of Plane Sweep Volumes (PSVs), obtained by forward-warping each input image \(D\) times, according to the homography \(\mathbf{H}_{dv}\). The sampling rate of the warping operator \(\mathcal{W}\) is a hyperparameter that can be controlled using an up-scaling factor \(s\). Each transformation can thus be written as \(\mathbf{X}_{dv}=\mathcal{W}(\mathbf{I}_{v},\,\mathbf{H}_{dv},\,s)\) and the PSV tensor \(\mathbf{X}\) is of size \(D\!\times\!V\!\times\!3\!\times\!sH\!\times\!sW\).
**Multiplane Image.** The main processing block of the pipeline is a neural network MPINet, turning the set of PSVs into a multiplane image representation of the scene \(\mathbf{Y}=\text{MPINet}(\mathbf{X})\) where \(\mathbf{Y}\) is a set of \(D\) semi-transparent RGB images of size \(D\!\times\!4\!\times\!sH\!\times\!sW\), constrained to the [0,1] range by using a sigmoid activation function.
**Projected MPIs.** The MPI is then backward-warped to a set of \(R\) novel views, defined by a homography tensor \(\mathbf{G}\) of size \(R\!\times\!D\!\times\!3\!\times\!3\) following Eq. (1) with the depth and view dimensions transposed. The backward-warping operation is defined as \(\mathbf{Z}_{rd}=\mathcal{W}\left(\mathbf{Y}_{d},\,\mathbf{G}_{rd}^{-1},\,1/s\right)\) obtaining a tensor of projected MPIs \(\mathbf{Z}\) of size \(R\!\times\!D\!\times\!4\!\times\!H\!\times\!W\).
**Rendered views.** The projected MPIs are finally collapsed into single RGB images by applying the overcompositing operator \(\mathcal{O}\). This operator splits each RGB\(\alpha\) image \(\mathbf{Z}_{rd}\) into a colour component \(\mathbf{C}_{rd}\) and an alpha component \(\mathbf{A}_{rd}\) and computes \(\mathbf{\tilde{J}}_{r}=\sum_{d=1}^{D}\left(\mathbf{C}_{rd}\mathbf{A}_{rd}\prod_{k=d+1}^{D}\left(1-\mathbf{A}_{rk}\right)\right)\) obtaining the rendered views \(\mathbf{\tilde{J}}\) of size \(R\!\times\!3\!\times\!H\!\times\!W\).
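The overcompositing operator is a fixed back-to-front alpha blend; a minimal NumPy version of the formula above (with larger depth indices occluding smaller ones, per the product bound \(k=d+1\) to \(D\)) is:

```python
import numpy as np

def overcomposite(Z):
    """Collapse projected MPIs Z of shape (R, D, 4, H, W) into views (R, 3, H, W).

    Implements J_r = sum_d C_rd * A_rd * prod_{k>d} (1 - A_rk).
    """
    C, A = Z[:, :, :3], Z[:, :, 3:4]     # colours (R,D,3,H,W), alphas (R,D,1,H,W)
    # trans[:, d] = prod_{k>=d} (1 - A_k), via a reversed cumulative product.
    trans = np.cumprod((1.0 - A)[:, ::-1], axis=1)[:, ::-1]
    # Shift by one plane so that trans[:, d] = prod_{k>d} (1 - A_k).
    trans = np.concatenate([trans[:, 1:], np.ones_like(A[:, :1])], axis=1)
    return (C * A * trans).sum(axis=1)
```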
**Training.** The pipeline is typically trained end-to-end in a supervised way by minimizing a loss \(\mathcal{L}(\mathbf{\tilde{J}},\mathbf{J})\) between the rendered views \(\mathbf{\tilde{J}}\) and the corresponding ground-truth images \(\mathbf{J}\). In practice, \(\mathcal{L}\) is often an \(\mathcal{L}_{1}\) loss applied to low level features of a VGG network [37].
Figure 2: In the standard MPI processing pipeline, all the learning and most of the processing happens in the MPINet module. We propose to move the multiplane representation to feature space, by giving some processing power to the warping operators, and replacing the overcompositing operator with a learnable renderer.
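A common way to implement such a perceptual loss is sketched below (PyTorch); the specific VGG-19 layer cut-off is an assumption for illustration, not the authors' exact choice, and inputs are assumed to be ImageNet-normalized RGB tensors.

```python
import torch
import torchvision

class VGGL1Loss(torch.nn.Module):
    """L1 loss on low-level VGG-19 features (layer cut-off is an assumption)."""

    def __init__(self, cut=9):
        super().__init__()
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:cut]
        for p in vgg.parameters():
            p.requires_grad_(False)       # frozen feature extractor
        self.vgg = vgg.eval()

    def forward(self, pred, target):
        return torch.nn.functional.l1_loss(self.vgg(pred), self.vgg(target))
```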
### Cross-depth consistency
The main and only learnable module of the standard MPI pipeline is the prediction network MPINet, transforming a 5D PSV tensor into a 4D MPI scene representation. Its task is challenging because multiplane images are hyper-constrained representations: their semi-transparent layers interact with each other in non-trivial and view-dependent ways, and missing or redundant information across layers can result in depth discretization artifacts after applying the overcompositing operator [40]. To perform well, MPINet requires a mechanism enforcing _cross-depth consistency_; and several approaches have been considered before.
In the original case of stereo-magnification [55], there is one reference input and one secondary input which is turned into a single PSV. Both are concatenated along the channel dimension and fed to an MPINet module predicting the full MPI in one shot as a 3D tensor of size \((D\times 4)\times sH\times sW\) (_Option 1_). This approach scales poorly with the number of views and depth planes. Alternatively, the MPI can be predicted depth-wise, with the network processing each slice of the PSV independently (_Option 2_); this is much lighter but offers no communication mechanism across depths. DeepView [8] enforces cross-depth consistency at the encoding stage through a learned gradient descent scheme, in which an initial MPI estimate is iteratively refined from the input PSVs and gradient components, at a significant computational cost. Here instead, we enforce cross-depth consistency at the rendering stage. The encoder processes the PSV depth-wise, fusing information across views and producing a stack of multiplane features (MPF) rather than a multiplane image; the learnable renderer then fuses information across depths on a per-view basis. The MPF can be back-projected to an arbitrary number of novel views, or to the same views as the inputs,
and it is possible to integrate a skip connection feeding the noisy inputs directly to the renderer to guide its final predictions. In all our experiments, we use Unets [33] with a base of 64 channels to implement both the encoder and the renderer. We set \(C_{1}=C\), \(C_{2}=V\times C\) and \(C_{3}=64\) such that there is a single hyperparameter \(C\) to vary. Our Multiplane Features Encoder-Render (MPFER) is illustrated in Figure 3 for 3 inputs views and 3 depth planes. A MindSpore [15] implementation of our method is available1.
Footnote 1: [https://github.com/mindspore-lab/mindediting](https://github.com/mindspore-lab/mindediting)
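The tensor bookkeeping of the encoder-renderer pair can be sketched as follows; `encoder`, `renderer`, and `backward_warp` are stand-ins for the Unet backbones and warping operator described above, not the released implementation.

```python
import torch

def mpfer_forward(psv, encoder, renderer, backward_warp, G):
    """Sketch of the MPFER forward pass.

    psv: (D, V, C1, sH, sW) plane sweep volumes.
    encoder: depth-wise module mapping (V*C1, sH, sW) -> (C3, sH, sW) features.
    renderer: view-wise module mapping (D*C3, H, W) -> (3, H, W) images.
    backward_warp(x, H): warps a feature map x with homography H.
    """
    D, V, C1, sH, sW = psv.shape
    # Encoder: run once per depth plane, fusing information across the V views.
    mpf = torch.stack([encoder(psv[d].reshape(V * C1, sH, sW)) for d in range(D)])
    # mpf is the Multiplane Feature representation, of shape (D, C3, sH, sW).
    views = []
    for r in range(G.shape[0]):                        # one renderer pass per view
        z = torch.cat([backward_warp(mpf[d], G[r, d]) for d in range(D)], dim=0)
        views.append(renderer(z))                      # fuse information across depths
    return torch.stack(views)                          # (R, 3, H, W)
```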
## 4 Experiments
We first consider the Spaces dataset [8] and validate our approach on novel view synthesis. We then focus on a denoising setup and perform extensive comparisons to state-of-the-art 2D-based methods. Finally, we compare our approach to the 3D-based multi-frame denoising method of [31] by replicating their experimental setup on the Real Forward-Facing dataset. In all cases, our method outperforms competitors at a fraction of the computational cost.
### Spaces
The Spaces dataset [8] consists of 100 indoor and outdoor scenes, captured 5 to 10 times each using a 16-camera rig placed at slightly different locations. 90 scenes are used for training and 10 scenes are held-out for evaluation. The resolution of the images is 480\(\times\)800.
**Novel view synthesis.** We start by replicating the novel view synthesis setup of DeepView [8] with four scenarios: one with 12 input views and three with 4 input views. Similarly to DeepView, we use a VGG loss and train our models for 100k steps using the Adam optimizer, with a learning rate of 1.5e-3. We reduce the learning rate to 1.5e-4 after 80k steps, and use a batch size of 4. Memory usage was reported to be a major challenge in DeepView, and all our models are kept at a significantly smaller size to avoid this issue. While DeepView uses a sophisticated strategy to only generate enough of the MPI to render a 32\(\times\)32 crop in the target image, we use a large patch size of 192 and only apply the loss on the region of the patch that contains more than 80% of the depth planes after backward warping. We compute all metrics by averaging over the validation scenes and target views of each setup, and after cropping a boundary of 16 pixels on all images as done in [8]. We compare to DeepView [8] and Soft3D [32] by using the synthesised images provided with the Spaces dataset. We also consider three variants of the standard MPI pipeline using the same Unet backbone as our MPFER method and trained in the same conditions, but processing the input PSV in different ways. MPINet implements _Option 1_ from Sec. 3.2. The views and depths dimensions of the PSV tensor are stacked along the channel dimension and fed to the Unet backbone to predict the output MPI in one shot. MPINet-dw implements _Option 2_ from Sec. 3.2. The Unet backbone runs depthwise on slices of the PSV to predict each depth plane of the MPI separately, without a communication mechanism across depths. Finally, MPINet-dw-it implements a one-step version of the learned gradient descent algorithm of DeepView. A first estimate of the MPI is predicted by a Unet backbone running depthwise, and this estimate is fed to a second Unet backbone also running depthwise, along with the input PSV and gradient components (R), which are PSV-projected current estimates of the target views.
Figure 3: Our Multiplane Features Encoder-Renderer (MPFER). Input views are forward-warped into plane sweep volumes (PSVs) which are processed depthwise by the Encoder Unet64. The resulting multiplane feature representation (MPF) can then be back-projected to an arbitrary number of novel views, or to the same views as the inputs—allowing the integration of a skip connection (illustrated here). The Renderer Unet64 processes the projected MPFs on a per-view basis, producing the final synthesised or denoised outputs.
For our MPFER method and the MPINet ablations, we use a number of depth planes \(D=64\) distributed between 100 and 0.5 meters away from the reference camera, placed at the average position of the input views. We use a number of channels \(C=8\) and a PSV/MPF upscaling factor \(s=1.5\). Since the Unet backbone is not agnostic to the number of input views, we train one version of each model for the setup with 12 input views and one version for the three setups with 4 input views. The results are presented in Table 1. We observe a clear progression between the performances of MPINet, MPINet-dw and MPINet-dw-it, illustrating the benefit of each design change. Our MPFER method outperforms MPINet-dw-it by up to 4 dB in PSNR at a similar computational complexity, and outperforms DeepView by up to 1.8 dB at a fraction of the complexity, clearly motivating the use of a learnt renderer for efficient depth fusion.
**Multi-frame denoising.** We now consider a different setup where the inputs are 16 views from one rig position with images degraded with noise, and the targets are the same 16 views denoised. Similarly to previous works [3, 25, 31, 53], we apply synthetic noise with a signal-dependent Gaussian distribution \(\mathbf{I}_{vchw}\sim\mathcal{N}\left(\mathbf{I}^{*}_{vchw},\ \sigma_{r}^{2}+\sigma_{s}\mathbf{I}^{*}_{vchw}\right)\) where \(\mathbf{I}\) is the tensor of noisy inputs, \(\mathbf{I}^{*}\) is the ground truth signal, and \(\sigma_{r}\) and \(\sigma_{s}\) are noise parameters that are fixed for each sequence. We focus in particular on challenging scenarios with moderate to high gain levels 4, 8, 16 and 20, corresponding to the \((\log(\sigma_{r}),\log(\sigma_{s}))\) values [(-1.44, -1.84), (-1.08, -1.48), (-0.72, -1.12), (-0.6, -1.0)] respectively.
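Sampling this noise model is straightforward; for example (assuming base-10 logarithms for the tabulated \((\log\sigma_{r},\log\sigma_{s})\) pairs):

```python
import numpy as np

def add_signal_dependent_noise(clean, log_sigma_r, log_sigma_s, seed=0):
    """Sample I ~ N(I*, sigma_r^2 + sigma_s * I*) per pixel.

    clean: ground-truth array I* with intensities in [0, 1].
    Gain 20 corresponds to (log_sigma_r, log_sigma_s) = (-0.6, -1.0).
    """
    rng = np.random.default_rng(seed)
    sigma_r, sigma_s = 10.0 ** log_sigma_r, 10.0 ** log_sigma_s
    variance = sigma_r ** 2 + sigma_s * np.clip(clean, 0.0, None)
    return clean + rng.standard_normal(clean.shape) * np.sqrt(variance)
```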
We consider two patch-based approaches: VBM4D [23] and VNLB [1], as well as four state-of-the-art learning-based methods: BPN [53], BasicVSR [5] and its extension BasicVSR++ [7], and DeepRep [3]. To evaluate the influence of the model size and in particular the number of depth planes, we train three MPFER models: MPFER-16 with \((D,C,s)=(16,8,1.)\), MPFER-32 with \((D,C,s)=(32,16,1.25)\), and MPFER-64 with \((D,C,s)=(64,8,1.25)\). MPFER-16 has the particularity of using the same number of depth planes as there are input images, meaning that the number of Unet passes per frame to denoise the sequence is \((D+V)/V=2\). This observation motivates us to perform a comparison with three other architectures, using a strict computational budget of 2 Unet passes per frame. Unet-SF (for Single-Frame) consists of two Unet blocks without temporal connection, therefore processing the sequence as a disjoint set of single frames. Unet-BR (for Bidirectional-Recurrent) consists of two Unet blocks with bidirectional recurrent connections: the lower Unet processes the sequence in a backward way, and the higher Unet processes the sequence in a feedforward way. Finally, Unet-BR-OF (for Bidirectional-Recurrent with Optical-Flow alignment) consists of two Unet
Figure 4: Qualitative evaluation for multi-frame denoising with Gain 20 (best viewed zoomed in). We compare MPFER to 2D-based methods on Spaces (top) and to 3D-based methods on the Real Forward-Facing dataset (bottom).
blocks with bidirectional recurrent connections, and the recurrent hidden-state is aligned using a SpyNet module, as done in BasicVSR [5]. We train all the models in the same conditions as for the novel view synthesis setup, except for the patch size which we increase to 256, and the loss which we replace with a simple L1 loss. During training, we vary the gain level randomly and concatenate an estimate of the standard deviation of the noise to the input, as in [25, 31, 53]. We evaluate on the first rig position of the 10 validation scenes of the Spaces dataset for the 4 gain levels without boundary-cropping, and present the results in Table 2. Each model receives 16 noisy images as input and produces 16 restored images as output, except for BPN and DeepRep which are burst processing methods and only produce one output. For these methods, we choose the view number 6 at the center of the camera rig as the target output, and compare the performances of all methods on this frame. Our MPFER method clearly outperforms all the other methods at a fraction of the computational cost. It performs particularly strongly at high noise levels, with improvements over other methods of more than 2 dB in PSNR. MPFER-16 also performs remarkably well, despite using only 16 depth planes. This suggests that the high representational power of multiplane features allows the number of depth planes, and therefore the computational cost, to be significantly reduced compared to standard MPI approaches, which typically use a very high number of planes (80 in the case of DeepView [8], up to 192 in the case of NeX [51]). A qualitative evaluation is available in Figure 4 (top), and we observe that MPFER is able to reconstruct scenes with much finer detail. We also present a visualization of multiplane features in Figure 6, illustrating how the model learns to separate depths in an unsupervised way.
### LLFF-N
The LLFF-N dataset [31] is a variant of the Real Forward-Facing dataset [27] where images are linearized by applying inverse gamma correction and random inverse white balancing, and synthetic noise is applied following the same signal-dependent Gaussian distribution as used in the previous section with the six gain levels \([1,2,4,8,16,20]\). The dataset contains 35 scenes for training and 8 scenes for testing, and the resolution of the images is 756\(\times\)1008.
**Denoising.** In this setup, the model receives 8 frames in input: the target frame plus its 7 nearest neighbors in terms of camera distances. We train one MPFER model with \((D,C,s)=(64,8,1.25)\), using an \(\mathcal{L}_{1}\) loss applied to the target frame. We evaluate on the 43 bursts used in [31] (every 8th frame of the test set) and present the results in the first half of Table 3. A qualitative evaluation is also available in Figure 4 (bottom). To assess the robustness of our method to noisy camera positions, we evaluate it using camera positions computed on clean images (MPFER-C) and on noisy images (MPFER-N) using COLMAP [35]. Our method outperforms IBRNet-N and NAN in both scenarios by large margins, but the evaluation using clean camera poses performs significantly better at high noise levels.
**Synthesis under noisy conditions.** In this setup, the model receives as input the 8 nearest neighbors to a held-out target. Again, we train one MPFER model with \((D,C,s)=(64,8,1.25)\), using an \(\mathcal{L}_{1}\) loss applied to the target frame. We evaluate on the same 43 bursts as before and report the results in the second half of Table 3. Our method performs on par with IBRNet and NAN at very low noise levels (close to a pure synthesis problem), and significantly outperforms the other methods at larger noise levels. MPFER only requires \(D\) Unet passes to produce an MPF, and 1 Unet pass to render a novel view, which is significantly lighter than IBRNet and NAN. At inference time, the Unet pass requires 0.6 Mflops per pixel, compared to 45 Mflops for IBRNet [48]. A qualitative evaluation is available in Figure 1 for Gain 20.
**Low-Light Scenes.** Finally, we qualitatively evaluate our denoising model trained on LLFF-N on sequences with real noise captured with a Google Pixel 4 under low-light conditions. We use the sequences from [31] and estimate camera poses using COLMAP [35] on noisy images. We compare our results to those of [31], downloaded from their project page where more baselines can be found, in Figure 5.
## 5 Conclusion
We proposed to approach multi-frame denoising as a view synthesis problem and argued in favor of using multi-plane representations for their low computational cost and generalizability. We introduced a powerful generalization of multiplane images to feature space, and demonstrated its effectiveness in multiple challenging scenarios.
Figure 5: Qualitative evaluation on sequences with real noise from [31], captured with a Google Pixel 4.
| Method | 12 input views (dense) | 4 input views (small) | 4 input views (medium) | 4 input views (large) | GFlops @ 500\(\times\)800 |
| --- | --- | --- | --- | --- | --- |
| Soft3D* | 31.93 / 0.940 / 0.052 | 30.29 / 0.925 / 0.064 | 30.84 / 0.930 / 0.060 | 30.57 / 0.931 / 0.054 | n/a |
| DeepView* | 34.23 / 0.965 / 0.015 | 31.42 / 0.954 / 0.026 | 32.38 / 0.957 / 0.021 | 31.00 / 0.952 / 0.024 | 45800 |
| MPINet | 27.43 / 0.914 / 0.035 | 27.00 / 0.906 / 0.054 | 26.16 / 0.896 / 0.062 | 24.93 / 0.865 / 0.085 | **450** |
| MPINet-dw | 30.70 / 0.963 / 0.021 | 29.39 / 0.951 / 0.027 | 28.47 / 0.948 / 0.030 | 26.83 / 0.937 / 0.040 | 7890 |
| MPINet-dw-it | 30.85 / 0.966 / 0.017 | 30.22 / 0.955 / 0.024 | 29.37 / 0.953 / 0.026 | 28.00 / 0.943 / 0.034 | 14800 |
| MPFER-64 | **35.73** / **0.972** / **0.012** | **33.20** / **0.959** / **0.018** | **33.47** / **0.959** / **0.018** | **32.38** / **0.953** / **0.021** | 8490 |

Table 1: Novel view synthesis on Spaces (cells report PSNR\(\uparrow\) / SSIM\(\uparrow\) / LPIPS\(\downarrow\)). All metrics were computed on predicted images with a 16-pixel boundary cropped, as done in [8]. Starred methods were evaluated using the predicted images released with the Spaces dataset.
| Method | Gain 1 | Gain 2 | Gain 4 | Gain 8 | Gain 16 | Gain 20 |
| --- | --- | --- | --- | --- | --- | --- |
| _Denoising of Synthetic Noise_ |  |  |  |  |  |  |
| IBRNet-N* | 33.50 / 0.915 / 0.039 | 31.29 / 0.877 / 0.070 | 29.01 / 0.822 / 0.123 | 26.57 / 0.741 / 0.210 | 24.19 / 0.634 / 0.331 | 23.40 / 0.591 / 0.380 |
| NAN* | 35.84 / 0.955 / 0.018 | 33.67 / 0.930 / 0.034 | 31.26 / 0.892 / 0.068 | 28.64 / 0.834 / 0.132 | 25.95 / 0.749 / 0.231 | 25.07 / 0.715 / 0.271 |
| MPFER-N | 37.90 / 0.969 / 0.013 | 35.61 / 0.951 / 0.025 | 33.02 / 0.921 / 0.048 | 30.21 / 0.872 / 0.091 | 27.24 / 0.797 / 0.164 | 26.23 / 0.765 / 0.198 |
| MPFER-C | **38.06** / **0.971** / **0.011** | **35.95** / **0.956** / **0.020** | **33.65** / **0.934** / **0.036** | **31.21** / **0.898** / **0.065** | **28.61** / **0.843** / **0.115** | **27.71** / **0.819** / **0.138** |
| _Novel View Synthesis Under Noisy Conditions_ |  |  |  |  |  |  |
| IBRNet* | **24.53** / 0.774 / 0.135 | 24.20 / 0.730 / 0.159 | 23.44 / 0.653 / 0.217 | 22.02 / 0.536 / 0.327 | 19.76 / 0.377 / 0.492 | 18.80 / 0.319 / 0.553 |
| IBRNet-N* | 23.86 / 0.763 / 0.170 | 23.73 / 0.744 / 0.178 | 23.38 / 0.703 / 0.208 | 22.68 / 0.638 / 0.275 | 21.67 / 0.549 / 0.377 | 21.29 / 0.514 / 0.418 |
| NAN* | 24.52 / **0.799** / **0.132** | 24.41 / 0.787 / **0.145** | 24.18 / 0.765 / 0.171 | 23.70 / 0.726 / 0.221 | 22.79 / 0.666 / 0.305 | 22.37 / 0.641 / 0.342 |
| MPFER | 24.52 / 0.798 / 0.157 | **24.51** / **0.796** / 0.158 | **24.47** / **0.789** / **0.164** | **24.33** / **0.775** / **0.178** | **23.94** / **0.746** / **0.212** | **23.72** / **0.731** / **0.230** |

Table 3: LLFF-N (cells report PSNR\(\uparrow\) / SSIM\(\uparrow\) / LPIPS\(\downarrow\)). We consider the two scenarios introduced in [31]: Denoising of Synthetic Noise, where the noisy target is accessible, and Novel View Synthesis Under Noisy Conditions, where the noisy target is held-out. The numbers for the starred methods correspond to Figure 9 in [31], and were provided by the authors.
2310.04428 | Avalanche Prediction and Dynamics using Temperature Variance, Grain
Size Variance and Flow Regimes | We investigate the effects of temperature variance, grain size variation,
flow regimes, and the use of Support Vector Machines (SVMs) in avalanche
studies. The temperature variance experiments involved ice single crystals and
polycrystals, revealing that the scale-free pattern of avalanche sizes remains
consistent regardless of temperature. The dynamics of dislocations in
polycrystals were found to be independent of stress level and temperature. The
Material Point Method (MPM) was used to explore snow avalanche behavior and
identify flow regimes. The MPM accurately represented various flow patterns of
snow avalanches, although challenges remained in capturing powder clouds. SVMs
were employed for avalanche forecasting, using meteorological and snowpack
variables as input features. The selected features provided insights into
snowfall characteristics, snow accumulation, rain interaction, snowdrift
patterns, cloud dynamics, snowpack mechanics, and temperature distribution
within the snowpack. The findings contribute to a better understanding of
avalanche dynamics and offer potential improvements in avalanche trend
predictions. | Aditya Sharma | 2023-09-21T18:56:18Z | http://arxiv.org/abs/2310.04428v1 | # Avalanche Prediction and Dynamics using Temperature Variance, Grain Size Variance and Flow Regimes
###### Abstract
We investigate the effects of temperature variance, grain size variation, flow regimes, and the use of Support Vector Machines (SVMs) in avalanche studies. The temperature variance experiments involved ice single crystals and polycrystals, revealing that the scale-free pattern of avalanche sizes remains consistent regardless of temperature. The dynamics of dislocations in polycrystals were found to be independent of stress level and temperature. The Material Point Method (MPM) was used to explore snow avalanche behavior and identify flow regimes. The MPM accurately represented various flow patterns of snow avalanches, although challenges remained in capturing powder clouds. SVMs were employed for avalanche forecasting, using meteorological and snowpack variables as input features. The selected features provided insights into snowfall characteristics, snow accumulation, rain interaction, snowdrift patterns, cloud dynamics, snowpack mechanics, and temperature distribution within the snowpack. The findings contribute to a better understanding of avalanche dynamics and offer potential improvements in avalanche trend predictions.
## 1 Introduction
### Temperature variance
Dislocations behave differently depending on the temperature. In fact, the mobility of a single basal dislocation in ice rises by a factor of around 15 from \(-20\) to \(-3\,^{\circ}\)C. An increase in temperature may also encourage vacancy diffusion within the crystal, which will cause dislocations to climb. The purpose of this study was to ascertain whether temperature can affect how collective dislocation dynamics self-organize.
### Grain size variation
We study how certain aspects of avalanche behavior vary with grain size and related dynamics. Here we take plastic deformation avalanches as the object of study: intermittent, cascading bursts of dislocation activity in a deforming crystalline material. During a plastic avalanche, the deformation spreads rapidly and non-linearly throughout the material, involving the collective motion and interaction of numerous dislocations. Such avalanches are typically observed in crystalline solids such as metals. There is substantial evidence suggesting that in materials characterized by high dislocation mobility, such as face-centered cubic metals, the motion of dislocations occurs in a highly intermittent manner. This implies that plastic deformation occurs through isolated bursts or episodes, where the instantaneous strain rate can greatly exceed the average strain rate by several orders of magnitude. These bursts are temporary and localized events known as dislocation avalanches. There are multiple methods to study such avalanches, including (1) studying the influence on physical properties that are sensitive to the density and velocity of mobile dislocations, (2) reading the electrical response of the stress recorded during an imposed strain rate test, and (3) acoustic emission.
### Flow Regimes
Snow exhibits different behaviors depending on the conditions, either resembling a fluid or a solid, which results in distinct characteristics of snow avalanches in reality. In the past, snow avalanches were typically categorized into two flow regimes: dense snow avalanches and powder snow avalanches. However, a recent study conducted by Kohler et al. (2018) shed light on the influence of snow temperature in classifying these flow regimes.
Here, we apply the MPM (Material Point Method, a hybrid Eulerian-Lagrangian approach that solves for and updates the motion of the particles) in 2D (slope-parallel and slope-normal) to explore the distinct behaviors of snow avalanches and the key controlling factors of snow avalanche dynamics.
### SVMs
We use Support Vector Machines (SVMs) in avalanche forecasting. The SVM algorithm is a popular supervised learning method used for classification problems. The goal of SVM is to create an optimal decision boundary, or hyperplane, that can accurately classify data points into different classes.
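As a minimal sketch of this setup (with synthetic stand-in data; the real input features are described in the Method section):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for daily meteorological/snowpack features and avalanche-day labels.
X, y = make_classification(n_samples=500, n_features=30, weights=[0.9], random_state=0)

# Scale features, then fit an SVM; class_weight compensates for rare avalanche days.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, class_weight="balanced"))
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())
```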
## 2 Method
### Temperature variance
In this experiment, both cylindrical ice single crystals and polycrystals were prepared using distilled, deionized, and degassed water. The samples were cut at an angle to ensure that the preferred slip planes, known as basal planes, were inclined at the beginning of each test. The average grain size, denoted as \(<d>\), ranged from 260 \(\mu\)m to 5 mm, which is relatively large compared to typical grain sizes in structural materials.
To achieve smaller grain sizes, fine-grained ice powders were pressed into dense aggregates under a low pressure of 0.1 Torr. Further applying an axial stress of 9 MPa for a period of 2 hours resulted in a grain size of 260 \(\mu\)m, which is remarkably small for ice. The c-axis orientations of the grains were not controlled, leading to an isotropic macroscopic mechanical behavior.
One advantage of using ice in this experiment is that both single crystals and polycrystals with different microstructures can be easily grown in the laboratory.
Additionally, the transparency of ice allowed verification that the acoustic emission (AE) activity recorded was not related to microcracking. The ice samples were coupled with the AE transducer by melting/freezing. Parameters such as arrival time, maximum amplitude, and avalanche duration were recorded for each detected event. The system used two time constants to identify individual acoustic events and prevent the inclusion of secondary echoes. The durations of events depended on the chosen time constant and were compared between tests to analyze the damping of avalanches.
The experimental device was calibrated using a Nielsen test, which is a standardized artificial acoustic source.
### Grain size variation
The experimental method is the same as in Section 2.1. The results are obtained through different measurements and analyses.
### Flow Regimes
The MPM method uses Lagrangian particles, assuming a continuous material, to track the properties such as mass, momentum, and deformation gradient. It also utilizes Eulerian grids to calculate the particle motion and update their states. The particle motion is determined by the conservation of mass and momentum, which can be expressed as follows:
\[\frac{D\rho}{Dt}+\rho\nabla\cdot v=0\] \[\rho\frac{Dv}{Dt}=\nabla\cdot\sigma+\rho g\]
where \(\rho\) is density, t is time, v is velocity, \(\sigma\) is the Cauchy stress and g is the gravitational acceleration. Each particle in the MPM has a fixed mass, which naturally guarantees mass conservation.
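A bare-bones 1D illustration of the particle-grid transfers behind these equations is given below; the stress-divergence term (and hence any snow constitutive model) is deliberately omitted, so this only demonstrates the hybrid update machinery, not snow mechanics.

```python
import numpy as np

def mpm_step(x, v, m, dx, n_nodes, dt, g=-9.81):
    """One explicit 1D MPM step with linear shape functions (PIC transfers).

    x, v, m: particle positions, velocities, masses (1D arrays).
    Stress terms are omitted: only gravity enters the momentum update.
    dt should satisfy the CFL condition, dt <= dx / c, with c the elastic wave speed.
    """
    grid_m = np.zeros(n_nodes)
    grid_p = np.zeros(n_nodes)
    base = np.floor(x / dx).astype(int)
    for off in (0, 1):                           # two supporting nodes per particle
        node = base + off
        w = 1.0 - np.abs(x / dx - node)          # linear hat weights, summing to 1
        np.add.at(grid_m, node, w * m)           # P2G: scatter mass (conserved exactly)
        np.add.at(grid_p, node, w * m * v)       # P2G: scatter momentum
    grid_v = np.where(grid_m > 0.0, grid_p / np.maximum(grid_m, 1e-12), 0.0)
    grid_v = grid_v + dt * g                     # grid momentum update (gravity only)
    v_new = np.zeros_like(v)
    for off in (0, 1):                           # G2P: gather velocities back
        node = base + off
        w = 1.0 - np.abs(x / dx - node)
        v_new += w * grid_v[node]
    return x + dt * v_new, v_new                 # advect particles
```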
To study snow avalanche behavior in controlled conditions, a setup featuring a rectangular snow sample and an ideal slope is used, as shown in Figure 1.
The inclined slope is linked to the horizontal ground through a circular arc that has a central angle matching the slope angle \(\theta\). The circular arc and the horizontal ground are referred to as the connecting-arc zone and deposition zone, respectively.
In order to explore various flow patterns of snow avalanches, the characteristics such as the friction coefficient (M), the tension/compression ratio (\(\beta\)), the hardening factor (\(\xi\)), and the initial consolidation pressure are modified.
Figure 1: Model setup for MPM modeling of snow avalanches on an ideal slope.
The impact of slope angle \(\theta\) and horizontal length L\({}_{0}\) in Figure 1 is investigated through five sets of simulations outlined in Table 1. Each group examines different ranges of snow properties. Groups I, II, and III focus on studying the influence of slope angle \(\theta\), while groups II, IV, and V analyze the effect of horizontal length \(L_{0}\). In groups I, II, and III, the horizontal length \(L_{0}\) remains fixed, and the drop height \(H_{0}\) is adjusted as specified in Table 1 when varying \(\theta\). Alternatively, one could fix the drop height \(H_{0}\) and modify the horizontal length \(L_{0}\). It is important to note that the ratios \(L_{0}/h_{0}\), \(L_{0}/l_{0}\), and \(L_{0}/r\) are maintained constant when varying \(L_{0}\) in groups II, IV, and V by adjusting \(h_{0}\), \(l_{0}\), and \(r\) accordingly. Increasing \(L_{0}\) results in scaling up the setup, which leads to an increase in the drop height \(H_{0}\).
The size of the background Eulerian mesh in the Material Point Method (MPM) is chosen to balance simulation accuracy and resolution, while also minimizing computation time. The time step is determined based on the CFL condition and the elastic wave speed to ensure simulation stability. The simulation data is exported at intervals of \(1/24\) seconds. In each group of our MPM simulations, four typical flow regimes are captured as the mechanical properties of snow are varied.
In the five groups of MPM simulations, where the slope angle \(\theta\) and horizontal length \(L_{0}\) are varied, a consistent pattern emerges between the maximum velocity \(v_{max}\) and the runout angle \(\alpha\): there is an initial decrease in \(v_{max}\) followed by a constant \(v_{max}\) as \(\alpha\) increases. This two-stage relationship is evident in Figures 8 and 9.
The first stage primarily includes cases with low friction and cohesion, while the second stage predominantly consists of cases with high friction and cohesion.
### SVMs
The data used in this study consisted of daily measurements of meteorological and snowpack variables. To create an input
Figure 8: Evolution of the maximum velocity with \(\alpha\) for varying slope angles \(\theta\).
Figure 9: Evolution of the maximum velocity with \(\alpha\) for different horizontal lengths \(L_{0}\).
feature vector, data from the current day and the two previous days were combined, resulting in a 30-dimensional vector. Expert features, including cumulative snow index, temperature changes, snow temperature gradients, and binary indicators for weather conditions and avalanche activity, were incorporated into the feature vector.
Recursive feature elimination in conjunction with SVM was employed for feature selection, resulting in a final feature vector of 44 variables.
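A sketch of this selection step with scikit-learn, using synthetic data standing in for the concatenated base and expert feature vector:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Stand-in for the concatenated base + expert feature matrix and daily labels.
X, y = make_classification(n_samples=400, n_features=60, n_informative=12, random_state=0)

# A linear SVM exposes coefficients that RFE uses to rank and prune features.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=44, step=1).fit(X, y)
print(selector.support_.sum(), "features retained")  # boolean mask of kept features
```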
## 3 Review of literature
### Temperature and grain size variance
In solid materials, acoustic emission (AE) waves are generated by sudden changes in inelastic strain. These waves can originate from different sources, including crack nucleation, crack propagation, twinning, or dislocation motion. However, in the specific study being discussed, the focus is on dislocation motion, as twinning does not occur in ice and experiments confirm the absence of crack nucleation.
The AE events detected in the study are most likely associated with synchronized and rapid motion of dislocations, known as dislocation avalanches, rather than isolated movements of individual dislocations. To understand the underlying source mechanism of these AE waves, a model is needed. The amplitude of the AE wave, represented by \(A\), is related to the total length of dislocations involved in the plastic instability (\(L\)) and the average velocity of dislocations (\(\nu\)), as determined by theoretical analysis.
The resulting formula is

\[A(t)=\frac{3C_{T}^{2}}{4\pi C_{L}}\frac{\psi b}{D^{2}}L\nu=\frac{3C_{T}^{2}}{4\pi C_{L}}\frac{\psi b}{D^{2}}\left(\frac{dS}{dt}\right)\]

where \(C_{T}\) and \(C_{L}\) are, respectively, the transverse and longitudinal wave velocities, \(\psi\) is the density of the material, and \(D\) is the source/transducer distance (assumed to be large compared to \(L\): far-field assumption). The term \(1/D^{2}\) represents the geometrical attenuation of the acoustic waves. The term \(L\nu\) accounts for the surface \(S\) swept per time unit by dislocations during the avalanche: \(L\nu=dS/dt\). When multiplied by \(b\) and normalized by a volume, this term represents a strain rate \(d\epsilon/dt\).
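Numerically, the predicted far-field amplitude is easy to evaluate; the material constants below are rough textbook values for ice, used purely for illustration.

```python
import math

def ae_amplitude(dS_dt, D, b=4.5e-10, psi=917.0, C_T=1950.0, C_L=3900.0):
    """Far-field AE amplitude A = (3 C_T^2 / (4 pi C_L)) * (psi * b / D^2) * dS/dt.

    dS_dt: area swept per unit time by dislocations (m^2/s); D: source-transducer
    distance (m); b: Burgers vector (m); psi: density (kg/m^3); C_T, C_L:
    transverse/longitudinal wave speeds (m/s) -- approximate values for ice.
    """
    return 3.0 * C_T**2 / (4.0 * math.pi * C_L) * psi * b / D**2 * dS_dt

print(ae_amplitude(dS_dt=1e-8, D=0.05))  # illustrative numbers only
```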
### Flow regimes
This study aims to showcase the effectiveness of the Material Point Method (MPM) in accurately representing various flow patterns of snow avalanches. The data from five reported real avalanches with different complex terrains are analyzed and
contrasted with their respective MPM models. The evolution of the scaled front velocity is examined along the flow path. The field data for the front velocity was collected using Doppler radar devices and photo analyses.
The consistency between the MPM results and the measured data is better for the second avalanche in Fig. 12. The underestimated maximum front velocity in Fig. 11 might be due to the challenge of capturing the powder cloud of the first avalanche with the MPM. The neglect of entrainment in the simplified MPM simulation may also contribute to the discrepancy in Fig. 11.
### SVMs
The selected features in the study have direct relevance to Fluid Particle Mechanics (FPM) in the context of snow and precipitation. The new-snow index, cumulative snow index, rain at 900 m, snow drifting, cloud cover, foot penetration, and snow temperature all have connections to FPM. These features provide insights into snowfall characteristics, snow accumulation, rain interaction, snowdrift patterns, cloud dynamics, mechanical properties of the snowpack, and temperature distribution within the snowpack.
## 4 Discussion
### Temperature variance
The study demonstrates that the distribution of avalanche sizes in ice single crystals remains unchanged regardless of applied shear stress and temperature. This highlights the robustness of the scale-free pattern, which arises from long-range elastic interactions between dislocations and is independent of individual dislocation behavior. Lattice friction, a thermally
Figure 11: Front velocity distribution along the flow path for Case 1; Weissfluh north ridge, 12 March 1982 (Davos, Switzerland). Drop height \(H_{0}=236\,\mathrm{m}\).
Figure 12: Front velocity distribution along the flow path for Case 2.
Figure 13: Front velocity distribution along the flow path for Case 3; Himmelegg, 14 February 1990 (Arlberg, Austria). Drop height \(H_{0}=352\,\mathrm{m}\).
activated process, has minimal influence on the scale-free critical dynamics even at temperatures close to the melting point. The proportion of decorrelated dislocations and drag resistance from phonon interactions become significant factors at high velocities, with the latter increasing with temperature. Additionally, strain hardening and temperature affect the durations of dislocation avalanches, with higher temperatures leading to shorter durations and faster dislocation velocity decay.
In conclusion, the study confirms the temperature independence and universality of the scale-free critical dynamics in ice. The interplay of temperature, lattice friction, phonon drag resistance, and strain hardening contributes to the dynamics of dislocation avalanches. Further investigations, particularly through numerical simulations, can offer deeper insights into these mechanisms and their implications for plastic deformation in ice.
### Grain size variation
The dynamics of dislocations in polycrystals were found to be independent of stress level and temperature, similar to single crystals. Monte-Carlo simulations were conducted to validate the estimation of the power law exponent. The analysis of the 3D spatial organization of dislocation avalanches using correlation integral analysis revealed that avalanches exhibited spatial correlation over distances one order of magnitude larger than the average grain size. From a macroscopic perspective, the slowing down of dislocation motion during creep tests indicated hardening of the sample. Hardening can result from the development of long-range internal stresses (kinematic hardening) and/or short-range dislocation interactions (isotropic hardening). Comparatively, the average avalanche duration was smaller in polycrystals than in single crystals, and it decreased with decreasing grain size. This
Figure 4: Evolution of avalanche durations with temperature in single crystals. Comparison of the average duration of events of amplitude \(A_{0}\) between single crystals tested at different temperatures, as a function of \(A_{0}\). Closed symbols: \(T=-10^{\circ}\)C; resolved shear stress: 41 kPa. Open symbols: \(T=-3^{\circ}\)C; resolved shear stress: 67 kPa.
suggests a relationship between strain hardening and the damping of avalanches, with larger strain hardening leading to quicker damping. Notably, for avalanches larger than 0.01 V, the duration (d) was consistently longer during unloading than during loading.
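The Monte-Carlo validation of the power-law exponent mentioned above can be reproduced in miniature with the standard maximum-likelihood estimator; the exponent and amplitude cut-off below are arbitrary test values.

```python
import numpy as np

def powerlaw_mle(amps, a_min):
    """MLE for the exponent tau of P(A) ~ A^(-tau) above a_min (Hill estimator)."""
    a = np.asarray(amps)
    a = a[a >= a_min]
    return 1.0 + a.size / np.sum(np.log(a / a_min))

# Monte-Carlo check: inverse-transform sampling from a known exponent.
rng = np.random.default_rng(0)
tau_true, a_min = 2.0, 1e-3
samples = a_min * rng.random(10_000) ** (-1.0 / (tau_true - 1.0))
print(powerlaw_mle(samples, a_min))  # should recover ~2.0
```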
### Flow regimes
The broad division of avalanche flows into four regimes could accommodate the different cases that were considered. The evolution of the avalanche front, the shape of the free surface, and the vertical velocity profile exhibit unique characteristics for each flow regime, playing a crucial role in identifying and distinguishing between different flow regimes. The behavior of dense snow avalanches has been well recovered by the MPM. A discrepancy was observed particularly for avalanches which developed a powder cloud above the dense core, as the powder cloud has not been modeled here.
### SVMs
The findings suggest that the selected features provide valuable insights into fluid dynamics and particle mechanics involved in snowfall, snowpack formation, and related processes. By incorporating FPM concepts, such as snowflake characteristics, snow accumulation effects, rain interaction, snowdrift patterns, cloud dynamics, snowpack mechanics, and thermal properties, the SVM-based avalanche forecasting model can leverage inherent FPM features to improve avalanche trend predictions.
## 5 Results
### Temperature variance
Ice single crystals were subjected to uniaxial compression creep experiments at temperatures of \(-3\), \(-10\), and \(-20\,^{\circ}\)C. Three different methods were used to set the compression stress as a function of temperature: maintaining the resolved shear stress on basal planes, maintaining the macroscopic strain rate, and maintaining the average velocity of a single dislocation. The distribution of avalanche sizes was found to be independent of the applied shear stress and unaffected by temperature. The avalanche size distributions collapsed precisely after being renormalized by the total number of avalanches, following the same power law. This indicates that the scale-free pattern of avalanche sizes is independent of temperature. Furthermore, the scale-free critical dynamics were shown to be independent of individual dislocation behavior and instead result from long-range elastic interactions between dislocations.
### Grain size variance
In polycrystals, the average grain size plays a significant role due to the interaction between dislocations and grain boundaries (GBs). GBs act as barriers to dislocation motion, leading to the formation of dislocation pile-ups and internal stresses. They can activate neighboring dislocation sources and allow dislocation transmission in specific cases. Additionally, GBs can serve as effective sources of dislocations.
The presence of GBs as barriers may impact the large-scale propagation of dislocation avalanches and modify the scale-free dynamics observed in single crystals. To investigate this, acoustic emission (AE) was recorded during plastic deformation of ice polycrystals with different average grain sizes, and the statistical data were compared to single crystal results. Creep tests revealed a decrease in acoustic activity as the material hardened, indicating a decrease in the visco-plastic strain rate. Stationary creep showed minimal AE activity, suggesting that processes such as dislocation climb in tilt boundaries may not contribute to AE. However, transient creep, associated with basal slip in ice polycrystals, was a significant source of AE. The distribution of avalanche sizes in polycrystals differed from the scaling observed in single crystals, with a smaller power law exponent and a cut-off for large amplitudes.
### Flow regimes
The calculated avalanche front position and velocity from the MPM show reasonable agreement with the measurement data from the literature.
### SVMs
The SVM classifier trained on the selected features demonstrated the ability to capture important information related to snowpack dynamics. The chosen features retained significant Class II (snowpack) information, including subjective foot-penetration values. Notably, expert features related to wind conditions, particularly south or southeasterly winds, were emphasized. Only two non-expert features from two days prior to the forecast day, foot penetration and wind direction, were retained, indicating the rapid changes in Scotland's maritime climate.
## 6 Conclusion
### Temperature variance
The scale-free pattern observed in single crystals was found to be unaffected by temperature. However, temperature did have an impact on the durations of avalanches, with higher temperatures resulting in shorter durations. This observation was attributed to interactions between dislocations and phonons. Furthermore, the influence of microstructure was examined by conducting tests on polycrystals with varying average grain sizes.
### Grain size variance
The introduction of the average grain size as a microstructural scale in polycrystalline ice disrupted the scale-free pattern observed in single crystals. Grain boundaries (GBs) were found to play a dual role as barriers to the dynamic propagation of dislocation avalanches and as transmitters of internal stresses. This suggests a complex effect of GBs on the mechanical behavior of the material. The relaxation of dislocation avalanches was influenced by the presence of GBs, as indicated by the decrease in avalanche durations with increasing levels of hardening. Additionally, dislocation
avalanches were observed during reverse dislocation motion in polycrystalline ice, and the power law exponent for these avalanches was consistent with the scaling observed in single crystals. These findings highlight the importance of considering microstructural factors, such as grain size and GBs, in understanding the dynamics of dislocation avalanches and the mechanical behavior of polycrystalline materials.
### Flow regimes
Despite the simplifications and assumptions made in this study, the findings demonstrate that the MPM model holds great potential for conducting systematic and quantitative investigations into the dynamics of snow avalanches and the transitions between different flow regimes. This approach allows for a comprehensive analysis of the interplay between snow mechanical properties and terrain geometries. Such research contributes to a better understanding of wet snow avalanches and provides insights into analyzing avalanche dynamics under the influence of climate change.
### SVMs
The study demonstrates the effectiveness of SVMs and expert-selected features in avalanche forecasting. The combination of meteorological and snowpack variables, along with additional expert features, provides valuable information for understanding fluid dynamics and particle mechanics in snow-related processes.
Further research can explore the integration of FPM concepts into avalanche forecasting models to enhance prediction accuracy and contribute to improved safety measures.
## 7 Appendix
### A-1: Hsu-Nielsen Test
The Hsu-Nielsen source, also known as pencil lead break, is a device used to simulate an acoustic emission event.
It involves breaking a brittle graphite lead of 0.5 mm (or alternatively 0.3 mm) diameter, located within a suitable fitting, approximately 3 mm (+/- 0.5 mm) from its tip by pressing it against the surface of the object under examination. This action generates a strong acoustic signal, similar to a natural acoustic emission source, which can be detected by sensors as a burst of activity.
### A-3: Powder Snow Avalanches
Characterized by the suspension and movement of snow grains in a state of fluid turbulence, and primarily driven by the action of gravity, these avalanches resemble particle-laden gravity currents and exhibit similarities to other natural phenomena such as turbidity currents, pyroclastic flows, and desert dust storms.
|
2309.08371 | Spontaneous Unidirectional Loop Extrusion Emerges from Symmetry Breaking
of SMC Extension | DNA loop extrusion is arguably one of the most important players in genome
organization. The precise mechanism by which loop extruding factors (LEFs) work
is still unresolved and much debated. One of the major open questions in this
field is how do LEFs establish and maintain unidirectional motion along DNA. In
this paper, we use High-Speed AFM data to show that condensin hinge domain
displays a structural, geometric constraint on the angle within which it can
extend with respect to the DNA-bound domains. Using computer simulations, we
then show that such a geometrical constraint results in a local symmetry
breaking and is enough to rectify the extrusion process, yielding
unidirectional loop extrusion along DNA. Our work highlights an overlooked
geometric aspect of the loop extrusion process that may have a universal impact
on SMC function across organisms. | Andrea Bonato, Jae-Won Jang, Kyoung-Wook Moon, Davide Michieletto, Je-Kyung Ryu | 2023-09-15T12:55:35Z | http://arxiv.org/abs/2309.08371v1 | # Spontaneous Unidirectional Loop Extrusion Emerges from Symmetry Breaking of SMC Extension
###### Abstract
DNA loop extrusion is arguably one of the most important players in genome organization. The precise mechanism by which loop extruding factors (LEFs) work is still unresolved and much debated. One of the major open questions in this field is how do LEFs establish and maintain unidirectional motion along DNA. In this paper, we use High-Speed AFM data to show that condensin hinge domain displays a structural, geometric constraint on the angle within which it can extend with respect to the DNA-bound domains. Using computer simulations, we then show that such a geometrical constraint results in a local symmetry breaking and is enough to rectify the extrusion process, yielding unidirectional loop extrusion along DNA. Our work highlights an overlooked geometric aspect of the loop extrusion process that may have a universal impact on SMC function across organisms.
DNA loop extrusion by structural maintenance of chromosome (SMC) complexes has emerged as a universal organizing principle for chromosomes [1; 2; 3; 4; 5; 6; 7]. For instance, it is now well established that in eukaryotes, cohesin complexes are involved in shaping "topologically associating domains" (TADs) during interphase [8; 9], while condensin complexes direct the establishment of the cylindrical structure of mitotic chromosomes [10].
SMC complexes have a ring-like structure, composed of an SMC dimer and an intrinsically disordered kleisin subunit. The SMC dimer is formed through the so-called "hinge", while the kleisin subunit is bound to the ATPase domain of each SMC (Fig. 1a). In the case of yeast condensin, there are putative additional DNA-binding sites in the hinge, the dimerized heads, and the Ycg1/Brn1 and Ycs4/Brn1 subunits for DNA anchoring [11]. Despite the wealth of structural data, there is still contrasting evidence regarding the precise topology and mechanics of the loop extrusion process.
Among the most debated features of loop extrusion are the motoring action of the hinge and the origin of the unidirectional motion [12; 13; 14; 15]. Recent structural studies have suggested that SMC uses conformational changes between a hinge-released state - where the hinge is extended away from the heads - and a hinge-engaged state - where the hinge is in proximity of the heads - in order to drive the motion in a "scrunching", and ATP-dependent, fashion [7; 16; 17] (Fig. 1a, b). The scrunching model predicts that following dimerisation of the SMC heads (due to ATP-binding), the coiled-coil arms fold to bring the hinge closer to the heads. Following ATP-hydrolysis, the heads are released and the hinge extends again [14; 16; 18]. During this step, the positively charged extended hinge may search for a 3D proximal (but not necessarily 1D contiguous) DNA segment to grab and to subsequently bring close to the heads in the following ATP-binding step [14; 19]. Whilst this model can elegantly explain the bypassing of other SMCs [20; 21] and large roadblocks [22; 23], it cannot explain unidirectionality. Indeed, during the search step, there is no guarantee that the hinge will grab onto a DNA segment ahead of the one that was reeled in the extruded loop in the previous ATP cycle. Thus, even within the scrunching model, explaining the unidirectional motion of LEFs remains an outstanding problem in the field of SMC-driven DNA organisation.
To gain a better understanding of the origin of the unidirectional motion, we used High-Speed AFM (HS AFM) on yeast condensin [16]. We observed that the hinge domain typically extends in an orthogonal direction from the DNA-bound globular head domains, and we then hypothesised that this local (angular) symmetry breaking due to the structure of condensin could generate a geometric bias in the grabbing of new DNA segments. Thus, we computationally implemented a loop extrusion process with such a geometric constraint, by modelling a 3D search constrained to lie within a solid angle, and we discovered that it spontaneously displayed rectified, unidirectional extrusion. Our findings suggest that a geometric symmetry-breaking mechanism, without the need to impose explicit topological constraints between the SMC complex and the extruded DNA segment, could underlie the emergence of rectified, unidirectional loop extrusion by SMC proteins.
_AFM reveals a preferred angle for hinge extension._ - We started from the hypothesis that the structure of SMC itself may present some geometric restrictions on its allowed conformations. Specifically, we argued that the hinge, connected to dynamic coiled-coil arms [18], largely determines the search of the DNA segment to be captured and reeled in the extruded loop. For this reason, we wanted to measure the possible geometric conformations assumed by a LEF, in order to quantify any restriction on the motion of the hinge. To do this, we performed and analysed High-Speed AFM (HS AFM) images of the yeast condensin complex to obtain the typical position of the hinge with respect to the globular head domains [16] (Fig. 1). From the HS AFM movies we could identify two distinct globular domains: the hinge and the heads, linked together by semi-flexible arms (Fig. 1a). Furthermore, we were able to distinguish hinge-released and hinge-engaged states by measuring the distance of the hinge from the heads (see Fig. 1a-b and SI Fig. S4). Based on these, we measured the angles of the hinge-releasing and hinge-engaging states with respect to the line connecting the two heads, measured from their midpoint (Fig. 1a-b). In addition, to compare these angles with the anchoring angle of condensin with respect to the DNA, we analysed dry-AFM images and measured the angle of hinge extension with respect to the DNA-bound head domains (Fig. 1c).
We discovered that the hinge is most often extended orthogonally to the line joining the SMC heads (Fig. 1a-b). Even in the absence of DNA, we found that the angle at which the hinge is extended and retracted is normally distributed for both hinge-releasing (\(83\pm 26^{\circ}\)) and hinge-engaging (\(86\pm 30^{\circ}\)) steps. In addition, dry-AFM images of condensins bound to DNA through their head domains also showed a near-vertical angle of the hinge-head line with respect to the DNA tangential direction (\(88\pm 23^{\circ}\)) (Fig. 1c).
It is important to notice that our measurements are done on condensin complexes adsorbed on mica, i.e., in 2D. Thus, we argue that the angle distribution of the hinge extension would define a solid angle when the complex is allowed to move in 3D. Indeed, our results suggest that the hinge releases and engages orthogonally to both heads and DNA, forming a solid angle \(\Omega\) defined by the width of the Gaussian distribution, \(\gamma\simeq 60^{\circ}\) (see Fig. 1d).
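As a back-of-the-envelope estimate (ours, not the authors'): if the measured width \(\gamma\simeq 60^{\circ}\) is read as the full opening angle of a cone, the corresponding solid angle is

\[
\Omega=\int_{0}^{2\pi}\!\!\int_{0}^{\gamma/2}\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi=2\pi\Big(1-\cos\frac{\gamma}{2}\Big)\approx 0.84\ \mathrm{sr}\qquad(\gamma=60^{\circ}),
\]

i.e. roughly \(7\%\) of the full sphere, indicating how strongly the constrained search restricts the directions available to the hinge.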
Based on the AFM results, we defined a hinge-reachable region for the scrunching model as a truncated cone with the estimated solid angle \(\Omega\) (Fig. 1d). The maximum hinge extension is determined from the distribution of hinge-head distances in the hinge-released state (\(\simeq 40\) nm) while the minimum hinge extension is obtained as the hinge-head distance in the hinge-engaged state [16] (\(\simeq 10\) nm, see SI Fig. S4).
_A LEF model with geometrically biased 3D search._ - In light of our HS AFM measurements, we propose a new geometrically-constrained scrunching model as follows: first, Ycg1/Brn1 anchors DNA with a "safety belt" mechanism [24]; then the heads/Ycs4 connect the anchor to another part of the DNA to extrude a loop (Fig. 2a). The motor action of the hinge is limited by the angle distribution we found in Fig. 1: the complex can grab DNA segments through the hinge by extending the SMC arms at a fixed angle \(\alpha\simeq 90^{\circ}\), within a certain width \(\gamma\simeq 60^{\circ}\), from the orientation of the bound DNA. Once the hinge grabs onto a new DNA segment, the ATP-binding-induced conformational change brings the grabbed DNA close to the heads/Ycs4 subunits. Finally, after ATP-hydrolysis, the heads/Ycs4 bind to the new DNA segment - thereby extending the extruded loop - and the hinge is then free to target a new DNA segment for the next round of DNA-loop extrusion (Fig. 2a).
To simulate this model, we implemented a coarse-grained loop extrusion process with a geometric constraint on the region that can be reached by the hinge. Specifically, we account for the connectivity of the anchor (Ycg1) to the heads (Smc2 and Smc4) via the kleisin subunit as beads connected by a harmonic bond; additionally, we impose that the search for the segment to reel into the extruded loop is performed within a solid angle around an axis orthogonal to the tangent of the SMC-bound DNA (Fig. 2b). When a segment of the coarse-grained polymer falls within the truncated cone
Figure 1: **Limited solid angle of hinge movement from the position of the SMC heads.****a-b.** Angle distributions of hinge-releasing (**a.**) and hinge-engaging (**b.**) movements with respect to the line connecting the two heads, measured at their midpoint, observed by HS AFM (\(N=727\) and \(698\), respectively). **c.** Distributions of the angle between the hinge and the central position of the globular domain with respect to the tangential DNA line at the central position of the globular domain, analysed from dry-AFM images (N = 79). **d.** Sketch of the truncated-cone hinge-reachable region determined by the AFM data.
formed by the solid angle \(\Omega\) and within a certain Euclidean distance (between 10 and 40 nm), the harmonic bond between the anchor and the heads is updated to connect the anchor and this new segment (see Fig. 2b). In turn, the harmonic spring connecting anchor and heads is temporarily extended and brings the new segment close to the anchor. Finally, the new segment is identified with the new position of the heads, the anchor remains at its original position, and the hinge is then free to search for a new segment to grab onto (again within the solid angle \(\Omega\)). This process defines a full ATP-cycle and involves a 3D search of proximal DNA segments with a geometric constraint but no strict topological requirements.
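A minimal sketch of the cone-membership test described above, assuming the cone apex sits at the SMC heads and the cone axis is the unit normal to the local DNA tangent; the 10-40 nm distance range and the \(60^{\circ}\) width come from the text, while the function name and vectorized form are our own illustration, not the authors' code.

```python
# A sketch, not the authors' code: cone apex at the SMC heads, axis orthogonal
# to the local DNA tangent; 10-40 nm shell and 60 degree width from the text.
import numpy as np

def beads_in_truncated_cone(beads, heads, axis, gamma_deg=60.0,
                            r_min=10.0, r_max=40.0):
    """Return indices of beads inside the hinge-reachable region (nm units)."""
    v = beads - heads                         # apex-to-bead vectors
    r = np.linalg.norm(v, axis=1)             # Euclidean distances from the heads
    with np.errstate(divide="ignore", invalid="ignore"):
        cos_theta = (v @ axis) / r            # cosine of angle to the cone axis
    within_cone = cos_theta >= np.cos(np.radians(gamma_deg / 2.0))
    within_shell = (r >= r_min) & (r <= r_max)
    return np.flatnonzero(within_cone & within_shell)

# Example: heads at the origin, cone axis along +z (a unit vector).
beads = np.random.default_rng(1).uniform(-50.0, 50.0, size=(400, 3))
grabbable = beads_in_truncated_cone(beads, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(f"{grabbable.size} beads are grabbable in this ATP cycle")
```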
An important difference between our work and previous models of loop extrusion [2; 3; 4; 14; 19; 25; 26] is that we do not impose the directionality of the extrusion _a priori_. The hinge can grab any segment ahead of, or behind, the current 1D position of the heads. In fact, in our simulations we can observe backward extrusion steps, where the newly grabbed segment is _inside_ the extruded loop, as shown in Fig. 2c, d. This move yields a reduction in the total length of the extruded loop and is also observed in experiments [27]. In addition, our model accounts for an additional spring to mimic the presence of the disordered kleisins attaching the anchor Ycg1 to the SMC heads (SI). This additional spring constrains the relative rotation of the DNA segments bound to the heads and the anchor. This rotational constraint cannot otherwise be imposed if two beads are connected by a single spring [28]. The rotational constraint within the DNA-bound SMC complex is evident from structural cryo-EM data [11], where the bound DNA segments sit tightly within DNA-binding pockets near the heads, and their juxtaposition within the SMC structure assumes a well-defined angle [11]; this implies that SMC-bound DNA segments are likely to be restricted in their relative rotation.
Using this model, we performed simulated loop extrusion on a bead-spring polymer with \(N=400\) beads of size \(\sigma=10\) nm. We then tracked the position of the anchor and heads, and defined an oriented extruded loop length as \(l=n_{a}-n_{h}\), where \(n_{a}\) and \(n_{h}\) are the positions of the anchor and the heads; we discovered that, strikingly, the LEFs display growing loops with a clear sign of unidirectional motion. Since we do not hard-code directionality within the model, the LEFs in different replicas will start extruding in different directions. The interesting finding is that they display a tendency to maintain a rectified, unidirectional motion, once the "left-right" symmetry has been broken (Fig. 3a).
To understand how this spontaneous rectification arises from the broken symmetry in the search step, we performed simulations with wider search angles, up to restoring
Figure 2: **A model for loop extrusion with geometric constraints on hinge extension.****a.** Schematics of a loop extrusion model where the hinge (red) is restricted to search for a DNA segment within a truncated cone. The segment is then bound to the SMC heads and reeled into the extruded loop, while the hinge returns to the search position. Throughout this process, one DNA segment is trapped at the anchor bound to the kleisin (purple). **b.** Implementation of the model on a coarse-grained bead-spring polymer, where the heads and anchor are denoted with red and purple beads, respectively. The hinge is not explicitly modelled with a bead, but is accounted for by the geometrically restricted search region (yellow shaded area). The simulated loop extrusion displays both (**c**) forward and (**d**) backward steps.
the full spherically symmetric search \(\gamma=360^{\circ}\). In the symmetric case, we do not observe unidirectional motion; instead, we find that extruded loops shrink back, with a behaviour similar to a random walk (see Fig. 3a). To further characterise this process we took the root mean squared extruded length \(\langle l\rangle=\langle[n_{a}-n_{h}]^{2}\rangle^{1/2}\) and indeed found that the spherically symmetric case displayed a scaling \(\langle l\rangle\sim t^{1/2}\), in line with a simple random walk (see Fig. 3b). Interestingly, we also noted that the cases with \(60^{\circ}<\gamma<360^{\circ}\) displayed faster linear growth than the case with \(\gamma=60^{\circ}\). Despite this, the distribution of step sizes clearly indicates that the case \(\gamma=60^{\circ}\) is the one that benefits from the greatest rectification, i.e. the ratio of forward to reverse steps is the largest. In other words, wider angles increase the probability of shorter and backward steps (Fig. 3c). In turn, this implies that the average step size - defined as \(s=\Sigma_{i}[sign(i)S_{i}]/N\), where \(S_{i}\) is the size of the \(i\)-th step - is typically smaller for wider angles. The largest probability of large steps (\(\sim 50\) nm), which is in line with experiments [27], was seen for \(\gamma=60^{\circ}\). Thus, to understand why wider search angles yield faster extrusion in our simulations, we computed the success probability of making a step. Indeed, in our algorithm we impose that the LEF does not make a step if, in a given simulation time, there are no DNA beads that satisfy the search criterion. This readily implies that narrower search angles yield lower success rates (Fig. 3d). The opposite trends of success rate (increasing with \(\gamma\)) and average step size (decreasing with \(\gamma\)) yield a trade-off (Fig. 3d) that naturally leads to an optimum in velocity around \(\gamma\simeq 180^{\circ}\). However, we argue that condensin may employ a narrow angle to optimize the search process within a crowded DNA environment, within which the success probability would be generally larger.
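As a sketch of the scaling diagnostic used above, the snippet below computes the root mean squared extruded length over replicas and fits the growth exponent on log-log axes; the \(\pm 1\)-step synthetic traces stand in for the spherically symmetric case (the real traces come from the polymer simulations, which are not reproduced here).

```python
# Sketch of the diagnostic: RMS extruded length over replicas with a power-law
# fit of the growth exponent. Synthetic +/-1 step traces stand in for the
# spherically symmetric case; real traces come from the polymer simulations.
import numpy as np

rng = np.random.default_rng(2)
replicas, T = 200, 5000
steps = rng.choice([-1, 1], size=(replicas, T))    # increments of l = n_a - n_h
l = np.cumsum(steps, axis=1).astype(float)         # oriented loop length traces

rms = np.sqrt(np.mean(l ** 2, axis=0))             # <l> = <[n_a - n_h]^2>^(1/2)
t = np.arange(1, T + 1)
# Fit <l> ~ t^alpha on log-log axes: alpha ~ 0.5 signals a random walk,
# alpha ~ 1 signals rectified, unidirectional extrusion.
alpha = np.polyfit(np.log(t[10:]), np.log(rms[10:]), 1)[0]
print(f"fitted growth exponent alpha = {alpha:.2f}")
```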
Conclusions. - Motivated by AFM data, in this Letter we have provided experimental evidence that the structure of yeast condensin favours certain geometric conformations where the hinge is extended perpendicularly to the local direction of the heads-bound DNA segment. We have also quantified the width of the search angle, \(\gamma=60^{\circ}\), and computationally demonstrated that, by imposing this geometric constraint, loop extrusion can be spontaneously rectified. Interestingly, we find that the narrower the search angle, the larger the typical step size and the more unidirectional the extrusion, but also the more likely the hinge is to fail to find a DNA segment to grab in a given time. This trade-off yields an optimum extrusion speed at a predicted angle of \(\gamma=180^{\circ}\).
We argue that the emergence of spontaneously rectified, unidirectional extrusion is due to a combination of
Figure 3: **Unidirectional motion emerges from symmetry breaking.****a.** Individual traces of simulated extruded loops as a function of time and with different hinge-search angles. **b.** Root mean squared extruded length as a function of time and for different hinge-search angles. The case where the search is allowed to occur on the full sphere (\(\gamma=360^{\circ}\)) yields a random walk scaling as \(t^{1/2}\). **c.** Probability of step size as a function of the search angle. The narrower the angle, the more rectified the extrusion, i.e. the larger the forward/reverse ratio. **d.** Plot of the success probability and average step sizes as a function of the hinge search angle. These two quantities display a trade-off which yields an optimum of the extrusion velocity at a search angle of \(\gamma=180^{\circ}\), as shown in **e**.
local deformations of the underlying DNA and the geometric constraint on the hinge-mediated 3D search. Our findings ought to be relevant to other SMC protein complexes, as long as their structure imposes a geometric constraint on the conformational space of the DNA:protein complex. Additionally, they can reconcile a range of recent findings, e.g. the bypassing of roadblocks, Z-loops, and also the pinching of a negatively supercoiled loop during the hinge-grabbing step [29], whilst also explaining the unidirectional extrusion. Our hypothesis could be tested by mutating the SMC coiled-coil arms to be more flexible or rigid, thereby directly increasing or reducing the search angle \(\gamma\). Overall, we argue that our findings help to highlight a largely overlooked aspect of loop extrusion that ought to be relevant for the function of generic SMC complexes.
## Acknowledgements
DM acknowledges the Royal Society and the European Research Council (grant agreement No 947918, TAP) for funding. The authors also acknowledge the contribution of the COST Action Eutopia, CA17139. J.-K.R. acknowledges the Institute of Applied Physics of Seoul National University, Creative-Pioneering Researchers Program through Seoul National University (Project Number 3348-20230013), the Brain Korea 21 Four Project grant funded by the Korean Ministry of Education, Samsung Electronics Co., Ltd. (Project Number A0426-20220109), and the National Research Foundation of Korea (Project Number 0409-20230237, 0409-20230171, 0409-20230219).
|
2309.07724 | From Compliance to Impact: Tracing the Transformation of an
Organizational Security Awareness Program | There is a growing recognition of the need for a transformation from
organizational security awareness programs focused on compliance -- measured by
training completion rates -- to those resulting in behavior change. However,
few prior studies have begun to unpack the organizational practices of the
security awareness teams tasked with executing program transformation. We
conducted a year-long case study of a security awareness program in a United
States (U.S.) government agency, collecting data via field observations,
interviews, and documents. Our findings reveal the challenges and practices
involved in the progression of a security awareness program from being
compliance-focused to emphasizing impact on workforce attitudes and behaviors.
We uniquely capture transformational organizational security awareness
practices in action via a longitudinal study involving multiple workforce
perspectives. Our study insights can serve as a resource for other security
awareness programs and workforce development initiatives aimed at better
defining the security awareness work role. | Julie M. Haney, Wayne Lutters | 2023-09-14T14:01:05Z | http://arxiv.org/abs/2309.07724v1 | From Compliance to Impact: Tracing the Transformation of an Organizational Security Awareness Program
###### Abstract
There is a growing recognition of the need for a transformation from organizational security awareness programs focused on compliance - measured by training completion rates - to those resulting in behavior change. However, few prior studies have begun to unpack the organizational practices of the security awareness teams tasked with executing program transformation. We conducted a year-long case study of a security awareness program in a United States (U.S.) government agency, collecting data via field observations, interviews, and documents. Our findings reveal the challenges and practices involved in the progression of a security awareness program from being compliance-focused to emphasizing impact on workforce attitudes and behaviors. We uniquely capture transformational organizational security awareness practices in action via a longitudinal study involving multiple workforce perspectives. Our study insights can serve as a resource for other security awareness programs and workforce development initiatives aimed at better defining the security awareness work role.
**Keywords.** security awareness, training, compliance, measures, case study
## 1 Introduction
Compliance - aligning organizational processes and programs with external rules and standards - is a significant driver for security (cybersecurity) programs in various sectors, such as healthcare, government, and financial services. Compliance can be helpful in setting a minimum bar of security expectations for an organization. Training employees for policy compliance influences their awareness but has minimal impact on promoting actual compliant behavior (Stevens et al., 2020). Addressing this gap between knowledge and practice is the purview of security awareness training.
Security awareness training plays an important role in helping organizational employees achieve a common understanding of security threats and acceptable security-related actions (Wilson and Hash, 2003). Various public and private industry sectors recognize the importance of this role by requiring or recommending annual security awareness training. For example, U.S. government agencies implement training mandated in the Federal Information Security Modernization Act (FISMA) (2014). The European Union's General Data Protection Regulation (2016) encourages organizations to provide similar training.
The intent of these training requirements is that compliance will result in positive impacts on workforce security behaviors, thus improving the overall security posture of organizations. While training can be helpful, superficial compliance metrics indicating the number of employees completing the training tell little about deeper impact: how security behaviors, understanding, and attitudes have positively changed. Indeed, prior literature and industry surveys revealed that security awareness programs often fall short in changing behaviors (Bada et al., 2019). Employees continue to fall prey to cyber attacks, as exemplified by the Office of Management and Budget (2021) report stating that 53% of U.S. Government cyber
incidents in 2020 resulted from employees violating acceptable usage policies or succumbing to email attacks.
Considering these sobering observations, there is a growing recognition of the need for transformation from security awareness programs that are merely compliance-focused to those facilitating behavior change. While studies have examined the impact of security awareness efforts from the _end user_ perspective (Khando et al. 2021; Tschakert and Ngamsuriyaroj 2019), few have begun to unpack the _organizational practices_ of security awareness professionals tasked with executing program transformation. To address this shortfall, we conducted longitudinal case study research of a security awareness team and program at a U.S. Government agency. Through interviews, field observations, and document analysis, we found a deliberate progression of the program from being focused on training compliance to becoming dedicated to achieving employee empowerment and behavior change.
Our study makes several contributions. We uniquely capture the transformation of security awareness practices within an organizational context from the perspectives of multiple stakeholders (security awareness team members, their managers, and employees), instead of the end-user perspective common in other studies. In doing so, we confirm and extend the research body of knowledge into a real-world context. Furthermore, our findings have applicability for other security awareness programs by allowing readers to witness how one organization re-examined and evolved their program beyond a compliance focus. While specific tactics may not be appropriate in all organizations, the considerations and strategies employed by our study's awareness team can serve to inform how other practitioners might approach maturing their own programs. We also contribute to efforts to define a dedicated security awareness work role by confirming the skills needed by these professionals.
## 2 Related Work
Our case study is focused on understanding how security awareness professionals develop, execute, and transform their security awareness programs. Because our study explores the perspective of the awareness team, we do not fully address literature on security awareness impacts and perceptions from the end user perspective.
### 2.1 Organizational Security Awareness Programs
#### 2.1.1 Security Awareness Approaches and Challenges
Hansch and Benenson (2014) posited three dimensions of security awareness commonly identified in research literature. _Perception_ is gaining knowledge of the existence of security threats. _Protection_ involves users knowing what measures they must take to counter these threats. _Behavior_ refers to users actively and effectively taking steps to reduce security incidents.
The cornerstone of many security awareness programs is annual training, most often conducted online. Beyond this training, programs may implement other awareness activities and communications throughout the year, for example, speaker events, newsletters, emails, and even novel approaches such as virtual reality and escape rooms (Haney et al., 2022). These additional activities can reinforce learning while adapting to new security threats, policies, and processes as they arise.
Although viewed as important for organizational security, security awareness programs may face several difficulties. Training often has a poor reputation of being boring and ineffectual (Reeves and Delfabbro, 2021). Additionally, programs may fail to provide engaging materials that motivate employees to practice good security habits, have unrealistic expectations of what employees can do, and rely on punitive, fear-based tactics that may not have a positive impact on sustainable behavior change (Pedley, _et al._, 2020; Bada _et al._, 2019; Alshaikh, 2018; Renaud and Dupuis, 2019).
To counter these challenges, researchers made recommendations for security awareness approaches. Based on surveys and a literature review of security behavioral theories, Moody _et al._ (2018)
developed the Unified Model of Information Security Policy Compliance (UMISPC) to guide security awareness efforts. The model illustrates the impacts of _habit, fear_ (influenced by threats and response self-efficacy), and the _role values_ (tasks and nature of work roles) on employee intentions to follow information security policies. Others took a practical approach, recommending specific awareness strategies. Bada _et al._ (2019) encouraged programs to disseminate simple, consistent guidance that promotes a sense of self-efficacy and continuously provide feedback and training refreshers to help make security a habit. Alshaikh _et al._ (2018) recommended engaging employees by: communicating how security impacts the organization; building trust and relationships with the workforce; soliciting employee feedback; relating security to employees' personal lives; and implementing creative types of training. Others (Bauer _et al._, 2017; Abawajy, 2014; Korpela, 2015) suggested that organizations find varied ways to communicate security awareness and tailor communications since they cannot expect a single approach to resonate across the entire workforce. For example, serious games, in which participants learn about security concepts and then put them directly into practice, have been found to be effective in raising awareness and teaching security techniques (Hart _et al._, 2020; Khando _et al._, 2021; Shostack, 2023).
The field work presented in this paper builds on this previous research to explore the challenges of one particular security awareness program and how the awareness team overcame these difficulties in practice.
#### 2.1.2 Measures of Effectiveness
Measuring program success is a critical, but challenging, aspect of security awareness programs, with few programs making a concerted effort. The lack of measurement may in part be due to organizations' emphasis on compliance with training policies (e.g., FISMA), represented by training completion rates, when the goal should be behavior change (Fertig _et al._, 2020; Bada _et al._, 2019). For a holistic assessment, organizations can instead use a combination of measures of effectiveness, including: number and kinds of security incidents related to training topics; user-initiated incident reporting; phishing simulation click rates; engagement with security awareness materials (e.g., represented by number of views); and feedback from stakeholders via surveys and interviews (Alshaikh _et al._, 2018; Fertig _et al._, 2020). Chaudhury _et al._ (2022) developed a security awareness metrics framework that included the following indicators: _impact_ (changes in security knowledge, attitude, and behavior); _sustainability_ (value-added and impact on organizational policies and processes); _accessibility_ (relevance, quality, reachability, and usability of awareness materials); and _monitoring_ (workforce and leadership interest and participation). Furthermore, the security training institute SANS (2021) found that organizations that assess their own programs against peers tend to have greater leadership support for security awareness training, and, therefore, more success. One such benchmark is the five-level Security Awareness Maturity Model (SANS, 2018).
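As a purely illustrative sketch (not drawn from any of the cited studies), the snippet below contrasts a compliance-only indicator with behavior- and engagement-oriented indicators over hypothetical quarters; all field names and numbers are invented.

```python
# Purely illustrative; all field names and numbers below are invented.
quarters = ["Q1", "Q2", "Q3", "Q4"]
metrics = {
    "training_completion_rate": [0.98, 0.99, 0.99, 1.00],  # compliance view
    "phishing_click_rate":      [0.14, 0.11, 0.09, 0.07],  # behavior view
    "user_incident_reports":    [12, 18, 25, 31],          # engagement view
}

for i, quarter in enumerate(quarters):
    row = ", ".join(f"{name}={values[i]}" for name, values in metrics.items())
    print(f"{quarter}: {row}")
# The compliance-only indicator is flat near 100% and hides the behavior change
# visible in the falling click rate and the rising self-reporting.
```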
Beyond self-reports and researcher recommendations, our case study demonstrates the use and evolution of various measures of effectiveness as observed in practice.
#### 2.1.3 Security Awareness Professionals
Security awareness professionals focus on raising awareness of and advocating security practices to the general workforce within an organization. Surveys of these professionals revealed that most perform security awareness duties on a part-time basis without a job title that reflects their duties, and they may lack sufficient professional skills (e.g., communications) that were viewed as essential in security awareness roles (Haney et al., 2022; SANS, 2021; Woelk, 2015). Stewart and Lacey (2011) described a significant issue in that the technical specialists who are often given security awareness responsibilities mistakenly believe that providing a "broadcast of facts" will result in behavior change and fail to consider how communications should be tailored to their audiences. Bada _et al._ (2019) highlighted
competency issues when they suggested that the lack of understanding that awareness is a unique discipline leads to ill-prepared awareness professionals.
While prior studies relied solely on self-report data, our study used multiple data collection methodologies to observe security professional practices in action.
### 2.2 Organizational Security Influencers
As a foundation of our research, we look to prior work related to workers who, like security awareness professionals, attempt to influence others' behaviors within organizations: risk communicators, change agents, and cybersecurity advocates. Security awareness professionals are risk communicators since they need to effectively communicate security risk, with a goal of enabling positive security behavior. The risk communication research body of knowledge, e.g., (Covello, 1997; Nurse et al., 2011; Slovic, 1987), identifies desirable practices of communicators, including: keeping communications simple, but specific and engaging; customizing information to target audiences; and assisting people in seeing the consequences of their decisions.
Those in security awareness roles may also be considered a type of change agent, described in Diffusion of Innovations (DOI) Theory as someone who actively influences clients' adoption decisions (Rogers, 2003; Markus and Benjamin, 1996). In facilitating adoption, change agents focus on the betterment of the organization, team with others, employ strong communication skills, and practice context-dependent adaptation.
Haney and Lutters (2018, 2021) defined cybersecurity advocates as security professionals who actively promote and facilitate the adoption of security best practices and technologies as an integral component of their jobs. Advocates are deeply passionate about the work as they see security having the potential for significant individual and societal impacts. In advocacy jobs, technical literacy, while necessary, is often regarded as less important than professional skills - interpersonal skills, communication, context awareness, creativity, and flexibility. Therefore, multi-disciplinary advocacy teams are viewed as beneficial in developing and disseminating engaging security guidance. The study also uncovered advocates' work practices, revealing that, similar to risk communicators and change agents, they strive to empower their audience by providing practical recommendations in an understandable language, employing engaging communication techniques, and incentivizing positive behaviors.
In this paper, we apply and extend these prior works into the security awareness domain by studying risk communication, change agentry, and security advocacy _in practice_. Specifically, while the Haney and Lutters study began to form a picture of the work of cybersecurity advocates by identifying what advocates _say_ they do, our case study extends this work by providing a better understanding of what one type of advocate (security awareness professionals) _actually do_ in a real-world setting.
## 3 Methodology
We conducted a single case study of the security awareness team and program in a U.S. Government agency over the course of one year. The study protocol was approved by our Institutional Review Board. The agency's Chief Information Officer provided a signed letter of support for study participation.
### 3.1 Selection of Case Study Method
While positivist research aims to predict, control, and generalize through the collection of quantitative data, our research goals - to describe, understand, and interpret the processes and transformation of a security awareness program - aligned with a constructivist approach (Merriam and Tisdell, 2016). Therefore, we made the careful decision to conduct a _qualitative_ case study.
A case study is a bounded, in-depth investigation of an object of study ("the case") to capture its complexity. The case study methodology has been a staple of social science research for decades and is particularly appropriate when in-depth explanation of a phenomenon is required [2]. It typically involves the use of multiple data collection and analysis methods, allowing for triangulation to strengthen interpretations [1, 20, 21]. Eisenhardt (1989) identified three possible aims of case studies: providing description, testing theory, and generating theory. The case study is particularly suited to our research goals since it attempts to both confirm prior findings (testing theory) and yield insights into security awareness through a holistic, detailed understanding of a program in a real-world context (providing description).
Practices surrounding security are some of the most closely guarded organizational behaviors, for understandable reasons. Opportunities to gain deep participant observer access to these programs are rare, let alone having longitudinal access to organizations that are not one's own, as evidenced by a lack of such studies in current literature. A strength of this study is precisely this kind of access - affording a robust view of security awareness work in action.
The object of the case study was the security awareness team and program at a U.S. Government agency (referred to as Agency Q). We selected a case that was theoretically useful and allowed us to replicate or extend prior theories and findings on security awareness and advocacy. In addition, due to a preponderance of government security mandates, government agencies tend to be compliance-oriented, offering an opportunity to explore how an agency balances security awareness compliance pressure with achieving actual impact. Additionally, the program was in an active state of transformation that we could observe and describe in real time.
### 3.2 Data Collection
To examine security awareness from different perspectives, we took a multi-methods data collection approach involving interviews, field observations, and reviews of both qualitative and quantitative documents to triangulate evidence. We collected data via 14 on-site visits and multiple email exchanges and phone conversations spread out over the course of one year. Our collection methods were responsive to the context [1], adjusting to accommodate new awareness events as they arose and new data sources as we became aware over the course of our investigation.
#### 3.2.1 Interviews
We interviewed the entire security awareness team: the program lead (a career civil servant) and two contracted staff. Prior to being interviewed, they completed a short demographic survey capturing education and work experience. Since our study extends prior work on cybersecurity advocacy into a specific, observable context (organizational security awareness efforts), our interview questions were initially based on the protocol used in the Haney and Lutters (2018) interview study with modifications to capture organizational details and account for the security awareness focus. Questions covered work practices, approaches, motivations, and challenges. The team lead was asked additional questions to obtain details about team composition and security awareness efforts. The lead's interview lasted over an hour, and the two contractor interviews were 30 minutes each.
To obtain a management perspective, we interviewed two managers in the team's chain-of-command: the Chief Information Security Officer (CISO) (interview time 20 minutes), and the direct supervisor of the team lead (45 minutes). We asked them about their views of the security awareness program, how program resource decisions were made, thoughts about security challenges to the organization, their roles, and experience at Agency Q (both had about 10 years).
We also interviewed nine agency employees who were "consumers" of the awareness program's events and materials. The awareness team selected and recruited the employees via email and face-to-face contact based on our request to include individuals in both information technology (IT) and non-IT roles. Employee interviews lasted about 30 minutes. Employees were asked about their views of the security
awareness program, the security information they received, and the impact of that information on their behaviors. We also asked how long they had been at Agency Q, with responses ranging from 11 - 30+ years. Five employees were in non-IT roles. Employees also assessed their own security familiarity, with three indicating low familiarity, three moderate, and three high. Table 1 provides employee demographics.
Interviews were audio-recorded and transcribed. Data were stored without personal identifiers, rather with a participant code. Participant codes for security awareness team members are indicated with an "S" (S1-3), managers with an "M" (M1-2), and employees with an "E" (E1-9).
#### 3.2.2 Field Observations of Security Awareness Events
Field observations are a commonly-used, valuable research tool in gathering first-hand data, providing specific events to be used as reference points, and helping to triangulate emerging findings (Merriam and Tisdell, 2016). The security awareness team organized three types of events throughout the year: lunchtime events, security days, and security officer forums (described later). To observe the team's approaches in action, we captured a complete annual cycle of events as a participant-observer, attending at least one of each common event type: three lunchtime events, one security officer forum, and two security days. Detailed field notes taken during the events documented techniques, the security information presented, and audience engagement.
#### 3.2.3 Additional Data Collection
We collected over 25 electronic and physical copies of security awareness materials disseminated during security awareness events and campaigns. The team also provided two other types of documents for analysis: 1) 20 post-event reports from the previous two years and the year of the study and 2) six feedback survey reports from events during the same timeframe. These reports provided further confirmation of observational findings.
As emerging themes and additional questions arose, other data were collected as needed via email exchanges or phone conversations with the team. For example, one phone conversation revolved around learning more about simulated phishing exercises.
### 3.3 Data Analysis
Analysis of multiple data sources allowed us to find overlapping themes and identify the "why" behind identified relationships. Data analysis of interviews and event field notes was initially guided by a subset of deductive codes based on the cybersecurity advocates study [12], with inductive coding using Grounded Theory-informed methods [15] employed as new data labels emerged. Both authors independently coded a sample of interviews: all team interviews, one manager interview, and two employee interviews. We then met to discuss the applicability of the deductive codes, reorganize codes based on differences in the case study as compared to the previous interview study, and suggest new codes as appropriate to construct a final codebook. The first author then used the codebook to deductively code the remaining interviews and field notes. We regularly met to discuss emerging themes and our interpretations of the data.

| ID | Role | # Years at Agency | IT Familiarity | Security Familiarity |
|----|------|-------------------|----------------|----------------------|
| E1 | Research librarian | 12 | Moderate | Moderate |
| E2 | Manager of an information security organization | 12 | High | High |
| E3 | Team leader of a research and technology resources organization | 14 | Moderate | Moderate |
| E4 | Engineer | 12 | Low | Low |
| E5 | Attorney | 30+ | Low | Low |
| E6 | Attorney | 12 | Low | Low |
| E7 | IT specialist, information systems security officer | 11 | Moderate | High |
| E8 | Information systems security officer | 17 | Moderate | High |
| E9 | IT specialist | 13 | Moderate | Moderate |

Table 1: Demographics of interviewed employees
We also reviewed the other provided documents to glean which security topics the team deemed important to distribute and how they leveraged third party resources. A review of post-event reports provided a deeper understanding of past security awareness events, including attendance, topics, and how events were portrayed to agency leadership. Event feedback surveys offered insight into how the events were viewed by attendees and how feedback contributed to subsequent events.
### 3.4 Researcher Positionality and Limitations
In qualitative research, the researcher serves as the primary instrument for data collection and analysis [16]. Therefore, the positionality of the authors is important to note. The first author has a background in cybersecurity and human-centered computing as both a practitioner and a researcher working in the U.S. Government. The second author has an academic research background in information and computer science and specializes in field studies of IT-mediated work from a socio-technical perspective.
These positionalities and related experiences explicitly surfaced throughout data analysis discussions and potentially impacted the interpretation of data. They often served as an advantage, for example, facilitating the first author's trust-building with Agency Q staff and understanding of common security terminology used by the awareness team. Any undue bias introduced by these positionalities was mitigated by a rigorous data analysis process, as well as full-team data analysis discussions during which assumptions were identified and deliberated.
We also note several potential limitations of our study. The case study, which was valuable for exploring security awareness in practice, was bounded in scope as it focused on a team working in the government sector. Therefore, we cannot generalize findings to all employment sectors and types of awareness programs. However, given corroborating results in prior research and other projects focused on security awareness training and professionals, we believe the findings from our study may be transferable to similar contexts.
Additionally, the security awareness team's involvement in the recruitment of employees to interview may have resulted in a biased sample. Although we requested a diverse sample of work roles, employees more familiar with and favorable towards the awareness program and security may have agreed to be interviewed, with 4 of 9 employee participants in a security role. However, without our own access to internal email addresses, the sample selection was the most practical avenue given resource constraints that prevented the awareness team from assisting with large-scale recruitment. Although the employee interviews did provide interesting insights, this limited sample was not enough to make general inferences across the entire workforce, and we gained little understanding of those who chose not to participate in awareness events or those who attended and were unsatisfied.
## 4 Case Study Context
As context for our findings related to security awareness program evolution, this section provides thick descriptions of the security awareness team, initiatives, and challenges.
### 4.1 Agency Q
Since security practices are among the most sensitive information that an organization holds, gaining extended insider access to them is exceptionally rare. The access provided to us was wide-reaching, but still constrained by a disclosure agreement to preserve anonymity. Here we share as much information about the organizational context as we are able within these constraints.
Agency Q is a medium-sized U.S. government agency on the scale of 5000 government and contract employees with a mission focused on producing guidance and regulations for critical infrastructure stakeholders. Most employees are stationed at agency headquarters, with smaller contingents in regional offices throughout the country. Employee roles are diverse, ranging from scientists and engineers to IT professionals to business and support personnel.
The agency deploys Microsoft Windows and Apple MacOS computers and Android and iOS smart phones for official use by employees. At the time of the study, employees were authorized to telework one day per week using government-owned resources. Sensitive information is routinely produced, disseminated, and stored by agency employees.
Agency Q fell prey to three different coordinated spear-phishing attacks several years prior to the study. The attacks resulted in about a dozen employees disclosing their account and password information and two others infecting their computers with malware. Afterwards, there was a concerted effort to educate the workforce on detecting phishing emails and appropriately using email.
### 4.2 Security Awareness Team
The security awareness team was in a security oversight organization under the Office of the Chief Information Officer (CIO). The program dated back at least 10 years prior, at which time it focused on annual training compliance with few in-person events. The current team assumed responsibility for the program three years prior to the start of our study and had expanded the program to three primary responsibilities: 1) implementing and tracking the completion of online, mandatory, annual security training; 2) planning and executing initiatives to increase employee awareness of security issues and appropriate actions; and 3) managing role-based training required for employees with a security role (not a focus of our study).
The security awareness lead (S1) had over 30 years of IT and security experience with degrees in business administration and computer programming. S1 spent 50% of work time leading the security awareness program, which was viewed as "_the most important thing I'm doing right now_." S1 also worked with information system owners on system authorizations and served as a subject matter expert for other government security initiatives.
The other team members (S2 and S3) were contractors supporting the program full-time. Neither had formal backgrounds in a technical field but brought business skills that enabled them to be heavily involved in the creative and logistical aspects of event planning. S1 appreciated the discipline diversity within their team, admitting that, although S1 had the security and technical knowledge, "_I am not a subject matter expert in the marketing and...the organizational skills. I need [S2 and S3] to do that. They're also brilliant when it comes to the ideas and how we're going to make things happen_."
### 4.3 Initiatives
The team implemented several initiatives (activities) throughout the year: three types of synchronous, in-person events (lunchtime events, security officer forums, and security days) and two types of asynchronous activities (campaigns and phishing exercises). Table 2 provides an overview of these. Most initiatives had themes. For example, a "Safety Tech Check" event helped attendees get the most out of their smartphones, laptops, and tablets while operating them safely and protecting personal information.
The team developed their ideas during brainstorming sessions informed by their own explorations of security awareness techniques and topics. Their expertise developed via attendance at external security events, information exchanges with other security awareness professionals, vendor security training, self-study, current events, and subscriptions to online security forums. The team also routinely considered agency mission priorities, workforce feedback, and organizational security incidents and risks (see section 5.1).
### 4.4 Challenges
Interviews revealed challenges that hindered security awareness efforts and the agency workforce's willingness and ability to practice good security habits. To set the context for exploring how the team tried to overcome these challenges, the most frequently mentioned issues are described here.
| Initiative | Description | Target Audience | Typical Number per Year |
|------------|-------------|-----------------|-------------------------|
| Lunchtime Events | These were informal, drop-in events held in an exhibit area located in a main thoroughfare. Typical events had an information fair atmosphere with multiple tables staffed by agency employees, representatives from community organizations, or other government agencies. However, alternate, performance-based event formats had recently emerged for these events. | General workforce | 4-5 |
| Security Officer Forums | The 2.5-hour forums focused on providing information to help security staff in their roles. Forums were held live but were broadcast to those in remote locations and recorded for later viewing. Forums typically featured security leaders providing agency security updates and 4-5 talks by internal or external speakers. | Agency staff with security roles (but anyone can attend) | 2 |
| Security Days | The half-day events took place in a large auditorium and were aimed at all agency employees with a goal of providing security and technology information applicable at both home and work. There were typically four talks during the events. The events were remotely broadcast with recordings and presentation slides posted for later viewing. | General workforce | 2 |
| Campaigns | Focused campaigns and security awareness material distribution addressed specific security issues emerging as agency challenges. Branded campaign posters were displayed throughout agency buildings, and handouts outlining mitigative actions were distributed to the workforce. | General workforce | As needed |
| Phishing Exercises | Simulated phishing attacks to test the workforce's susceptibility to phishing emails and raise awareness. The security awareness team set the strategy for these exercises, but a contractor executed the exercises and collected statistics on click rates. | General workforce | 4 |

Table 2: Agency Q's security awareness initiatives.
#### 4.4.1 Lack of Buy-in
The team was challenged by employees not in IT or security roles who did not recognize the value of security, how it related to them, or their own security responsibilities. S1 discussed difficulties convincing non-technical staff to attend awareness events because _"we're still in our silos."_ An engineer said that security was not a concern in their group because _"we don't do that for a living" (E4)_. Security may be de-prioritized as employees are _"busy doing their work" (M2)_. Several interview participants discussed the need for better alignment between the security awareness program and the agency's mission elements to facilitate understanding of the connection to security.
Cognitive biases may also affect buy-in. For example, employees may be overly optimistic in believing they would not be a victim of a security attack because they do not do anything important enough to be targeted. A security officer felt people may not _"take it seriously until it personally affects them" (E7)_.
Lack of support for security training was especially problematic when some agency leaders _"see it as a nuisance" (S1)_, rather than setting an example for the workforce. S1 commented, _"I just happened to have a training course with some leadership in the mission office. And they're like 'Why do I have to take this course? Why do I care about cybersecurity?'" (S1)_. With this attitude, managers may not support employees attending events because they feel "_I can't afford to send my people there...They need to be here at their desk" (S1)_.
#### 4.4.2 Compliance-driven Training
Compliance with government security mandates, such as those specific to employee security awareness, is an important metric for government organizations, including Agency Q. However, the security awareness team recognized that _"just because you're compliant doesn't mean that it's an effective program" (S1)_. Compliance metrics failed to show how impactful the training was in changing workforce behavior. _"Everyone will check the box saying 'Yeah, we're 100% trained.'...Even if you are, what good is it if you don't have people applying what they learned?" (S1)_.
Annual security awareness training might be the same every year and was viewed by employees as burdensome. S1 commented, _"People do it because they have to. You have the online training where it's like 'Click, click, click, click, done'...rather than paying attention to the words they're seeing on the screen" (S1)_. The team observed that employees completed training to avoid punitive measures so "_their supervisor will get off their back" (S3)_, rather than as a learning opportunity.
#### 4.4.3 Resources
In addition to organizational and staff challenges, the team was faced with resource shortages. The program budget was just enough to pay for the two supporting contractors. Without additional resources, the team was not able to expand their efforts, as confirmed by the CISO: _"I don't know how much more we can do to increase our activities with the budget pressure that we've got" (M1)_.
## 5 Findings
Over the prior three years, the security awareness team made a concerted effort to transform their program with a renewed focus on engaging and empowering employees to make sound security decisions rather than simply on enforcing compliance. In this section, we detail findings based on our observations of the strategies and activities that demonstrated the program's ongoing evolution.
### Evolving to a Cybersecurity Advocate Role
Our exploration of program progression towards behavior change is, in part, illustrated by how the agency's awareness team members actively took on a cybersecurity advocate role (Haney and Lutters, 2018).
#### 5.1.1 Building Trust
The cornerstone of cybersecurity advocacy is the ability to establish trust with the audience. To build technical credibility, the awareness team stayed updated on security topics through formal training, self-study, subscriptions to mailing lists, and being _"on alert with current [security] events or anything in the news" (S3)_.
Beyond domain knowledge, the team demonstrated interpersonal skills critical for building relationships and trust. S1 felt that, for security to resonate on an individual level, a personal touch is required, and so preferred face-to-face interactions whenever possible. During security awareness events, we directly observed that the team members were enthusiastic and positive, projecting their service-oriented motivation to help people navigate the complexities of security _"so they can just be aware of what's going on and what they can do to make their life easier and protect themselves" (S1)_. For example, during a lunchtime event, we observed S3 facilitating a cybersecurity trivia game. They praised contestants for correct answers. If participants answered incorrectly, S3 was non-judgmental and explained the answers in an easy-to-understand way. Attendees visibly responded positively. Employees also expressed confidence in both the team and the information in the interviews, further illustrating the team's success in establishing trust. For example, an employee said that team members _"are knowledgeable...They're the experts...They are incredibly energetic" (E3)_. Another thought the team was approachable: _"They're friendly. They're open to suggestions. They are open to questions" (E9)_.
#### 5.1.2 Effectively Communicating
In cybersecurity advocacy, communication skills are essential. To communicate effectively, the team had a solid understanding of organizational context, which aided them in selecting appropriate communication mechanisms and styles. For example, at the observed security officer forum, S1 injected organizational context when talking about cybersecurity role-based training, including information about the federal directives mandating the training, who it applies to within the organization, why it is important to the organization, and how agency employees can complete training.
To impact employees with diverse work roles, the team translated technical topics into plain language. S1 recognized the necessity of tailoring communications to employees' security knowledge and skill: _"I can sit up there and speak to the technical aspects of things and bore everybody except for the few techno-geeks in the back. Or I can try to make things more generalized" (S1)_. Employees expressed positive perceptions of the team's communication abilities. When asked about the content of the security awareness information, a staff member said it was beneficial _"having the information presented in a manner to where it's not intimidating,...in a way that you can embrace it and take away information" (E3)_.
### Engaging and Empowering the Workforce
The agency's program transformation entailed a shift from an extrinsic motivation of security being an organizational compliance requirement to intrinsic motivation where security becomes a habit and something employees want to do because of its inherent value. S1 viewed effective security
awareness as going beyond compliance: _"I'm not going to force anybody to change but give them the opportunity to see that they can."_ S1 likened this goal to the development of dental hygiene habits:
_"knowing that you have to brush your teeth, to actually brushing your teeth and then not even thinking about brushing your teeth every day. That's what I'm trying to push for in our program, where security awareness is now second nature" (S1)._
The shift involved changing workplace security culture by first facilitating recognition (engaging), then instilling a sense of personal responsibility and self-efficacy (empowering). The team recognized the importance of involving the workforce as active participants and contributors to the security of the organization, believing that "_Human beings are the most important cybersecurity tools" (S1)_. This section describes ways in which the team engaged and empowered the workforce.
#### 5.2.1 Making Security Relatable
Engagement was grounded in the personalization of the security message and bringing awareness of security issues and their relevance to individuals. At the observed security officer forum, S1 told the audience, _"We don't want to make you paranoid. We just want to make you aware" (S1)_. During the two observed security days, at the conclusion of each talk, S1 asked a question of each speaker to link the topic to Agency Q's mission and make the connection to attendees' jobs.
To make security relatable, the team included topical information during events, campaigns, and communications. Focus areas and event themes might be related to the season of the year or current news. For example, an event held shortly before the American football Super Bowl was entitled "Cyber Kickoff." Recent organizational security needs, threats to the agency, or hot security topics in the government also frequently drove topics. S1 provided an example:
_"We just had an inspection where we found a bunch of people that were leaving their [smart cards] in their readers when they weren't at their desks. So, we're going to be talking at the forum about that" (S1)_.
To ensure events were of interest, the team involved the workforce in deciding which topics to address via post-event surveys, personal interactions, email, and focus groups. The lead described how their team had increased attendance at the security officer forums:
_"We brought a bunch of [security staff] together and asked, 'What do you want? What would make these more efficient and effective?'...We started bringing in people they wanted to hear, subjects they wanted to hear that meant most to them" (S1)_.
Beyond topicality, the team strove to shift the mindset of employees to want to make informed security decisions and follow safe security practices regardless of context. This involved a shift of the locus of concern from just the security of the organization to also include the security of the individual. At most awareness events, there was a _"mix of cybersecurity at work and home" (S1)_. For example, at a holiday-themed lunchtime event, a table called "Information Station Holiday Edition: Are your gifts vulnerable?" was popular with attendees who learned about security and privacy considerations for devices like smart speakers and fitness trackers.
S2 commented that incorporating the home aspect gets people's attention because, at work, people might feel like someone else is responsible for security, but at home they are responsible. We asked employees how the information they received at work had helped them make security decisions at home. They discussed being able to recognize and act upon security issues regarding phishing emails, online shopping, and smartphone usage. An IT specialist appreciated the work-home emphasis:
_"Some of the information, [the team] will pass out and say, 'You can send this and share this with family'...I definitely have thought about several of the talks going home. And I
have sent information that was approved to my parents saying, 'You need to read this'" (E7)._

The awareness events did not just focus on topics related to cybersecurity; rather, the team also tried to provide information related to other types of security affecting work and home life, such as personnel and physical security. The CISO commented on this approach: _"We want to secure the person, and we want everyone to think about all aspects in which they could secure themselves" (M1)_. For example, a summer "Traveling Cyber Safe" lunchtime event featured a personnel security officer providing information about foreign travel safety.
Of note, while the team endeavored to create engaging material and interactions, they acknowledged that their agency offered few tangible incentives for demonstrating good security behaviors (e.g., official recognitions or awards), in large part due to a lack of budget and staff available to progress any incentive initiatives. The agency was more apt to take negative measures, for example account suspension of employees who failed to complete their mandatory training.
#### 5.2.2 Employing Engaging Communication Techniques
To engage the workforce, the team disseminated security information using a variety of in-person and online communication techniques throughout the year to reinforce the message, rather than solely relying on once-a-year training. They took care in selecting topics of interest to the workforce and endeavored to find engaging, external speakers for security awareness events who were _"exciting for people to listen to. They're just hearing another perspective and another point of view that's not from our agency" (S2)_. To overcome complacency and garner attention while operating with a limited budget, the team recognized the need for creativity in their approaches:
_"You want to just put a different spin on it because people just see stuff all the time: 'Have a good password. Lock your computer'...Be creative and think outside the box for different reasons or different tactics to make people think" (S3)_.
A manager affirmed the need for security awareness efforts to creatively adapt to employees' preferences and constraints: _"We need to keep making it as user-friendly as possible, not having it be a big commitment of people's time. And doing it in such a way that people want to keep coming back to the program" (M1)_. As an example, during the agency's "Click with Care Campaign" that encouraged safe email behaviors, the team wanted to ensure that all members of the workforce viewed important anti-phishing tips. Hearing from employees that email reminders were typically not read, the team devised an alternative: _"We have a little phishing handout...We want to put it on half a piece of cardstock and go around at like 4:00 when everyone's gone and put it on everyone's desk with a little lifesaver [candy]" (S3)_.
The team focused on making security awareness memorable, often by holding events that were entertaining. Most lunchtime events included gamification, such as security-themed trivia with candy prizes. A team member discussed the team's frequent use of humor: _"If we can get five eye rolls at an event, we can call it a win. Because especially in this industry, everyone's so business and serious. So, we like to have a little fun" (S3)_. One creative approach was observed when the team executed a campaign to encourage employees to complete their annual security training early by distributing "Now and Later" candies on people's desks with postcards that said, "Take your training now and not later."
During the study timeframe, the team took the lunchtime events into new directions beyond the usual information fair setup. One observed event entitled "Late Night Cyberside Chat" was a humorous parody of a late-night television show. Various guests joined S1 "on stage" to discuss security and IT topics. Another observed event was Shakespeare-themed and entitled "To Send or Not to Send." The impetus for the theme stemmed from recent security issues observed within the agency in which employees were sending personally identifying information to their personal email accounts:
_"We were brainstorming, and someone said, 'Oh, to send or not to send'...They're not_
_paying attention to emails, they're not paying attention to posters. Maybe we'll do a little_
_show about it and put our own cybersecurity spin on it" (S3)._
Attendees were given a playbill containing the performance script and security tips (see excerpts in Figure 1) and were offered a bag of popcorn. During the performance, S1 donned a Shakespearean-era hat and proceeded to read through three "acts" based on popular Shakespeare plays but re-written by the team to incorporate security themes.
Although the team was not afraid to try new approaches, _"not everything works" (S2)_. The team members described a lunchtime event in which they designed a "cyber passport" for attendees:
_"Everyone...could go and get a signature from each table [on the passport]...and put it_
_in the box to win a gift card. And we thought it would be great, but it was just hard_
_logistic-wise...and then a lot of people are kind of like 'Oh, I don't know I want to do_
_this.' And when they found out it was a gift card to the cafeteria, they're like 'Oh, I don't want that.'...Not many people were filling them out and putting them in the box. So, that_
_idea kind of fizzled" (S3)._
Less-successful events were viewed as learning experiences as they provided valuable
information about workforce preferences.
#### 5.2.3 Providing Practical Recommendations and Tools
Empowerment involves providing actionable steps, increasing feelings of self-efficacy, and encouraging individuals to be more reflective about their security behaviors. The team felt that bringing awareness of security threats was important but did not necessarily lead to behavior change. Therefore, people needed practical, actionable steps to counter threats and protect themselves and their organization. As expressed by S1, _"Cybersecurity awareness is not always 'The bad guys are coming to get you,' but 'Here are some better tools to use'."_ S1 believed that people could take small steps that have a large, long-term impact: "_You need to just be aware of the little things you can do to protect yourself from the little things that are going to happen that are going to end up being a big pain" (S1)_. An employee commented on the synergy between bringing awareness and encouraging action:
_"The more we know about it [security] and the more we know people that we can reach out to that can help us if we have a question about it, I think that can make you feel more empowered and more comfortable in doing it the right way and protecting yourself as well" (E3)._

Figure 1: “To send or not to send” lunchtime event handout excerpts. Original handout has been modified to anonymize the agency.
The team tried to ensure awareness information included recommendations that were achievable given employees' skillsets, described in terms they understand, and were accompanied by points of contact for more information. Recommendations were often offered by event speakers and in security-related handouts provided at all events. For example, at the lunchtime "To Send or Not to Send" event, after each act, the team lead described the security issue in plain language and offered concrete steps attendees could take. The accompanying handout also included a list of agency resources for more information.
### Measuring Success
Gauging the impact of security awareness training on workforce attitude and behavior change was essential for determining program effectiveness. The development of meaningful measures of effectiveness could be non-trivial but was still a goal for S1, who viewed compliance and effectiveness metrics as being complementary: _"I'm coming up with different ways of measuring how we're making that impact as well as making sure we're hitting all the right checkboxes for compliance. So, it's kind of a balancing act" (S1)_. Compliance served as a once-a-year indicator of coverage across the entire workforce while effectiveness measures provided continuous evaluation. Towards this ongoing assessment, the team utilized a combination of quantitative and qualitative measures.
#### 5.3.1 Compliance Metrics
Compliance metrics revealed how many employees fulfilled their mandated annual security awareness training. These metrics were reported to agency leadership and government oversight organizations. Compliance, although sometimes deceiving, was one indicator of progress as expressed by the team lead: _"Seeing our training numbers, meeting our goals for the training numbers, even though that's just compliance, it's still showing that people realize that they have to take this, start making that awareness" (S1)_. Not meeting compliance goals could be disappointing. The team lead was upset when recent training compliance numbers were lower than desired due to an issue with employee rosters not being updated.
#### 5.3.2 Event Attendance
Event attendance was an indicator of program popularity: _"I think we're starting to make a difference. I can see that by the numbers of people who come to our events and look forward to it" (S1)_. Although saying little about the impact, attendance was often the only immediately available measure. Since taking over the program, the team had seen an increase in attendance. Referring to security days, S1 compared the trend of 200-300+ attendees in the most recent three years with attendance prior to the team becoming involved: _"We've also seen some after action reports from some of the other cybersecurity events that took place...They were excited to get 50 people."_ In addition to event attendance, there was an increase in the number of people who viewed online posted content after the event. Whether because of improvements in quality or better marketing of events, the team and their leadership interpreted these numbers as a positive sign.
Event attendance was not just assessed by number of attendees but also by who attended as an indicator of reach across the agency. The team had observed an increasingly diverse audience beyond
those with security roles. However, there was still a large portion of the agency population who chose not to attend events.
To increase reach, the team was considering new approaches. Because employees may be reluctant to step away from work to attend events, ideas for future improvement centered on the team visiting staff. M2 suggested that the team consider occasionally attending group meetings to talk about security. Because of thin resources, M2 also discussed the possibility of having security "ambassadors" within each mission organization. These ambassadors could serve as an extension of the security awareness team by passing on security tips and reminders to coworkers.
#### 5.3.3 Employee Feedback
Employee feedback was another way in which the team gauged the effectiveness of their program. After every security day and security officer forum, the team invited attendees to complete an anonymous, post-event survey. Although results may be subject to self-selection bias, surveys provided a useful mechanism to obtain feedback. Respondents rated the quality of topics and presentations, event organization, and communications and could provide additional feedback and suggestions for future events. Overall, survey respondents viewed the events positively. Out of 43 respondents in two security officer forum surveys, 93% rated the quality of speakers and topics as above average or excellent. Out of 109 respondents in three security day event surveys, 95% rated the quality as above average or excellent. For the second security day we observed, the survey additionally asked why people attended the event. Out of the 21 respondents, two-thirds indicated that they wanted to earn training credit, while over 85% said that the advertised talks looked interesting. Just over half attended because they thought the information would be helpful in their jobs. These reasons provided insight into employee motivations for attending and their perceived value of the events.
In addition to survey responses, the team often received feedback directly from employees (face-to-face, email). Although not quantifiably measurable, direct feedback provided anecdotal evidence that security awareness was shifting security attitudes and generating interest. Staff members told S3 how much they enjoyed past events: _"We were still hearing things about speakers we had a couple years ago, and people saying, 'Oh can we get them back, can we get someone similar?'...The fact that we still hear feedback months and years later is very rewarding" (S3)_. Even managers in the team's chain of command received feedback: _"I'm getting a lot more positive feedback from the agency, from the user community, than I had in the past. And before, when we thought not getting negative feedback was good, getting positive feedback is even better" (M1)_.
Our interviews with agency employees also shed light on the program's impact. Several employees with security roles commented on the value of information presented at the security officer forums, with an IT specialist mentioning that one talk _"made me think more about my system... [It] was something that I could apply to my own systems going forward" (E7)_. Over half the participants discussed the personal impact of awareness efforts related to phishing and appropriate email behaviors. An employee commented, _"I didn't have that awareness before... I am much more educated now than I used to be... That by itself is a big deal" (E4)_. Others demonstrated that their awareness also translated into action, for example, being empowered to report potential phishing emails. An attorney remarked on how agency security awareness efforts led to increased vigilance:
_"You can get messages from the agency, and I go like, 'Is this real?' And I send it to the cybersecurity team to say, 'Am I supposed to be answering this?'So, basically, as a result of my training here, I am very, very suspicious of everything" (E5)_.
#### 5.3.4 User-Generated Incidents and Reporting
Trends in employee-involved security incidents served as additional evidence that the workforce was becoming more security-aware: _"Are they [incidents] going up? Are they coming down? Are we seeing more people going to the training and does that mean we're seeing fewer events?" (S1)_. For example, indicators of behavior change might include decreases in the number of incidents of badges being left unattended in computers or fewer incidents of employees emailing unencrypted personally identifiable information.
A decrease in phishing exercise click rates (number of employees clicking on the phishing email link) and number of repeat clickers (those who click on phishes in two or more consecutive exercises) also provided useful information. When click rates dipped lower, S1 wondered if this was because the phishes were too easy or if the security culture was improving. To test this, the team increased the sophistication of phishes sent during quarterly exercises. The subsequent click rates remained low, implying that employees were indeed informed about phishing and making sound decisions.
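The click-rate and repeat-clicker statistics referenced here are simple to compute from per-exercise logs. The sketch below is purely illustrative (Python of our choosing; the agency's contractor tooling is not described at this level of detail), with hypothetical employee IDs and the two-consecutive-exercises definition of a repeat clicker used above.

```python
def phishing_metrics(exercises):
    """Click rate per exercise and the set of repeat clickers.

    `exercises` is a chronologically ordered list of dicts, one per
    quarterly exercise: {"recipients": set of employee IDs,
    "clickers": set of employee IDs who clicked the phish link}.
    A repeat clicker is anyone who clicked in two or more consecutive
    exercises, matching the definition used in the case study.
    """
    click_rates = [len(ex["clickers"]) / len(ex["recipients"])
                   for ex in exercises]
    repeat_clickers = set()
    for prev, curr in zip(exercises, exercises[1:]):
        repeat_clickers |= prev["clickers"] & curr["clickers"]
    return click_rates, repeat_clickers

# Hypothetical data for two quarterly exercises.
q1 = {"recipients": {1, 2, 3, 4, 5}, "clickers": {2, 5}}
q2 = {"recipients": {1, 2, 3, 4, 5}, "clickers": {5}}
rates, repeats = phishing_metrics([q1, q2])
print(rates)    # [0.4, 0.2] -- a declining click rate
print(repeats)  # {5} -- clicked in two consecutive exercises
```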
Employee reporting of security issues was also viewed as a success indicator. When asked what measures of effectiveness were most meaningful, the CISO commented, _"Once upon a time, I would have said fewer incidents. Now, I'm saying more better-reported incidents. So, people are recognizing that they've done something, recognizing that there's a problem" (M1)_. For example, the agency saw an uptick in phishing reporting for both phishing exercises and suspected real-world phishing emails.
S1 wanted to take a more holistic approach to measuring program effectiveness. S1 expressed the desire to work with security staff to examine agency security data (e.g., logs, incident reporting, real-world phishing emails) and physical security data (e.g., badge incidents) to explore potential relationships to awareness efforts and identify areas where new or different training might be beneficial. Phishing was discussed as an exemplar of how a holistic approach might work:
_"We're trying to...be able to tie in together the people who take their training to the people who get caught with phishing exercises, the people who are really getting caught with phishing exercises with people who are losing their badges to people who send out information they shouldn't to see what's the correlation here. Are these people just too busy? Are they not paying attention? Is there a training problem?" (S1)_.
Unfortunately, despite having solid relationships with other agency security groups, the team had not been able to make much progress on these goals because of limited resources.
#### 5.3.5 Evidence of Leadership Support
Another indicator of success was the increase of support from senior leaders. S1 believed, _"If we can get the leadership to look forward to what we're doing and show some interest rather than just being another line on a report, I think that's very good. I think we're making some progress" (S1)_. In years past, the leadership viewed the program as _"just kind of this small little program. I don't think it had that much spotlight on it" (S3)_. However, that had changed: _"Now I've got the CIO and the deputy CIO sitting in the crowd and watching the whole [event] rather than just showing up to give their opening remarks and then leaving" (S1)_. The interviewed managers were pleased with the progress the team had made. S1's immediate supervisor complimented, _"I think we're doing a fantastic job with the resources we have" (M2)_. S1 also engaged with senior leadership outside their chain of command: _"I also do the senior executive training, and I'm trying to make them more aware of what we're doing and why we're doing it. And so far, I've gotten a lot of positive feedback" (S1)_.
## 6 Discussion
Prior security awareness research studies predominantly explored impacts on the recipients of training or via self-report data. However, our case study goes beyond these to holistically uncover work practices of security awareness professionals by observing awareness activities in action and exploring program transformation over a prolonged time. We further supplement our observations by uniquely synthesizing multiple stakeholder perspectives. In this section, we situate our study in the context of prior literature and describe its real-world implications for security awareness programs.
### 6.1 Progressing Security Awareness Programs
The case study contributes to the research body of knowledge and offers practical implications for organizational security awareness programs.
#### 6.1.1 Advancing Security Awareness Research
Our findings provide observable evidence that validates the challenges of security awareness programs identified in prior self-report surveys and studies. For example, Agency Q's security awareness team sometimes struggled to overcome negative perceptions of awareness training (Bada _et al._, 2019; Alshaikh, 2018) and faced resource issues exhibited by a constrained budget and S1 performing awareness duties part-time (SANS, 2021; Woelk, 2015).
The team attempted to overcome these challenges in a real-world demonstration of the approaches previously recommended (but not often observed) by researchers and industry experts. Approaches included: reinforcing security awareness throughout the year (Bada et al., 2019); tailoring communications to the audience's knowledge and emotional response to security; soliciting and acting upon employee feedback (Alshaikh _et al._, 2018); developing creative and varied ways to engage the workforce (Bauer _et al._, 2017; Abawajy, 2014; Korpela, 2015); collecting a variety of measures of effectiveness (Alshaikh _et al._, 2018; Fertig _et al._, 2020); and gamification (Khando et al., 2021; Hart et al., 2020).
Our case study also confirms and provides real-world examples of the habit, fear, and role value elements identified in Moody et al.'s UMISPC (2018). The team facilitated _habit_ by providing information applicable to both work and home so that good security behaviors become ingrained in daily life. _Fear_ was approached by presenting honest perspectives on security threats while tempering that by building employee self-efficacy via concrete guidance and resources. The team addressed _role values_ by tailoring and making their communications relatable to different employee groups. We further discovered real-world implementation of the three dimensions of security awareness identified by Hansch and Benenson (2014). The team facilitated _perception_ by providing information about specific security threats and _protection_ by providing practical recommendations. _Behaviors_ were reflected to a certain degree by phishing simulation metrics and employee feedback; however, the team acknowledged that they have more work to do to gather and synthesize additional behavior-based measures.
Agency Q's team did little in the way of providing external, positive incentives, such as employee recognitions or competitions, which have been anecdotally cited as helpful in promoting positive security behaviors and attitudes (Haney et al., 2022). However, other researchers found that such rewards had no significant impact on security behavior intentions (Moody et al., 2018). Regarding punishments, the agency suspended accounts of employees who failed to complete annual training. While effective in the short term to achieve compliance, prior work has shown that punishments and excessive fear appeals can be counterproductive in establishing long-term positive attitudes towards security, so should be used carefully (Renaud and Dupuis, 2019).
#### 6.1.2 Practical Implications for Programs
Our findings can serve as a resource to guide the work of security awareness professionals in transforming their own programs. While these approaches may not be suitable for all organizations, our observations afford a unique opportunity to witness how a security awareness team rethinks and progresses their program.
As a basis of comparison for other organizations, we situate this transformation along the five increasing levels of maturity in the SANS Security Awareness Maturity Model (2018):
* Level 1, Non-existent program
* Level 2, Compliance-focused - program focused on meeting annual training requirements
* Level 3, Promoting awareness and behavior change - program includes topics that directly impact the organization's mission; awareness activities conducted continuously throughout the year
* Level 4, Long-term sustainment and culture change - program has processes, adequate resources, leadership support, and positive impact on security culture
* Level 5, Metrics framework - program collects metrics, including behavioral impact, resulting in continuous improvement and demonstration of return on investment
The case study revealed specific examples of how a security awareness program can evolve beyond Level 2 (found in many organizations) by translating the intent of the training policy into organization-relevant approaches aimed at changing attitudes and behaviors of individuals. For example, the team demonstrated meeting the threshold for Level 3 by picking topics relevant to the organization and its employees. The team also distributed security information throughout the year using a variety of engaging formats - informal drop-in events, structured security days, and physical materials - to accommodate differences in employee preferences. The program's use of gamification and humor further attracted attendees to events, embodying security expert Adam Shostack's (2023) view that "Security is very serious stuff, but that doesn't mean that we can't be playful, creative, or engaging as we work."
Through its efforts, the program further demonstrated most elements needed to advance to Level 4 by facilitating shifts in organizational security culture, garnering more leadership support, and starting to gauge behavior change by collecting a variety of measures. However, they still struggled in that some employees did not see how security relates to their jobs.
The agency program had not yet attained the highest maturity level, as the team recognized that there were still challenges and improvement opportunities, largely due to resource shortfalls. They implemented some of the recommended metrics in Chaudhary _et al._'s framework (2022). The team collected accessibility indicators to measure relevance, quality, and reachability via employee feedback surveys and attendance reports of which groups most frequently attended events. Monitoring indicators to gauge organizational support came via surveys, leadership involvement, and informal feedback. They implemented impact indicators in the form of phishing click rates; however, security incidents were not correlated to the extent they would have liked. The team also did not measure sustainability indicators of the impact on organizational policies and program funding over time.
We also note another shortfall that can serve as a lesson learned for other programs: the lack of utilization of established instructional design and learning theories (e.g., Experiential Learning Theory) (Aaltola and Taitto, 2019) and behavioral theories (e.g., Protection Motivation Theory, Theory of Planned Behavior) (Moody et al., 2018; Lebek et al., 2014), which can be valuable in informing awareness approaches. Part of this issue may be attributed to the research-practice gap common to many disciplines. In addition, the lack of involvement of learning experts and communications and marketing staff - whose practices are based on enduring behavioral theories - may put the agency team at a disadvantage.
### 6.2 Progressing the Security Awareness Role
Our findings provide additional insights and understanding of security awareness professionals as a unique work role.
#### 6.2.1 Advancing Research on the Security Awareness Role
Program transformation was facilitated by the security awareness team transitioning from being compliance managers to becoming the change agents, risk communicators, and cybersecurity advocates identified in prior work. We extend prior work that outlined desired knowledge and skills for these professionals by illustrating - through direct observations and multi-stakeholder perspectives - these skills and how they are perceived by others in a real-world context.
The team displayed not just technical acumen but also non-technical, professional skills (e.g., interpersonal skills, listening, communication skills, context awareness) previously identified as necessary to establish trust and credibility with recipients of their communications (Covello, 1997; Slovic, 1987; Rogers, 2003; Nurse et al., 2011; Haney and Lutters, 2021; Haney et al., 2022). For example, the team regularly solicited employee feedback, were viewed as friendly and approachable, and took special pains to communicate in plain language. They also had the multi-disciplinary composition common in advocacy (Haney and Lutters, 2021) and security awareness (Haney et al., 2022) teams. Although only the team lead had a technical background, the skills brought to bear by the other two team members (e.g., interpersonal, creative, planning) contributed to the program's progression and demonstrated the benefit of having diverse skillsets in awareness teams.
#### 6.2.2 Practical Implications for Supporting the Security Awareness Role
Evidence of professional competencies can contribute to industry efforts to formally define a security awareness role within security work role frameworks, including the widely-adopted National Initiative for Cybersecurity Education Workforce Framework for Cybersecurity (NICE Framework) (Petersen _et al._, 2020). The NICE Framework includes Work Roles consisting of tasks, knowledge, and skills. Private and public sector organizations have utilized these Work Roles to hire security workers, build teams to achieve specific objectives, shape career paths, and discover critical workforce gaps (NIST, 2020).
Currently, the Framework lacks a Work Role that adequately captures the duties and necessary knowledge and skills of security awareness professionals (Haney et al., 2022). To remedy this, SANS (2019) proposed a new Work Role called "Security Awareness and Communications Manager," which places less emphasis on technical skills and more on professional skills such as communications, partnering, and project management. Spurred in part by the SANS proposal, our own research, and other community feedback, the NICE Community Council hosted a workshop to discuss the need for a new awareness-focused role (NICE, 2021).
## 7 Conclusion
This paper describes a longitudinal case study of how one government agency's security awareness program has transformed from being merely compliance-focused to a higher degree of maturity involving sustained empowerment and engagement of organizational employees. The discoveries described in this paper contribute to the research body of knowledge on security workers who aim to facilitate security behavior change by providing evidence from a real-world context, validated by the perspectives of multiple stakeholders. In addition, the tactics, professional characteristics, and program progression demonstrated by the agency team can serve as a valuable exemplar and resource for security awareness
professionals in other organizations as well as industry and government groups seeking to better define security work roles.
## Disclaimer
Certain commercial companies or products are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the companies or products identified are necessarily the best available for the purpose.
## Acknowledgements
We would like to thank the security awareness team, office of the Chief Information Officer, and the employees at Agency Q who were so generous with their knowledge and time and afforded us this research opportunity.
|
2307.16385 | Multi-gait Locomotion Planning and Tracking for Tendon-actuated
Terrestrial Soft Robot (TerreSoRo) | The adaptability of soft robots makes them ideal candidates to maneuver
through unstructured environments. However, locomotion challenges arise due to
complexities in modeling the body mechanics, actuation, and robot-environment
dynamics. These factors contribute to the gap between their potential and
actual autonomous field deployment. A closed-loop path planning framework for
soft robot locomotion is critical to close the real-world realization gap. This
paper presents a generic path planning framework applied to TerreSoRo
(Tetra-Limb Terrestrial Soft Robot) with pose feedback. It employs a
gait-based, lattice trajectory planner to facilitate navigation in the presence
of obstacles. The locomotion gaits are synthesized using a data-driven
optimization approach that allows for learning from the environment. The
trajectory planner employs a greedy breadth-first search strategy to obtain a
collision-free trajectory. The synthesized trajectory is a sequence of
rotate-then-translate gait pairs. The control architecture integrates
high-level and low-level controllers with real-time localization (using an
overhead webcam). TerreSoRo successfully navigates environments with obstacles
where path re-planning is performed. To the best of our knowledge, this is the
first instance of real-time, closed-loop path planning of a non-pneumatic soft
robot. | Arun Niddish Mahendran, Caitlin Freeman, Alexander H. Chang, Michael McDougall, Patricio A. Vela, Vishesh Vikas | 2023-07-31T03:26:48Z | http://arxiv.org/abs/2307.16385v1 | # Multi-gait Locomotion Planning and Tracking for Tendon-actuated Terrestrial Soft Robot (TerreSoRo)
###### Abstract
The adaptability of soft robots makes them ideal candidates to maneuver through unstructured environments. However, locomotion challenges arise due to complexities in modeling the body mechanics, actuation, and robot-environment dynamics. These factors contribute to the gap between their potential and actual autonomous field deployment. A closed-loop path planning framework for soft robot locomotion is critical to close the real-world realization gap. This paper presents a generic path planning framework applied to TerreSoRo (Tetra-Limb Terrestrial Soft Robot) with pose feedback. It employs a gait-based, lattice trajectory planner to facilitate navigation in the presence of obstacles. The locomotion gaits are synthesized using a data-driven optimization approach that allows for learning from the environment. The trajectory planner employs a greedy breadth-first search strategy to obtain a collision-free trajectory. The synthesized trajectory is a sequence of rotate-then-translate gait pairs. The control architecture integrates high-level and low-level controllers with real-time localization (using an overhead webcam). TerreSoRo successfully navigates environments with obstacles where path re-planning is performed. To the best of our knowledge, this is the first instance of real-time, closed-loop path planning of a non-pneumatic soft robot.
## I Introduction
Since the advent of the soft McKibben actuator, soft materials have been envisioned to be an integral part of next-generation robots, including for terrestrial environments. This can be attributed to their ability to adapt and interact with the environment. Over the past few decades there has been active research in the discovery of novel soft materials, actuators, and sensors [1, 2]. Correspondingly, there has been advancement in modeling and control techniques for soft systems like manipulators [3]. Additionally, researchers have demonstrated terrestrial locomotion of soft robots using ad-hoc or intuitive gaits [4]. However, in comparison to their rigid counterparts, path planning and navigation of soft robot locomotors are understudied and rarely implemented. This can be primarily attributed to challenges in modeling soft materials, their actuation, and robot-environment interaction. Furthermore, unlike rigid robots, soft terrestrial robots generate lower magnitudes of force when interacting with the environment. This has multiple consequences, including higher variance in locomotive gait displacements (translation and rotation) and higher sensitivity to small environmental changes. The research focus is to mitigate the sources of discrepancy between the potential of soft terrestrial robots and their real-life realization by developing a real-time, closed-loop locomotion controller with localization feedback.
Finite-element modeling and different reduced-order models have been explored by researchers for creating locomotion models [5, 6, 7, 8, 9]. However, the accuracy and predictability of these approaches remain limited. Models for the friction and sliding of soft materials over a substrate have proven inadequate to capture the robot-environment interaction. Additionally, soft robots are sensitive to manufacturing inaccuracies and defects. Consequently, most locomotion control strategies for soft terrestrial robots rely on biomimetic, intuitive approaches, or trial-and-error [4, 10]. More recently, environment-centric, data-driven model-free approaches have been implemented to synthesize gaits [11, 12]. This paper utilizes the gaits synthesized by these approaches as briefly discussed later in the paper.
Tethered and untethered soft locomotors are actuated using various methods, e.g., pneumatic, shape memory alloys (SMAs), dielectric elastomers (DEA), and motor-tendon actuators [4]. While most use open-loop control strategies, closed-loop control has been performed by few researchers. Patterson et al use a reactive strategy to perform closed-loop control of an SMA-actuated soft swimming robot [13]; Liu et al [14] use a reactive planner for control of a pneumatically actuated soft robot with predetermined gaits; Hamill et al [15] perform gait-based path planning using temporal logic; Lu et al [16] apply bidirectional A\({}^{*}\) with a time-varying bounding box. For pose recovery, soft robotics researchers typically rely on expensive motion capture systems, e.g., VICON and Optitrack, which limits their widespread study. This research employs a lattice-based trajectory planner; robot gait models inform the design of controlled trajectories that move the robot from start to goal while avoiding obstacles. It also details an experimental setup that uses two inexpensive overhead webcams and a localization algorithm that compensates for marker occlusion.
Multi-modal motion planning for traditional rigid-body mobile robots typically entails a solution search, through the robot configuration space, for a state sequence (and accompanying control) to accomplish an objective while respecting both task and feasibility constraints. Though articulated mobile robots (e.g. humanoids, multi-legged mechanisms) are often natively described by high-dimensional configuration spaces, planning exploits dimensional-reduction strategies that specialize the search space to modes and configurations relevant to a particular task [17, 18, 19]; these manifest in conjoinments of several graph-based representations that together capture the multi-modal search space, and where sampling approaches may be utilized to construct graphs or graph-to-graph (i.e. mode-to-mode) transitions. Similarly, graph-based representations have been used to synthesize motion plans for vehicles capable of multiple geographically-dependent modalities (e.g. swimming, driving, flying); graphs are synthesized using a sampling-based approach, with edges valued according to modal cost of transport. Dijkstra's algorithm may then be used to identify the optimal multi-modal solution when traveling between locations [20]. For a hyper-redundant snake-like robot, a hybrid-optimization approach employs mixed integer programming with model predictive control (MPC) to guide step climbing; the former enforces a particular sequence of discrete modes to be traversed, while the latter designs reference trajectories within each mode [21]. This work focuses on the TerreSoRo soft mobile robot, capable of several locomotion gaits; trajectory planning and re-planning entail a lattice-based search through the space of possible gait sequences, for guiding the robot to desired goal locations within obstacle-strewn environments and in the face of locomotion uncertainty.
**Contributions.** This research, to the best of our knowledge, is the first instance of real-time path planning and closed-loop control of a non-pneumatic soft robot. The experimental setup involves system integration of high-level (online path planning and offline gait synthesis) control, low-level control (TerreSoRo actuation), and real-time pose estimation that uses a parallel architecture to process visual feedback. Trajectory planning is accomplished using a lattice-based, greedy breadth-first search through the robot's gait control space; motion models of available robot gaits inform the design of controlled locomotion trajectories that move the robot from start to goal while avoiding obstacles. A rotate-then-translate motion control paradigm is adopted to both simplify the procedure and aid tractability of the planning problem. The planned trajectory is re-computed when the position error between the estimated and the actual path exceeds a prescribed threshold.
**Organization.** Sec. II formulates the navigation problem, details the robot, and provides an overview of data-driven gait discovery and selection. Next, Sec. III discusses the closed-loop control framework, architecture and path-planning methodology. Sec. IV contains the experimental setup, methodology, tracking algorithms, and results. Sec. V concludes the paper and discusses future work.
## II Robot Description and Problem Formulation
### _TerreSoRo: Tetra-Limb Terrestrial Soft Robot_
TerreSoRo is a four-limb terrestrial soft robot actuated using motor-tendon actuators (MTAs). The robot is externally powered and carries a low-level controller. The high-level controller and path planner are located off-board on a desktop computer that communicates with the webcam for localization (described in Sec. III). The robot design is the result of topology optimization to allow six identical robots to reconfigure into a sphere, whose details are more fully explored in [22, 23]. As a result, the limbs are designed for complex geometrical curling and not optimized for any particular locomotion modes. The emphasis is on implementing a path planning strategy for individual planar locomotion of the soft robot and does not address reconfigurability.
Robot fabrication involves integration of soft material limbs, control and actuation payload (motors, electronics), and routing of the tendons through the limb as shown in Fig. 1. The modular fabrication process involves mixing two liquid silicone components (Smooth-On Dragon Skin Part A and B - Shore Hardness 20A) degassed in vacuo. The tendon paths are cast by threading a thick wire through the rigid 3D printed mold, as shown in Fig. 1, which is removed upon curing of the cast. The central hub is 3D printed with flexible filament (Shore Hardness 85A) and placed in the mold for casting; the casting is repeated for the other limbs. Rapid curling and uncurling of the flexible limbs (450 ms/transition) is achieved through motor-tendon actuation. Four DC motors with 3D printed PLA spools are placed in the hub and secured using zip ties. Teflon tubing is inserted into the individual fins of each limb to prevent tearing caused by the difference in stiffness between the silicone and the fishing line tendon, Fig. 1. Finally, threaded fishing line attached to the spool is routed through each fin and anchored at the end with a fishing hook. A slip ring is incorporated into the tether connector to reduce any effects of built-up torsion in the tether.
### _Gait Synthesis_
The gaits for TerreSoRo are synthesized (discovered) using an environment-centric framework that discretizes the factors dominating the robot-environment interaction. The procedure of synthesizing locomotion gaits is described briefly.1
Footnote 1: The reader may refer to [11, 12] for detailed analysis, and to [24] for the graph theory terminology and concepts used.
**Environment-centric Framework.** Conceptually, locomotion results from optimization of forces acting at different parts of the body that ultimately effect change in inertia [25].
Fig. 1: The soft robot is cast using a rigid mold and a tendon-path wire. The motors with spools are placed inside the hub.
In that context, _robot states_ are defined as discrete physical states (e.g., postures, shapes) where the forces acting on the robot body in each state are considerably different. _Motion primitives_ refer to the possible transitions between these robot states. These transitions result in motion of the robot (rotation and translation). A weighted digraph is effective in modeling the robot states, motion primitives, resulting motion, and their inter-dependencies. The robot states and motion primitives correspond to the digraph's \(n\) vertices \(V(G)\) and the \(m\) directional edges \(E(G)\), respectively. For TerreSoRo, robot states correspond to permutations of the four limbs being curled (actuated) or uncurled (un-actuated) as illustrated in Fig. 2. For this robot, actuation is binary (on/off) and all possible permutations (states) are statically stable. The weight associated with each edge \(e_{i}\) is the resulting motion of the motion primitive, i.e., the translation \(\mathbf{p}_{i}\in\mathbb{R}^{2\times 1}\) and rotation \(\theta_{i}\) measured in the coordinate system of the initial vertex of the edge. For this discussion, the edge weight \(\mathbf{w}_{i}\) is modeled as a normal distribution with mean \(\mathbf{\mu}_{i}\in\mathbb{R}^{3\times 1}\) and covariance matrix \(\Sigma_{i}\in\mathbb{R}^{3\times 3}\):
\[\mathbf{w}_{i}=\mathcal{N}\left(\mathbf{\mu}_{i},\Sigma_{i}\right),\mathbf{\mu}_{i}(e_{i} )=\begin{bmatrix}\mathbf{p}_{i}\\ \theta_{i}\end{bmatrix},\Sigma_{i}(e_{i})=\begin{bmatrix}\Sigma_{pp}&\Sigma_{ p\theta}\\ \Sigma_{\theta p}&\Sigma_{\theta\theta}\end{bmatrix}. \tag{1}\]
In summary, the edge data are collected into matrices indexed by the edges \(e_{i}\): \(P\) and \(\Theta\) stack the mean translations \(\mathbf{p}_{i}\) and rotations \(\theta_{i}\), while \(S_{p}\) and \(S_{\theta}\) stack the corresponding variances; these matrices appear in the cost functions below.
_Learning of the environment_ is equivalent to learning the graph edge weights. Experimentally, this is achieved by traversing all the edges of the graph without repeating any and recording the resulting motion. This traversal sequence, referred to as the Euler cycle, is repeated multiple (five) times with randomized starting positions and path orders to learn the probabilistic weights of the graph.
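As a concrete illustration, the sketch below builds such a digraph for a four-limb, binary-actuation robot and estimates each edge's \(\mathcal{N}(\mathbf{\mu}_{i},\Sigma_{i})\) weight from repeated Euler-cycle recordings. The single-limb-toggle edge set and the data layout are our assumptions for illustration; the paper does not enumerate the edge set, and the authors' implementation is in MATLAB, not Python.

```python
import itertools
import numpy as np

# Robot states: all 2^4 limb actuation patterns, e.g. (0, 1, 1, 0).
states = list(itertools.product([0, 1], repeat=4))

# Motion primitives: assumed here to be single-limb toggles (an
# illustrative choice; the paper does not enumerate the edge set).
edges = [
    (s, tuple(b ^ (k == i) for k, b in enumerate(s)))
    for s in states for i in range(4)
]

def estimate_edge_weights(samples):
    """Estimate N(mu_i, Sigma_i) for each edge from Euler-cycle data.

    `samples` maps an edge to a list of observed motions, each a
    length-3 vector [dx, dy, dtheta] expressed in the frame of the
    edge's initial state, recorded over repeated randomized cycles.
    """
    weights = {}
    for edge, motions in samples.items():
        m = np.asarray(motions)          # shape (trials, 3)
        mu = m.mean(axis=0)              # mean translation + rotation
        sigma = np.cov(m, rowvar=False)  # 3x3 covariance
        weights[edge] = (mu, sigma)
    return weights

# Hypothetical observations of one edge over five Euler cycles.
e = (states[0], states[1])
demo = {e: [[1.0, 0.1, 0.02], [1.2, 0.0, 0.01], [0.9, 0.2, 0.03],
            [1.1, 0.1, 0.02], [1.0, 0.0, 0.02]]}
print(estimate_edge_weights(demo)[e][0])  # estimated mu for that edge
```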
**Translation and Rotation Gaits.** Locomotion gaits are defined here as simple cycles that are transformation invariant. The transformation invariance principle implies that the distance and rotation of the robot are preserved irrespective of starting vertex as it traverses through all edges of the simple cycle. It has been proven that under this definition, there will exist two types of planar gaits: translation and rotation [12]. The former is the gait whose simple cycle has zero cumulative rotation. The latter corresponds to the simple cycle whose cumulative translation is zero. Hence, we individually optimize for these two types of gaits with different cost functions and constraints.
The _gait library_ comprises the synthesized translation and rotation gaits discovered using the discussed data-driven approach (learning of the graph and searching for optimal gaits). The cost functions \(J_{t}\) and \(J_{\theta}\) linearly weight the locomotion, variance, and gait length while assuming small rotations of the motion primitives.
\[\begin{split} J_{t}(\mathbf{z})&=\left(\alpha_{t}^{T}P+ \beta_{t}S_{p}+\gamma_{t}1_{1\times m}\right)\mathbf{z}\\ J_{\theta}(\mathbf{z})&=\left(\alpha_{\theta}\Theta+ \beta_{\theta}S_{\theta}+\gamma_{\theta}1_{1\times m}\right)\mathbf{z}\end{split} \tag{2}\]
where \(\{\alpha_{t},\beta_{t},\gamma_{t},\alpha_{\theta},\beta_{\theta},\gamma_{ \theta}\}\) are the linear weights, and the binary vector \(\mathbf{z}\in\{0,1\}^{m}\) is the mathematical representation of a gait. Consequently, the gait synthesis is formulated as a Binary Integer Linear Programming (BILP) optimization problem with linear constraints that can be solved using optimization solvers, e.g., MATLAB\({}^{\text{\textregistered}}\). The translation \(\mathbf{z}_{t}\) and rotation \(\mathbf{z}_{\theta}\) gaits are synthesized using
\[\begin{split}\mathbf{z}_{t}&=\min_{\mathbf{z}}J_{t}(\mathbf{z})\quad\text{s.t.}\quad\Theta\mathbf{z}\leq\epsilon_{\theta}\\ \mathbf{z}_{\theta}&=\min_{\mathbf{z}}J_{\theta}(\mathbf{z})\quad\text{s.t.}\quad|P\mathbf{z}|\leq\epsilon_{p}\mathbf{1}\end{split}\tag{3}\]
Gait constraints: \(B\mathbf{z}=0,\ B^{i}\mathbf{z}\leq 1,\ z_{i}\in\{0,1\}\ \forall i,\)
\(\nexists\,\mathbf{z}_{1},\mathbf{z}_{2}\) s.t. \(\mathbf{z}=\mathbf{z}_{1}+\mathbf{z}_{2},\ B\mathbf{z}_{1}=B\mathbf{z}_{2}=0,\)
where \(B\) is the incidence matrix, \(B^{i}\) contains the positive elements of \(B\), and the gait constraints mathematically ensure that the vector \(\mathbf{z}\) encodes a simple cycle.
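The BILP above maps directly onto off-the-shelf integer-programming solvers. A minimal sketch of the translation-gait problem using SciPy's `milp` (our choice; the paper solves it in MATLAB) follows. The function and argument names are ours, the rotation tolerance is written two-sided as \(|\Theta\mathbf{z}|\leq\epsilon_{\theta}\), and the non-decomposability condition is not a single linear constraint, so it is omitted here; in practice it can be enforced by adding a cut and re-solving whenever the returned \(\mathbf{z}\) splits into smaller cycles.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def translation_gait(P, Theta, S_p, B, alpha_t, beta_t, gamma_t, eps_theta):
    """Translation-gait BILP of Eq. (3), minus the non-decomposability cut.

    P: (2, m) mean edge translations; Theta: (m,) mean edge rotations;
    S_p: (m,) translation variances; B: (n, m) incidence matrix;
    alpha_t: (2,) direction weights, chosen negative along the desired
    heading so that minimizing the Eq. (2) cost rewards displacement.
    """
    m = P.shape[1]
    c = alpha_t @ P + beta_t * S_p + gamma_t * np.ones(m)  # Eq. (2) cost
    constraints = [
        LinearConstraint(B, 0, 0),                       # B z = 0: closed walk
        LinearConstraint(np.maximum(B, 0), -np.inf, 1),  # B^i z <= 1: simple
        LinearConstraint(Theta.reshape(1, -1),           # net rotation small
                         -eps_theta, eps_theta),
    ]
    res = milp(c=c, integrality=np.ones(m),              # all-binary z
               bounds=Bounds(0, 1), constraints=constraints)
    return res.x.round().astype(int) if res.success else None
```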
Once the gait library has been created and contains a synthesized rotation gait and translation gait, symmetry of the robot can be assumed to expand the library and improve path-planning capabilities. The permutations of the translation gait w.r.t. each limb are considered for control purposes to achieve 90-degree changes in orientation. Such behavior is observed in biology (e.g., the brittle star) [26], where the animal changes its leading limb to change the direction of its translation. While it is assumed that these permutations will have similar motion in four different directions, each of these gaits is tested to characterize its distinct twist.
### _Problem Statement_
Soft mobile robots, such as TerreSoRo, manipulate their deformable body to accomplish distinctly useful locomotion, relative to more traditional rigid-body robots. In particular, locomotion results from unique interactions between their soft body material composition and the surrounding environment. This allowance comes with distinct challenges; locomotion outcomes are heavily coupled to manufacturing and material variabilities, factors that often are difficult to control. Data-driven techniques are demonstrably effective
Fig. 2: The robot states represented by a four-digit binary number. Each digit corresponds to one of the four limbs and indicates whether the limb is actuated (1, black limb) or un-actuated (0, light-colored limb).
for: (1) discovering useful gaits indigenous to a particular manufacturing instance, and (2) generating motion models that characterize these gaits for planning and control. Progression of these mobile robots toward autonomous field deployment entails an ability to both plan viable trajectories through an environment as well as accomplish some form of feedback-based control to track synthesized plans.
We employ a data-driven approach to synthesize gaits and predictive motion models for TerreSoRo; these inform a lattice-based planner whose gait sequence outputs, and accompanying locomotion trajectories, move TerreSoRo to prescribed goal locations within an obstacle-strewn environment. Feedback takes the form of trajectory re-planning when tracking error exceeds a pre-defined threshold.
## III Methodology, Framework and Algorithm
The control framework architecture comprises a high-level controller in MATLAB® that communicates with the low-level Arduino microprocessor to produce locomotion of the robot, as summarized in Fig. 3. Localization is achieved by analyzing the webcam output and tracking the center of the robot. In the offline state, the Gait Synthesizer uses the motion data from the Euler cycle experiments to build the Gait Library. The gaits therein are then experimentally validated to store the expected motion data to feed into the path planner. In the online state, the experiment world (which maps the obstacles and initial robot pose) and the gait library are used to initialize the path planner. Real-time control is then achieved by comparing the expected pose from the path planner and the instantaneous experiment pose to inform the low-level controller, and re-planning is performed as necessary.
### _Localization_
The feedback to the robot controller plays a critical role in path re-planning. As can be seen in the experimental setup, Fig. 3b, two overhead webcams are used. One webcam is used to record HD video; the other is used for localization and has its properties (e.g., contrast, brightness, etc.) adjusted to facilitate image segmentation of the four neon markers on the robot hub. Both webcam videos are processed in parallel using the MATLAB® Parallel Processing Toolbox™. The "localization core" video is processed through an image mask that highlights the markers located on the robot (blue markers on the orange robot), obstacles (pink), and the target (green cross), as seen in Fig. 3. This video is stored, and a pollable data queue accesses the data for localization at appropriate times. The pose estimation is performed using Arun's method [27], which finds the least-squares solution to the pose using singular value decomposition. Occlusion of the markers by the tether is managed by identifying each marker in the next time frame using the nearest neighbor. The occluded markers are reconstructed using the estimated pose. Camera capture occurs at \(\sim 30\) Hz. The robot pose estimation provides feedback at \(6\) Hz on an Intel® Xeon® E5-1650 v4, as dictated by the computational complexity and the robot's average motion profile.
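Arun's method itself reduces to a small singular-value-decomposition computation. A minimal planar sketch, assuming marker correspondences are already established, is:

```python
import numpy as np

def arun_pose(A, B):
    """Least-squares rigid transform (R, t) such that B ~ R @ A + t.

    A, B: 2 x N matched marker coordinates (model frame, camera frame).
    Implements the SVD solution of Arun et al. [27].
    """
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Four hub markers rotated by 30 degrees and shifted.
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
A = np.array([[0, 1, 1, 0], [0, 0, 1, 1]], float)
B = R_true @ A + np.array([[2.0], [3.0]])
R, t = arun_pose(A, B)
print(np.allclose(R, R_true), t.ravel())
```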
### _Gait Library_
Gaits are synthesized as described in Sec. II-B. For this research, five gaits are chosen - a rotation dominant gait \(R\) and a translation dominant gait \(T_{1}\) with its permutations \(T_{2}\), \(T_{3}\), and \(T_{4}\). The limb actuation patterns are shown in Fig. 4a.
Each gait is run for 120 gait cycles, and the mean locomotion twist \(\xi(\mathbf{p}_{i},\theta_{i})\) is obtained for use by the path planner. The mean translation and rotation for each of these gaits is shown in Fig. 4. These plots highlight a key observation: the translation-dominant gaits \(T_{1}\), \(T_{2}\), \(T_{3}\), and \(T_{4}\) are offset from each other by 90 degrees as expected (due to the limbs being positioned at 90-degree offsets). As the robot is fabricated to be rotationally symmetric, we anticipated that the twist magnitudes of these gaits would be identical; after all, they are the same gait with actuation initiated by a different limb. However, this is not true, and it highlights the sensitivity of soft robots to small manufacturing inaccuracies/non-uniformities. While control
Fig. 3: (a) The control architecture combines offline analysis to generate the Gait Library for online path planning with localization feedback. (b) The experimental setup integrates the high- and low-level controllers, with processing in MATLAB and on the microcontroller.
parameters could potentially be adjusted to reduce these differences in behavior, doing so would be unlikely to eliminate them completely, as the frictional effects appear to dominate. Moreover, soft robots are sensitive to small changes in the environment (e.g., small bumps in the substrate), as suggested by the error bars in Fig. 4b. The data-driven gait discovery and path-planning strategy accommodate these asymmetries and variations by treating each starting-limb option as a distinct gait with individually (experimentally) derived behaviors; feedback-based re-planning additionally serves to mitigate undesirable effects.
### _Trajectory Planning_
Trajectory synthesis is computed on a binary image representation of an obstacle scenario. This input image is transcribed to a grid world cost map, \(C(x,y):\mathbb{R}^{2}\rightarrow\mathbb{R}\), where \((x,y)\) denote grid coordinates and \(C(x,y)\) denotes the cost associated with each grid location. Obstacle locations are characterized by high costs, which fall off radially with distance. A tunable parameter governs how quickly cost decays and is used to effectively dilate obstacles. Grid locations are classified as either _occupied_ or _free_ based on a pre-configured cost threshold. This results in a configuration space suitable for a point representation of the robot. Robot presence at any _occupied_ grid location constitutes an obstacle collision. For a prescribed goal position \(\mathbf{x}_{\text{goal}}\in E(2)\), cost-to-go \(C_{\text{go}}(x,y;\mathbf{x}_{\text{goal}}):\mathbb{R}^{2}\rightarrow\mathbb{R}^{+}\) across the grid world is quickly computed using an expanding wavefront approach. The cost-to-go allows us to discern the relative value of potential trajectory destinations within the world. Gait-based controlled trajectories are then designed within this grid world representation of the locomotion scenario.
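A minimal sketch of the expanding-wavefront cost-to-go computation (assuming a 4-connected grid with unit step cost, which is our simplification):

```python
import numpy as np
from collections import deque

def cost_to_go(occupied, goal):
    """Expanding-wavefront (BFS) cost-to-go over a grid world.

    occupied: 2D bool array, True where the grid cell is an obstacle.
    goal: (row, col) of the goal cell. Unreachable/occupied cells stay inf.
    """
    C = np.full(occupied.shape, np.inf)
    C[goal] = 0.0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < C.shape[0] and 0 <= nc < C.shape[1]
                    and not occupied[nr, nc] and C[nr, nc] == np.inf):
                C[nr, nc] = C[r, c] + 1.0   # one wavefront step outward
                q.append((nr, nc))
    return C

grid = np.zeros((5, 7), dtype=bool)
grid[1:4, 3] = True                          # a wall the wavefront must wrap around
print(cost_to_go(grid, goal=(2, 6)))
```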
**Gait Models.** Trajectory plans adhere to a rotate-then-translate motion paradigm. Synthesis entails searching for a sequence of these rotate-then-translate pairs, in order to move the robot from a starting pose, \(g_{0}\in SE(2)\), to a desired goal location, \(\mathbf{x}_{\text{goal}}\). This conceptually simplifies the trajectory planning problem; planning becomes an iterative process of: **(1)** 'aiming' (i.e. rotation), then **(2)** traveling in that direction (i.e. translation). This procedure terminates when the trajectory plan reaches a pre-defined radius \(\delta_{\text{goal}}\) of the goal position \(\mathbf{x}_{\text{goal}}\).
From the set of gaits that TerreSoRo is able to accomplish, we select a subset to be used in trajectory planning and control. A single gait is selected to accomplish rotationally-dominant motion. We denote this gait as \(R\); its behavior is characterized by a time-averaged body velocity twist \(\xi^{\text{R}}\in\mathfrak{se}(2)\), and a gait periodicity of \(Q_{\text{R}}\) seconds. The remaining \(d\) gaits are characterized by translationally-dominant motion, and denoted as the set \(\{T_{1},T_{2},\ldots,T_{d}\}\). Each translational gait \(T_{i}\) exhibits a time-averaged body velocity twist \(\xi^{\text{T}_{i}}\in\mathfrak{se}(2)\) and gait periodicity \(Q_{\text{T}_{i}}\), where \(i\in\{1\ldots d\}\).
**Trajectory Synthesis.** The planning strategy presented here employs a greedy breadth-first search through the space of possible TerreSoRo gait sequences. The approach successively expands sets of neighboring, collision-free trajectory destinations that may be reached by a single rotation-translation gait sequence. The neighboring destination with the smallest cost-to-go is then selected for subsequent expansion. Fig. 5 illustrates trajectory exploration and synthesis, through an obstacle-strewn environment. This strategy produced feasible trajectories, and the corresponding gait sequences needed to accomplish them, facilitating the robot's intelligent traversal through a variety of obstacle arrangements.
Beginning at an initial expansion pose \(g^{\text{expand}}\in SE(2)\), the planner first samples trajectory end points that may be reached by the rotational profile \(\xi^{\text{R}}\), over durations of \(n_{\text{R}}=1\ldots N_{\text{R}}\) rotational gait periods. \(N_{\text{R}}\in\mathbb{Z}^{+}\) is a fixed limit and computed such that \(\xi^{\text{R}}_{\theta}\cdot N_{\text{R}}<\frac{\pi}{2}\). Reachable robot poses,
Fig. 4: (a) Gait Library comprising five gaits: rotation dominant gait \(R\), and four translation dominant gaits \(\{T_{1},T_{2},T_{3},T_{4}\}\), where the last three gaits are permutations of the synthesized gait \(T_{1}\). (b) Average locomotion of each gait executed for 60 cycles. Arrows indicate the orientation of the robot after every 10 cycles. (c) The average translation and rotation speed of each gait, where the bars indicate the standard deviation.
using the rotational gait \(R\), are denoted \(g_{n_{\text{R}}}^{\text{R}}\in SE(2)\) and expressed relative to the initial expansion pose \(g^{\text{expand}}\). These poses are checked for collisions; if a collision is identified, the pose is discarded and removed from consideration going forward. Beginning from each collision-free \(g_{n_{\text{R}}}^{\text{R}}\), trajectory end points are forward sampled for each translational motion profile \(\xi^{\text{T}_{\text{i}}}\) where \(i=1\ldots d\), and durations of \(n_{\text{T}_{\text{i}}}=1\ldots N_{\text{T}}\) translational gait periods; \(N_{\text{T}}\in\mathbb{Z}^{+}\) is pre-configured and denotes the maximum number of consecutive cycles a translational gait may be run. The poses of these trajectory end points, relative to \(g_{n_{\text{R}}}^{\text{R}}\), are denoted \(g_{n_{\text{T}_{\text{i}}}}^{\text{T}_{\text{i}}}\). Their poses, relative to the initial expansion pose, are computed as \(g_{n_{\text{R}}}^{\text{R}}\cdot g_{n_{\text{T}_{\text{i}}}}^{\text{T}_{\text {i}}}\); their corresponding spatial poses are \(g\big{(}n_{\text{R}},i,n_{\text{T}_{\text{i}}}\big{)}=g^{\text{expand}}\cdot g _{n_{\text{R}}}^{\text{R}}\cdot g_{n_{\text{T}_{\text{i}}}}^{\text{T}_{\text {i}}}\). The trajectory end points, as well as sub-sampled points along each trajectory segment, are tested for collisions; a trajectory expansion is discarded if a collision is detected. Cost-to-go \(C_{\text{go}}\) is evaluated at the \((x,y)\) coordinates associated to each \(g\big{(}n_{\text{R}},i,n_{\text{T}_{\text{i}}}\big{)}\in SE(2)\). The motion sequence, described by \(\{n_{\text{R}},i,n_{\text{T}_{\text{i}}}\}\), and the corresponding trajectory destination \(g\big{(}n_{\text{R}},i,n_{\text{T}_{\text{i}}}\big{)}\) that are associated with the lowest cost-to-go, are selected as the optimal control and subsequent node to be expanded, respectively. This procedure iterates until the selected pose \(g\big{(}n_{\text{R}},i,n_{\text{T}_{\text{i}}}\big{)}\) falls within a threshold distance \(\delta_{\text{goal}}\) of the goal position \(\mathbf{x}_{\text{goal}}\).
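The greedy rotate-then-translate expansion can be summarized in a short sketch, where gait twists are integrated as per-period SE(2) displacements (names, the flat-ground integration, and the toy limits are illustrative):

```python
import numpy as np

def se2(x, y, th):
    """Homogeneous SE(2) matrix from translation (x, y) and heading th."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def expand(g_exp, d_R, d_T, cost_to_go, collides, N_R=6, N_T=10):
    """One greedy rotate-then-translate expansion step.

    g_exp: current SE(2) expansion pose; d_R: per-period displacement of the
    rotation gait R; d_T: list of per-period displacements of the translation
    gaits T_i. Returns the best (n_R, i, n_T) sequence and resulting pose.
    """
    best, best_cost = None, np.inf
    for n_R in range(1, N_R + 1):
        g_rot = g_exp @ np.linalg.matrix_power(d_R, n_R)   # 'aim' first
        if collides(g_rot):
            continue
        for i, d in enumerate(d_T):
            for n_T in range(1, N_T + 1):
                g = g_rot @ np.linalg.matrix_power(d, n_T)  # then translate
                if collides(g):
                    break                                   # discard this expansion
                c = cost_to_go(g[0, 2], g[1, 2])
                if c < best_cost:
                    best, best_cost = ((n_R, i, n_T), g), c
    return best

# Toy usage: drive toward the origin on an obstacle-free plane.
seq, g = expand(se2(5, 5, 0.0), se2(0, 0, 0.2),
                [se2(0.5, 0.0, 0.01), se2(0.0, 0.5, -0.01)],
                cost_to_go=lambda x, y: np.hypot(x, y),
                collides=lambda g: False)
print(seq, g[:2, 2])
```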
### _Path Recalculation_
As observed, the locomotion gaits have both rotation and translation associated with them. As the robot performs gait cycles and switches gaits, the pose error does not always monotonically increase. Generally, the position error reaches some maximum value and then decreases. This observation is illustrated in Fig. 6. Consequently, path recalculation should be performed when the position error exceeds a user-defined error threshold upon completion of a gait sequence and/or at user-defined intervals (i.e., every \(n\) gait cycles).
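As a sketch, the re-planning trigger reduces to a simple predicate (the threshold and interval values below are illustrative, not the tuned experimental settings):

```python
def should_replan(pos_error, at_sequence_end, cycle_count,
                  err_threshold=0.05, replan_every=20):
    """Re-planning trigger: error check at gait-sequence boundaries,
    plus an optional fixed-interval fallback."""
    return ((at_sequence_end and pos_error > err_threshold)
            or (replan_every > 0 and cycle_count % replan_every == 0))

print(should_replan(0.08, True, 13))    # True: error exceeds threshold
print(should_replan(0.02, False, 40))   # True: periodic interval reached
```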
## IV Experiments and Discussion
To validate the closed-loop path planner, we performed experiments with three world scenarios with obstacles: (1) World 1 where the robot has to perform a 'zig' maneuver to go around the obstacle, (2) World 2 where the robot needs to perform a 'zig' and 'zag' motion to avoid obstacles and reach the goal, and (3) World 3 where the robot needs to go between two obstacles to reach the goal. All experiments are performed with the experimental setup and methodology discussed in Sec. III. A rubber garage mat is used as the substrate and paper cutouts are used for the obstacles. The TerreSoRo successfully maneuvers past the obstacles to reach the target position in all three scenarios. The results discussed in this section are visualized in the multimedia attachment to this paper.
World 1 consists of two obstacles where the robot is required to maneuver one obstacle to reach the target. The planner recalculates the path six times before it reaches the goal, as shown in Fig. 6(a). The path is recalculated after the execution of a gait sequence if the position error exceeds a threshold. From the error plot, sudden drops in the error indicate path re-planning. World 2 requires the robot to perform two obstacle-avoiding maneuvers to reach the goal as shown in Fig. 6(b). World 2 also triggers six path recalculations. In both World 1 and World 2, the position error both increases and decreases during the execution of gait sequences; it does not consistently monotonically increase before suddenly dropping at recalculation points. Thus, the robot is allowed to complete the current gait sequence, until the point of a gait switch. If the position error still exceeds the threshold, re-planning occurs.
Fig. 5: Controlled trajectories are computed over a grid world cost map representation of the locomotion scenario, \(C(x,y)\). Obstacle locations are assigned high costs (yellow) that decay with distance; locations far from obstacles entail low cost (dark blue). The trajectory solution (dark red) moves the robot from its starting pose \(g_{0}\) (lower-left) to a prescribed goal position \(\mathbf{x}_{\text{goal}}\) (green ‘X’). An example trajectory search, beginning at the starting pose, is illustrated. Cyan, yellow, light green and orange trajectories depict explored expansions using each translational gait, \(T_{1},T_{2},T_{3},T_{4}\), after initial rotation by gait R. The gait sequence leading to a subsequent pose with the least cost-to-go is selected; subsequent expansion then focuses on this new pose.
Fig. 6: The error between the path-planned and experimental poses of the robot for open-loop paths consisting of 60 gait cycles. The paths include a gait switch from (a) \(T_{1}\) to \(T_{2}\) and (b) \(T_{2}\) to \(T_{1}\) after 30 gait cycles (dashed vertical line). The plots indicate non-monotonic increase in the pose error.
The World 3 scenario explores the ability of TerreSoRo to navigate narrow spaces, Fig. 7c. Here, to perform this delicate maneuver, the path re-planning is triggered after every 20 gait cycles. As observed, the position errors are smaller relative to previous world scenarios. However, this scenario also involves thirty recalculations and does not appear to exploit the potential decreases in the error that were seen in the previous worlds.
## V Conclusion and Future Work
For the first time, this research presents successful closed-loop path planning and obstacle avoidance of the four-limb motor-tendon actuated soft robot TerreSoRo. The experiment uses real-time visual feedback from low-cost webcams to perform localization, while the lattice-based path planner generates collision-free trajectories using a greedy breadth-first approach. As soft robots are more sensitive to factors like changes in the environment and manufacturing uncertainties, the locomotion gaits are synthesized using a data-driven environment-centric framework. Conceptually, this approach discretizes the factors dominating the environment-robot interaction and synthesizes the tracked motion of these interactions to find translation and rotation gaits. The path planner generates sequences of gaits that are pairs of
Fig. 7: Example implementations of the closed loop path planner for (a) World 1, (b) World 2, and (c) World 3. The images on the left show the starting and ending robot poses and its path (cyan) that avoids the obstacles (red) to reach the acceptance boundary (circle) around the target (green ‘X’). The plots in the middle show the same experimental path of the robot (blue) compared to the planned path (maroon). The target is marked as a green ‘X’ with a green acceptance circle boundary around it. This plot also marks the moments of gait switches (blue dots), path recalculations (red), and planned paths (dotted pink) that are abandoned when re-planning occurs. The plots on the right show the position and orientation errors for these paths. Sudden corrections in the error indicate re-planning.
rotation-then-translation. The synthesized gaits for the TerreSoRo have coupled translation and rotation. Consequently, the path is recalculated when the position error exceeds a threshold after completion of a gait sequence, or at a user-defined interval. The framework is validated on complex world scenarios with obstacles that require the robot to perform challenging maneuvers. The non-uniform nature of the surface/environment and the intentional slipping action of the robot limbs further complicate this challenge.
Future work of this research involves extending the framework to a larger gait library that allows for locomotion on different surfaces. Furthermore, work will be done to adapt the path planner to incorporate the probabilistic nature of locomotion gaits.
|
2309.09709 | CATR: Combinatorial-Dependence Audio-Queried Transformer for
Audio-Visual Video Segmentation | Audio-visual video segmentation~(AVVS) aims to generate pixel-level maps of
sound-producing objects within image frames and ensure the maps faithfully
adhere to the given audio, such as identifying and segmenting a singing person
in a video. However, existing methods exhibit two limitations: 1) they address
video temporal features and audio-visual interactive features separately,
disregarding the inherent spatial-temporal dependence of combined audio and
video, and 2) they inadequately introduce audio constraints and object-level
information during the decoding stage, resulting in segmentation outcomes that
fail to comply with audio directives. To tackle these issues, we propose a
decoupled audio-video transformer that combines audio and video features from
their respective temporal and spatial dimensions, capturing their combined
dependence. To optimize memory consumption, we design a block, which, when
stacked, enables capturing audio-visual fine-grained combinatorial-dependence
in a memory-efficient manner. Additionally, we introduce audio-constrained
queries during the decoding phase. These queries contain rich object-level
information, ensuring the decoded mask adheres to the sounds. Experimental
results confirm our approach's effectiveness, with our framework achieving a
new SOTA performance on all three datasets using two backbones. The code is
available at \url{https://github.com/aspirinone/CATR.github.io} | Kexin Li, Zongxin Yang, Lei Chen, Yi Yang, Jun Xiao | 2023-09-18T12:24:02Z | http://arxiv.org/abs/2309.09709v2 | # CATR: Combinatorial-Dependence Audio-Queried Transformer for Audio-Visual Video Segmentation
###### Abstract.
Audio-visual video segmentation (AVVS) aims to generate pixel-level maps of sound-producing objects within image frames and ensure the maps faithfully adhere to the given audio, such as identifying and segmenting a singing person in a video. However, existing methods exhibit two limitations: 1) they address video temporal features and audio-visual interactive features separately, disregarding the inherent spatial-temporal dependence of combined audio and video, and 2) they inadequately introduce audio constraints and object-level information during the decoding stage, resulting in segmentation outcomes that fail to comply with audio directives. To tackle these issues, we propose a decoupled audio-video transformer that combines audio and video features from their respective temporal and spatial dimensions, capturing their combined dependence. To optimize memory consumption, we design a block, which, when stacked, enables capturing audio-visual fine-grained combinatorial-dependence in a memory-efficient manner. Additionally, we introduce audio-constrained queries during the decoding phase. These queries contain rich object-level information, ensuring the decoded mask adheres to the sounds. Experimental results confirm our approach's effectiveness, with our framework achieving a new SOTA performance on all three datasets using two backbones. The code is available at [https://github.com/aspirinone/CATR.github.io](https://github.com/aspirinone/CATR.github.io).
Combinatorial-Dependence; Audio-Constrained Queries; Blockwise-Encoded Gate
Firstly, existing methods address video temporal features and audio-visual interactions separately, which constrains their effectiveness. Various combinations of audio and video exhibit unique spatial-temporal dependencies, contributing to more accurate and robust results. Thus, a method that captures the spatial-temporal characteristics of audio and video in combination is essential.
Secondly, **Object-Limited Queryless Decoding.** Previous methods typically derive the final mask directly after decoding video features, as exemplified by the use of an FCN decoder [54]. This approach neglects audio guidance information and omits object-level information during the decoding stage, potentially leading to segmentation errors in complex environments. For instance, in Figure 1 (b), the second frame of the video contains a violin, a piano, and people simultaneously. With audio containing only violin sounds and human singing, the target segmentation objects should be the person and the violin. However, due to the absence of audio constraints during the decoding phase, previous methods may erroneously segment the piano, influenced by the video's focus on the front and back frames. Consequently, it is essential to introduce audio restrictions and provide object-level guidance information during the decoding phase.
To address the above limitations, we designed targeted modules:
(1) **Combinatorial-Dependence Fusion.** To comprehensively assess audio-visual combinatorial-dependence, we design to combine audio and video features from their respective temporal and spatial dimensions, followed by capturing this combination's spatial-temporal dependence. Commonly, transformers are used to capture temporal dependencies; assuming a video frame dimension of \(H\times W\), the merged feature dimension becomes (\(H\times W+T\)). However, due to the substantial memory consumption associated with this encoder, we propose an innovative decoupling transformer that considerably reduces memory usage while allowing the extraction of spatial-temporal interaction information between audio-audio, video-video, audio-video, and video-audio combinations.
(2) **Object-Aware Audio-Queried Decoding.** To enable attention focus on the object of interest, we propose an audio-queried decoder. Specifically, we apply an audio constraint to all object queries, allowing the model to leverage audio information to direct attention toward the desired object. These conditional queries serve as inputs for the model, which produces object-aware dynamic kernels to filter segmentation masks from feature maps.
On the whole, we propose a **C**ombinatorial-Dependence **A**udio-Queried **TR**ansformer network (CATR; Figure 2), which contains two main components: the Decoupled Audio-Visual Transformer Encoding Module (DAVT; detailed in Section 4) and the Audio-Queried Decoding Module (detailed in Section 4.1). In encoding, we design an innovative decoupling block, which consists of two steps: initially, we merge audio and video features of corresponding frames while concurrently capturing their temporal information. Subsequently, we facilitate interaction between video features containing temporal information and audio features. By stacking decoupling blocks, we can efficiently capture audio-visual spatial-temporal correlations in a memory-efficient manner. In addition, the Audio-Queried Decoding Module innovatively employs an audio constraint on all object queries to produce object-aware dynamic kernels that filter the segmentation of the desired object. Moreover, we design a Blockwise-Encoded Gate to utilize all the features extracted from each encoder block. This gate enables modeling of the overall distribution of all encoder blocks from a global perspective, thereby balancing the contributions of different encoder blocks.
We conduct extensive experiments on three popular benchmarks and achieve new state-of-the-art performance on all datasets with two backbones (On S4, CATR 84.4% \(\mathcal{J}\) / 91.3% \(\mathcal{F}\) vs. TPAVI 78.7%
Figure 1. AVVS Task Description and CATR Contributions. The objective of the Audio-Visual Video Segmentation (AVVS) task is to generate pixel-level maps identifying sound-producing objects within image frames (a). Previous approaches separately addressed the temporal dependencies of video and the audio-video interaction information (b), neglecting the unique spatial-temporal dependencies inherent to audio and video as a combination. CATR initially merges audio and video features, subsequently capturing the spatial-temporal dependencies of this combination. Note that the red arrow symbolizes the Video-to-Audio (V-to-A) information, while the blue arrow denotes the Audio-to-Video (A-to-V) information. Additionally, we introduce innovative audio-constrained learnable queries to enhance object-aware segmentation (c).
\(\mathcal{J}\) / 87.9% \(\mathcal{F}\); On M3, CATR 61.8% \(\mathcal{J}\) / 71% \(\mathcal{F}\) vs. TPAVI 54% \(\mathcal{J}\) / 64.5% \(\mathcal{F}\)). Our code and benchmark will be released.
Overall, our contributions are summarized as follows:
* We introduce an encoding-decoding framework CATR that presents a novel spatial-temporal audio-video fusion block to fully consider the audio-visual combinatorial dependence in a decoupled and memory-efficient manner.
* We propose audio-constrained learnable queries to incorporate audio information comprehensively during decoding. These audio-constrained queries contain abundant object-level information and select the referred object for segmentation. In addition, we introduce a Blockwise-Encoded Gate that allows for the selective fusion of features from different encoder blocks.
* We conduct extensive experiments on three popular benchmarks and achieve new state-of-the-art performance on all three datasets with two backbones.
## 2. Related Work
### Video Object Segmentation (VOS)
The VOS task (Wang et al., 2017; Wang et al., 2018) aims to segment the object of interest throughout the entire video sequence. It is divided into two settings: semi-supervised and unsupervised. For semi-supervised VOS (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), the target object is specified by a one-shot mask of the first video frame. As for unsupervised VOS (Wang et al., 2018), all the primary objects must be segmented automatically. Many strong methods have been proposed and achieve impressive segmentation performance; however, these designs are limited to a single visual modality.
### Audio-Visual Video Segmentation (AVVS)
The human ability to identify objects is not solely reliant on visual cues but also on auditory signals. For instance, the distinct sounds of a dog barking or a bird chirping are easily recognizable. This observation underscores the complementarity of audio and visual information. However, while speech-guided video segmentation is a more reliable means of distinguishing instance-level objects, sound can only provide information about object categories, making it challenging to locate and segment the object producing the sound. Zhou et al. (Zhou et al., 2019) pioneered the audio-visual video segmentation (AVVS) task and proposed a framework incorporating the TPAVI module, a groundbreaking approach for achieving pixel-level segmentation using audio information. Nonetheless, their framework's handling of multi-modal feature fusion and audio guidance was inadequate. Thus, we present a novel framework that addresses these limitations.
### Vision Transformers
Transformer (Wang et al., 2018) was first introduced for sequence-to-sequence translation in the natural language processing community and has achieved remarkable success in most computer vision tasks (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) such as object detection (Dong et al., 2018; Wang et al., 2018), tracking (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) and segmentation (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). The Transformer employs an attention mechanism to facilitate the transformation of input into output representations. Building upon this foundation, DETR (Dong et al., 2018) has advanced the field by introducing a learnable query mechanism, which serves to expand the range of output possibilities. By employing an intelligent query and output matching mechanism, DETR is capable of determining the optimal association between input and output elements. Furthermore, VisTR (Wang et al., 2018) extends the capabilities of DETR to the domain of video segmentation, achieving notable advancements. DeAOT (Wang et al., 2018) decouples the visual and identification features in hierarchical propagation (Wang et al., 2018) and achieves state-of-the-art performance in semi-supervised VOS. Inspired by these works, our work also relies on the query mechanism of the Transformer but considers an additional modality, i.e., audio, as the object reference. Moreover, we propose an effective spatial-temporal fusion module to realize audio-guided video segmentation.
## 3. Method
### Overview
Our pipeline for the AVVS task can be formulated as encoding-decoding (depicted in Figure 2). To address limitations in previous methods, such as inadequate correlation and vague reference, we carefully design two modules: the Decoupled Audio-Visual Transformer Encoding Module (DAVT; detailed in Section 4) and the Audio-Queried Decoding Module (detailed in Section 4.1). These modules enable effective audio-visual spatial-temporal connection and capture object-level information to achieve a more explicit reference, respectively. Moreover, we design a Blockwise-Encoded Gate to enable modeling of the overall distribution of all encoder blocks. In addition, CATR outputs a pixel-level map of the object(s) that produce sound at the time of each image frame,
\[\{M_{t}\}_{t=1}^{T}=CATR(S_{t,v},S_{t,a}), \tag{1}\]
where we denote the video sequence as \(S=\{S_{t,v},S_{t,a}\}_{t=1}^{T}\), with \(S_{v}\) denoting the visual sequence and \(S_{a}\) the audio sequence. The predictions are denoted as \(\{M_{t}\}_{t=1}^{T}\), \(M_{t}\in\mathbb{R}^{H\times W}\).
## 4. Decoupled Audio-Visual Transformer
In contrast to existing methods that account for the video temporal information and audio-visual interaction separately, we propose a method that obtains the spatial-temporal combinatorial-dependence between audio-audio, video-video, audio-video, and video-audio pairs in a novel, memory-efficient decoupled manner.
**Stack DAVT Blocks.** To conserve memory, we design the Decoupled Audio-Visual Transformer (DAVT). The DAVT block involves two steps. Initially, we combine the audio and video features of corresponding frames and capture their temporal information simultaneously. Subsequently, the processed video features and audio features interact with each other.
For a video sequence \(S_{v}\), we extract visual features with popular backbones and atrous spatial pyramid pooling (Dong et al., 2018) and obtain hierarchical visual feature maps. We denote the video features as \(F_{v}\in\mathbb{R}^{T\times H\times W\times C}\), where \(T\), \(H\), \(W\), and \(C\) signify the number of frames, height, width, and channels, respectively. Given an audio sequence \(S_{a}\), we employ a convolutional neural network, VGGish (He et al., 2017), pre-trained on AudioSet (He et al., 2017) as the backbone to extract audio features
\(F_{a}\in\mathbb{R}^{T\times d}\), where \(d=128\).
\[F_{v}^{l+1},F_{a}^{l+1}=DAVT(F_{v}^{l},F_{a}^{l}), \tag{2}\]
where \(DAVT(\cdot)\) denotes the Decoupled Audio-Visual Transformer block and \(l\) indexes the block. By stacking multiple DAVT blocks, we can effectively capture the spatial-temporal correlation between audio-audio, video-video, audio-video, and video-audio in a memory-efficient manner.
**Spatial Audio-Visual Fusion.** To obtain the audio-visual overall dependence, the visual features \(F_{v}^{l}\) and audio features \(F_{a}^{l}\) are linearly projected to a shared dimension \(D\). The video features for each frame are flattened and individually merged with the audio embeddings, yielding a set of \(T\) multi-modal sequences, each of shape \((H\times W+1)\times D\).
\[\tilde{F}_{v}^{l},\tilde{F}_{a}^{l}=SF(Concat(F_{v}^{l},F_{a}^{l})), \tag{3}\]
where \(Concat(\cdot)\) denotes the concatenation operation and \(SF(\cdot)\) denotes the spatial audio-visual fusion function, implemented as self-attention. We then obtain the processed video feature \(\tilde{F}_{v}^{l}\), which contains the corresponding frame's audio information; similarly, the audio feature \(\tilde{F}_{a}^{l}\) contains the corresponding frame's video information.
**Temporal A-to-V Fusion.** Employing a transformer-based encoder over the full merged sequence would consume a large amount of memory, so we use a decoupling method to carry out the Audio-to-Video (A-to-V) and Video-to-Audio (V-to-A) interactions separately.
\[\hat{F}_{v}^{l}=TAV(\tilde{F}_{v}^{l},\tilde{F}_{a}^{l})=\text{Softmax}\left(\frac{\tilde{F}_{v}^{l}W^{Q}\cdot\left(\tilde{F}_{a}^{l}W^{K}\right)^{\mathrm{T}}}{\sqrt{d_{\text{head}}}}\right)\tilde{F}_{a}^{l}W^{V}, \tag{4}\]
where \(TAV(\cdot)\) denotes the Temporal Audio-to-Video Fusion, implemented as multi-head attention [36]. In \(TAV(\cdot)\), the query is the processed video feature \(\tilde{F}_{v}^{l}\) and the key is the audio feature \(\tilde{F}_{a}^{l}\). Moreover, \(W^{Q},W^{K},W^{V}\in\mathbb{R}^{C\times d_{\text{head}}}\) are learnable parameters.
**Temporal V-to-A Fusion.** Correspondingly, we also design a Temporal Video-to-Audio Fusion function \(TVA(\cdot)\),
\[\hat{F}_{a}^{l}=TVA(\tilde{F}_{v}^{l},\tilde{F}_{a}^{l})=\text{Softmax}\left(\frac{\tilde{F}_{a}^{l}W^{Q}\cdot\left(\tilde{F}_{v}^{l}W^{K}\right)^{\mathrm{T}}}{\sqrt{d_{\text{head}}}}\right)\tilde{F}_{v}^{l}W^{V}, \tag{5}\]
where \(TVA(\cdot)\) denotes the Temporal Video-to-Audio Fusion, also implemented as multi-head attention. In \(TVA(\cdot)\), the query is the processed audio feature \(\tilde{F}_{a}^{l}\) and the key is the video feature \(\tilde{F}_{v}^{l}\).
After we obtain the video feature \(\hat{F}_{v}^{l}\) from \(TAV(\cdot)\) and the audio feature \(\hat{F}_{a}^{l}\) from \(TVA(\cdot)\), we merge \(\hat{F}_{v}^{l}\) and \(\tilde{F}_{v}^{l}\) by element-wise addition and obtain the fully interacted video feature \(F_{v}^{l+1}\); correspondingly, \(\hat{F}_{a}^{l}\) serves as the audio output \(F_{a}^{l+1}\) of the block.
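As a concrete, much-simplified PyTorch sketch of one decoupled block (Eqs. (3)-(5) plus the final merge; the shapes, the residual merge, and the hyper-parameters are illustrative assumptions):

```python
import torch
import torch.nn as nn

class DAVTBlock(nn.Module):
    """Sketch of one decoupled audio-visual transformer block (Eqs. 3-5)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.tav = nn.MultiheadAttention(dim, heads, batch_first=True)  # A-to-V
        self.tva = nn.MultiheadAttention(dim, heads, batch_first=True)  # V-to-A

    def forward(self, Fv, Fa):
        # Fv: (T, HW, D) flattened video tokens; Fa: (T, D) audio tokens.
        T, HW, D = Fv.shape
        # Spatial fusion: per-frame self-attention over HW video + 1 audio token.
        x = torch.cat([Fv, Fa.unsqueeze(1)], dim=1)        # (T, HW+1, D)
        x, _ = self.spatial(x, x, x)
        Fv_s, Fa_s = x[:, :HW], x[:, HW]                   # split back
        # Temporal A-to-V: each spatial location attends over the T audio tokens.
        q = Fv_s.permute(1, 0, 2)                          # (HW, T, D)
        kv = Fa_s.unsqueeze(0).expand(HW, T, D)
        v_hat, _ = self.tav(q, kv, kv)
        v_hat = v_hat.permute(1, 0, 2)                     # back to (T, HW, D)
        # Temporal V-to-A: audio attends over all video tokens across time.
        a_hat, _ = self.tva(Fa_s.unsqueeze(0), Fv_s.reshape(1, T * HW, D),
                            Fv_s.reshape(1, T * HW, D))
        # Merge: element-wise addition yields the fully interacted video feature.
        return Fv_s + v_hat, a_hat.squeeze(0)

blk = DAVTBlock()
Fv, Fa = torch.randn(4, 64, 256), torch.randn(4, 256)
Fv2, Fa2 = blk(Fv, Fa)
print(Fv2.shape, Fa2.shape)   # torch.Size([4, 64, 256]) torch.Size([4, 256])
```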
**Blockwise-Encoded Gate.** The existing method typically employs the features of the last encoder block alone as the decoder input, which is insufficient because the features of each encoder block contain varying degrees of multi-modal interaction information (see
Figure 2. CATR employs an encoder-decoder structure. (a) In encoding, we merge audio and video features and capture their spatial-temporal combinatorial-dependencies. To conserve memory, we devise decoupling methods, utilizing temporal A-V and temporal V-A attention to fuse audio and video features. (b) To balance the contributions of multiple encoder blocks, we implement a blockwise gating method that utilizes all video features from each block. (c) In decoding, we introduce audio-constrained learnable queries, which harness audio features to extract object-level information, guiding target object segmentation.
Figure 4). Thus, we design gate mechanisms to utilize all the features extracted from each encoder block and balance the contributions of different encoder blocks.
Given two video features \(\tilde{F}_{v}^{l}\) and \(\tilde{F}_{v}^{l+1}\) from different spatial-temporal encoding blocks, we design a gate unit in which \(G^{l+1}\) denotes the \((l+1)\)-th gate vector:
\[\begin{split} G^{l+1}&=Pool(Sigmoid(Conv(Concat( \tilde{F}_{v}^{l},\tilde{F}_{v}^{l+1})))),\\ F_{v}^{final}&=Conv(G^{l}\cdot\tilde{F}_{v}^{l}+G^{l+1 }\cdot\tilde{F}_{v}^{l+1}),\end{split} \tag{6}\]
where \(Concat(\cdot)\) denotes the concatenation operation, \(Conv(\cdot)\) denotes a convolution layer, \(Sigmoid(\cdot)\) denotes the sigmoid function, and \(Pool(\cdot)\) denotes global average pooling. The output channel of \(Conv(\cdot)\) is \(C\), so the resulting gate vector \(G^{l+1}\) has \(C\) elements corresponding to \(C\) gate values (we set \(C=256\) here).
The gate values \(G^{l}\) and \(G^{l+1}\) are applied to weight the video features \(\tilde{F}_{v}^{l}\) and \(\tilde{F}_{v}^{l+1}\) from different blocks. To obtain the final video encoding feature \(F_{v}^{final}\), we fuse all the re-weighted features by element-wise addition and convolutional layers.
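A minimal PyTorch sketch of the gate in Eq. (6) for two encoder blocks; for brevity a single gate weighs the two blocks complementarily, which is a simplification of the per-block gates \(G^{l}\) and \(G^{l+1}\):

```python
import torch
import torch.nn as nn

class BlockwiseGate(nn.Module):
    """Sketch of the Blockwise-Encoded Gate (Eq. 6) for two encoder blocks."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate_conv = nn.Conv2d(2 * dim, dim, kernel_size=1)
        self.out_conv = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, f_l, f_l1):
        # f_l, f_l1: (B, D, H, W) video features from blocks l and l+1.
        g = torch.sigmoid(self.gate_conv(torch.cat([f_l, f_l1], dim=1)))
        g = g.mean(dim=(2, 3), keepdim=True)   # global average pool -> (B, D, 1, 1)
        # Complementary weights re-balance the two blocks' contributions.
        return self.out_conv((1 - g) * f_l + g * f_l1)

gate = BlockwiseGate()
f1, f2 = torch.randn(2, 256, 28, 28), torch.randn(2, 256, 28, 28)
print(gate(f1, f2).shape)   # torch.Size([2, 256, 28, 28])
```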
### Audio-Queired Decoding
The existing methods fall short in effectively capturing object-level details and offering explicit information for cross-modal reasoning. To overcome this limitation, we propose audio-constrained queries, which impose an audio constraint on all object queries and generate object-aware dynamic kernels that filter target object segmentation masks from feature maps. Our approach aims to provide a comprehensive solution that enhances object recognition by incorporating audio signals into the process.
**Audio-Constrained Query.** We hierarchically fuse the final video feature \(F_{v}^{final}\) and the multi-layer features from the backbone with an FPN-like (He et al., 2017) decoder; we then obtain the semantically-rich video feature maps \(F_{seg}=\{f_{seg,t}\}_{t=1}^{T}\), where \(f_{seg,t}\in\mathbb{R}^{\frac{H}{4}\times\frac{W}{4}\times C}\).
To capture the object-level information comprehensively, we devise a set of \(N\) learnable queries. These queries, along with the audio feature, are fed into the decoder embedding and position embedding of the transformer, yielding queries with abundant object-level information. Next, a two-layer kernel head \(\mathcal{G}_{\text{kernel}}\) generates corresponding dynamic kernels for each query. Finally, the binary masks are generated by dynamic convolution:
\[M_{i}=\{F_{seg}*\omega_{i}\}_{i=1}^{N}, \tag{7}\]
where \(M_{i}\in\mathbb{R}^{\frac{H}{4}\times\frac{W}{4}}\) denotes the segmentation mask of the \(i\)-th query, and \(\omega_{i}\) and \(F_{seg}\) denote the \(i\)-th dynamic kernel weights and its exclusive feature map, respectively.
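The dynamic convolution of Eq. (7) amounts to applying each query's predicted kernel as per-query \(1\times 1\) convolution weights over the shared feature map. A single-layer sketch (the paper uses two-layer kernels):

```python
import torch

def dynamic_masks(F_seg, kernels):
    """Filter per-query masks from a shared feature map (cf. Eq. 7).

    F_seg: (T, D, H, W) decoded feature maps; kernels: (N, D) dynamic
    kernel weights omega_i produced from the audio-constrained queries.
    Returns (N, T, H, W) mask logits, one sequence per query.
    """
    # Each query's kernel acts as a 1x1 convolution over the D channels.
    return torch.einsum('tdhw,nd->nthw', F_seg, kernels)

F_seg = torch.randn(5, 256, 56, 56)      # T = 5 frames
kernels = torch.randn(50, 256)           # N = 50 audio-constrained queries
masks = dynamic_masks(F_seg, kernels)
print(masks.shape)                       # torch.Size([50, 5, 56, 56])
```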
**Query Matching.** The aim of query matching is to determine which of the predicted sequences best fits the referred object. Here, we denote each ground-truth sequence as \(y=(M,R)=(\{M_{t}\}_{t=1}^{T},\{R_{t}\}_{t=1}^{T})\), where \(M\) denotes the ground-truth masks and \(R_{t}\) is a probability scalar indicating whether the instance corresponds to the referred object and whether this object is visible in the current frame. In addition, we denote the prediction set as \(\hat{y}=\{\hat{y}_{i}\}_{i=1}^{N}\), where \(\hat{y}_{i}=(\{\hat{M}_{i,t}\}_{t=1}^{T},\{\hat{R}_{i,t}\}_{t=1}^{T})\).
To find the best prediction from \(N\) conditional queries, we use a reference head \(\mathcal{G}_{\text{Ref}}\), which consists of a single linear layer followed by a softmax layer. Then we obtain the positive sample by minimizing the matching cost:
\[\begin{split}\hat{y}_{\text{pos}}&=\operatorname*{ arg\,min}_{\hat{y}_{i}\in\hat{y}}\mathcal{C}_{\text{match}}\ \left(y,\hat{y}_{i}\right),\\ \mathcal{C}_{\text{match}}\ \left(y,\hat{y}_{i}\right)&= \mathcal{C}_{\text{dice}}\left(M,\hat{M}_{i}\right)+\mathcal{C}_{\text{ref}} \left(R,\hat{R}_{i}\right)\end{split} \tag{8}\]
where \(\hat{y}_{\text{pos}}\) denotes the prediction among the \(N\) conditional queries with the lowest total cost. \(\mathcal{C}_{\text{dice}}\) evaluates the predicted mask sequence against the ground-truth mask sequence via the Dice coefficient (He et al., 2017), and \(\mathcal{C}_{\text{ref}}\) utilizes cross-entropy to guide the reference predictions, aligning them with the corresponding ground-truth reference identity.
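A sketch of the matching step in Eq. (8): compute the Dice and reference costs of each of the \(N\) predicted sequences against the single ground-truth sequence and keep the argmin (the shapes and the binary-visibility treatment are illustrative):

```python
import torch
import torch.nn.functional as F

def dice_cost(gt_masks, pred_masks, eps=1e-6):
    """Sequence-level Dice cost between ground truth (T,H,W) and (N,T,H,W) logits."""
    p = pred_masks.sigmoid().flatten(1)                # (N, T*H*W)
    g = gt_masks.flatten().unsqueeze(0)                # (1, T*H*W)
    inter = (p * g).sum(-1)
    return 1 - (2 * inter + eps) / (p.sum(-1) + g.sum(-1) + eps)

def match(gt_masks, gt_ref, pred_masks, pred_ref):
    """Pick the best of the N conditional queries (cf. Eq. 8)."""
    c_dice = dice_cost(gt_masks, pred_masks)           # (N,)
    # Reference cost: cross-entropy between per-frame visibility logits
    # pred_ref (N, T) and the ground-truth indicator gt_ref (T,).
    c_ref = F.binary_cross_entropy_with_logits(
        pred_ref, gt_ref.expand_as(pred_ref), reduction='none').mean(-1)
    return int(torch.argmin(c_dice + c_ref))

gt_m, gt_r = (torch.rand(5, 56, 56) > 0.5).float(), torch.ones(5)
pos = match(gt_m, gt_r, torch.randn(50, 5, 56, 56), torch.randn(50, 5))
print('matched query index:', pos)
```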
### Loss and Inference
We consider both mask and reference identity, and we define our loss function as follows:
\[\begin{split}&\mathcal{L}(y,\hat{y}_{i})=\mathcal{L}_{\text{Mask}}\left(M,\hat{M}_{i}\right)+\mathcal{L}_{\text{Ref}}\left(R,\hat{R}_{i}\right)\\ &=\lambda_{d}\mathcal{L}_{\text{Dice}}\left(M,\hat{M}_{i}\right)+\lambda_{f}\mathcal{L}_{\text{Focal}}\left(M,\hat{M}_{i}\right)+\lambda_{r}\mathcal{L}_{\text{Ref}}(R,\hat{R}_{i})\end{split} \tag{9}\]
where \(\mathcal{L}_{\text{Mask}}\) ensures mask alignment between the predicted and ground-truth, and \(\mathcal{L}_{\text{Ref}}\) supervises the reference identity predictions. In addition, \(\mathcal{L}_{\text{Mask}}\) is implemented by a combination of the Dice (He et al., 2017) and the per-pixel Focal (He et al., 2017) loss functions, and \(\mathcal{L}_{\text{Ref}}\) is implemented by a cross-entropy term.
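For completeness, a sketch of the training loss in Eq. (9), combining Dice, per-pixel focal, and reference cross-entropy terms for the matched query (the \(\lambda\) weights are placeholders):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Per-pixel focal loss on mask logits."""
    p = logits.sigmoid()
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = p * targets + (1 - p) * (1 - targets)
    a_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (a_t * (1 - p_t) ** gamma * ce).mean()

def catr_loss(pred_m, gt_m, pred_r, gt_r, lam_d=1.0, lam_f=1.0, lam_r=1.0):
    """Total loss of Eq. (9) for the matched query."""
    p, g = pred_m.sigmoid().flatten(), gt_m.flatten()
    dice = 1 - (2 * (p * g).sum() + 1e-6) / (p.sum() + g.sum() + 1e-6)
    ref = F.binary_cross_entropy_with_logits(pred_r, gt_r)
    return lam_d * dice + lam_f * focal_loss(pred_m, gt_m) + lam_r * ref

loss = catr_loss(torch.randn(5, 56, 56), (torch.rand(5, 56, 56) > 0.5).float(),
                 torch.randn(5), torch.ones(5))
print(float(loss))
```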
For inference, CATR will predict \(N\) object sequences. For each sequence, we obtain the predicted reference probabilities and the reference score set \(P=\{p_{i}\}_{i=1}^{N}\). We select the object sequence with the highest score and its index is denoted as \(R_{pos}\),
\[R_{pos}=\operatorname*{arg\,max}_{i\in\{1,2,\ldots,N\}}p_{i} \tag{10}\]
Finally, we return the final mask \(M_{pos}\) that corresponds to \(R_{pos}\).
Figure 3. Attention maps generated from spatial fusion & temporal A-V/V-A fusions. Sample 1: target is person & piano; spatial fusion focuses on person, neglects piano; with temporal A-V/V-A fusions, the attention map accurately highlights both. Sample 2: target is the piano; spatial fusion wrongly emphasizes person, but temporal A-V/V-A maps correctly focus on piano. Consequently, we draw the conclusion that spatial fusion provides an initial integration of audio information, whereas temporal A-V and V-A fusions further consolidate this information to accurately identify the target object.
## 5. Experiment
### Implementation Details
**Datasets.** We train and validate our model on three datasets: Semi-supervised Single-sound Source Segmentation (S4), Fully-supervised Multiple-sound Source Segmentation (M3), and Fully-supervised Audio-Visual Semantic Segmentation (AVSS). S4 and M3 datasets provide binary segmentation maps identifying the pixels of sounding objects, while the AVSS dataset offers semantic segmentation maps as labels. The S4 dataset contains audio samples with a single target object, supplying ground-truth solely for the initial frame during training. Evaluation necessitates predictions for all video frames in the test set. In contrast, both M3 and AVSS datasets contain audio samples with multiple target objects and furnish ground-truth data for all frames throughout the training phase.
**Training Details.** We conduct training and evaluation on the S4, M3, and AVSS datasets with the ResNet-50 and Pyramid Vision Transformer (PVT-v2) backbones. The channel size of the spatial-temporal encoding module is set to \(C=256\). We use the VGGish model to extract audio features and the Adam optimizer with a learning rate of 1e-5 for the fully-supervised M3 setting and 1e-4 for the semi-supervised S4 and fully-supervised AVSS settings. The batch size is set to 4, and the number of audio-constrained queries is set to 50. We train for 25 epochs on S4 and for 100 epochs on M3 and AVSS.
**Evaluation Metrics.** We use the standard Jaccard index \(\mathcal{J}\) (Krizhevsky et al., 2014) and F-score \(\mathcal{F}\) as evaluation metrics, where \(\mathcal{J}\) and \(\mathcal{F}\) measure region similarity and contour accuracy, respectively. In our experiments, we use \(\mathcal{M}_{\mathcal{J}}\) and \(\mathcal{M}_{\mathcal{F}}\) to denote the mean metric values over the whole dataset.
### Comparison
**Compare with Other Tasks.** Audio-visual video segmentation (AVVS) is a relatively new and emerging task, first introduced by (Wang et al., 2017), which aims to segment target objects in videos based on corresponding sounds. Although some well-established tasks, such as sound source localization (SSL), video object segmentation (VOS) and salient object detection (SOD) can perform video object segmentation, we utilize state-of-the-art methods from these related tasks as a comparative benchmark for our experiments. As evident in Table 1, there exists a significant performance gap between SSL-based methods and our CATR, primarily due to the lack of pixel-level results in SSL. Furthermore, our model demonstrates a clear advantage over video object segmentation (VOS) and salient object detection (SOD) methods on both S4 and M3 datasets. This superior performance can be attributed to the fact that VOS and SOD are single-mode tasks and do not utilize sound information. In summary, the comparison with SOTA methods from related tasks substantiates the exceptional performance of our model in AVVS.
**Compare with SOTA TPAVI.** Our proposed CATR outperforms the previous SOTA TPAVI on all datasets (S4, M3, and AVSS) with two backbones (see Tables 1 and 2). This improvement is due to the integration of the decoupled audio-visual transformer encoding module (DAVT) and the object-aware audio-queried decoding module. The DAVT block captures the combinatorial dependence of audio and video, combining audio and video in the spatial dimension to capture the temporal characteristics of this multi-modal combination. Compared to the previous model, which considered the audio-visual temporal and interactive features independently, the combinatorial dependence we obtain is better equipped to locate the referred object. Additionally, our object-aware audio-queried decoder utilizes multiple queries containing rich audio cues and object-level information, providing more accurate object segmentation and target localization than the previous model's direct decoding. By considering both object-level information and audio-constrained decoding, our model achieves more precise results.
**Compare with the Processed Data.** Limited datasets exist for audio-visual video segmentation, leading (Wang et al., 2017) to first introduce the AVSBench-object datasets. Among these, S4 represents a semi-supervised
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Task} & \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{S4} & \multicolumn{2}{c}{M3} \\
 & & & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\
\hline
\multirow{2}{*}{SSL} & LVS (Krizhevsky et al., 2014) & resnet18 & 37.9 & 51 & 29.5 & 33 \\
 & MSSL (Krizhevsky et al., 2014) & resnet18 & 44.9 & 66.3 & 26.1 & 36.3 \\
\hline
\multirow{2}{*}{VOS} & 3DC (Krizhevsky et al., 2014) & resnet152 & 57.1 & 75.9 & 36.9 & 50.3 \\
 & SST (Krizhevsky et al., 2014) & resnet101 & 66.3 & 80.1 & 42.6 & 57.2 \\
\hline
\multirow{2}{*}{SOD} & iGAN (Liu et al., 2017) & resnet50 & 61.6 & 77.8 & 42.9 & 54.4 \\
 & LGVT (Krizhevsky et al., 2014) & swin & 74.9 & 87.3 & 40.7 & 59.3 \\
\hline
\multirow{6}{*}{AVVS} & \multirow{2}{*}{TPAVI (Wang et al., 2017)} & resnet50 & 72.8 & 84.8 & 47.9 & 57.8 \\
 & & PVT-v2 & 78.7 & 87.9 & 54.0 & 64.5 \\
 & \multirow{2}{*}{CATR\({}^{*}\)} & resnet50 & 74.8 & 86.6 & 52.8 & 65.3 \\
 & & PVT-v2 & 81.4 & 89.6 & 59.0 & 70.0 \\
 & \multirow{2}{*}{CATR} & resnet50 & 74.9 & 87.1 & 53.1 & 65.6 \\
 & & PVT-v2 & 84.4 & 91.3 & 62.7 & 74.5 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Quantitative comparisons on AVSBench-object datasets (single-source, S4; multi-source, M3). Blue indicates the best performance with the resnet backbone, while red indicates the best performance among all settings. \({}^{*}\) denotes that the training datasets are supplemented with annotations generated by AOT.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\multirow{2}{*}{Task} & \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{AVSS} \\
 & & & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\
\hline
\multirow{2}{*}{VOS} & 3DC (Krizhevsky et al., 2014) & resnet18 & 17.3 & 21.6 \\
 & AOT (Wang et al., 2017) & resnet50 & 25.4 & 31.0 \\
\hline
\multirow{2}{*}{AVSS} & TPAVI (Wang et al., 2017) & PVT-v2 & 29.8 & 35.2 \\
 & CATR & PVT-v2 & 32.8 & 38.5 \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Quantitative comparisons on the AVSBench-semantic dataset (AVSS). Red indicates the best performance.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\multirow{2}{*}{PVT-v2} & \multicolumn{2}{c}{M3} & \multicolumn{2}{c}{S4} \\
 & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\
\hline
Baseline & 51.2 & 63.3 & 77.9 & 87.6 \\
(+) Additional Annotation & 53.1 & 64.7 & 79.1 & 88.5 \\
(+) Decoupled A-V Transformer & 57.7 & 68.6 & 82.6 & 90.1 \\
(+) Blockwise-Encoded Gate & 60.8 & 70.3 & 83.8 & 91.1 \\
(+) Audio-Queried Decoding & 62.7 & 74.5 & 84.4 & 91.3 \\
\hline \hline
\end{tabular}
\end{table}
Table 3. Ablation analysis on the M3 and S4 datasets with the PVT-v2 backbone.
learning task, providing ground-truth for the first frame in the training set. To maximize dataset utility without incurring additional labor, we devised a complementary approach for S4 and M3 datasets. Specifically, during M3 training, we employed AOT (Zhou et al., 2017) to predict unlabelled frames of the S4 training set, using these predictions as ground-truth for the AVVS task. Concurrently, we preserved the same setting as TPAVI, implementing semi-supervised training for the S4 dataset and fully supervised training for the M3 dataset.
Table 1's experimental results demonstrate that our model's performance on the original dataset (CATR) surpasses the previous state-of-the-art TPAVI, and the supplementary labeling method (CATR*) further enhances the model's effectiveness.
### Contribution of The Core Components
Table 3 demonstrates the contributions of each proposed module to the overall performance enhancement in CATR, utilizing PVT-v2 with ASPP for encoding and expanded fused feature maps for decoding in the baseline. Given the limited original training samples, it is essential to maximize the use of available data. We augment the two training sets, respecting their semi-supervised and fully-supervised configurations, due to the similarity of their segmentation objectives. Specifically, we incorporated M3 video data into the S4 training set and supplemented the S4 training set with AOT-generated ground-truth when training the M3 dataset. The second row in Table 3 indicates that our additional annotation improves the performance on both the M3 and S4 datasets.
Furthermore, our experiments reveal that the decoupled audio-visual transformer encoding, blockwise-encoded gate, and audio-queried decoding modules significantly enhance the model's performance. Notably, the audio-queried decoding module exhibits a more substantial improvement in the M3 dataset than in the S4 dataset (\(\mathcal{M}_{\mathcal{J}}\) is up 6.5 vs. 0.6). This is attributable to the multiple objectives in M3 dataset videos, which complicate segmentation target identification. The audio-constrained query contains rich object-level information and guides the segmentation effectively.
**The Impact of Decoupled A-V Transformer Encoding.** We developed a decoupled spatial-temporal encoding block consisting of three components. Initially, we integrated audio and video features in the spatial domain using a spatial fusion method, capturing the temporal dependence of this combination. Subsequently, the spatially fused features were processed through both the temporal A-V and temporal V-A modules. Table 5 demonstrates the contributions of each component to overall performance, highlighting the critical role played by the temporal A-V module. We attribute its significance to the predominant use of video features in the final decoding process, where video features serve as the key and value within the temporal A-V module, preserving crucial video information. To further illustrate this, we examined the attention maps of features processed by the spatial-temporal encoding block, depicted in Figure 3. The initial spatial fusion attention map appears scattered, particularly in the second example, indicating insufficient integration of audio guidance information. In contrast, attention maps for both temporal A-V and temporal V-A modules are more precise and focused, with the temporal A-V map in the second example almost exclusively centered on the piano, underscoring its importance.
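To make the data flow concrete, the following is a minimal PyTorch-style sketch of such a decoupled block. The module layout (`DecoupledAVBlock`), the 256-dimensional features, and the per-frame mean-pooling are our illustrative assumptions, not the authors' released implementation; the sketch only shows where video acts as key/value (temporal A-V) versus audio (temporal V-A).

```python
import torch
import torch.nn as nn

class DecoupledAVBlock(nn.Module):
    """Sketch: spatial A-V fusion, then two decoupled temporal attentions."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_av = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_va = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video, audio):
        # video: (T, N, C) visual tokens per frame; audio: (T, 1, C).
        # Spatial fusion: each frame's tokens attend to its audio token.
        fused, _ = self.spatial(video, audio, audio)
        fused = self.norm(video + fused)              # (T, N, C)
        # Temporal stage on per-frame descriptors.
        seq = fused.mean(dim=1).unsqueeze(0)          # (1, T, C)
        aud = audio.squeeze(1).unsqueeze(0)           # (1, T, C)
        av, _ = self.temporal_av(seq, seq, seq)       # video as key/value
        va, _ = self.temporal_va(av, aud, aud)        # audio as key/value
        return fused, self.norm(av + va).squeeze(0)   # tokens, (T, C)

# Toy usage: 5 frames, 14x14 = 196 tokens, 256 channels.
block = DecoupledAVBlock()
tokens, temporal = block(torch.randn(5, 196, 256), torch.randn(5, 1, 256))
```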
**The Impact of Blockwise-Encoded Gate.** To optimize the contributions from each encoder block, the Blockwise-Encoded Gate was devised to fully harness the potential of the individual encoders' features. Table 3 demonstrates the enhancement in model performance when incorporating the Blockwise-Encoded Gate. Table 6 examines the influence of varying the number of channels within the Blockwise-Encoded Gate, where the channel count is the number of channels produced by a convolutional layer and thus sets the number of gate weights. The experimental findings indicate
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Gate Channel} & \multicolumn{2}{c}{M3} & \multicolumn{2}{c}{S4} \\ & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\ \hline
2 & 58.4 & 69.8 & 83.5 & 90.8 \\
64 & 61.8 & 70.9 & 83.9 & 91.1 \\
128 & 62.1 & 72.9 & 84.2 & 91.2 \\
256 & 62.7 & 74.5 & 84.4 & 91.3 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Analysis of the number of channels in Blockwise-Encoded Gate with PVT-v2 backbone.
Figure 4. Visualization of video features after processing at each stage. We observed that the initial features, generated by the backbone network, appeared indistinct. However, the video features progressively aligned with the desired segmentation object after the spatial-temporal encoding module.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Resnet50} & \multicolumn{2}{c}{M3} & \multicolumn{2}{c}{S4} \\ & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\ \hline TPAVI w audio & 47.9 & 57.8 & 72.8 & 84.8 \\ CATR w/o audio & 36.4 & 51.4 & 73.2 & 84.6 \\ CATR w audio & 52.1 & 64.6 & 74.1 & 86.1 \\ \hline \hline \multirow{2}{*}{PVT-v2} & \multicolumn{2}{c}{M3} & \multicolumn{2}{c}{S4} \\ & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\ \hline TPAVI w audio & 54.0 & 64.5 & 78.7 & 87.9 \\ CATR w/o audio & 43.9 & 57.6 & 80.7 & 89.1 \\ CATR w audio & 62.7 & 74.5 & 84.4 & 91.3 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Comparison between TPAVI with audio information and CATR without audio information.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{PVT-v2} & \multicolumn{2}{c}{M3} \\ & \(\mathcal{M}_{\mathcal{J}}\) & \(\mathcal{M}_{\mathcal{F}}\) \\ \hline CATR (full) & 62.7 & 74.5 \\ w/o temporal V-A & 59.7 & 70.7 \\ w/o temporal A-V & 58.4 & 69.8 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Ablation analysis of spatial-temporal encoding.
that optimal performance is achieved with 256 channels, corresponding to our feature dimension. This suggests that assigning a weight to each feature channel enables the model to effectively account for the proportional contribution of each feature.
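As a rough sketch of how such a gate can be realized with the 256-channel setting of Table 6, the snippet below assigns one sigmoid weight per feature channel to every encoder block before summation. The class name `BlockwiseGate` and the particular design (1x1 convolution plus spatial averaging) are our assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn

class BlockwiseGate(nn.Module):
    """One channel-wise gate per encoder block; gated features are summed."""

    def __init__(self, num_blocks=4, channels=256):
        super().__init__()
        # A 1x1 conv per block produces the channel-wise gate logits.
        self.gates = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_blocks)
        )

    def forward(self, block_feats):
        # block_feats: list of (B, C, H, W) maps, one per encoder block.
        out = 0
        for feat, gate in zip(block_feats, self.gates):
            w = torch.sigmoid(gate(feat).mean(dim=(2, 3), keepdim=True))  # (B, C, 1, 1)
            out = out + w * feat                     # proportional contribution
        return out

feats = [torch.randn(2, 256, 28, 28) for _ in range(4)]
fused = BlockwiseGate()(feats)                       # (2, 256, 28, 28)
```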
**The Impact of Audio-Queried Decoding.** In the decoding phase, we developed \(N\) learnable queries incorporating auditory cues and comprehensive object-level information. We employed the \(C_{\text{match}}\) function to select the query optimally aligned with the audio features, which served as the final mask. As demonstrated in Table 3, our audio-constrained query decoding approach substantially enhances the model's performance. Previous models neglect audio information in their decoding stages, resulting in segmentation outcomes predominantly influenced by adjacent video frames. Consequently, by emphasizing audio features during the decoding process, we effectively improve overall performance.
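A minimal sketch of the selection step is given below; the function name `select_audio_query` and the cosine-style score are our assumptions standing in for the paper's \(C_{\text{match}}\) criterion, whose exact form we do not reproduce here.

```python
import torch

def select_audio_query(queries, audio, mask_logits):
    """Pick the learnable query best matched to the audio feature.

    queries:     (N, C) audio-constrained object queries
    audio:       (C,)   pooled audio feature of the clip
    mask_logits: (N, H, W) mask prediction attached to each query
    """
    scores = (queries @ audio) / (queries.norm(dim=1) * audio.norm())
    best = scores.argmax()
    return mask_logits[best], best

mask, idx = select_audio_query(torch.randn(16, 256), torch.randn(256),
                               torch.randn(16, 64, 64))
```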
### The Impact of Audio Signals
The enhanced performance of CATR prompts an inquiry: Does this improvement stem from a superior comprehension of pixel-level video features or more effective utilization of audio features? To investigate, we conducted an experiment that removed audio features from the Spatial-Temporal Encoding Module and applied self-attention to video features. Additionally, we replaced learnable queries, originally constrained by audio, with video features in the decoding module. Table 4 presents the results of CATR without audio. These findings reveal that (1) our model effectively leverages audio features, as evidenced by the significant improvement in the M3 dataset when comparing CATR with and without audio (\(\mathcal{M}_{\mathcal{J}}\) is 0.627 vs. 0.439 with PVT-v2); and (2) our model demonstrates a more advanced understanding of pixel-level video features, as shown by the superior performance in the S4 dataset, even without employing audio information, surpassing the previous state-of-the-art TPAVI (\(\mathcal{M}_{\mathcal{J}}\) is 0.807 vs. 0.787 with PVT-v2).
## 6. Conclusion
We introduce a novel Combinatorial-Dependence Audio-Queried Transformer (CATR) framework that achieves state-of-the-art performance on all three datasets using two backbones. Unlike previous methods that treated temporal video information and audio-visual interaction separately, our proposed combinatorial dependence fusion approach comprehensively accounts for the spatial-temporal dependencies of the audio-visual combination. Additionally, we propose audio-constrained learnable queries to incorporate audio information comprehensively during decoding. These queries contain object-level information that selects which object is referred to for segmentation. To further enhance performance, we introduce a blockwise-encoded gate that balances the contributions from multiple encoder blocks. Our experimental results demonstrate the significant impact of these novel components on overall performance.
**Limitations:** Objects with similar auditory characteristics can confound video segmentation outcomes when they coexist within a single frame. To address this challenge, we plan to explore the refinement of audio feature pre-processing in future research. **Broader Impact:** The exceptional performance of CATR enables its practical implementation in audio-guided video segmentation applications. These applications include utilizing auditory cues to accentuate objects in augmented and virtual reality environments and generating pixel-level object maps for surveillance inspections. We expect that our research will contribute to practical applications of audio-guided video segmentation.
###### Acknowledgements.
This work was supported by the Fundamental Research Funds for the Central Universities (No. 226-2023-00048), the National Key Research & Development Project of China (2021ZD0110700), and the National Natural Science Foundation of China (U19B2043, 61976185).
Figure 5. Comparative analysis of the TPAVI method and our proposed CATR. We present two qualitative examples from the M3 and S4 datasets. The M3 example (left) demonstrates TPAVI’s inability to detect the transition of auditory objects, such as from a violin to a guitar, whereas CATR accurately predicts these changes in alignment with the audio signal. In the S4 example (right), CATR exhibits better performance on pixel-level segmentation in the presence of a complex background. |
2305.19817 | On the effect of angular momentum on the prompt cusp formation via the
gravitational collapse | In this work, we extend the model proposed by White concerning the
post-collapse evolution of density peaks while considering the role of angular
momentum. On a timescale smaller than the peak collapse, $t_{0}$, the inner
regions of the peak reach the equilibrium forming a cuspy profile, as in
White's paper, but the power-law density profile is flatter, namely $\rho
\propto r^{-1.52}$, using the specific angular momentum $J$ obtained in
theoretical models of how it evolves in CDM universes, namely $J \propto
M^{2/3}$. The previous result shows how angular momentum influences the slope
of the density profile, and how a slightly flatter profile obtained in
high-resolution numerical simulations, namely $\rho \propto r^{\alpha}$,
$(\alpha \simeq -1.5)$ can be reobtained. Similarly to simulations, in our
model adiabatic contraction was not taken into account. This means that more
comprehensive simulations could give different values for the slope of the
density profile, similar to an improvement of our model. | Antonino Del Popolo, Saeed Fakhry | 2023-05-31T13:01:32Z | http://arxiv.org/abs/2305.19817v2 | # On the effect of angular momentum on the prompt cusp formation via the gravitational collapse
###### Abstract
In this work, we extend the model proposed by White in [1] concerning the post-collapse evolution of density peaks while considering the role of angular momentum. On a time scale smaller than the peak collapse, \(t_{0}\), the inner regions of the peak reach the equilibrium forming a cuspy profile, as in White's paper, but the power-law density profile is flatter, namely \(\rho\propto r^{-1.52}\), using the specific angular momentum \(J\) obtained in theoretical models of how it evolves in CDM universes, namely \(J\propto M^{2/3}\). The previous result shows how angular momentum influences the slope of the density profile, and how a slightly flatter profile obtained in high-resolution numerical simulations, namely \(\rho\propto r^{\alpha}\) (\(\alpha\simeq-1.5\)) can be reobtained. Similarly to simulations, in our model adiabatic contraction was not taken into account. This means that more comprehensive simulations could give different values for the slope of the density profile, similar to an improvement of our model.
Dark Matter - Gravitational Collapse - Galactic Halos
## I Introduction
Dark matter halos are nonlinear hierarchical structures whose formation and evolution are predicted in the cosmological perturbation theory [2]. The initial stages of the formation of these structures can be attributed to the physical conditions under which the primordial density fluctuations can be separated from the expansion of the Universe and collapse due to the self-gravitational force. Dark matter halos are a suitable and fundamental framework for studying nonlinear gravitational collapse in the Universe. Therefore, the post-collapse evolutionary stages of dark matter halos play an essential role in explaining their local properties, see, e.g. [3; 4; 5; 6; 7; 8].
Accordingly, in recent years, many high-resolution simulations of collapse and post-collapse evolution of dark matter halos have been performed, see, e.g. [9; 10; 11; 12; 13; 14; 15; 16]. The outcomes of these simulations demonstrate that shortly after the collapse of the initial density peaks, the central regions of dark matter halos can be well described by the power-law density profile \(\rho(r)=Ar^{\gamma}\) with \(\gamma\approx-1.5\). In this relation, \(A\) is a constant, and its value is estimated for each dark matter halo from the characteristic scale and collapse time of the relevant density peak [16]. We want to recall that the high-resolution simulations of [16] do not take baryons into account, which implies that several important physical effects, such as adiabatic contraction, are missing; consequently, the result \(\gamma\approx-1.5\) could be modified by those effects. On the other hand, the dynamics of hierarchical structures in dark matter models, except for those with self-interactions, indicate that the galactic halos in the earliest stages of their post-collapse evolution start to grow instantaneously in their size and mass and ultimately reach a uniform non-power-law density distribution [17; 18; 19; 20].
Dark matter-only simulations of galaxy-type and cluster-sized halos indicate that the effective slope of halo density profiles at the smallest resolution radii must be shallower than \(\gamma\approx-1.5\), see, e.g. [21; 22; 23]. However, the slope of the density profile may return to a steep state due to the resistance of the initial cusp in the central regions of dark matter halos, see, e.g. [16]. Although many numerical studies have been conducted in the post-collapse evolution of initial density peaks, the black-box nature of simulations does not explain the formation of prompt cusps with a power-law index \(\gamma\approx-1.5\).
In order to provide a logical description of the shape of the formed halos, the first theoretical model was presented in [24], in which an initial density peak prone to collapse is considered as a perturbed point-like mass in a dust-filled collisionless Einstein-de Sitter Universe. The results of this study show that dark matter halos with a power-law index \(\gamma=-9/4\) are created when the surrounding matter falls into the perturbation. After that, in [25], a more general approach was proposed by considering spherical collapse in purely radial orbits, which did not predict the power-law index describing dark matter halos as \(\gamma>-2\). However, assuming purely radial orbits to describe the complex collapse conditions seemed simplistic. Accordingly, it was shown in [26] that the consideration of randomly oriented orbits with non-zero angular momentum leads to an interval for the power-law index of \(0>\gamma>-3\). The mentioned studies all agree that the orbital period of the circular orbit at the radius encompassing mass \(M\) is proportional to the infall time of the halo shell that surrounded mass \(M\) in the early Universe. Also, in [27], a more complete analytical model was presented for explaining the density profile of dark matter halos, which describes the relationship between the infall time and the halo structure.
Despite their valuable results, the aforementioned studies cannot provide information on the instantaneous formation of the cusp during the earliest stages of post-collapse evolution of the initial density peaks, because the process of instantaneous formation of cusps requires a different description of the infall of the shells onto the halo on a suitable timescale. In this regard, [1] has presented an analytical model for the post-collapse evolution of initial density peaks in a collisionless dust-filled Universe. The results of this study show that, on timescales short compared to the collapse time of the initial density peaks, the innermost regions of the formed halos are consistent with the density profile of adiabatic cusps with a power-law index \(\gamma=-12/7\). The power-law index value obtained by [1] is not compatible with the relatively flatter corresponding value of \(\gamma\approx-1.5\) obtained from high-resolution numerical simulations. Notably, the analysis presented in [1] does not include the effect of angular momentum, whose inclusion can significantly reduce the difference between analytical approaches and high-resolution simulations.
In this work, we focus on studying the effect of angular momentum on the prompt cusp formed during the post-collapse evolution of initial density peaks. In this respect, the outline of the work is as follows. In Sec. II, we discuss a theoretical framework for the gravitational collapse from the initial density peaks and evaluate its post-collapse evolutionary stages in the presence of angular momentum. Also, in Sec. III, we discuss the results obtained in this work and compare them with those extracted from the previous studies. Finally, in Sec. IV, we summarize our findings.
## II Theoretical model of gravitational collapse
As mentioned in the previous section, dark matter halos are nonlinear structures formed by the gravitational collapse of peaks in overdensity regions in a dust-filled collisionless Universe. In the earliest stages of the formation of initial density peaks, cosmological perturbation theory estimates the local density of the neighborhood of the peaks as
\[\rho(q,t)=\bar{\rho}(t)[1+\delta(q,t)]=\bar{\rho}(t)\left[1+\delta_{\rm sc}(t )\left(1-\frac{q^{2}}{6R^{2}}\right)\right], \tag{1}\]
where \(q=r/a\) is a Lagrangian radial coordinate, \(\bar{\rho}=1/(6\pi Gt^{2})=\bar{\rho_{0}}(t/t_{0})^{-2}=\bar{\rho_{0}}a^{-3}\) is the mean density of the background, \(a=(t/t_{0})^{2/3}\) is the cosmological expansion factor, \(R=\sqrt{\left|\delta/\bigtriangledown^{2}\delta\right|}\) is a characteristic Lagrangian scale for the peak, and \(\delta_{\rm sc}=1.686a\) is the critical value of linear overdensities. Also, the index "0" represents the evaluated values of the quantities at the collapse time of the central region of the peak to infinite density. The mass enclosed by a radius of \(q\) is determined as \(M=4\pi\bar{\rho}_{0}q^{3}/3=M_{0}(q/R)^{3}\), where \(M_{0}=4\pi\bar{\rho}_{0}R^{3}/3\). Hence, the average overdensity within radius \(q\) can be specified by the following relation
\[\bar{\delta}(q,t) = 1.686a\left(1-\frac{q^{2}}{10R^{2}}\right) \tag{2}\] \[= 1.686a\left[1-0.1\left(\frac{M}{M_{0}}\right)^{2/3}\right].\]
Since the radial ordering of the mass shells is maintained, a shell collapses just when the linear overdensity it encloses reaches the critical value \(1.686\). Therefore, the shell collapse time, to lowest order in \(M/M_{0}\), can be calculated as
\[t_{\rm c}(M)=t_{0}\left[1+0.15\left(\frac{M}{M_{0}}\right)^{2/3}\right]. \tag{3}\]
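As a quick numerical illustration of Eq. (3), a shell initially enclosing \(M=M_{0}/8\) has \((M/M_{0})^{2/3}=1/4\) and therefore collapses at

\[t_{\rm c}(M_{0}/8)=t_{0}\left(1+\frac{0.15}{4}\right)=1.0375\,t_{0},\]

i.e., only about \(4\%\) later than the central collapse.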
At times earlier than \(t_{\rm c}(M)\), considering the non-zero angular momentum, the infall velocity of the shell is
\[\frac{1}{2}\left(\frac{dr}{dt}\right)^{2}=\frac{GM}{r}+\int\frac{L^{2}}{M^{2} r^{3}}dr, \tag{4}\]
where \(G\) is the gravitational constant and \(L\) is the angular momentum.
As discussed in several papers, e.g. [4], two sources of angular momentum are involved in structure formation: (1) derived from bulk streaming motions and (2) produced by random tangential motions. The former, the ordered angular momentum, arises due to tidal torques experienced by protohalos [28; 29; 30]. The latter, the random angular momentum [31], is connected to random velocities (non-radial motions).
Several studies have concluded that larger amounts of angular momentum, ordered or random, lead to shallower inner density profiles, see, e.g. [31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. In our case, in these smooth peaks, there are no random motions.
The relation adopted below describes how the angular momentum of typical particles in a shell scales with the Lagrangian radius under the assumption of a uniform tidal field across the considered region. As mentioned above, the ordered angular momentum in the CDM model has been studied by several authors [28; 29; 30]. The specific angular momentum of each mass element is defined as \(J=L/M\), and takes the form \(J=J_{0}(M/M_{\rm J})^{2/3}\) [30; 41; 42], where \(J_{0}\) and \(M_{\rm J}\) are a characteristic angular momentum and mass.
Hence, solving Eq. (4) yields the following relation
\[r^{3/2}F(r,M)=3(GM_{\rm J})^{2}\left(\frac{M}{M_{\rm J}}\right)^{1/2}(t_{\rm c }-t), \tag{5}\]
here
\[F(r,M)=\sqrt{2GM_{\rm J}-\frac{J_{0}^{2}}{r}\left(\frac{M}{M_{\rm J }}\right)^{1/3}}\times\\ \left[GM_{\rm J}+\frac{J_{0}^{2}}{r}\left(\frac{M}{M_{\rm J}} \right)^{1/3}\right]. \tag{6}\]
Equivalently, one can write Eq. (5) as
\[r^{3/2}F(r,M)=3(GM_{\rm J})^{2}\left(\frac{M}{M_{\rm J}}\right)^ {1/2}t_{0}\times\\ \left[0.15\left(\frac{M}{M_{0}}\right)^{2/3}-\frac{\Delta t}{t_{0 }}\right], \tag{7}\]
where \(\Delta t=t-t_{0}\). Note that Eq. (7) has been obtained from Eq. (5) by substituting the value of \(t_{\rm c}\) given by Eq. (3). Here, we can assume that Eq. (3) is still valid because, as we will see, angular momentum only slightly changes the spherical collapse, whose collapse time is given by Eq. (3).
After simplifying this expression, it takes the following form,
\[\left(\frac{r}{R}\right)^{3/2}F(r,M)=\sqrt{2}(GM_{\rm J})^{3/2} \left(\frac{M}{M_{0}}\right)^{1/2}\times\\ \left[0.15\left(\frac{M}{M_{0}}\right)^{2/3}-\frac{\Delta t}{t_{ 0}}\right], \tag{8}\]
in which \(0<\Delta t/t_{0}<0.15(M/M_{0})^{2/3}\) and \(M<M_{0}\) are valid for a short time interval after the initial collapse of the peak. A collapsing shell will cross previously collapsed shells just before reaching the pericenter1, and this will cause the enclosed mass to drop below \(M\). We employ the scaled quantities \(r^{\prime}=r/R\), \(m=M/M_{0}\), and \(s=\Delta t/t_{0}\) and set the time origin at \(t_{0}\) for the sake of simplicity. In this case, Eq. (8) takes the following form
Footnote 1: Here, we recall that due to the presence of the angular momentum, the shells do not reach the origin, but an orbital pericenter.
\[r^{\prime 3/2}F(r^{\prime},m)=\sqrt{2}(GM_{\rm J})^{3/2}m^{1/2}\left(0.15m^{2/3}-s \right), \tag{9}\]
where
\[F(r^{\prime},m)=\sqrt{2GM_{\rm J}-\frac{J_{0}^{2}}{R}\left(\frac {M_{0}}{M_{\rm J}}\right)^{1/3}\frac{m^{1/3}}{r^{\prime}}}\times\\ \left[GM_{\rm J}+\frac{J_{0}^{2}}{R}\left(\frac{M_{0}}{M_{\rm J}} \right)^{1/3}\frac{m^{1/3}}{r^{\prime}}\right]. \tag{10}\]
Also, correspondingly \(0<s<0.15m^{2/3}\) and \(m<1\). Specifically, in the limit \(r^{\prime}\rightarrow[J_{0}^{2}(M_{0}/M_{\rm J})^{1/3}]/(2M_{\rm J}RG)\), it can be found that \(m\to m_{\rm c}(s)=(s/0.15)^{3/2}\). This is the inverse of Eq. (3), which describes the collapse time of the initial mass shell \(m\) as \(s_{\rm c}(m)=0.15m^{2/3}\)2.
Footnote 2: As can be seen in the following, for the parameters characteristic of the galaxy DDO 46, we have that \([J_{0}^{2}(M_{0}/M_{\rm J})^{1/3}]/(2M_{\rm J}RG)\simeq 0.001\). The fact that \(r^{\prime}\) cannot reach \(0\) is due to the presence of the angular momentum.
Let us assume a mass shell that initially contains mass \(M^{\prime}\) and is in a critical state of collapse at the moment \(t=t_{0}\left[1+0.15(M^{\prime}/M_{0})^{2/3}\right]\). The equation of motion of this shell as it re-expands is
\[\frac{d^{2}r}{dt^{2}}=-\frac{GM(r,t)}{r^{2}}+\frac{J^{2}}{r^{3}}, \tag{11}\]
which in scaled variables defined above equates to
\[\frac{d^{2}r^{\prime}}{ds^{2}}=-\frac{2}{9}\frac{m(r^{\prime},s)}{r^{\prime 2} }+\frac{2}{9}\frac{J^{2}}{GM_{0}r^{\prime 3}R}. \tag{12}\]
Figure 1: Post-collapse trajectory. Left panel: solution without angular momentum, \(J=0\). Right panel: solution taking into account angular momentum, with \(m^{\prime}=0.5\).
Since the radius of the shell fulfills Eq. (5) during the last phase of initial collapse, it can be assumed that the post-collapse motion acts as the time-reverse of the initial collapse. Therefore, the shell radius in scaled variables can be determined as follows
\[r^{\prime 3/2}F(r^{\prime},m^{\prime})=\sqrt{2}(GM_{\rm J})^{3/2}m^{\prime 1/2} \left(s-s_{\rm c}(m^{\prime})\right), \tag{13}\]
where \(m^{\prime}=M^{\prime}/M_{0}\), \(s_{\rm c}(m^{\prime})=0.15m^{\prime 2/3}\), and
\[F(r^{\prime},m^{\prime})=\sqrt{2GM_{\rm J}-\frac{J_{0}^{2}}{R} \left(\frac{M_{0}}{M_{\rm J}}\right)^{1/3}\frac{m^{\prime 1/3}}{r^{\prime}}}\times\] \[\left[GM_{\rm J}+\frac{J_{0}^{2}}{R}\left(\frac{M_{0}}{M_{\rm J}} \right)^{1/3}\frac{m^{\prime 1/3}}{r^{\prime}}\right]. \tag{14}\]
This can be used as initial conditions to solve Eq. (12). It can also be deduced from Eq. (9) that for all \(r^{\prime}\) and \(s>s_{\rm c}(m^{\prime})\), one obtains \(m>m^{\prime}\). This means that the deceleration of the shell during its re-expansion is larger than the acceleration it experiences during its first collapse to the center. Therefore, favorable conditions can be provided for the second collapse of the shell to the center at smaller radii. In fact, the final cusp in the halo density profile is the product of this asymmetry between the re-expansion and collapse of the mass shells. By re-defining the variables as \(s^{\prime\prime}=s/m^{\prime 2/3}\), and \(r^{\prime\prime}=r/m^{\prime 7/9}\) and defining \(f=m(r^{\prime},s)/m^{\prime}\), Eq. (9) can be determined as follows
\[r^{\prime\prime 3/2}F(r^{\prime\prime},m^{\prime})=\sqrt{2}(GM_{\rm J})^{3/2}f^{1/2}\left(0.15f^{2/3}-s^{\prime\prime}\right), \tag{15}\]
in which
\[F(r^{\prime\prime},m^{\prime})=\sqrt{2GM_{J}-\frac{J_{0}^{2}}{R} \left(\frac{M_{0}}{M_{J}}\right)^{1/3}\frac{1}{r^{\prime\prime}m^{\prime 4/9}}}\times\] \[\left[GM_{J}+\frac{J_{0}^{2}}{R}\left(\frac{M_{0}}{M_{J}}\right)^ {1/3}\frac{1}{r^{\prime\prime}m^{\prime 4/9}}\right]. \tag{16}\]
Accordingly, Eq. (12) is specified as follows in terms of the variables scaled above
\[\frac{d^{2}r^{\prime\prime}}{ds^{\prime\prime 2}}=-\frac{2}{9}\frac{f(r^{ \prime\prime},s^{\prime\prime})}{r^{\prime\prime 2}}+\frac{2}{9}\frac{J_{0}^{2}}{ GM_{0}R}\frac{1}{r^{\prime\prime 3}m^{\prime 4/9}}. \tag{17}\]
Also, at small radii, the initial solution of Eq. (13) is
\[r^{\prime\prime 3/2}F(r^{\prime\prime},m^{\prime})=\sqrt{2}(GM_{\rm J})^{3/2}\left(s^{\prime\prime}-0.15\right). \tag{18}\]
Differently from [1], the post-collapse trajectories depend on \(m^{\prime}\), so they are not self-similar. In any case, the departure from self-similarity is not strong, and it makes sense to integrate each shell's trajectory independently because the shells are not all coupled.
However, as we will show, the dependence of the density \(\rho(r)\) on the radius is not that obtained by White, namely \(\rho\propto r^{-12/7}\), since \(r^{\prime\prime}\) is not a constant equal to \(0.087\), but slightly depends on mass.
By integrating numerically Eq. (17), using the initial solution at a small radius, i.e., Eq. (18), the function \(f(r^{\prime\prime},s^{\prime\prime})\), i.e., Eq. (15), and fixing the values of \(R\), \(M_{0}=M_{J}\), \(J_{0}\), one can obtain the solution. In the case of \(J_{0}=0\), the solution is the same as [1]. In that case, the time and radius of the second apocentre are given by \(s^{\prime\prime}=0.199\), and \(r^{\prime\prime}=0.087\), which can be written in terms of the original variables \(r\) and \(t\) as in Eq. (14) of [1], i.e.,
\[r_{\rm max}=0.087R\left(\frac{M^{\prime}}{M_{0}}\right)^{7/9}. \tag{19}\]
In the case of nonzero angular momentum, one can obtain \(r^{\prime\prime}=\left(0.104/m^{\prime 0.1}\right)\). In other words, in the original variables, the radius of the apocentre can be written as
\[r_{\rm max}=\frac{0.104}{\left(M^{\prime}/M_{0}\right)^{0.1}}R\left(\frac{M^{ \prime}}{M_{0}}\right)^{7/9}=0.104R\left(\frac{M^{\prime}}{M_{0}}\right)^{0.67 8}. \tag{20}\]
In Fig. 1, we have shown the post-collapse trajectories in the case of zero angular momentum (left panel) and in the presence of nonzero angular momentum with \(m^{\prime}=0.5\) (right panel). As can be seen from the figure, and also from Eqs. (19) and (20), \(r_{\rm max}\) increases with \(M^{\prime}\), but slightly more slowly when the nonzero angular momentum is taken into account. The post-collapse equilibrium of the structure is reached in times much shorter than \(t_{0}\), and it is established from the inside to the outside.
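For reproducibility, the trajectories of Fig. 1 can be sketched with a few lines of code. In the snippet below the normalization \(G=M_{\rm J}=M_{0}=R=1\), the value \(J_{0}=0.045\), the shell \(m^{\prime}=0.5\), and the root bracket for the enclosed-mass ratio are our illustrative assumptions; the paper instead fixes these quantities to the values of DDO 46.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

G = MJ = M0 = R = 1.0
J0, mp = 0.045, 0.5                       # assumed small J_0; shell m' = 0.5

def F(r):                                 # the factor of Eq. (16), fixed m'
    u = (J0**2 / R) * (M0 / MJ)**(1/3) / (r * mp**(4/9))
    return np.sqrt(2*G*MJ - u) * (G*MJ + u)

def f_enclosed(r, s):                     # invert Eq. (15) for f(r'', s'')
    rhs = r**1.5 * F(r) / (np.sqrt(2) * (G*MJ)**1.5)
    lo = (s / 0.15)**1.5 * (1 + 1e-9)     # smallest still-infalling shell
    return brentq(lambda f: np.sqrt(f)*(0.15*f**(2/3) - s) - rhs, lo, 1e6)

def deriv(s, y):                          # Eq. (17)
    r, v = y
    acc = (-(2/9) * f_enclosed(r, s) / r**2
           + (2/9) * J0**2 / (G*M0*R) / (r**3 * mp**(4/9)))
    return [v, acc]

eps = 1e-3                                # start from Eq. (18) near pericentre
r0, v0 = eps**(2/3), (2/3) * eps**(-1/3)
sol = solve_ivp(deriv, (0.15 + eps, 0.22), [r0, v0], max_step=1e-3)
print("second apocentre r'' ~", sol.y[0].max())   # cf. Eq. (20)
```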
In the presence of nonzero angular momentum, the mass within a radius \(r\) scales as \(M(r)\propto r^{1.48}\). This dependence comes directly from Eq. (20), solving with respect to \(M(r)\). This scaling can be obtained because the gravitational force at \(r<r_{\rm max}\) evolves little at \(t>t_{\rm max}\). Then \(\rho(r)\) can simply be specified by calculating the ratio between the mass and the volume, leading to \(\rho\propto\frac{M}{r^{3}}\propto\frac{r^{1.48}}{r^{3}}\propto r^{-1.52}\).
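Spelling out the inversion step: Eq. (20) gives \(r_{\rm max}\propto M^{\prime\,0.678}\), so

\[M(r)\propto r^{1/0.678}\simeq r^{1.48},\qquad\rho(r)\propto\frac{M(r)}{r^{3}}\propto r^{1.48-3}=r^{-1.52}.\]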
Interestingly, this result is in agreement with the \(N\)-body simulations, see, e.g. [9; 10; 11; 12; 13; 14; 15; 16].
We have to stress that the result \(\rho\propto r^{-1.52}\), in agreement with the simulations of [16], is obtained for low, peculiar values of the specific angular momentum: namely, \(J_{0}\) (related to the product of velocity and radius), \(R\), and \(M\) have been fixed to the values of DDO 46 [43] (see their Fig. 16 and Table 2). In the case of structures having a large specific angular momentum, such as spiral galaxies similar to the Milky Way, and hence a term \(2J_{0}^{2}/9GM_{0}R\) in Eq. (17) larger than in the case of DDO 46, there will be a further flattening of the profile. About this issue, and the comparison with numerical simulations, see our Section III.
Apart from the mainly used definition of angular momentum that we discussed, as shown in [44], the specific angular momentum of each mass element can be defined as \(J=L/M=kr^{\alpha}\), where \(\alpha=1.1\pm 0.3\) is a power-law index corresponding to the Gaussian distribution on dark matter halos, and \(k=J_{0}/R^{1.1\pm 0.3}\), where \(J_{0}\) and \(R\) are typical specific angular momentum and scale of a halo, respectively.
In order to have an algebraically compact solution, one can choose \(\alpha=0.9\). By repeating the calculations for the mentioned expression of angular momentum, one can recover the equations needed to obtain the post-collapse trajectories. Accordingly, the implicit equation for \(f(r^{\prime\prime},s^{\prime\prime})\) is given by
\[r^{\prime\prime 3/2}{}_{2}F_{1}\left(\frac{1}{2},\frac{15}{8}; \frac{23}{8};\frac{5k^{2}(Rr^{\prime\prime}m^{\prime 7/9})^{0.8}}{Gm^{\prime}M_{0}}\right)\] \[=f^{1/2}\left(0.15f^{2/3}-s^{\prime\prime}\right), \tag{21}\]
where \({}_{2}F_{1}(a,b;c;z)\) represents the hypergeometric function. The equation of motion, i.e., Eq. (17), is specified as follows in terms of scaled variables
\[\frac{d^{2}r^{\prime\prime}}{ds^{\prime\prime 2}}=-\frac{2}{9}\frac{f(r^{ \prime\prime},s^{\prime\prime})}{r^{\prime\prime 2}}+\frac{2}{9}\frac{k^{2}R^{0.8}}{ GM_{0}}\frac{1}{r^{\prime\prime 1.2}m^{\prime 0.377}}. \tag{22}\]
Hence, the initial solution at a small radius takes the following form
\[r^{\prime\prime}\left\{{}_{2}F_{1}\left(\frac{1}{2},\frac{15}{8 };\frac{23}{8};\frac{5k^{2}(Rr^{\prime\prime}m^{\prime 7/9})^{0.8}}{Gm^{ \prime}M_{0}}\right)\right\}^{2/3}\] \[=(s^{\prime\prime}-0.15)^{2/3}. \tag{23}\]
Similar to the method employed earlier and using Eqs. (21), (22), and (23), the following formula can be obtained
\[r_{\rm max}=\frac{0.1138}{\left(M^{\prime}/M_{0}\right)^{0.12}}R\left(\frac{M ^{\prime}}{M_{0}}\right)^{7/9}=0.1138R\left(\frac{M^{\prime}}{M_{0}}\right)^{ 0.658}. \tag{24}\]
As a result, \(\rho(r)\) can be specified simply by calculating the ratio between the mass and the volume, leading to \(\rho(r)\propto r^{-1.48}\). In [44], the specific angular momentum has a certain scatter, \(J\propto r^{1.1\pm 0.3}\). Taking account of this scatter, the density profile is proportional to \(\rho(r)\propto r^{(-1.44,-1.58)}\).
Up to here, we have provided an analytical approach to determine the effect of angular momentum on the prompt cusp formed through gravitational collapse. In the next section, we will discuss the reasons why the inclusion of nonzero angular momentum produces a flatter profile than that obtained in [1].
## III Discussion
As noticed in [1], there are several points to discuss concerning the validity of the result obtained in that paper. The system used in that paper is spherically symmetric, so the motions are purely radial. Under this condition, the density profile should have an inner slope close to or smaller than \(-2\) [45; 46; 24; 25]. The scaling argument in [1] fails unless some angular momentum is acquired before particles reach their final orbits. As previously discussed, angular momentum can be acquired through tidal torques experienced by protohalos [28; 29; 30], or through deviations from spherical symmetry. According to [1], angular momentum would restore the slope \(-12/7\) for the cusp (as expected from a model proposed by [47]), but if it is too strong it could invalidate the result. Again, as previously reported, several studies [31; 32; 33; 34; 35; 36; 37; 38; 39; 40] arrived at the conclusion that large amounts of angular momentum lead to shallower inner density profiles, up to the formation of a central core. As mentioned by [1], the main point leading to doubt about the applicability of the argument used in [1] is the assumption of spherical symmetry. However, after arguing this issue, the same author concludes that \(\rho\propto r^{-12/7}\), even if, as shown in [16], the initial collapse is complex and very far from spherical. However, simulations find a slope \(\simeq-1.5\), flatter than that obtained in [1]. The author then asks whether the model captures the features of violent relaxation in the inner region of the peak, or whether there are some factors that explain the difference between the simulated slope of \(-1.5\) and the \(-12/7\) obtained in that paper. As shown in [4], there are several factors that change the inner slope of the density profile. In that paper, the collapse was studied taking into account ordered and random angular momentum, dynamical friction [48], dark energy, and dark matter contraction due to baryonic infall [49; 50]. Those physical effects influence structure formation and the inner slope in different ways. For example, baryonic infall produces a steepening of the profile, while angular momentum and dynamical friction slow down the collapse and flatten the profile. In the present paper, we have decided to take into account only the ordered angular momentum, to show how it alone is enough to reduce the inner slope of the density profile, in agreement with theoretical studies and simulations [31; 32; 33; 34; 35; 36; 37; 40]. In our model, the change of the inner structure is related to the interaction of the structure studied with the neighboring ones, arising from the asphericity of those structures (see [51] for a discussion on the relation between angular momentum acquisition, asphericity, and structure formation). Asphericity gives rise to a mass-dependent inner slope. The equation of motion in our model contains a mass-dependent angular momentum, born from the coupling of the quadrupole moment of the proto-structures with the tidal field of the neighboring objects. This term slightly breaks the self-similarity of the trajectories of the mass shells. Hence, the turnaround epoch and collapse time change. The collapse in our model is different from that of [1]: both the turnaround epoch and the collapse time change, together with the collapse threshold \(\delta_{c}\), which becomes mass-dependent and a monotonically decreasing function of the mass (see Fig. 1 in [52])3.
Footnote 3: We also want to recall that the behavior of the threshold implies that less massive perturbations (e.g., galaxies) must cross a higher threshold to form structures than more massive ones. Using
The flattening of the profile can be explained as follows. In the case of pure radial orbits, the inner part of the profile is dominated by particles from the outer shells. When the angular momentum increases, these particles remain closer to the maximum radius, and this gives rise to a shallower profile. Particles having smaller angular momentum will enter the inner part (core) of the halo, but with a reduced radial velocity in comparison with purely radial collapse. Some particles have an angular momentum so large that they never fall into the core. In other terms, particles with larger angular momentum are prevented from coming close to the central region of the halo, and thus from contributing to the central density. Consequently, the profile is flattened. Moreover, this result is in agreement with the previrialization conjecture [57], according to which initial asphericities and tidal interactions between neighboring protostructures give rise to non-radial motions opposing the collapse. Apart from this, the role of angular momentum in flattening the profile is in agreement with the previously mentioned studies.
One of the main points mentioned in [1] is that the difference between the prediction of the analytical approach and the simulations may be due to the effect of some additional factors. To address this point, as shown in [4], it should be noted that additional factors can indeed affect the distribution of the inner regions of halos. In this work, we have shown that the consideration of angular momentum affects the slope of the density profile in such a way that the difference between the prediction obtained from the theoretical approach and the simulations is significantly reduced.
Before concluding, as we previously wrote, we recall that the result \(\rho\propto r^{-1.52}\), in agreement with simulations, is obtained for low, peculiar values of the specific angular momentum, radius, and mass. A further flattening with respect to [1] is to be expected for large spiral galaxies like the Milky Way. As we reported in the introduction, the high-resolution simulations of [16] contain dark matter only, so baryon-induced effects like, for example, adiabatic contraction [49; 50] do not apply. Limiting ourselves to this issue, the effect of adiabatic contraction is that of steepening the profile. In other terms, the slope \(\gamma\approx-1.5\) in [16] could be modified by the effects not taken into account. Also, our model does not take adiabatic contraction into account. Then, to get a more precise value of the slope, it will be important to run appropriate simulations, and in our case to use a model like that described in [4], taking into account not only angular momentum but also dynamical friction, adiabatic contraction, etc.
## IV Conclusions
In this paper, we have extended the model proposed by [1], relative to the post-collapse evolution of density peaks, looking at the effect angular momentum can have on the author's final solution. In particular, we wanted to see if angular momentum could reduce the discrepancy between the density profile extracted from [1] and that obtained from simulations. As cited several times, several papers have stressed that angular momentum has the effect of flattening the inner slope of density profiles. By modifying the equations presented in [1], and including the nonzero angular momentum, we have shown that on a timescale smaller than the peak collapse, \(t_{0}\), the equilibrium configuration of the peak is a cusp but with a flatter slope \(\rho\propto r^{-1.52}\), for the classical form of the specific angular momentum, \(J\propto M^{2/3}\). The previous result indicates how angular momentum can reduce the discrepancy between the slope of the density profile derived in [1] and that obtained in high-resolution numerical simulations, namely \(\rho\propto r^{\alpha}\) (\(\alpha\simeq-1.5\)). The reason why the angular momentum flattens the inner density profile is qualitatively justified by the fact that, in the case of a collapse with pure radial orbits, as in [1], outer particles dominate the inner part of the profile, and this gives rise to cuspier density profiles. If nonzero angular momentum is present, the particles' orbits are closer to the maximum radius, with the consequence that a flatter profile is obtained. In other terms, particles with larger angular momentum are prevented from coming close to the halo's center, and thus from contributing to the central density. Consequently, the density profile is flattened.
###### Acknowledgements.
The authors would like to gratefully acknowledge Prof. Giovanni Russo from the mathematical department of Catania University for helping to advance some of the calculations.
|
2308.16500 | Images of Multilinear Polynomials on Generalized Quaternion Algebras | The main goal of this paper is to extend [J. Algebra Appl. 20 (2021),
2150074] to generalized quaternion algebras, even when these algebras are not
necessarily division rings. More precisely, in such cases, the image of a
multilinear polynomial evaluated on a quaternion algebra is a vector space and
we additionally provide a classification of possible images. | Peter Vassilev Danchev, Truong Huu Dung, Tran Nam Son | 2023-08-31T07:12:27Z | http://arxiv.org/abs/2308.16500v2 | # Images of multilinear polynomials
###### Abstract.
The main goal of this paper is to extend [J. Algebra Appl. 20 (2021), 2150074] to generalized quaternion algebras, even when these algebras are not necessarily division rings. More precisely, in such cases, the image of a multilinear polynomial evaluated on a quaternion algebra is a vector space and we additionally provide a classification of possible images.
Key words and phrases: Quaternion algebras, multilinear polynomial evaluations, Kaplansky conjecture.
2020 _Mathematics Subject Classification_: 14A22; 16H05; 16K20.
* Corresponding author: Peter V. Danchev
Throughout, let \(F\) be a field and let \(p\in F\langle X\rangle\) be a multilinear polynomial. Recall that a quaternion algebra over \(F\) which is not a division ring is isomorphic to the matrix algebra \(A=\operatorname{M}_{2}(F)\); by the result of [15], the image \(p(A)\) is then either \(\{0\}\), or \(F1\)
(the set of scalar matrices), or \(p(A)\) contains the set \(\operatorname{sl}_{2}(F)\) of matrices whose traces are zero. In particular, if \(F\) is the field of real numbers, then \(p(A)\in\{\{0\},F1,\operatorname{sl}_{2}(F),\operatorname{M}_{2}(F)\}\), which is the same result if \(F\) is a quadratically closed field (see [11]). It is well known by [1; 29] that \(\operatorname{sl}_{2}(F)\) coincides with \(s_{2}(\operatorname{M}_{2}(F))\), where \(s_{2}=x_{1}x_{2}-x_{2}x_{1}\in F\langle x_{1},x_{2}\rangle\) is the polynomial of degree two which is frequently called the _standard_ polynomial of degree two. On the other hand, \(F1\) is itself the center of \(\operatorname{M}_{2}(F)\), which is frequently denoted by \(Z(\operatorname{M}_{2}(F))\). Hence, by further using the result of [15], we obtain the following result:
**Proposition 2.1**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p\in F\langle X\rangle\) be multilinear._
1. _If_ \(\mathbb{H}_{F}\) _is not a division ring, then_ \(p(\mathbb{H}_{F})\) _is either_ \(\{0\}\)_, or_ \(F\)_, or_ \(p(\mathbb{H}_{F})\) _contains_ \(s_{2}(\mathbb{H}_{F})\)_. In particular, if_ \(F\) _is a quadratically closed field, then_ \(p(\mathbb{H}_{F})\in\{\{0\},F,s_{2}(\mathbb{H}_{F}),\mathbb{H}_{F}\}\)_._
2. _If_ \(F\) _is the field of real numbers, then_ \(p(\mathbb{H}_{F})\in\{\{0\},F,s_{2}(\mathbb{H}_{F}),\mathbb{H}_{F}\}\)_._
The case that \(\mathbb{H}_{F}\) is a division quaternion algebra over a field \(F\) will be discussed right now. We first consider multilinear polynomials of small degree. From the technique and strategy of [7] and [21], the following results follow immediately, so their proofs will be skipped.
**Proposition 2.2**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p\in F\langle X\rangle\) be multilinear._
1. _If_ \(p\in F\langle x_{1},x_{2}\rangle\)_, then_ \(p(\mathbb{H}_{F})\in\{\{0\},s_{2}(\mathbb{H}_{F}),\mathbb{H}_{F}\}\)_._
2. _If_ \(p\in F\langle x_{1},x_{2},x_{3}\rangle\)_, then either_ \(p(\mathbb{H}_{F})\in\{\{0\},s_{2}(\mathbb{H}_{F}),\mathbb{H}_{F}\}\) _or_ \(p\) _has the form:_ \(p=\lambda_{1}s_{2}(x_{1},s_{2}(x_{3},x_{2}))+\lambda_{2}s_{2}(x_{3},s_{2}(x_{ 1},x_{2}))\) _where_ \(\lambda_{1},\lambda_{2}\in F\)_._
3. _If_ \(p\in F\langle x_{1},x_{2},x_{3},x_{4}\rangle\)_, then_ \(p\) _has the form:_ \[p = \lambda_{1}s_{2}(s_{2}(s_{2}(x_{2},x_{1}),x_{3}),x_{4})+\lambda_ {2}s_{2}(s_{2}(s_{2}(x_{3},x_{1}),x_{2}),x_{4})\] \[+ \lambda_{3}s_{2}(s_{2}(s_{2}(x_{4},x_{1}),x_{2}),x_{3})+\lambda_ {4}s_{2}(x_{1},x_{2})s_{2}(x_{3},x_{4})\] \[+ \lambda_{5}s_{2}(x_{1},x_{3})s_{2}(x_{2},x_{4})+\lambda_{6}s_{2} (x_{1},x_{4})s_{2}(x_{2},x_{3})\] \[+ \lambda_{7}s_{2}(x_{2},x_{3})s_{2}(x_{1},x_{4})+\lambda_{8}s_{2} (x_{2},x_{4})s_{2}(x_{1},x_{3})\] \[+ \lambda_{9}s_{2}(x_{3},x_{4})s_{2}(x_{1},x_{2}).\]
Note that if \(p=\lambda_{1}s_{2}(x_{1},s_{2}(x_{3},x_{2}))+\lambda_{2}s_{2}(x_{3},s_{2}(x_{1},x_{2}))\) and the characteristic of \(F\) is \(0\), then by using [23, Theorem 3.4], it follows that \(p(\mathbb{H}_{F})\) contains elements of reduced trace \(0\). From these observations, the natural next step is to study \(s_{2}(\mathbb{H}_{F})\). On the other hand, if \(p\in F\langle X\rangle\) is multilinear, then
\[p(x_{1},\ldots,x_{m})=\sum_{\sigma\in S_{m}}\lambda_{\sigma}x_{\sigma(1)} \cdots x_{\sigma(m)}\]
in which \(S_{m}\) is the symmetric group in \(m\) letters and the coefficients \(\lambda_{\sigma}\) are constants in \(F\). From [10, Lemma 3.3], it follows that
1. If \(\sum_{\sigma\in S_{m}}\lambda_{\sigma}\neq 0\), then \(p(\mathbb{H}_{F})=\mathbb{H}_{F}\).
2. \(\sum_{\sigma\in S_{m}}\lambda_{\sigma}=0\) if and only if \(p\) belongs to the T-ideal \(\langle s_{2}(x_{1},x_{2})\rangle^{T}\) generated by the polynomial \(s_{2}\).
Hence, the investigation of \(s_{2}(\mathbb{H}_{F})\) plays an important role.
To do this, we need a series of technical claims, given as follows. Recall that a Pythagorean field is a field in which every sum of two squares is a square; see [26] for more details.
**Lemma 2.3**.: _Let \(F\) be a field. If \(a,b,c\) belong to \(F\), then there exist \(x_{1},x_{2},x_{3}\in F\) such that_
\[\begin{cases}x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1,\\ ax_{1}+bx_{2}+cx_{3}=0\end{cases}\]
_unless \(a^{2}+b^{2}\neq 0\) and the characteristic of \(F\) is not \(2\). In that remaining case, the statement is still valid provided \(F\) is a Pythagorean field._
Proof.: Let \(a,b,c\in F\). If \(a=b=c=0\), then we can choose \(x_{1}=1\) and \(x_{2}=x_{3}=0\). Otherwise, without loss of generality, we can assume \(a\neq 0\). We divide the proof into the following cases:
**Case 1.**\(a^{2}+b^{2}=0\). Then, since \(a\) is nonzero, \(b\) must be nonzero. If \(c=0\), then we choose \(x_{1}=\frac{-b}{a}\) and \(x_{2}=x_{3}=1\). Now, let us examine the case when \(c\neq 0\). If the characteristic of \(F\) is not \(2\), then we can choose \(x_{1}=\frac{-c}{2a},x_{2}=\frac{-c}{2b}\), and \(x_{3}=1\). Next, we consider the characteristic of \(F\) being \(2\). Within this subcase, since \(a^{2}+b^{2}=0\), the sum \(a+b\) of \(a\) and \(b\) must be \(0\), that is \(a=b\). If there exist \(x_{1},x_{2}\), and \(x_{3}\) in \(F\) satisfying \(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1\) and \(ax_{1}+bx_{2}+cx_{3}=0\), then \(x_{1}+x_{2}+x_{3}=1\) and \(a+c\neq 0\) implying \(x_{3}=\frac{a}{a+c}\) and \(x_{1}+x_{2}=\frac{c}{a+c}\), and thus we can choose \(x_{1}=0\) and \(x_{2}=\frac{c}{a+c}\).
**Case 2.**\(a^{2}+b^{2}\neq 0\). If the characteristic of \(F\) is \(2\), then \(a+b\neq 0\) and we can choose \(x_{1}=\frac{-b}{a+b},x_{2}=\frac{a}{a+b}\), and \(x_{3}=0\). Continuing ahead, we focus on the final subcase, where the characteristic of \(F\) is not \(2\). Moreover, within this subcase, we make an additional assumption that \(F\) is a Pythagorean field. Then, there exists \(d\in F\) such that \(d^{2}=a^{2}+b^{2}\), so we can choose \(x_{1}=\frac{-b}{d},x_{2}=\frac{a}{d}\), and \(x_{3}=0\)
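The constructive choices in the above proof can be checked numerically. The sketch below does so over the reals, which form a Pythagorean field of characteristic \(0\) (Case 2 with \(a\neq 0\)); the random test harness is our own addition.

```python
import math, random

for _ in range(1000):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    if abs(a) < 1e-6:
        continue                      # the proof assumes a is nonzero
    d = math.hypot(a, b)              # d^2 = a^2 + b^2 (R is Pythagorean)
    x1, x2, x3 = -b / d, a / d, 0.0
    assert abs(x1**2 + x2**2 + x3**2 - 1) < 1e-9
    assert abs(a*x1 + b*x2 + c*x3) < 1e-9
```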
**Lemma 2.4**.: _Let \(F\) be a field. If \(a,b,c\) belong to \(F\), then there exist \(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\in F\) such that_
\[\begin{cases}x_{2}x_{6}-x_{3}x_{5}=a,\\ x_{3}x_{4}-x_{1}x_{6}=b,\\ x_{1}x_{5}-x_{2}x_{4}=c,\end{cases}\]
_except when \(a^{2}+b^{2}\neq 0\) and the characteristic of \(F\) is not \(2\). In that remaining case, the statement is still valid provided \(F\) is a Pythagorean field._
Proof.: Let \(a,b,c\in F\). Using Lemma 2.3, there exist \(x_{1},x_{2},x_{3}\in F\) such that \(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1\) and \(ax_{1}+bx_{2}+cx_{3}=0\) unless \(a^{2}+b^{2}\neq 0\) and the characteristic of \(F\) is not \(2\). In case that \(a^{2}+b^{2}\neq 0\) and the characteristic of \(F\) is not \(2\), we need to add the assumption that \(F\) is a Pythagorean field. The proof is completed by choosing \(x_{4}=bx_{3}-x_{2}c,x_{5}=x_{1}c-ax_{3}\), and \(x_{6}=ax_{2}-x_{1}b\).
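Again the construction can be verified numerically over the reals: the choice \((x_{4},x_{5},x_{6})=(bx_{3}-cx_{2},\,cx_{1}-ax_{3},\,ax_{2}-bx_{1})\) from the proof is checked below against the three target equations. The helper `lemma24_solution` and the random test data are our own (assuming \(a\neq 0\), as in Case 2 of Lemma 2.3).

```python
import math, random

def lemma24_solution(a, b, c):
    d = math.hypot(a, b)                         # Lemma 2.3, Case 2 over R
    x1, x2, x3 = -b / d, a / d, 0.0
    x4, x5, x6 = b*x3 - x2*c, x1*c - a*x3, a*x2 - x1*b
    return x1, x2, x3, x4, x5, x6

for _ in range(1000):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    if abs(a) < 1e-6:
        continue
    x1, x2, x3, x4, x5, x6 = lemma24_solution(a, b, c)
    assert abs(x2*x6 - x3*x5 - a) < 1e-9
    assert abs(x3*x4 - x1*x6 - b) < 1e-9
    assert abs(x1*x5 - x2*x4 - c) < 1e-9
```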
We are now in a position to prove the following theorem for standard polynomials of degree \(2\).
**Theorem 2.5**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\)._
1. _If the characteristic of_ \(F\) _is not_ \(2\)_, then_ \(s_{2}(\mathbb{H}_{F})\) _is contained in_ \[\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}.\] _In particular, if_ \(F\) _is a Pythagorean field, then_ \[s_{2}(\mathbb{H}_{F})=\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F \}=s_{2}(s_{2}(\mathbb{H}_{F})).\] _Here_ \(s_{2}(s_{2}(\mathbb{H}_{F}))=\{s_{2}(a,b)\mid a,b\in s_{2}(\mathbb{H}_{F})\}\)_._
2. _If the characteristic of_ \(F\) _is_ \(2\)_, then_ \[s_{2}(\mathbb{H}_{F})=\{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F \}=s_{2}(s_{2}(\mathbb{H}_{F})).\]
_In particular, \(s_{2}(\mathbb{H}_{F})\) is a vector space over \(F\) in either of the following cases:_
1. _The characteristic of_ \(F\) _is not_ \(2\) _and_ \(F\) _is a Pythagorean field._
2. _The characteristic of_ \(F\) _is_ \(2\)_._
Proof.: Let \(\alpha,\beta\in\mathbb{H}_{F}\), namely \(\alpha=\alpha_{1}+\alpha_{2}i+\alpha_{3}j+\alpha_{4}k\) and \(\beta=\beta_{1}+\beta_{2}i+\beta_{3}j+\beta_{4}k\) for some \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\beta_{1},\beta_{2},\beta_{3},\beta_{4}\in F\). Then,
since the multilinearity of \(s_{2}\), direct calculation shows that
\[s_{2}(\alpha,\beta)\] \[= s_{2}(\alpha_{1}+\alpha_{2}i+\alpha_{3}j+\alpha_{4}k,\beta_{1}+\beta_{2}i+\beta_{3}j+\beta_{4}k)\] \[= \alpha_{1}\beta_{1}s_{2}(1,1)+\alpha_{1}\beta_{2}s_{2}(1,i)+\alpha_{1}\beta_{3}s_{2}(1,j)+\alpha_{1}\beta_{4}s_{2}(1,k)\] \[+ \alpha_{2}\beta_{1}s_{2}(i,1)+\alpha_{2}\beta_{2}s_{2}(i,i)+\alpha_{2}\beta_{3}s_{2}(i,j)+\alpha_{2}\beta_{4}s_{2}(i,k)\] \[+ \alpha_{3}\beta_{1}s_{2}(j,1)+\alpha_{3}\beta_{2}s_{2}(j,i)+\alpha_{3}\beta_{3}s_{2}(j,j)+\alpha_{3}\beta_{4}s_{2}(j,k)\] \[+ \alpha_{4}\beta_{1}s_{2}(k,1)+\alpha_{4}\beta_{2}s_{2}(k,i)+\alpha_{4}\beta_{3}s_{2}(k,j)+\alpha_{4}\beta_{4}s_{2}(k,k)\] \[= \alpha_{2}\beta_{3}s_{2}(i,j)+\alpha_{2}\beta_{4}s_{2}(i,k)+\alpha_{3}\beta_{2}s_{2}(j,i)+\alpha_{3}\beta_{4}s_{2}(j,k)\] \[+ \alpha_{4}\beta_{2}s_{2}(k,i)+\alpha_{4}\beta_{3}s_{2}(k,j)\]
We divide the proof into two cases as follows:
**Case 1.** The characteristic of \(F\) is not \(2\). Observe that \(s_{2}(i,k)=-2j,s_{2}(k,i)=2j,s_{2}(j,i)=-2k,s_{2}(i,j)=2k,s_{2}(j,k)=2i,s_{2}(k,j)=-2i.\) Hence,
\[s_{2}(\alpha,\beta)=2(\alpha_{3}\beta_{4}-\alpha_{4}\beta_{3})i+2(\alpha_{4} \beta_{2}-\alpha_{2}\beta_{4})j+2(\alpha_{2}\beta_{3}-\alpha_{3}\beta_{2})k.\]
This means that \(s_{2}(\mathbb{H}_{F})\) is contained in \(\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\).
Now, we add the assumption that \(F\) is a Pythagorean field. Let \(x=\alpha i+\beta j+\gamma k\) where \(\alpha,\beta,\gamma\in F\). Using Lemma 2.4, there exist \(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\in F\) such that
\[\begin{cases}x_{2}x_{6}-x_{3}x_{5}=\frac{\alpha}{2},\\ x_{3}x_{4}-x_{1}x_{6}=\frac{\beta}{2},\\ x_{1}x_{5}-x_{2}x_{4}=\frac{\gamma}{2},\end{cases}\]
so \(x=s_{2}\left(x_{1}i+x_{2}j+x_{3}k,x_{4}i+x_{5}j+x_{6}k\right)\in s_{2}(\mathbb{ H}_{F})\). Hence, \(s_{2}(\mathbb{H}_{F})\) contains \(\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\). Therefore,
\[s_{2}(\mathbb{H}_{F})=\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}.\]
On the other hand, it is obvious that \(s_{2}(s_{2}(\mathbb{H}_{F}))\) is contained in \(s_{2}(\mathbb{H}_{F})\). From the above proof, if \(x=\alpha i+\beta j+\gamma k\in s_{2}(\mathbb{H}_{F})\) where \(\alpha,\beta,\gamma\in F\), then \(x=s_{2}\left(x_{1}i+x_{2}j+x_{3}k,x_{4}i+x_{5}j+x_{6}k\right)\in s_{2}(s_{2}( \mathbb{H}_{F}))\) for some \(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\in F\). Hence, \(s_{2}(\mathbb{H}_{F})=s_{2}(s_{2}(\mathbb{H}_{F}))\).
**Case 2.** The characteristic of \(F\) is \(2\). Similarly, observe that \(s_{2}(i,k)=k,s_{2}(k,i)=-k,s_{2}(j,i)=-j,s_{2}(i,j)=j,s_{2}(j,k)=-j^{2},s_{2}(k,j)=j^{2}\). Therefore,
\[s_{2}(\alpha,\beta)=(\alpha_{4}\beta_{3}-\alpha_{3}\beta_{4})j^{2}+(\alpha_{2 }\beta_{3}-\alpha_{3}\beta_{2})j+(\alpha_{2}\beta_{4}-\alpha_{4}\beta_{2})k.\]
We deduce that \(\{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\) contains \(s_{2}(\mathbb{H}_{F})\). On the other hand, repeating the similar arguments in Case 1, if
\(x=\alpha j^{2}+\beta j+\gamma k\) where \(\alpha,\beta,\gamma\in F\), then by Lemma 2.4, there exist \(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\in F\) such that
\[\begin{cases}x_{2}x_{6}-x_{3}x_{5}=-\alpha,\\ x_{3}x_{4}-x_{1}x_{6}=-\gamma,\\ x_{1}x_{5}-x_{2}x_{4}=\beta,\end{cases}\]
so \(x=s_{2}(x_{1}i+x_{2}j+x_{3}k,x_{4}i+x_{5}j+x_{6}k)\in s_{2}(\mathbb{H}_{F})\). This means that \(s_{2}(\mathbb{H}_{F})\) contains \(\{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\). Therefore, we can conclude that \(s_{2}(\mathbb{H}_{F})=\{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\). Again, repeating the arguments in Case 1, we conclude that \(s_{2}(\mathbb{H}_{F})=s_{2}(s_{2}(\mathbb{H}_{F}))\).
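For Hamilton's quaternions \(\mathbb{H}(1,1)_{\mathbb{R}}\), the computation in Case 1 reduces to a cross-product identity, which can be illustrated numerically: the commutator of two quaternions with imaginary parts \(u,v\) is \(2(u\times v)\cdot(i,j,k)\), and conversely every pure quaternion arises this way. The code below is our illustration only, not part of the proof.

```python
import numpy as np

def quat_mul(p, q):                     # Hamilton product, p = (w, x, y, z)
    w1, v1 = p[0], p[1:]
    w2, v2 = q[0], q[1:]
    return np.concatenate([[w1*w2 - v1 @ v2],
                           w1*v2 + w2*v1 + np.cross(v1, v2)])

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
comm = quat_mul(x, y) - quat_mul(y, x)
assert abs(comm[0]) < 1e-12                    # commutators are pure
assert np.allclose(comm[1:], 2 * np.cross(x[1:], y[1:]))

# Conversely, realize a given pure quaternion alpha*i + beta*j + gamma*k.
target = np.array([0.7, -1.3, 2.1])
w = target / 2
u = np.cross(rng.normal(size=3), w)            # some vector orthogonal to w
v = np.cross(w, u) / (u @ u)                   # then u x v = w exactly
p = np.concatenate([[0.0], u])
q = np.concatenate([[0.0], v])
assert np.allclose((quat_mul(p, q) - quat_mul(q, p))[1:], target)
```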
From the investigation of standard polynomials of degree \(2\), we can easily extract the following non-trivial generalization for multilinear polynomials of degree \(2\).
**Theorem 2.6**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p\in F\langle x_{1},x_{2}\rangle\) be multilinear._
1. _If the characteristic of_ \(F\) _is not_ \(2\)_, then_ \(p(\mathbb{H}_{F})\) _is contained in_ \(\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\) _unless either_ \(p(\mathbb{H}_{F})=\{0\}\) _or_ \(p(\mathbb{H}_{F})=\mathbb{H}_{F}\)_. In particular, if_ \(F\) _is a Pythagorean field, then_ \[p(\mathbb{H}_{F})\in\{\{0\},\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma \in F\},\mathbb{H}_{F}\}.\]
2. _If the characteristic of_ \(F\) _is_ \(2\)_, then_ \[p(\mathbb{H}_{F})\in\{\{0\},\{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta, \gamma\in F\},\mathbb{H}_{F}\}.\]
_In particular, \(p(\mathbb{H}_{F})\) is a vector space over \(F\) in either of the following cases:_
1. _The characteristic of_ \(F\) _is not_ \(2\) _and_ \(F\) _is a Pythagorean field._
2. _The characteristic of_ \(F\) _is_ \(2\)_._
In case of multilinear polynomials of degree three and four, we also obtain a description as follows.
**Theorem 2.7**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p\in F\langle X\rangle\) be multilinear. If the degree of \(p\) is either three or four, then one of the following cases occurs:_
1. \(p(\mathbb{H}_{F})=\{0\}\)_._
2. \(p(\mathbb{H}_{F})=\mathbb{H}_{F}\)_._
3. _If the characteristic of_ \(F\) _is not_ \(2\)_, then_ \(p(\mathbb{H}_{F})\) _is contained in_ \(\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\)_._
4. _If the characteristic of_ \(F\) _is_ \(2\)_, then_ \(\{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\) _contains_ \(p(\mathbb{H}_{F})\)_._
On the other hand, from Theorem 2.5, the images of the multilinear polynomials defined as follows can be determined immediately. Let \(m=2^{k}\) for some positive integer \(k\). We define a polynomial \(v_{k}(x_{1},\ldots,x_{2^{k}})\in F\langle X\rangle\) successively as follows: set \(v_{1}(x_{1},x_{2})=x_{1}x_{2}-x_{2}x_{1}\), assume that \(v_{k-1}(x_{1},\ldots,x_{2^{k-1}})\) is defined, and then put
\[v_{k}(x_{1},\ldots,x_{2^{k}})=v_{1}(v_{k-1}(x_{1},\ldots,x_{2^{k-1}}),v_{k-1} (x_{2^{k-1}+1},\ldots,x_{2^{k}})).\]
This polynomial relates to the solvability of a Lie algebra.
**Theorem 2.8**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\). For each positive integer \(k\), the image of \(v_{k}\) evaluated on \(\mathbb{H}_{F}\) coincides with \(s_{2}(\mathbb{H}_{F})\) in cases of the following:_
1. _The characteristic of_ \(F\) _is not_ \(2\) _and_ \(F\) _is a Pythagorean field._
2. _The characteristic of_ \(F\) _is_ \(2\)_._
_In particular, in both cases, \(v_{k}(\mathbb{H}_{F})\) is a vector space over \(F\) and_
\[s_{2}(s_{2}(\mathbb{H}_{F}))=s_{2}(\mathbb{H}_{F})=v_{1}(\mathbb{H}_{F})=v_{2 }(\mathbb{H}_{F})=\cdots=v_{k}(\mathbb{H}_{F})=\cdots.\]
Proof.: We prove the theorem by induction on \(k\). The statement is trivial in the case \(k=1\) by using Theorem 2.5. Assume that \(k=2\). According to Theorem 2.5, it follows that \(s_{2}(\mathbb{H}_{F})=s_{2}(s_{2}(\mathbb{H}_{F}))\), so if \(a\in s_{2}(\mathbb{H}_{F})\), then
\[a = s_{2}(b_{1},b_{2})\] \[= s_{2}(s_{2}(a_{1},a_{2}),s_{2}(a_{3},a_{4}))\] \[= v_{1}(v_{1}(a_{1},a_{2}),v_{1}(a_{3},a_{4}))\] \[= v_{2}(a_{1},a_{2},a_{3},a_{4})\in v_{2}(\mathbb{H}_{F})\]
for some \(b_{1},b_{2},a_{1},a_{2},a_{3},a_{4}\in s_{2}(\mathbb{H}_{F})\) in which \(b_{1}=s_{2}(a_{1},a_{2})\) and \(b_{2}=s_{2}(a_{3},a_{4})\). Hence, \(v_{2}(\mathbb{H}_{F})=v_{1}(\mathbb{H}_{F})=s_{2}(\mathbb{H}_{F})=s_{2}(s_{2 }(\mathbb{H}_{F}))\). Assume that the statement is true for \(k-1\). Now, by the induction hypothesis, if \(a\in s_{2}(\mathbb{H}_{F})\), then
\[a = s_{2}(b_{1},b_{2})\] \[= v_{1}(b_{1},b_{2})\] \[= v_{1}(v_{k-1}(a_{1},\ldots,a_{2^{k-1}}),v_{k-1}(a_{2^{k-1}+1}, \ldots,a_{2^{k}}))\] \[= v_{k}(a_{1},\ldots,a_{2^{k}})\in v_{k}(\mathbb{H}_{F})\]
for some \(b_{1},b_{2},a_{1},\ldots,a_{2^{k}}\in s_{2}(\mathbb{H}_{F})\) in which \(b_{1}=v_{k-1}(a_{1},\ldots,a_{2^{k-1}})\) and \(b_{2}=v_{k-1}(a_{2^{k-1}+1},\ldots,a_{2^{k}})\). Therefore, \(s_{2}(\mathbb{H}_{F})=v_{k}(\mathbb{H}_{F})\). This completes the proof.
In the rest of this section, as mentioned, we will extend the work of [15] to certain division quaternion algebras. Frequently, the quaternion algebra over a field \(F\) of characteristic different from \(2\) is denoted by
\(\mathbb{H}(a,b)_{F}\) when \(i^{2}=-a\) and \(j^{2}=-b\). Recall that a quadratically closed field is a field in which every element has a square root. Note that every quadratically closed field is a Pythagorean field, but not conversely; for example, the field of real numbers is Pythagorean but not quadratically closed. See [26] for more details.
We first record the following plain but useful claim.
**Lemma 2.9**.: _[_17_, Proposition 2.8]_ _Let \(\mathbb{H}(a,b)_{F}\) be a division quaternion algebra over a field \(F\) of characteristic different from \(2\). If \(\alpha=a_{0}+a_{1}i+a_{2}j+a_{3}k\) and \(\beta=b_{0}+b_{1}i+b_{2}j+b_{3}k\) belong to \(\mathbb{H}(a,b)_{F}\) where \(a_{0},a_{1},a_{2},a_{3},b_{0},b_{1},b_{2},b_{3}\in F\), then the linear equation \(\alpha x=x\beta\) has a nonzero solution \(x\in\mathbb{H}(a,b)_{F}\) if and only if \(a_{0}=b_{0}\) and \(aa_{1}^{2}+ba_{2}^{2}+aba_{3}^{2}=ab_{1}^{2}+bb_{2}^{2}+abb_{3}^{2}\)._
With Lemma 2.9 at hand, we obtain the following useful result.
**Lemma 2.10**.: _Let \(\mathbb{H}(a,b)_{F}\) be a division quaternion algebra over a field \(F\) of characteristic different from \(2\). If \(\alpha=a_{0}+a_{1}i+a_{2}j+a_{3}k\in\mathbb{H}_{F}\) where \(a_{0},a_{1},a_{2},a_{3}\in F\), then \(x^{-1}\alpha x=a_{0}+ri\) for some nonzero \(x\in\mathbb{H}_{F}\) and \(r\in F\) in either of the following cases:_
1. \(F\) _is a quadratically closed field._
2. \(F\) _is a Pythagorean field and_ \(a=b=1\)_._
Proof.: Let \(\alpha=a_{0}+a_{1}i+a_{2}j+a_{3}k\in\mathbb{H}_{F}\) where \(a_{0},a_{1},a_{2},a_{3}\in F\). If \(F\) is a quadratically closed field, there is \(r\in F\) such that \(r^{2}=a_{1}^{2}+\frac{b}{a}a_{2}^{2}+ba_{3}^{2}\); if \(F\) is a Pythagorean field and \(a=b=1\), such an \(r\) exists as well, since \(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\) is a sum of squares. Applying Lemma 2.9 to \(\alpha\) and \(\beta=a_{0}+ri\), there exists a nonzero \(x\in\mathbb{H}(a,b)_{F}\) such that \(x^{-1}\alpha x=a_{0}+ri\).
We also need to understand the values of multilinear polynomials evaluated on the basis quaternions \(1,i,j,k\).
**Lemma 2.11**.: _Let \(\mathbb{H}_{F}\) be a division quaternion algebra over a field \(F\) and let \(p\in F\langle X\rangle\) be multilinear. If \(a_{1},\ldots,a_{m}\in\{1,i,j,k\}\), then \(p(a_{1},\ldots,a_{m})=c\cdot q\) for some \(c\in F\) and \(q\in\{1,i,j,k\}\)._
Proof.: Let \(a_{1},a_{2},\ldots,a_{m}\in\{1,i,j,k\}\). If \(m=2\), then \(a_{2}a_{1}=\lambda a_{1}a_{2}\) for some \(\lambda\in F\). Proceeding by induction on \(m\), we can conclude that there exists \(\lambda\in F\) such that \(a_{\sigma(1)}a_{\sigma(2)}\cdots a_{\sigma(m)}=\lambda a_{1}a_{2}\cdots a_{m}\) for every \(\sigma\in S_{m}\). Therefore, \(p(a_{1},\ldots,a_{m})=c\cdot q\) for some \(c\in F\) and \(q\in\{1,i,j,k\}\).
We now establish the following lemma, which is needed for the final main result of this section.
**Lemma 2.12**.: _Let \(\mathbb{H}(a,b)_{F}\) be a division quaternion algebra over a field \(F\) of characteristic different from \(2\) and let \(p\in F\langle X\rangle\) be multilinear
and non-central. Then, \(s_{2}(\mathbb{H}_{F})\) is contained in \(p(\mathbb{H}_{F})\) in either of the following cases:_
1. \(F\) _is a quadratically closed field._
2. \(F\) _is a Pythagorean field and_ \(a=b=1\)_._
Proof.: Let \(x\in s_{2}(\mathbb{H}_{F})\). Using Theorem 2.5 and Lemma 2.10, \(g^{-1}xg=ri\) for some nonzero \(g\in\mathbb{H}_{F}\) and some \(r\in F\). According to Lemma 2.11 and the fact that \(p\) is non-central, there exists \(y\in p(\mathbb{H}_{F})\) of the form \(r^{\prime}x^{\prime}\), where \(x^{\prime}\in\{i,j,k\}\) and \(r^{\prime}\in F\) is nonzero. Applying Lemma 2.10 again, without loss of generality, we can assume \(x^{\prime}=i\). On the other hand, let us write \(y=p(z_{1},z_{2},\ldots,z_{m})\) for some \(z_{1},z_{2},\ldots,z_{m}\in\mathbb{H}_{F}\), which implies that
\[x=p(grr^{\prime-1}z_{1}g^{-1},gz_{2}g^{-1},\ldots,gz_{m}g^{-1})\in p(\mathbb{ H}_{F}).\]
Therefore, \(s_{2}(\mathbb{H}_{F})\) is contained in \(p(\mathbb{H}_{F})\).
We can now combine all of the above lemmas and prove the following statement. The idea of the proof is similar to that of [15, Theorem 1].
**Theorem 2.13**.: _Let \(\mathbb{H}(a,b)_{F}\) be a division quaternion algebra over a field \(F\) of characteristic different from \(2\) and let \(p\in F\langle X\rangle\) be multilinear. Then, \(p(\mathbb{H}_{F})\in\{\{0\},F,s_{2}(\mathbb{H}_{F}),\mathbb{H}_{F}\}\) in either of the following cases:_
1. \(F\) _is a quadratically closed field._
2. \(F\) _is a Pythagorean field and_ \(a=b=1\)_._
Proof.: Lemma 2.11 yields that there are four possible cases:
**Case 1:**: \(p(\mathbb{H}_{F})=\{0\}\).
**Case 2:**: \(p(\mathbb{H}_{F})\) is contained in \(F\).
**Case 3:**: \(p(\mathbb{H}_{F})\) is contained in \(s_{2}(\mathbb{H}_{F})\).
**Case 4:**: \(p(\mathbb{H}_{F})\) does not belong to \(\{\{0\},F,s_{2}(\mathbb{H}_{F})\}\).
Regarding Case 2, we immediately deduce that \(p(\mathbb{H}_{F})=F\). Regarding Case 3, using Lemma 2.12, we conclude that \(p(\mathbb{H}_{F})=s_{2}(\mathbb{H}_{F})\). Now, let us consider Case 4: then there exist \(a_{1},\ldots,a_{m}\in\mathbb{H}_{F}\) such that \(p(a_{1},\ldots,a_{m})\) is a nonzero element of \(F\). Put
\[A_{1}=p(a_{1},a_{2},\ldots,a_{m})\]
which can be viewed as a constant polynomial taking a single nonzero value, and for each \(i\in\{2,\ldots,m+1\}\), let
\[A_{i}=p(x_{1},\ldots,x_{i-1},a_{i},\ldots,a_{m})\]
be a polynomial in \(F\langle x_{1},\ldots,x_{i-1}\rangle\). Then, \(A_{i}(\mathbb{H}_{F})\) is contained in \(A_{i+1}(\mathbb{H}_{F})\) for all \(i\in\{1,\ldots,m\}\) and \(A_{m+1}(\mathbb{H}_{F})=p(\mathbb{H}_{F})\). Hence,
there exists \(i\in\{1,\ldots,m\}\) such that \(A_{i}(\mathbb{H}_{F})\) is contained in \(F\) and \(A_{i+1}(\mathbb{H}_{F})\) is not contained in \(F\), so there exist \(r_{1},r_{2},\ldots,r_{m},r_{i}^{*}\in\mathbb{H}_{F}\) such that \(p(r_{1},\ldots,r_{m})\) is a nonzero element of \(F\) and
\[p(r_{1},\ldots,r_{i-1},r_{i}^{*},r_{i+1},\ldots,r_{m})\]
which is not in \(F\). Let us set \(\alpha=p(r_{1},\ldots,r_{m})\) and
\[\beta=p(r_{1},\ldots,r_{i-1},r_{i}^{*},r_{i+1},\ldots,r_{m}).\]
Using Lemma 2.10, \(g^{-1}\beta g=a^{\prime}+r^{\prime}i\) for some \(a^{\prime},r^{\prime}\in F\). Since \(\beta\) is not in \(F\), the element \(r^{\prime}\) must be nonzero. Now, let \(x\in\mathbb{H}_{F}\). By Lemma 2.10 again, \(hxh^{-1}=a+ri\) for some \(a,r\in F\) and some nonzero \(h\in\mathbb{H}_{F}\). If \(a=0\), then \(x\in s_{2}(\mathbb{H}_{F})\subseteq p(\mathbb{H}_{F})\) by Lemma 2.12. Now, we assume \(a\neq 0\). By putting \(r_{i}^{**}=r_{i}^{*}-a^{\prime}\alpha^{-1}r_{i}\), direct calculation shows that
\[a=g^{-1}ag=p(g^{-1}r_{1}g,\ldots,g^{-1}r_{i-1}g,g^{-1}a\alpha^{-1}r_{i}g,g^{-1 }r_{i+1}g,\ldots,g^{-1}r_{m}g)\]
and
\[ri=p(rg^{-1}r_{1}g,\ldots,rg^{-1}r_{i-1}g,rr^{\prime-1}g^{-1}r_{i}^{**}g,rg^{-1 }r_{i+1}g,\ldots,rg^{-1}r_{m}g).\]
By putting
\[r_{i}^{***}=h^{-1}(g^{-1}a\alpha^{-1}r_{i}g+rr^{\prime-1}g^{-1}r_{i}^{**}g)h\]
and \(r_{j}^{*}=h^{-1}rg^{-1}r_{j}gh\) for all \(j\in\{1,\ldots,m\}\) and \(j\neq i\), we can deduce that
\[x=p(r_{1}^{*},\ldots,r_{i-1}^{*},r_{i}^{***},r_{i+1}^{*},\ldots,r_{m}^{*})\in p (\mathbb{H}_{F}).\]
Therefore, within Case 4, we conclude that \(p(\mathbb{H}_{F})=\mathbb{H}_{F}\).
For quaternion algebras which are not division rings, we refer to Proposition 2.1.
## 3. Products of images of multilinear polynomials
The classical Waring's problem, proposed by Edward Waring in 1770 and solved by David Hilbert in 1909, asks whether for every positive integer \(k\) there exists a positive integer \(g(k)\) such that every positive integer can be expressed as a sum of \(g(k)\) \(k\)th powers of nonnegative integers. Various extensions and variations of this problem have been studied (see [18, 19, 20, 22, 28]). In 2020, Brešar initiated the study of various Waring-type problems for matrix algebras (see [4]); this line of research was recently developed further by Brešar and Šemrl in [5, 6].
In this section, we focus on a Waring-type problem which asks whether every element of a quaternion algebra can be written as a product of images of multilinear polynomials; this study was partially initiated in [8, 9]. Recall that a polynomial \(p\in F\langle X\rangle\) is an _identity_ of an \(F\)-algebra \(A\) if \(p(A)=\{0\}\), and \(p\) is _central_ if \(p\) is not an identity of \(A\) but \(p(A)\) is contained in the center \(Z(A)\) of \(A\). A polynomial \(p\in F\langle x_{1},\cdots,x_{m}\rangle\) is called _non-central_ if \(p\) is neither an identity nor central. For convenience, for subsets \(A_{1},\ldots,A_{n}\) of an algebra \(A\), we use the symbol \(A_{1}\cdots A_{n}\) for the set consisting of all products of the form \(a_{1}\cdots a_{n}\) where \(a_{1}\in A_{1},\ldots,a_{n}\in A_{n}\). In case \(A_{1},\ldots,A_{n}\) are all equal to a subset \(B\) of \(A\), we write \(B^{n}\) instead of \(A_{1}\cdots A_{n}\).
By using Lemma 2.10, we obtain the following result.
**Theorem 3.1**.: _Let \(\mathbb{H}(a,b)_{F}\) be a division quaternion algebra over a field \(F\) of characteristic different from \(2\) and let \(p_{1},p_{2}\in F\langle X\rangle\) be multilinear and non-central. Then, \(\mathbb{H}_{F}=p_{1}(\mathbb{H}_{F})p_{2}(\mathbb{H}_{F})\) in either of the following cases:_
1. \(F\) _is a quadratically closed field._
2. \(F\) _is a Pythagorean field and_ \(a=b=1\)_._
Proof.: Let \(\alpha\in\mathbb{H}_{F}\). According to Lemma 2.10, \(h\alpha h^{-1}=x+ri\) for some \(x,r\in F\) and some nonzero \(h\in\mathbb{H}_{F}\). Write \(\mathbb{H}_{F}=\mathbb{H}(a,b)_{F}\). Then, direct calculation shows that
\[\alpha=h^{-1}jh\cdot h^{-1}(-xa^{-1}j+rk)h.\]
Using Theorem 2.5, both \(j\) and \(-xa^{-1}j+rk\) belong to \(s_{2}(\mathbb{H}_{F})\). Lemma 2.12 then yields that \(j\in p_{1}(\mathbb{H}_{F})\) and \(-xa^{-1}j+rk\in p_{2}(\mathbb{H}_{F})\); since images of polynomials are invariant under conjugation, \(h^{-1}jh\in p_{1}(\mathbb{H}_{F})\) and \(h^{-1}(-xa^{-1}j+rk)h\in p_{2}(\mathbb{H}_{F})\) as well. This completes the proof.
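For concreteness, in the Pythagorean case \(a=b=1\) (so that \(j^{2}=-1\) and \(jk=i\)), the direct calculation behind the factorization in the proof reads

\[j\cdot(-xj+rk)=-xj^{2}+rjk=x+ri,\qquad\text{and hence}\qquad h^{-1}jh\cdot h^{-1}(-xj+rk)h=h^{-1}(x+ri)h=\alpha.\]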
For quaternion algebras which are not division rings, we also obtain an analogous result, without any assumption on the characteristic of the field, as follows.
**Theorem 3.2**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p_{1},p_{2}\in F\langle X\rangle\) be multilinear and non-central. If \(\mathbb{H}_{F}\) is not a division ring, then \(\mathbb{H}_{F}=p_{1}(\mathbb{H}_{F})p_{2}(\mathbb{H}_{F})\)._
Proof.: If \(\mathbb{H}_{F}\) is not a division ring, then by using [27, Main Theorem 5.4.4 and Theorem 6.4.11], it follows that \(\mathbb{H}_{F}\) is isomorphic to the matrix ring \(\mathrm{M}_{2}(F)\) over \(F\) as an \(F\)-algebra. Then, the proof is complete by using [8, Proposition 5.3].
Such situations may be more difficult when the quaternion algebra is a division ring over a field of characteristic \(2\). Let us consider the standard polynomial of degree two. Let \(\alpha=bi+cj+dk\in\mathbb{H}_{F}\) where \(b,c,d\in F\). Using Lemma 2.4, we find \(x_{1},\ldots,x_{6}\in F\) such that
\[\begin{cases}x_{1}x_{5}-x_{2}x_{4}=d,\\ x_{3}x_{4}-x_{1}x_{6}=c,\\ x_{2}x_{6}-x_{3}x_{5}=b,\end{cases}\]
in which
\[\begin{cases}x_{4}=cx_{3}-x_{2}d,\\ x_{5}=x_{1}d-bx_{3},\\ x_{6}=bx_{2}-x_{1}c,\end{cases}\]
so \(x_{1}x_{4}+x_{2}x_{5}+x_{3}x_{6}=0\) and
\[\alpha=bi+cj+dk=(x_{1}i+x_{2}j+x_{3}k)(x_{4}i+x_{5}j+x_{6}k).\]
According to Theorem 2.5, it follows that
\[s_{2}(\mathbb{H}_{F})=\begin{cases}\{\alpha i+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}&\text{if the characteristic of }F\text{ is not }2\text{ and }F\text{ is a Pythagorean field,}\\ \{\alpha j^{2}+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}&\text{if the characteristic of }F\text{ is }2.\end{cases}\]
Therefore, if the characteristic of \(F\) is not 2 and \(F\) is a Pythagorean field, then \(s_{2}(\mathbb{H}_{F})\) is contained in \(s_{2}(\mathbb{H}_{F})s_{2}(\mathbb{H}_{F})=s_{2}(\mathbb{H}_{F})^{2}\), and thus
\[s_{2}(\mathbb{H}_{F})\subseteq s_{2}(\mathbb{H}_{F})^{2}\subseteq\cdots \subseteq s_{2}(\mathbb{H}_{F})^{k}\subseteq\cdots.\]
A natural question is whether there is a positive integer \(n\) such that \(s_{2}(\mathbb{H}_{F})^{k}=s_{2}(\mathbb{H}_{F})^{k+1}\) for all \(k\geq n\), and in particular whether \(s_{2}(\mathbb{H}_{F})^{n}=\mathbb{H}_{F}\). From Theorem 3.1 and Theorem 3.2, we conclude that \(\mathbb{H}_{F}=s_{2}(\mathbb{H}_{F})^{2}\) in either of the following cases:
1. \(\mathbb{H}_{F}\) is not a division ring.
2. The characteristic of \(F\) is not \(2\), and either \(F\) is a quadratically closed field, or \(F\) is a Pythagorean field and \(i^{2}=j^{2}=-1\).
Moreover, we can show that if \(\mathbb{H}_{F}\) is a division quaternion algebra over a field \(F\) of characteristic 2 and
\[p=s_{2}(x_{1},x_{2})+s_{2}(x_{3},x_{4})s_{2}(x_{5},x_{6})\in F\langle x_{1}, \ldots,x_{6}\rangle,\]
then \(p(\mathbb{H}_{F})=\mathbb{H}_{F}\). Indeed, let \(\alpha\in\mathbb{H}_{F}\). According to Theorem 2.5, since \(s_{2}(\mathbb{H}_{F})=s_{2}(s_{2}(\mathbb{H}_{F}))\), it follows that there exist nonzero \(y\) and \(z\) which belong to \(s_{2}(\mathbb{H}_{F})\) such that \(s_{2}(y,z)\) is nonzero, so
\[\alpha = \alpha s_{2}(y,z)^{-1}s_{2}(y,z)\] \[= s_{2}(\alpha s_{2}(y,z)^{-1}y,z)+s_{2}(z,\alpha s_{2}(y,z)^{-1}) y\in p(\mathbb{H}_{F}).\]
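Writing \(s=s_{2}(y,z)\) for brevity, the second equality can be verified by direct expansion:

\[s_{2}(\alpha s^{-1}y,z)+s_{2}(z,\alpha s^{-1})y=\left(\alpha s^{-1}yz-z\alpha s^{-1}y\right)+\left(z\alpha s^{-1}y-\alpha s^{-1}zy\right)=\alpha s^{-1}(yz-zy)=\alpha s^{-1}s=\alpha.\]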
We also obtain a similar result if
\[p=s_{2}(x_{1},x_{2})s_{2}(x_{3},x_{4})+s_{2}(x_{5},x_{6})s_{2}(x_{7},x_{8})\in F \langle x_{1},\ldots,x_{8}\rangle\]
by expressing \(s_{2}(\alpha s_{2}(y,z)^{-1}y,z)\) as \(s_{2}(\alpha s_{2}(y,z)^{-1}yz^{-1},z)z\).
## 4. Trace vanishing multilinear polynomials
Let \(R\) be a ring. As is standard, we write \(\mathrm{M}_{n}(R)\) for the ring of \(n\times n\) matrices over \(R\) and \(\mathrm{sl}_{n}(R)\) for the set of all matrices in \(\mathrm{M}_{n}(R)\) whose trace is zero. Similarly, we also allow the field of coefficients in the free algebra to be replaced by a commutative ring. According to [13], a polynomial \(p\in Z(R)\langle X\rangle\) is called _trace vanishing_ on \(\mathrm{M}_{n}(R)\) if \(p(\mathrm{M}_{n}(R))\) is contained in \(\mathrm{sl}_{n}(R)\). For example, it is not difficult to see that the standard polynomial
\[s_{m}=\sum_{\sigma\in S_{m}}\mathrm{sgn}(\sigma)x_{\sigma(1)}\cdots x_{\sigma( m)}\]
where \(\mathrm{sgn}(\sigma)\) is the sign of \(\sigma\), is trace vanishing if \(m\) is even. In 2016, A. Kanel-Belov, S. Malev, and L. Rowen described all possible images of trace vanishing polynomials for \(3\times 3\) matrices (see [12, Theorem 3, Theorem 4]). On the other hand, for multilinear polynomials in three variables, using [21, Lemma 14], if
\[p=\sum_{\sigma\in S_{3}}\lambda_{\sigma}x_{\sigma(1)}x_{\sigma(2)}x_{\sigma(3)}\]
is trace vanishing, then
\[\sum_{\sigma\in S_{3}}\lambda_{\sigma}=\sum_{\sigma\in A_{3}}\lambda_{\sigma}=0\]
where \(A_{3}\) is the alternating subgroup of \(S_{3}\). In view of [3], \(p\) is trace vanishing if and only if \(p\) is a sum of an identity of \(\mathrm{M}_{n}(R)\) and a sum of commutators in \(R\langle X\rangle\).
In this section, we focus on trace vanishing multilinear polynomials evaluated on quaternion algebras. First, we recall the _trace_ on quaternion algebras. Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\). For any \(\alpha=a_{0}+a_{1}i+a_{2}j+a_{3}k\in\mathbb{H}_{F}\) where \(a_{0},a_{1},a_{2},a_{3}\in F\), the symbol \(\mathrm{trace}(\alpha)\) denotes the _trace_ of \(\alpha\), defined as
\[\mathrm{trace}(\alpha)=\begin{cases}2a_{0}\text{ if the characteristic of $F$ is not $2$,}\\ a_{1}\text{ if the characteristic of $F$ is $2$.}\end{cases}\]
We use the symbol \(\mathbb{H}_{F}^{0}\) for the set of elements of \(\mathbb{H}_{F}\) whose trace is zero. Note that \(\mathbb{H}_{F}^{0}\) is a vector space over \(F\) and
\[\mathbb{H}_{F}^{0}=\begin{cases}\{\alpha i+\beta j+\gamma k\mid\alpha,\beta, \gamma\in F\}\text{ if the characteristic of $F$ is not $2$,}\\ \{\alpha+\beta j+\gamma k\mid\alpha,\beta,\gamma\in F\}\text{ if the characteristic of $F$ is $2$.}\end{cases}\]
From Theorem 2.5, it follows:
1. If the characteristic of \(F\) is \(2\), then \(s_{2}(\mathbb{H}_{F})=\mathbb{H}_{F}^{0}\).
2. If the characteristic of \(F\) is not \(2\), then \(s_{2}(\mathbb{H}_{F})\) is contained in \(\mathbb{H}_{F}^{0}\). In particular, if \(F\) is a Pythagorean field, then \(s_{2}(\mathbb{H}_{F})=\mathbb{H}_{F}^{0}\).
Similarly to [12], a polynomial \(p\in F\langle X\rangle\) is called _trace vanishing_ on \(\mathbb{H}_{F}\) if \(p(\mathbb{H}_{F})\) is contained in \(\mathbb{H}_{F}^{0}\).
**Proposition 4.1**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p\in F\langle X\rangle\) be multilinear of degree at most four. If \(p(\mathbb{H}_{F})\) is neither \(\{0\}\) nor \(\mathbb{H}_{F}\), then \(p\) is trace vanishing. In particular, if \(p\in F\langle x_{1},x_{2}\rangle\), then \(p(\mathbb{H}_{F})=\mathbb{H}_{F}^{0}\) in either of the following cases:_
1. _The characteristic of_ \(F\) _is not_ \(2\) _and_ \(F\) _is a Pythagorean field._
2. _The characteristic of_ \(F\) _is_ \(2\)_._
Proof.: This is inferred directly from Theorem 2.6 and Theorem 2.7.
**Proposition 4.2**.: _Let \(\mathbb{H}_{F}\) be a quaternion algebra over a field \(F\) and let \(p\in F\langle X\rangle\) be multilinear. If \(p\) is trace vanishing and \(p(\mathbb{H}_{F})\) is none of \(\{0\}\), \(F\), and \(\mathbb{H}_{F}\), then \(p(\mathbb{H}_{F})=\mathbb{H}_{F}^{0}\) in either of the following cases:_
1. \(\mathbb{H}_{F}\) _is not a division ring._
2. _\(\mathbb{H}_{F}\) is a division ring, the characteristic of \(F\) is not \(2\), and either \(F\) is a quadratically closed field, or \(F\) is a Pythagorean field and \(i^{2}=j^{2}=-1\)._
Proof.: This follows immediately from Proposition 2.1 and Theorem 2.13.
As seen above, the investigation of the image of the standard polynomial \(s_{2}\) of degree two plays an important role; in particular, we can apply the results on the ring of all \(2\times 2\) matrices over a field to quaternion algebras which are not division rings. We therefore end this section with an observation about matrices. We show that if \(p\in Z(R)\langle X\rangle\) is multilinear such that \(p(\operatorname{M}_{n}(R))\) is contained in the union \(\{0\}\cup(\operatorname{M}_{n}(R)\setminus\operatorname{sl}_{n}(R))\) of \(\{0\}\) and the set difference \(\operatorname{M}_{n}(R)\setminus\operatorname{sl}_{n}(R)\), and \(p(\operatorname{M}_{n}(R))\neq\{0\}\), then \(p(\operatorname{M}_{n}(R))\) is contained in \(Z(\operatorname{M}_{n}(R))\). The reason we pay attention to \(\operatorname{sl}_{n}(R)\) is that \(\operatorname{sl}_{n}(R)\) coincides with \(s_{2}(\operatorname{M}_{n}(R))\) when \(R\) is a field
and the description of \(s_{2}(\mathrm{M}_{n}(R))\) plays an important role, as mentioned above. To prove this result, we write \(e_{ij}\) for the matrix unit of \(\mathrm{M}_{n}(R)\), namely the matrix whose only nonzero entry is \(1\) at position \((i,j)\). The following known lemma is pivotal.
**Lemma 4.3**.: _[_24_, Lemma 2]_ _Let \(R\) be a ring and let \(p\in Z(R)\langle X\rangle\) be multilinear. If \(a_{1},\ldots,a_{m}\in R\) and \(A_{1},\ldots,A_{m}\) are matrix units, then \(p(a_{1}A_{1},\ldots,a_{m}A_{m})\) is either a diagonal matrix or a matrix of the form: \(a\cdot e_{ij}\) for some \(a\in R\) and \(i\neq j\)._
We are now ready to prove the final result mentioned above.
**Theorem 4.4**.: _Let \(R\) be a ring and let \(p\in Z(R)\langle X\rangle\) be multilinear such that \(p(\mathrm{M}_{n}(R))\) is contained in \(\{0\}\cup(\mathrm{M}_{n}(R)\setminus\mathrm{sl}_{n}(R))\) and \(p(\mathrm{M}_{n}(R))\neq\{0\}\). Then, \(p(\mathrm{M}_{n}(R))\) is contained in \(Z(\mathrm{M}_{n}(R))\)._
Proof.: According to Lemma 4.3, if \(a_{1},\ldots,a_{m}\in R\) and \(A_{1},\ldots,A_{m}\) are matrix units, then \(p(a_{1}A_{1},\ldots,a_{m}A_{m})\) is either a diagonal matrix or a matrix of the form \(a\cdot e_{ij}\) for some \(a\in R\) and \(i\neq j\). Since \(p(\mathrm{M}_{n}(R))\) is contained in the union \(\{0\}\cup(\mathrm{M}_{n}(R)\setminus\mathrm{sl}_{n}(R))\) and a nonzero \(a\cdot e_{ij}\) with \(i\neq j\) has zero trace, the second possibility is eliminated. Since every matrix in \(\mathrm{M}_{n}(R)\) is a linear combination of matrix units over \(R\) and \(p\) is multilinear, it follows that \(p(\mathrm{M}_{n}(R))\) consists only of diagonal matrices. Let \(x\in p(\mathrm{M}_{n}(R))\) and write \(x=p(B_{1},\ldots,B_{m})\) where \(B_{1},\ldots,B_{m}\in\mathrm{M}_{n}(R)\), so
\[x=\sum_{i=1}^{n}a_{i}e_{ii}\]
for some \(a_{1},\ldots,a_{n}\in R\). On the other hand, for each \(j\in\{2,3,\ldots,n\}\), there exist \(b_{1}^{j},b_{2}^{j},\ldots,b_{n}^{j}\in R\) such that
\[\sum_{i=1}^{n}b_{i}^{j}e_{ii}\] \[= p\left((\mathrm{I}_{n}+e_{1j})B_{1}(\mathrm{I}_{n}+e_{1j})^{-1},\ldots,(\mathrm{I}_{n}+e_{1j})B_{m}(\mathrm{I}_{n}+e_{1j})^{-1}\right)\] \[= (\mathrm{I}_{n}+e_{1j})p(B_{1},\ldots,B_{m})(\mathrm{I}_{n}+e_{1j})^{-1}\] \[= \left(\sum_{i=1}^{n}a_{i}e_{ii}\right)+(a_{j}e_{1j}-a_{1}e_{1j}).\]
Hence, \(a_{1}=a_{2}=\cdots=a_{n}\); put \(a=a_{1}\). If \(r\in R\), then by a similar argument, for each \(j\in\{2,3,\ldots,n\}\), there exist \(c_{1}^{j},\ldots,c_{n}^{j}\in R\)
such that
\[\sum_{i=1}^{n}c_{i}^{j}e_{ii}\] \[= p\left((\mathrm{I}_{n}+re_{1j})B_{1}(\mathrm{I}_{n}+re_{1j})^{-1}, \ldots,(\mathrm{I}_{n}+re_{1j})B_{m}(\mathrm{I}_{n}+re_{1j})^{-1}\right)\] \[= (\mathrm{I}_{n}+re_{1j})p(B_{1},\ldots,B_{m})(\mathrm{I}_{n}+re_{ 1j})^{-1}\] \[= \left(\sum_{i=1}^{n}a_{i}e_{ii}\right)+rae_{1j}-are_{1j},\]
that is, \(ra=ar\). This means that \(a\) belongs to the center of \(R\). Therefore, \(p(\mathrm{M}_{n}(R))\) is contained in \(Z(\mathrm{M}_{n}(R))\).
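As a side note, the conjugation identity used twice in this proof can be illustrated numerically; a minimal sketch (with arbitrary assumed values, and 0-based indices in place of the \(e_{1j}\) of the proof) is:

```python
# Check that conjugating a diagonal matrix D by I_n + r*e_{1j} adds
# r*(d_j - d_1)*e_{1j}, the identity behind the two displays above.
import numpy as np

n, j, r = 3, 2, 5.0                    # size, column index (0-based), scalar
D = np.diag([2.0, 7.0, -1.0])
P = np.eye(n)
P[0, j] = r                            # P = I_n + r*e_{1j}
lhs = P @ D @ np.linalg.inv(P)         # note that P^{-1} = I_n - r*e_{1j}
rhs = D.copy()
rhs[0, j] += r * (D[j, j] - D[0, 0])
assert np.allclose(lhs, rhs)
```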
|
2303.00010 | Direct detection of supernova progenitor stars with ZTF and LSST | The direct detection of core-collapse supernova (SN) progenitor stars is a
powerful way of probing the last stages of stellar evolution. However,
detections in archival Hubble Space Telescope images are limited to about one
per year. Here, we explore whether we can increase the detection rate by using
data from ground-based wide-field surveys. Due to crowding and atmospheric
blurring, progenitor stars can typically not be identified in pre-explosion
images alone. Instead, we combine many pre-SN and late-time images to search
for the disappearance of the progenitor star. As a proof of concept, we
implement our search for ZTF data. For a few hundred images, we achieve
limiting magnitudes of about 23 mag in the g and r band. However, no progenitor
stars or long-lived outbursts are detected for 29 SNe within z<0.01, and the
ZTF limits are typically several magnitudes less constraining than detected
progenitors in the literature. Next, we estimate progenitor detection rates for
the Legacy Survey of Space and Time (LSST) with the Vera C. Rubin telescope by
simulating a population of nearby SNe. The background from bright host galaxies
reduces the nominal LSST sensitivity by, on average, 0.4 mag. Over the ten-year
survey, we expect the detection of about 50 red supergiant progenitors and
several yellow and blue supergiants. The progenitors of SNe Ib and Ic are
detectable if they are brighter than -4.7 mag or -4.0 mag in the LSST i band,
respectively. In addition, we expect the detection of hundreds of pre-SN
outbursts depending on their brightness and duration. | Nora L. Strotjohann, Eran O. Ofek, Avishay Gal-Yam, Jesper Sollerman, Ping Chen, Ofer Yaron, Barak Zackay, Nabeel Rehemtulla, Phillipe Gris, Frank J. Masci, Ben Rusholme, Josiah Purdum | 2023-02-28T19:00:01Z | http://arxiv.org/abs/2303.00010v2 | # Direct detection of supernova progenitor stars with ZTF and LSST
###### Abstract
The direct detection of core-collapse supernova (SN) progenitor stars is a powerful way of probing the last stages of stellar evolution. However, detections in archival Hubble Space Telescope images are limited to about one detection per year. Here, we explore whether we can increase the detection rate by using data from ground-based wide-field surveys. Due to crowding and atmospheric blurring, progenitor stars can typically not be identified in pre-explosion images alone. Instead, we combine many pre-SN and late-time images to search for the disappearance of the progenitor star.
As a proof of concept, we implement our search for ZTF data. For a few hundred images, we achieve limiting magnitudes of \(\sim 23\) mag in the \(g\) and \(r\) band. However, no progenitor stars or long-lived outbursts are detected for 29 SNe within \(z\leqslant 0.01\), and the ZTF limits are typically several magnitudes less constraining than detected progenitors in the literature.
Next, we estimate progenitor detection rates for the Legacy Survey of Space and Time (LSST) with the Vera C. Rubin telescope by simulating a population of nearby SNe. The background from bright host galaxies reduces the nominal LSST sensitivity by, on average, 0.4 mag. Over the ten-year survey, we expect the detection of \(\sim 50\) red supergiant progenitors and several yellow and blue supergiants. The progenitors of SNe Ib and Ic are detectable if they are brighter than \(-4.7\) mag or \(-4.0\) mag in the LSST \(i\) band, respectively. In addition, we expect the detection of hundreds of pre-SN outbursts depending on their brightness and duration.
Core-collapse supernovae (304) -- Massive stars (732) -- Red supergiant stars (1375) -- Sky surveys (1464)
## 1 Introduction
While thousands of core-collapse supernovae (SNe) are discovered and classified every year1, detecting their faint progenitor stars is much more challenging. Therefore, we cannot be certain that they are similar to the well-studied stars in the Milky Way or Magellanic clouds.
Footnote 1: see, e.g., [https://www.wis-tns.org/stats-maps](https://www.wis-tns.org/stats-maps)
Direct progenitor star detections so far (see, e.g., Smartt 2015 or Van Dyk 2017 for reviews) have established that the progenitors of SNe II are red supergiants (RSG) and SNe IIb have been observed to arise from yellow supergiants (YSG). Slowly rising, SN1987A-like SNe II are the explosions of more compact blue supergiants (BSG), and at least some interacting SNe IIn are believed to originate from luminous blue variables (see, e.g., Gal-Yam et al. 2007 or Smith 2017). Less is known about the progenitor stars of other, rarer SN types, like SNe Ibc or Ibn (see, e.g., Eldridge and Maund 2016; Kilpatrick et al. 2021; Xiang et al. 2019 for potential detections). SN observations indicate that their progenitors are partially or completely stripped, massive stars. The stripping presumably requires an extremely massive progenitor star with strong winds or a binary partner.
The Hubble Space Telescope (HST) has been very successful at detecting progenitor stars. However, detections are only attained at a rate of about one progenitor per year (see e.g. Davies and Beasor 2018), such that
increasing the sample size substantially would require decades of observations. Archival HST observations are available for about 25% of the closest SNe (Smartt, 2015), and identifying a progenitor star securely requires both precise astrometry and additional late-time HST observations to verify that the progenitor candidate has indeed vanished (see, e.g., Crockett et al., 2011; Maund et al., 2014, 2015; Van Dyk et al., 2023). This confirmation is crucial as some of the brightest progenitor candidates have turned out to be stellar clusters rather than single stars (e.g., Maund et al., 2015).
Smartt (2015) (and earlier Smartt et al., 2009) compile a sample of detected progenitor stars and compare their inferred masses to predictions by stellar models. Smartt (2015) find that all 26 RSG mass estimates and upper limits are fainter than \(\log_{10}(L/L_{\odot})\leqslant 5.2\), corresponding to a bolometric magnitude of \(-8.2\) mag. Based on stellar evolution models, they conclude that all progenitors had initial masses of \(<18\) M\({}_{\odot}\), while they would expect that 30% of the progenitors are more massive. This discrepancy was coined the RSG problem, and Smartt (2015) suggests that the most massive stars collapse into black holes without producing a bright SN. Such failed SNe are also predicted by studies that simulate stellar cores (see, e.g., Patton and Sukhbold, 2020 for a recent result), and a few candidates have been reported (Reynolds et al., 2015; Basinger et al., 2021; Neustadt et al., 2021).
However, the lack of bright SN progenitors was diagnosed based on sparse observations. SN progenitor stars are usually only detected in one or few HST observations, often in a single band. Thus, the star's surface temperature and bolometric luminosity cannot be estimated reliably (see, e.g., Smartt, 2015; Davies and Beasor, 2018). Other uncertainties are induced by host extinction, circumstellar dust (e.g., Kochanek et al., 2012), uncertain SN distances, and the small number of progenitor detections (Davies and Beasor, 2018). Consequently, it is under debate whether the RSG problem is significant (e.g., Davies and Beasor, 2018, 2020; Kochanek, 2020; Rodriguez, 2022).
An additional complication is that the impending core collapse might trigger mass-loss events that change the progenitor's temperature and luminosity: Violent stellar eruptions are common prior to SNe IIn (Ofek et al., 2014; Strotjohann et al., 2021), and similar, but fainter, outbursts were also detected prior to SNe II (Jacobson-Galan et al., 2022), SNe Ibn (Pastorello et al., 2007; Strotjohann et al., 2021), broad-lined SNe Ic (Ho et al., 2019), and potentially SNe IIb (Strotjohann et al., 2015). In addition, Margutti et al. (2017) and Sollerman et al. (2020) observed late-time interaction for three SNe Ib, which indicates a major mass-loss event shortly before the SN explosion. The spectra of young SNe indicate that a large fraction of them explode within a confined shell of circumstellar medium (Khazov et al., 2016; Bruch et al., 2021, 2022), which points to increased mass loss in the last years before the core collapse. Outbursts that inflate stellar envelopes have also been proposed to explain the fast rise times and hot temperatures of young SNe (Morozova et al., 2020; Forster et al., 2018). Mass-loss events can boost the progenitor luminosity, e.g., due to interaction of the ejected material. However, absorption or inflated envelopes can also redden the stellar spectrum and reduce the progenitor luminosity (Davies et al., 2022). Pre-SN outbursts can hence cause dimming or brightening in a single band.
Most of the described challenges could be mitigated by a larger sample of SN progenitors detected in several bands and epochs. Therefore, we explore here whether ground-based, large field-of-view surveys are sensitive enough to detect the closest SN progenitor stars. Due to blurring by the atmosphere, stars in nearby galaxies are blended with each other, such that pre-SN data alone is usually not sufficient to pinpoint the progenitor. Instead, we combine many images before the SN explosion and after it has faded and search for a flux difference between these two time windows. The search does not require dedicated observations and can be done for any position monitored by a survey. Here, we consider two surveys: the Zwicky Transient Facility (ZTF; Bellm et al., 2019; Graham et al., 2019; Dekany et al., 2020) that has been running since 2018 and the planned, more sensitive Legacy Survey of Space and Time (LSST; Ivezic et al., 2019).
Ground-based, wide-field surveys have so far detected a few progenitor stars: The bright YSG progenitor of SN 2011dh was detected by the Palomar Transient Factory (PTF; Strotjohann et al., 2015), the Large Binary Telescope (Szczygiel et al., 2012), and the Nordic Optical Telescope (Ergon et al., 2015). Combining hundreds of PTF images before and after the SN confirmed that the progenitor of SN 2011dh had indeed disappeared. The same search was sensitive enough to disfavor a progenitor candidate for another SN IIb, SN 2012P, as it was still present after the SN had faded. In a more recent search, we detected the progenitor star of the IIn SN 2019cmy at a seemingly constant luminosity of \(-14\) mag in the ZTF \(g\) and \(r\) bands in the last year before the SN explosion (Strotjohann et al., 2021). However, the star is fainter in earlier PTF observations (Soumagnac et al. in prep.); we hence likely observe a long-lasting outburst rather than a quiescent progenitor star. Progenitors in their quiescent state are many magnitudes
fainter, and the low temperatures of RSGs make their detection in visible bands even more challenging.
In Sect. 2, we conduct a progenitor search for ZTF data as a proof of concept and compare the ZTF results to earlier progenitor detections. In Sect. 3, we quantify how sensitive LSST will be to progenitor stars and pre-SN outbursts and we conclude in Sect. 4.
## 2 Search for Progenitor stars in ZTF data
As a test, we implement our progenitor search for the closest SNe in ZTF data. Section 2.1 describes the sample selection, and Sect. 2.2 explains the details of the search. Results are presented in Sect. 2.3, and we compare to progenitor detections in literature in Sect. 2.4. Finally, in Sect. 2.5 we quantify whether the search is as sensitive as expected. Appendix A provides more details on the photometric pipeline and error sources, and we verify the sensitivity of the search by injecting faint, artificial sources into the images and quantifying their recoverability.
### Sample selection
Our ZTF search is based on SNe detected by the Bright Transient Survey (BTS; Fremling et al., 2020; Perley et al., 2020). The ZTF pipeline (Masci et al., 2019) uses the ZOGY image subtraction algorithm (Zackay et al., 2016) and yields \(>10^{6}\) potential detections per night (Patterson et al., 2019), but quality cuts reduce the number of SN candidates to \(\sim 50\) per night (Perley et al., 2020). An on-duty astronomer searches the remaining candidates for genuine, SN-like transients that surpass the brightness threshold of 18.5 mag in the \(g\) or \(r\) band. The selected objects are classified with the SEDM spectrograph on the P60 telescope (Blagorodnova et al., 2018; Rigault et al., 2019; Kim et al., 2022), and discoveries are reported to the Transient Name Server2 and the BTS sample explorer3. For our search, we select 60 SNe that exploded between 2018 and 2021 within \(z\leqslant 0.01\) or \(45\) Mpc.
Footnote 2: [https://www.wis-tns.org/](https://www.wis-tns.org/)
Footnote 3: [https://sites.astro.caltech.edu/ztf/bts/explorer.php](https://sites.astro.caltech.edu/ztf/bts/explorer.php)
The luminosity distances of nearby SNe are rather uncertain, and, if available, we use the more precise host galaxy redshift instead of the SN redshift. We here adopt the preferred redshift from the NASA/IPAC Extragalactic Database4 and convert it to the infall-corrected distance. For SN 2018ebt, we measure a host redshift of \(z=0.0095\) from galaxy lines in a late-time spectrum. After refining the redshifts, we are left with 55 SNe within \(45\) Mpc.
Footnote 4: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
Next, we download IPAC difference images (IRSA 2022)5 and calculate forced-photometry light curves for the remaining SNe as described in Appendix A.1. We inspect the light curves visually and, if necessary, correct the approximate explosion date, \(t_{0}\), to ensure that the selected pre-SN observations do not contain any SN light. Conservatively, we only combine observations obtained more than ten days before \(t_{0}\) to calculate the pre-explosion flux. IPAC produces a single ZTF reference image for each combination of ZTF field, filter, CCD number, and CCD quadrant (Masci et al., 2019). In the following, we call each unique combination of these parameters a field and only compare difference images in the same field, i.e., that were produced with the same reference image. We require at least 20 pre-SN observations in the same field and discard three SNe with fewer pre-SN observations. We also use the selected pre-SN observations to do a baseline correction to ensure that the pre-SN light curve corresponds to zero flux and we scale up the error bars if they are too small to account for the observed scatter (details given in Appendix A.1).
Footnote 5: Including data until August 2022.
The expected sensitivity of our search is \(\sim 23\) mag, and we extrapolate the late-time SN light curves to determine when they have faded below this threshold. To gain sensitivity to the faint late-time fluxes, we combine the \(g\)- and \(r\)-band fluxes in 7-day-long bins as described in Appendix A.1. To avoid marginal detections, we select the second-last \(r\)-band detection and extrapolate it with a slope of \(1\) mag per 100 days, the decay rate of \({}^{56}\)Co. Two SNe, SN 2018ivc and SN 2018ebt, have no, or few, \(r\)-band observations that pass our quality cuts (described in Appendix A.1) and we extrapolate the \(g\)-band light curve instead but increase the flux by \(0.5\) mag as SNe are typically brighter in the \(r\) band at this time. We acknowledge that some SNe might fade more slowly at late times, e.g., due to the late-time interaction (Sollerman et al., 2020; Weil et al., 2020) or light echos (Maund, 2019) which might yield false non-detections. The best way to mitigate this is inspecting the late-time light curves carefully, and using several different time windows to calculate the late-time flux.
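The extrapolation itself is simple arithmetic; a hedged sketch (the function and variable names are our own, not part of any ZTF tool) is:

```python
# Epoch at which an SN fades below the search threshold, extrapolating from
# the second-last 5-sigma r-band detection at 1 mag per 100 days (56Co decay).
def time_below_threshold(t_det, m_det, m_lim=23.0, days_per_mag=100.0):
    return t_det + days_per_mag * (m_lim - m_det)

# Example: a detection at 20.5 mag drops below 23 mag about 250 days later.
print(time_below_threshold(t_det=0.0, m_det=20.5))  # -> 250.0
```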
Table 2 lists the properties of the 29 SNe that have already faded sufficiently, and unbinned forced-photometry light curves are provided in Table 3. Figs. 1 and 2 show the forced-photometry light curves in 7-day-long bins. Vertical, black lines mark the time windows for which we measure the pre-SN and late-time flux when searching for progenitor stars. Our sample includes 18 SNe II or IIP. SN 2018hna (Singh et al., 2019)
has an SN 1987A-like light curve and its progenitor is presumably a more compact BSG. The remaining 11 SNe originate from, at least partially, stripped progenitor stars: two are classified as SNe IIb, five are SNe Ib, three are SNe Ic, and one is a broad-lined SN Ic. The two closest SNe Ib in our sample belong to the subclass of low-energy, Calcium-rich SNe.
### The progenitor search
The low spatial resolution of ground-based surveys like ZTF does not allow us to reliably identify the progenitor star in pre-SN images alone as each pixel contains the light from many stars. But the vanishing of the progenitor star reduces the flux at the SN position and we search for this flux residual by comparing pre-SN images to observations taken after the SN has faded.
Table 4 lists all fields with at least 20 observations before and after the SN and their light curves are shown in Figs. 1 and 2. For each field, we calculate the mean flux, weighted according to the flux error bars, before and after the SN. We only compare observations that have the same reference image. We estimate the flux error with
Figure 1: Light curves in week-long bins for 29 nearby SNe with enough observations before and after the SN (continued in Fig. 2). To determine whether the SN is still present at late times, we identify the second-last \(5\,\sigma\) detection in the \(r\) band and extrapolate the light curve with a slope of \(1\,\mathrm{mag}\) per \(100\,\mathrm{days}\) (shown as a dashed line). SN 2018ivc and SN 2018ebt do not have enough \(r\)-band observations, so we extrapolate the \(g\)-band light curve, but shift the curve by 0.5 mag, as most SNe are brighter in the \(r\) band at this time. A vertical black line indicates when the SN is most likely fainter than \(23\,\mathrm{mag}\) in the ZTF \(r\) band. Observations before the first and after the second black line are combined to measure the flux before and after the SN, respectively.
the bootstrap method (Efron, 1982) by resampling both the pre-SN and late-time fluxes while allowing for repetitions. For each randomized light curve we calculate the weighted mean flux before and after the SN and the difference between the two. The standard deviation of the resulting distribution of residuals is used as the error on the flux residual. The advantage of the bootstrap method is that the results are valid even if the flux errors are inaccurate or not Gaussian. By using the weighted mean, we assume, however, that the relative size of the error bars is correct.
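A minimal sketch of this bootstrap procedure (all array and function names are our own illustration; the inputs are the per-epoch fluxes and their \(1\,\sigma\) errors before and after the SN) could look as follows:

```python
# Weighted pre-SN minus late-time flux, with a bootstrapped uncertainty.
import numpy as np

def weighted_mean(f, e):
    w = 1.0 / e**2
    return np.sum(w * f) / np.sum(w)

def flux_residual(f_pre, e_pre, f_post, e_post, n_boot=10000, seed=0):
    rng = np.random.default_rng(seed)

    def resampled_mean(f, e):
        idx = rng.integers(0, len(f), len(f))  # resample allowing repetitions
        return weighted_mean(f[idx], e[idx])

    boot = [resampled_mean(f_pre, e_pre) - resampled_mean(f_post, e_post)
            for _ in range(n_boot)]
    res = weighted_mean(f_pre, e_pre) - weighted_mean(f_post, e_post)
    return res, np.std(boot)  # flux residual and its bootstrap error
```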
Combining hundreds of observations reduces statistical fluctuations; therefore, our search might be limited by additional errors and systematics, such as residuals due to bright host galaxies, shallow reference images, a bias mismatch, or errors on the PSF. In Appendix A.2, we quantify the impact of these error sources empirically: We select 20 positions in the same images that have host fluxes similar to the SN position and perform a progenitor search at each position. If the error bars are appropriate, the scatter of these flux residuals, in units of their uncertainties, should be close to one. If it is larger, the errors are likely underestimated, and we scale them up, i.e., we scale down the significance of the progenitor detection (see Appendix A.2 for details). The resulting scaling factors are given in the seventh column of Table 4. A few fields have scaling factors as large as two, sometimes due to a bright host, while in other cases, we cannot pinpoint what limits the sensitivity. However, most scaling factors are close to one, indicating that the sensitivity is determined by well-understood statistical
Figure 2: Continuation of Fig. 1.
fluctuations and that additional ZTF observations might improve the depth of the search.
### Results
All flux residuals, their errors, and their significances are listed in Table 4. Our search yields a single \(5\,\sigma\) detection: When combining the \(g\)-band observations of SN 2020cxd (Yang et al., 2021; Kozyreva et al., 2022; Valerin et al., 2022) we measure a flux residual of \(22.7\pm 0.2\) mag (shown as a green data point in Fig. 3). We inspect the 30-day-bin light curve of SN 2020cxd, and Fig. A1 shows that both the \(g\)- and \(r\)-band flux are fading over the four years during which ZTF monitored the position. The trend seems to continue after the SN explosion, implying that it is either due to an unrelated background source or due to long-term changes in the calibration. We did not observe similar variability for any of the 20 background positions in the same galaxy, so it is likely a local effect. A single ZTF pixel covers \(1.012\) arcsec in the sky, corresponding to \(110\) pc in the host galaxy NGC 6395. Therefore, it is possible that the fading originates from an unrelated source. We thus conclude that unrelated variable sources and progenitor outbursts are a potential contamination for our search. We do not find any genuine detections and the ZTF limiting magnitudes of our progenitor search are shown in Fig. 3.
### SN progenitor detections in the literature
So far, progenitor searches in HST images have been published for six of the SNe in the ZTF sample (see Table 2). The closest object, SN 2020jfo, has a rather faint progenitor star with \(25.47\pm 0.07\) mag or \(-5.35\) mag in the F814W band (Sollerman et al., 2021). Moreover, the detection of an SN Ib progenitor was reported for SN 2019yvr (Kilpatrick et al., 2021) and the colors imply an F-type spectrum. Late-time observations will reveal, whether it is a single star, binary system, or dense stellar cluster (Sun et al., 2022). The progenitors of SN 2018ivc, SN 2019ehk, and SN 2020fqv remain undetected (see Bostroem et al., 2020, Jacobson-Galan et al., 2020, and Tinyanont et al., 2022), while a bright stellar cluster was detected at the position of SN 2020oi (Gagliano et al., 2022).
Samples of progenitor searches have, for example, been presented by Smartt (2015) and Van Dyk (2017) (for all SN types) and Davies and Beasor (2018) (for RSG progenitors). We update the Davies and Beasor (2018) and Davies and Beasor (2020) sample by adding more recent detections and limits and we discard three progenitor candidates that are not detected securely (SN 2009hd, SN 2009kr, and SN 2009md; Elias-Rosa et al., 2011; Maund et al., 2015). We also compile a sample of detected, non-RSG progenitors in Table 6. Host extinction estimates are taken from the literature and, like the SN distance estimates, they are often uncertain (see, e.g., Maund, 2017). We do not consider dust which might reduce the impact of extinction in some bands (Kochanek et al., 2012).
We aim to compare the ZTF limits to direct observations and not to derived quantities. Therefore, we look up the original progenitor observations in individual bands and convert them to ZTF \(r\)-band AB magnitudes as described in Appendix B. If the progenitor is detected in several bands, we pick the band that is closest to the ZTF \(r\) band. We use filter profiles from the pyphot library6 if available or from the database of the Spanish Virtual Observatory7 and adopt the spectral shapes of giant or supergiant stars in the X-shooter Spectral Library (Verro et al., 2022).
Footnote 6: [https://mfouesneau.github.io/pyphot/](https://mfouesneau.github.io/pyphot/)
Footnote 7: [https://svo.cab.inta-csic.es](https://svo.cab.inta-csic.es)
Figure 4 shows the ZTF upper limits compared to detections and upper limits reported in the literature
Figure 3: Apparent limiting magnitudes of the ZTF progenitor search in the \(g\) (green) and \(r\) (red) bands versus the number of images in the time window, before or after the SN, that contains fewer ZTF images and, hence, limits the sensitivity of the search. Open triangles show the limits that are obtained with bootstrap error propagation. We then repeat our search at 20 positions throughout the image and increase the error bars if the flux residuals scatter more than expected (see Appendix A.2). The resulting scaled limits are shown as solid triangles. The red dashed line indicates the ideal \(r\)-band sensitivity based on typical ZTF zeropoints, sky background, and seeing values (described in Sect. 2.5). The limits that we obtain are less constraining due to host galaxies, non-optimal image processing, or additional systematic errors. For comparison, the horizontal red line indicates the median \(r\)-band limiting magnitude of single difference images.
(mostly from HST). The ZTF limits are several magnitudes less sensitive than most detected progenitor stars. The upper dashed red line indicates an apparent magnitude of 23 mag, the limiting magnitude that we obtained for ZTF fields with many observations (see Sect. 2.3 and Table 4). Only three detected progenitors have brighter apparent magnitudes in the \(r\) band. The ZTF search would hence have a very rough detection rate of one progenitor per decade. It is sensitive to bright YSG progenitors out to \(\sim 20\) Mpc, BSG progenitors within \(\sim 5\) Mpc, and even closer RSGs.
One reason for the ZTF non-detections is the low surface temperature of RSGs. If the adopted M4 spectral shape is typical, RSG progenitors are \(\sim 1.5\) mag fainter in the \(r\) band compared to the F814W band. ZTF also has an \(i\) band, but the camera is less sensitive at these longer wavelengths (see Fig. 12 by Masci et al., 2019) and the \(i\) band suffers from fringing. Furthermore, it is not used as often, and the small number of observations reduces the sensitivity even further. In conclusion, the ZTF \(g\) and \(r\) bands are not red enough to detect RSG progenitors; therefore, the detection of hotter, less extended progenitors such as YSG or BSG stars is more likely even though they are rarer.
### Expected ZTF sensitivity
Next, we check whether the progenitor search is as good as expected for the P48 telescope or whether we lose sensitivity due to unaccounted-for errors, such as registration errors or biases in the background estimation (see Appendix A.2).
We start by quantifying the sensitivity of single ZTF science images. For each science image, we look up the zeropoint, \(ZP\), and seeing, \(S\), and measure the standard deviation, \(\sigma_{\rm bg}\), of the sky background, \(B\), from the image. For a source with magnitude \(m\), the expected total number of signal counts is \(n_{\rm signal}=10^{0.4(ZP-m)}\). The standard deviation of the Gaussian PSF is \(\sigma=S/2.35\)
Figure 4: Arrows mark \(5\,\sigma\)\(r\)-band upper limits on the progenitor magnitudes from the ZTF search (also given in Table 4). Colors indicate the expected progenitor type based on the SN type. For comparison, we show detections (stars) and non-detections (triangles; \(3\,\sigma\) upper limits) of SN progenitors from the literature (listed in Tables 5 and 6). All magnitudes are converted to ZTF \(r\)-band AB magnitudes and we do not correct for extinction because we want to compare to the sensitivity of surveys. The two dashed lines indicate which progenitors would have been detectable with a limiting magnitude of 23 mag and 26 mag, respectively.
Based on Eq. 1 by Ofek and Ben-Ami (2020), the expected signal-to-noise ratio in a single image can hence be approximated as:
\[\frac{S}{N}=\sqrt{\sum\left(\frac{n_{\text{sig, pix}}}{\sigma_{\text{bg}}}\right)^{2}}=\frac{10^{0.4(ZP-m)}}{\sqrt{4\pi}(S/2.35)\sigma_{\text{bg}}} \tag{1}\]
where \(n_{\text{sig, pix}}\) is the number of signal counts in a given pixel and the sum runs over all pixels. We obtain the last expression by integrating over all pixels assuming a Gaussian PSF. We verify Eq. 1 by injecting simulated sources and find that the signal-to-noise ratio is 22% lower because the PSF is not perfectly Gaussian. Except for this correction, Eq. 1 is a good measure for the sensitivity. To estimate the improvement when combining many images, we draw ZTF images randomly and add their signal-to-noise ratios in quadrature. The resulting ideal sensitivity as a function of the number of images is shown as a dashed red line in Fig. 3.
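A hedged sketch of this calculation (the zeropoint, seeing, and background values below are illustrative assumptions, not measured ZTF numbers) is:

```python
# Eq. 1 for a single image, scaled by 0.78 because the PSF is not perfectly
# Gaussian, and combined in quadrature over n_images equal-quality images.
import numpy as np

def snr_single(m, zp, seeing_pix, sigma_bg):
    n_signal = 10 ** (0.4 * (zp - m))
    return 0.78 * n_signal / (np.sqrt(4 * np.pi) * (seeing_pix / 2.35) * sigma_bg)

def limiting_mag(zp, seeing_pix, sigma_bg, n_images=1, snr_min=5.0):
    snr0 = snr_single(0.0, zp, seeing_pix, sigma_bg) * np.sqrt(n_images)
    return 2.5 * np.log10(snr0 / snr_min)

print(limiting_mag(zp=26.3, seeing_pix=2.0, sigma_bg=8.0))                # ~20.8
print(limiting_mag(zp=26.3, seeing_pix=2.0, sigma_bg=8.0, n_images=250))  # ~23.8
```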
The triangles in Fig. 3 represent the limiting magnitudes of the progenitor search (also given in Table 4) and they show that the sensitivity of our search is sometimes close to the ideal sensitivity, but can be up to 1.5 mag worse for other fields. The ideal sensitivity does not consider the host background which limits our search for SNe with bright hosts, such as SN 2020oi or SN 2018ivc (compare Appendix. A.2). Moreover, we assume that the sensitivity is purely determined by the time window that contains the fewest observations. But if both time windows contain the same number of observations, the sensitivity would decrease by a factor of \(\sqrt{2}\) or by 0.36 mag. We neglect all fields where either of these effects is relevant, but find that a discrepancy remains: On average, the ZTF search could be improved by 1 mag.
Reaching the ideal sensitivity would likely require custom image coaddition and subtraction as well as a more careful calibration that reduces the scatter between epochs and between different positions within the same image. If we reach the ideal sensitivity, we would expect limiting magnitudes of 24 mag in the \(r\) band for more than 250 images in both time windows (see Fig. 3). However, this is not sufficient to detect progenitors on a regular basis, as shown in Sect. 2.4. Thus, detecting a sample of SN progenitors requires either a larger telescope that detects more signal photons, or a better site with a smaller seeing or darker sky (compare Eq. 1).
## 3 Prospects for direct progenitor detections with LSST
We explore here how many progenitor detections we expect with LSST. For this purpose, we simulate a population of nearby SNe in Sect. 3.1 and estimate the impact of bright host galaxies in Sect. 3.2. We calculate the expected number of progenitor detections in Sect. 3.3 and discuss pre-SN outbursts in Sect. 3.4. Appendix B shows the bolometric corrections that we use to convert between different spectral bands.
### Simulating a population of nearby supernovae
To simulate a population of nearby SNe, we randomly draw a distance, explosion time, peak magnitude, fading time, progenitor magnitude, and the host background at the SN position.
The distances of very nearby SNe are determined by individual galaxies, as exemplified by SN 2019ehk and SN 2020oi, which exploded in the same host (see Table 2). We adopt the local galaxy distribution from the GLADE+ catalog (Dalya et al., 2022) and roughly select the part of the extragalactic sky that LSST can observe. We consider galaxies within 70 Mpc and the catalog is approximately complete out to this distance (Dalya et al., 2022). The \(B\)-band flux of galaxies mostly originates from young, hot stars and is closely correlated with the star-formation and SN rate. We, therefore, draw the distances of the simulated SNe according to the \(B\)-band luminosity of galaxies in the GLADE+ catalog.
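A hedged sketch of this weighted draw (the array names are assumptions; GLADE+ itself provides distances and \(B\)-band magnitudes for each galaxy) is:

```python
# Draw SN host distances with probability proportional to the B-band
# luminosity of galaxies within d_max, as a proxy for the SN rate.
import numpy as np

def draw_sn_distances(dist_mpc, b_abs_mag, n_sne, d_max=70.0, seed=0):
    rng = np.random.default_rng(seed)
    sel = np.flatnonzero(dist_mpc <= d_max)  # catalog ~complete out to 70 Mpc
    lum_b = 10 ** (-0.4 * b_abs_mag[sel])    # relative B-band luminosity
    idx = rng.choice(sel, size=n_sne, p=lum_b / lum_b.sum())
    return dist_mpc[idx]                     # distances of the simulated SNe
```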
Conservatively, we only consider bright SNe with peak magnitudes of at least 18.5 mag, i.e., SNe that a survey similar to BTS would find and classify. For this purpose we draw absolute peak magnitudes from the \(r\)-band SN luminosity function presented by Perley et al. (2020), convert them to apparent magnitudes, and discard 22% of the simulated SNe that are fainter than 18.5 mag.
The SN fading time is crucial because the progenitor search requires late-time observations as we search for a vanishing source rather than trying to identify the progenitor in pre-SN images alone. In Sect. 2.1, we extrapolate the \(r\)-band light curves of 50 ZTF SNe. They fade to 23 mag within 0.8 to 2.5 years with a median of 1.6 years. But when extrapolating to fainter magnitudes, slower radioactive decays might start to contribute. SN 1987A is one of few SNe with very-late-time detections (Seitenzahl et al., 2014) and we here assume that all light curves develop similarly after they have reached 23 mag. Compared to a pure \({}^{56}\)Co decay, the slower \({}^{57}\)Co decay only increases the median fading time to 27.5 mag by 70 days. \({}^{44}\)Ti starts to dominate the light curve at even later times and is not relevant to our search. Slower radioactive decays hence only have a very minor impact on the progenitor search, if the ratios between the isotopes are similar to the ones for SN 1987A. SNe fade to 27.5 mag within 1.9 to 6 years with a median duration of 3 years. We do not consider
here that some SNe may fade away more slowly due to late-time interaction with a circumstellar medium (see, e.g., Sollerman et al., 2020 or Weil et al., 2020), light echos (Maund, 2019), or prolonged magnetar emission.
Of all SNe that explode during the ten-year survey 80% fade to at least 24.7 mag before the end of the survey, i.e., they are no longer detectable in single LSST images. Within the survey footprint, LSST is scheduled to collect 184 \(r\)- and \(i\)-band observations (Ivezic et al., 2019), while 80 \(g\)-band visits are planned. SNe that are bright in the middle of the survey have 60 to 80 LSST \(r\)- and \(i\)-band images both before and after the SN. However, SNe that happen earlier or later have fewer observations in one of the time windows resulting in a lower sensitivity (compare Fig. 3). As a consequence, the average SN has 35 \(r\)- and \(i\)-band and 16 \(g\)-band observations in the shorter time window.
Finally, we draw absorbed \(i\)-band magnitudes for RSG progenitors from the luminosity function by Davies and Beasor (2020) and convert them to the LSST \(g\), \(r\), and \(i\)-band magnitudes using the bolometric corrections given in Sec. B.
### The impact of bright hosts on the LSST sensitivity
The design limiting magnitude of a single LSST visit is 24.7 mag in the \(r\) band (Table 1 of Ivezic et al., 2019) while the expected limiting magnitude after 10 years and 184 observations is 27.5 mag. Figure 2 by Ivezic et al. (2019) shows how the sensitivity improves when coadding several \(r\)-band observations. The improvement is similar for \(g\)- and \(i\)-band images, but \(g\)-band observations are 0.2 mag deeper while \(i\)-band observations are 0.7 mag less constraining (Ivezic et al., 2019). The LSST \(z\) and \(y\) bands can likely also be used for progenitor searches and RSGs are bright in these bands (see Appendix B). However, the SN fading times might be longer, e.g., due to dust formation, and we, therefore, do not consider these bands here.
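For background-limited imaging, the coadded depth improves over a single visit by \(2.5\log_{10}\sqrt{N}=1.25\log_{10}N\) mag for \(N\) stacked images. A minimal sketch of this scaling (our simplified reading of the depth curve, not an exact LSST simulation):

```python
import numpy as np

def coadd_limit(single_limit, n_images):
    # S/N grows as sqrt(N) for background-limited coadds
    return single_limit + 1.25 * np.log10(n_images)

print(coadd_limit(24.7, 184))  # ~27.5 mag, the expected 10-yr r-band depth
```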
However, the limiting magnitudes quoted by Ivezic et al. (2019) are only valid for isolated sources. In such locations, the background is dominated by the sky brightness of on average 21.2 mag arcsec\({}^{-2}\) in the LSST \(r\) band (Table 2 by Ivezic et al., 2019) or \(3.3\times 10^{-9}\) maggies arcsec\({}^{-2}\). But many SNe explode on top of bright host galaxies. As shown in Eq. 1, the signal-to-noise ratio is proportional to \(\sigma_{\rm bg}^{-1}\), and the standard deviation of the sky background is \(\sigma_{\rm bg}=B^{0.5}\), where \(B\) is the sum of all relevant backgrounds at the SN position. The host background, therefore, degrades the limiting magnitude by:
\[\Delta l=1.25\log_{10}\left(1+\frac{B_{\rm host}}{B_{\rm sky}}\right) \tag{2}\]
where \(B_{\rm host}\) and \(B_{\rm sky}\) are the host and sky background in counts per area.
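Expressed in surface brightnesses, Eq. 2 reduces to a one-line function; for a host exactly as bright as the sky it reproduces the 0.38 mag penalty quoted below:

```python
import numpy as np

def host_penalty(mu_host, mu_sky=21.2):
    """Eq. 2 with both backgrounds in mag/arcsec^2."""
    ratio = 10 ** (-0.4 * (mu_host - mu_sky))  # B_host / B_sky
    return 1.25 * np.log10(1.0 + ratio)

print(host_penalty(21.2))  # 0.38 mag when the host equals the sky background
```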
In Sect. 2.2, we measured the \(r\)-band host surface brightnesses at the positions of 29 ZTF SNe using late-time ZTF data (see Table 4; details on the calculation are described in Appendix A.2). We assume that LSST SNe will explode in similarly bright galaxies and randomly draw host backgrounds from this distribution. To first order, the host surface brightness is independent of the distance: nearby galaxies are brighter, but they are also more resolved, such that the number of photons per pixel is roughly preserved. We use Eq. 2 to calculate by how much these hosts would reduce the sensitivity of our search in LSST data. Figure 5 shows that 46% of the galaxies are brighter than the average LSST sky background, i.e., the host background reduces the limiting magnitude by more than 0.38 mag, and for 7% of the hosts the sensitivity is reduced by more than 1 mag.
Compared to LSST, ZTF is less affected by host backgrounds because the sky background is on average 1 mag arcsec\({}^{-2}\) brighter than at the LSST site. Nevertheless, the ZTF search is host-dominated for 14% or four of the 29 SNe in our sample: the host backgrounds of SN 2019vvr, SN 2020oi, and SN 2020fqv are larger than the average sky background and the bright host of SN 2018ivc produces large residuals during image subtraction, such that quality cuts reject most \(r\)-band observations (compare Sect. A.2).
Even the brightest host galaxies are most likely not saturated in LSST images. For a seeing of 0.7 arcsec and an exposure time of 15 s, a point source is saturated if it is brighter than 15.8 mag in the \(r\) band (Abell et al., 2009). For this seeing and the LSST pixel size (Ivezic et al., 2019), about 10% of the photons fall on the brightest pixel. An extended source, like a galaxy, produces as many photons per pixel if it has a surface brightness of \(14.8\,\mathrm{mag\,arcsec^{-2}}\). In the ZTF sample, SN 2020oi has the brightest host background with \(18.1\,\mathrm{mag\,arcsec^{-2}}\), and is hence a factor of 20 fainter than the LSST saturation limit for extended sources.

Figure 5: Reduction in limiting magnitude due to the host background for the expected LSST sky background of 21.2 mag arcsec\({}^{-2}\). The orange crosses indicate the \(r\)-band host backgrounds at the locations of 29 nearby ZTF SNe (see Sect. 2).
### Expected LSST progenitor detections
LSST is planning to monitor \(18\,000\,\mathrm{deg^{2}}\) of extragalactic sky over ten years in the \(ugrizy\) filters (Ivezic et al., 2019). For an SN rate of \(10^{-4}\,\mathrm{Mpc^{-3}\,yr^{-1}}\)(Perley et al., 2020) and the distribution of nearby galaxies (Dalya et al., 2022), we expect in total \(\sim 600\) SNe within \(70\,\mathrm{Mpc}\). Out of these, 490 have peak magnitudes brighter than \(18.5\,\mathrm{mag}\) in the \(g\) or \(r\) band and fade within the survey duration to at least \(24.7\,\mathrm{mag}\) in the \(r\) band. 56% or about 270 of them are SNe II with RSG progenitors (Li et al., 2011).
Next, we convert the available number of LSST images, calculated in Sect. 3.1, to limiting magnitudes while also considering the impact of the host background as described in Sect. 3.2. The most constraining limits are \(\sim 27.2\,\mathrm{mag}\) in the \(r\) band for SNe that have 70 LSST observations both before and after the SN. However, the median SN only has half as many observations in the shorter time window and the median limiting magnitude is, therefore, \(26.2\,\mathrm{mag}\) in the \(r\) band and \(25.9\,\mathrm{mag}\) and \(25.8\,\mathrm{mag}\) in the \(g\) and \(i\) bands, respectively.
Figure 6 shows the distance distribution of the closest SNe (orange distribution) and their detection probability in the LSST \(g\), \(r\), and \(i\) bands. For the \(i\) band the detection probability is close to one out to \(\sim 10\,\mathrm{Mpc}\), i.e., even faint progenitor stars are detected within this distance. In the \(g\) band, on the other hand, progenitors are so faint that they might remain undetected even for the closest SNe. The total expected number of RSG detections is given by the sum over the distribution: we expect about 46 RSG progenitor detections in the \(i\) band, 17 in the \(r\) band, and 1.4 in the \(g\) band, if the stars have M4 spectra (see Table 1). Neglecting the host background (see Sect. 3.2) would have yielded 56 \(i\)-band detections, i.e., \(\sim 30\%\) of the RSG progenitors remain undetected due to their bright host galaxies. For a less red M2 spectrum, we expect 72, 44, and 5 detections in the \(i\), \(r\), and \(g\) bands, respectively. LSST is hence sensitive to the surface temperature and we expect more detections for slightly hotter progenitors.
Our search will also produce strong constraints for the high-luminosity end of the RSG luminosity function: the brightest RSG progenitor detected so far is that of SN 2012ec, with a bolometric magnitude of \(-8.11\,\mathrm{mag}\) (compare Table 5), corresponding to an LSST \(i\)-band magnitude of \(-7.28\,\mathrm{mag}\). We expect that 100 LSST searches will be sensitive to progenitor stars this bright. These observations can help to establish whether a significant RSG problem is present.
LSST will also detect non-RSG progenitors. Table 1 lists the expected number of SNe within \(70\,\mathrm{Mpc}\) based on the volumetric SN fractions by Li et al. (2011) and Kleiser et al. (2011). The luminosity functions of these progenitor types are not well constrained; we, therefore, assume that all YSGs are as bright as the progenitor of SN 2008ax (see Table 6), while for BSGs we adopt the magnitude and spectrum of the progenitor of SN 1987A. The last three columns of Table 1 show that we expect the detection of \(\sim 4\) YSG and 2 BSG progenitors. In addition, we expect the detection of at least one SN Ib progenitor if they are brighter than \(-4.7\,\mathrm{mag}\) in the \(i\) band, and SN Ic progenitors become detectable if they are at least as bright as \(-4.0\,\mathrm{mag}\).
### LSST detection rates for pre-SN eruptions
The impending core collapse can trigger pre-SN outbursts. These eruptions are well observed for SNe IIn (Ofek et al., 2014; Strotjohann et al., 2021), but they also happen prior to other types of SNe (see e.g. Ho et al., 2019; Jacobson-Galan et al., 2022). We here estimate rough LSST detection rates for outbursts that happen immediately before the SN explosion.
Bright outbursts can be detected even for distant SNe and we assume a homogeneous SN rate of \(10^{-4}\,\mathrm{Mpc^{-3}\,yr^{-1}}\) for distances larger than \(70\,\mathrm{Mpc}\), where the GLADE+ galaxy catalog is incomplete. As before, we only consider SNe with peak magnitudes brighter than 18.5 mag. Within the 10-year LSST survey, we expect the detection of 3 300 such bright core-collapse SNe, located at a median distance of 120 Mpc.

Figure 6: Probability that an RSG progenitor with an M4 spectrum is detected in the LSST \(g\) (thin green line), \(r\) (red), or \(i\) (black) band and the expected number of bright, nearby SNe within the LSST footprint during the ten-year survey (orange distribution; right axis). The number of progenitor detections is given as the product of the two (thick lines; right axis) and we expect 46 RSG detections in the \(i\) band, 17 in the \(r\) band, and 1.4 in the \(g\) band.
Figure 7 shows our sensitivity to precursor eruptions depending on the outburst magnitude and duration. One-month-long outbursts are typically only present in one or two LSST \(r\)-band images and LSST can detect half of them if they are brighter than an absolute magnitude of \(-10.3\) mag. Longer-lasting outbursts are observed several times and for a 2-year-long eruption, the median sensitivity improves to \(-9.0\) mag. The dotted, cyan line in Fig. 7 visualizes that the average host reduces our sensitivity by 0.4 mag (see also Fig. 5).
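A toy version of this estimate treats each visit as an independent detection trial; the per-visit detection probability `p_single` and the constant visit rate are simplifying assumptions, not the simulation behind Fig. 7:

```python
def p_outburst(duration_yr, p_single, visits_per_yr=18.4):
    """Chance that at least one r-band visit catches the outburst."""
    n_obs = duration_yr * visits_per_yr  # ~1.5 visits for a one-month burst
    return 1.0 - (1.0 - p_single) ** n_obs

print(p_outburst(1 / 12, p_single=0.5))  # ~0.65 for a one-month outburst
```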
SNe IIn make up about 9% of all core-collapse SNe (Perley et al., 2020) and LSST will detect \(\sim 300\) SNe IIn with peak magnitudes brighter than 18.5 mag. Strotjohann et al. (2021) found that 25% of the progenitors have outbursts brighter than \(-13\) mag, and Fig. 7 shows that such bright outbursts are detectable even in single LSST images. Therefore, we expect the detection of at least 75 such outbursts. A four-month-long outburst with a faint absolute magnitude of \(-11.3\) mag was detected prior to SN 2020tlf, an SN II with long-lasting flash-spectroscopy features (Jacobson-Galan et al., 2022). We expect the detection of 1 800 bright SNe II, and similar outbursts would be detectable for 90% of them. This large number of bright SNe will allow us to measure the rate, luminosity function, and timing of the outbursts, which might contribute to revealing their triggering mechanism.
## 4 Conclusion
We propose searching for SN progenitor stars by combining a large number of images from ground-based, wide-field surveys to detect the star's disappearance. The advantages of this approach are that the data is readily available, it is collected systematically for a large fraction of the sky, usually in several bands, and late-time data ensures that the progenitor star has indeed vanished.
We test the proposed method by searching for the progenitor stars of 29 ZTF SNe with redshift \(z<0.01\) that have already faded sufficiently (see Sect. 2).
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{type} & fraction & \#SNe & progenitor lumi. & spectral type & \multicolumn{3}{c}{\# detections} \\ & & & (mag\({}_{i}\)) & & \(g\) & \(r\) & \(i\) \\ \hline RSG & 0.56 & 273 & LF & M4 & 1.4 & 17 & 46 \\ & & & & M2 & 5.4 & 44 & 72 \\ YSG & 0.09 & 44 & \(-5.4\) & A5 & 2.4 & 5.2 & 1.6 \\ BSG & 0.02 & 9.8 & \(-5.6\) & B2 & 0.95 & 2.3 & 0.5 \\ \hline SN Ib progenitor & 0.12 & 56 & \(\leqslant-4.7\) & F2 & \(\geqslant 0.87\) & \(\geqslant 1.0\) & \(\geqslant 0.79\) \\ SN Ic progenitor & 0.14 & 66 & \(\leqslant-4.0\) & A5 & \(\geqslant 0.57\) & \(\geqslant 1.0\) & \(\geqslant 0.44\) \\ \hline \end{tabular} Note. – The number of SNe within 70 Mpc with both pre-SN and late-time observations during the ten-year survey for 18 000 deg\({}^{2}\). The SN fractions in the second column are taken from Li et al. (2011) and Kleiser et al. (2011). For RSGs, the progenitor \(i\)-band magnitudes are drawn from the luminosity function by Davies & Beasor (2020). Due to the lack of a measured luminosity function, we assume that all YSGs are as bright as the progenitor of SN 2008ax and BSGs as the one of SN 1987A (fourth column of the table; taken from Table 6). The luminosities of SN Ib and SNe Ic progenitors are highly uncertain, but we expect at least one LSST detection if they are brighter than \(-4.7\) mag and \(-4.0\) mag in the \(i\)-band, respectively.
\end{table}
Table 1: Expected number of LSST detections for different progenitor types.
Figure 7: Detection probability for pre-SN outbursts depending on their luminosity and duration. The calculation was done for a population of SNe with peak luminosities brighter than 18.5 mag and the right-hand axis shows the expected number of core-collapse SNe within the LSST footprint during the 10-year survey. The dotted, cyan line indicates that the average host galaxy reduces our sensitivity by 0.4 mag (see also Fig. 5).
We combine up to a few hundred observations before and after the SN and subtract the fluxes in these two time windows from each other. With the available amount of data, the background from host galaxies, and statistical and systematic errors in ZTF difference images, we reach limiting magnitudes down to \(\sim 23\) mag in the \(g\) and \(r\) bands. We do not detect any progenitor stars or long-lasting outbursts, and detections reported in the literature are typically several magnitudes fainter than our limits (see Sect. 2.4). We estimate that ZTF is sensitive to the brightest YSGs and to the closest RSGs within 5 Mpc, which yields a detection rate of approximately one per decade. Additional data would likely improve the limiting magnitude for some fields, while others seem limited by various errors. Our search is, on average, 1 mag less sensitive than expected for a ZTF-like survey, in part presumably because we combined relatively shallow difference images instead of producing dedicated difference images. Our method hence requires precise photometric calibration, long-term stability, and potentially improved image co-addition and source detection methods.
The upcoming LSST survey will yield 50 to 70 RSG progenitor detections (see Sect. 3.3). For half of the RSG progenitors, \(r-i\) colors will allow us to constrain the surface temperature and distinguish between M2 and M4 type stars, reducing the statistical and systematic errors on the RSG luminosity function. The larger sample of RSG progenitors might help to establish whether the red supergiant problem is significant, i.e., whether the most massive RSGs produce bright SNe or collapse into black holes directly.
We also expect to detect several YSG and BSG progenitors and the detection of SNe Ib and Ic progenitors is possible if their LSST \(i\)-band magnitudes are brighter than \(-4.7\) mag or \(-4.0\) mag, respectively. The progenitors of stripped-envelope SNe could either be stripped by binary partners, or they could be very massive stars with strong winds, and progenitor detections in several bands might allow us to distinguish between these two scenarios. Wu and Fuller (2022) predict that late nuclear burning stages trigger mass loss and brightening for stripped stars. Such events would boost the detection rate for these progenitors and offer direct information about processes in the stellar core.
LSST pre-SN light curves will be sensitive to precursor eruptions that could produce confined shells of material around the star. Increased mass loss is required to explain both flash-spectroscopy features (see e.g., Bruch et al. 2022) and early SN light curves (Morozova et al. 2020; Forster et al. 2018), but it is so far unclear how the star produces this environment (see e.g., Davies et al. 2022). We expect the detection of more than 70 bright outbursts prior to SNe IIn and many RSG outbursts if they are common and brighter than \(-9\) mag. In addition, low-amplitude variability or dimming events, as for example observed by Szczygiel et al. (2012) and Rui et al. (2019), can be detected for the brightest progenitors.
In summary, we showed that the planned LSST survey is well suited to detect both quiescent and flaring SN progenitor stars. For the predictions in Sect. 3, we assumed that the survey reaches its design specifications and that image subtraction works well even for bright galaxies. If this is not the case, we can mitigate errors empirically, as done for ZTF in Sect. 2.3, which would reduce the sensitivity of our search.
## Acknowledgments
We thank Schuyler Van Dyk and Doron Kushnir for their comments on the manuscript. In addition, we are grateful to Morgan Fraser, Emma Beasor, Samantha Wu, Azalee Bostroem, Eva Laplace, and Dietrich Baade for helpful discussion. Moreover, we would like to thank the organizers of the MIAPbP workshop on interacting supernovae.
N.L.S. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via the Walter Benjamin program - 461903330. This research was supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC-2094 - 390783311.
E.O.O. is grateful for the support of grants from the Benozio center, Willner Family Leadership Institute, Ilan Gluzman (Secaucus NJ), Madame Olga Klein - Astrachan, Minerva foundation, Israel Science Foundation, BSF, Israel Ministry of Science, Yeda-Sela, and Weizmann-MIT.
This work is based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute of Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.
This research has used the Spanish Virtual Observatory ([https://svo.cab.inta-csic.es](https://svo.cab.inta-csic.es)) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00.
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline \multicolumn{1}{c}{ IAU name} & \multicolumn{1}{c}{band} & \multicolumn{1}{c}{FCQFID} & \multicolumn{1}{c}{\(n_{\rm early}\)} & \multicolumn{1}{c}{\(n_{\rm late}\)} & \multicolumn{1}{c}{host flux} & \multicolumn{1}{c}{scaling} & \multicolumn{1}{c}{flux} & \multicolumn{1}{c}{sig.} & \multicolumn{1}{c}{limmag} & \multicolumn{1}{c}{abs.} & \multicolumn{1}{c}{limmag} \\ & & & & & \multicolumn{1}{c}{(\(10^{-10}\) mgy arcsec\({}^{-2}\))} & \multicolumn{1}{c}{(\(10^{-10}\) mgy)} & \multicolumn{1}{c}{(mag)} & \multicolumn{1}{c}{(mag)} \\ \hline SN 2020jfo & \(g\) & 4730621 & 149 & 65 & \(22.2\pm 0.6\) & 1.8 & \(-0.4\pm 2.7\) & \(-0.2\) & 22.2 & \(-8.6\) \\ & \(r\) & 4730622 & 191 & 76 & \(33.6\pm 0.8\) & 1.5 & \(-0.4\pm 2.2\) & \(-0.2\) & 22.4 & \(-8.4\) \\ SN 2020fqv & \(g\) & 5250521 & 168 & 109 & \(39.6\pm 0.9\) & 1.0 & \(-0.1\pm 1.1\) & \(-0.1\) & 23.1 & \(-7.8\) \\ & \(r\) & 5250522 & 203 & 120 & \(80.4\pm 2.0\) & 1.6 & \(3.6\pm 2.5\) & 1.4 & 22.3 & \(-8.6\) \\ SN 2018imf & \(g\) & 5261221 & 21 & 94 & \(4.7\pm 0.5\) & 1.8 & \(-3.2\pm 3.2\) & \(-1.0\) & 22.0 & \(-8.9\) \\ & \(r\) & 5261222 & 29 & 166 & \(6.1\pm 0.6\) & 1.0 & \(-1.0\pm 1.8\) & \(-0.6\) & 22.6 & \(-8.3\) \\ SN 2021gno & \(g\) & 5240911 & 200 & 22 & \(3.0\pm 0.5\) & 1.2 & \(-0.7\pm 2.5\) & \(-0.3\) & 22.3 & \(-8.6\) \\ & \(r\) & 5240912 & 242 & 23 & \(5.5\pm 0.7\) & 1.1 & \(-1.1\pm 2.5\) & \(-0.4\) & 22.3 & \(-8.6\) \\ SN 2019ehk & \(g\) & 5760331 & 108 & 145 & \(30.0\pm 0.9\) & 1.4 & \(-2.1\pm 1.5\) & \(-1.4\) & 22.8 & \(-8.1\) \\ & \(r\) & 5760332 & 148 & 218 & \(63.1\pm 2.0\) & 1.2 & \(-0.7\pm 1.7\) & \(-0.4\) & 22.7 & \(-8.2\) \\ SN 2020oi & \(r\) & 5760332 & 189 & 97 & \(560.4\pm 13.9\) & 1.0 & \(3.0\pm 1.6\) & 1.8 & 22.7 & \(-8.2\) \\ SN 2018ivc & \(g\) & 4021631 & 46 & 143 & \(298.9\pm 19.3\) & 1.9 & \(1.5\pm 3.0\) & 0.5 & 22.1 & \(-9.0\) \\ SN 2018hna & \(g\) & 7891311 & 116 & 146 & \(1.4\pm 0.5\) & 1.0 & \(-3.0\pm 1.3\) & \(-2.2\) & 22.9 & \(-8.1\) \\ & \(g\) & 7901611 & 126 & 101 & \(1.2\pm 0.5\) & 1.2 & \(-0.1\pm 1.9\) & \(-0.1\) & 22.6 & \(-8.5\) \\ & \(g\) & 8190141 & 101 & 157 & \(1.4\pm 0.5\) & 1.1 & \(-0.3\pm 1.3\) & \(-0.2\) & 23.0 & \(-8.1\) \\ & \(g\) & 8200431 & 128 & 113 & \(1.4\pm 0.4\) & 1.6 & \(0.2\pm 2.2\) & 0.1 & 22.4 & \(-8.7\) \\ & \(r\) & 7891312 & 123 & 176 & \(1.8\pm 0.6\) & 1.7 & \(-11.1\pm 3.1\) & \(-3.6\) & 22.0 & \(-9.0\) \\ & \(r\) & 7901612 & 121 & 156 & \(1.5\pm 0.4\) & 1.2 & \(-2.4\pm 1.4\) & \(-1.8\) & 22.9 & \(-8.2\) \\ & \(r\) & 8190142 & 88 & 167 & \(1.5\pm 0.6\) & 1.1 & \(0.3\pm 1.5\) & 0.2 & 22.8 & \(-8.3\) \\ & \(r\) & 8200432 & 115 & 142 & \(1.8\pm 0.5\) & 1.2 & \(-1.2\pm 1.7\) & \(-0.7\) & 22.6 & \(-8.4\) \\ \hline \end{tabular}
\end{table}
Table 4: Measured flux residuals
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multicolumn{1}{c}{SN name} & \multicolumn{1}{c}{JD} & \multicolumn{1}{c}{fcqfid} & \multicolumn{1}{c}{flux} & \multicolumn{1}{c}{flux error} \\ & & & \multicolumn{1}{c}{(\(10^{-9}\) mgy)} & \multicolumn{1}{c}{(\(10^{-9}\) mgy)} \\ \hline SN2020jfo & 2458199.8034259 & 4730622 & \(-5.3\) & 3.6 \\ SN2020jfo & 2458204.809028 & 4730621 & \(1.2\) & 2.7 \\ SN2020jfo & 2458204.8170023 & 4730621 & \(-1.2\) & 2.5 \\ SN2020jfo & 2458210.8014699 & 4730621 & \(-8.1\) & 4.7 \\ SN2020jfo & 2458214.7387037 & 4730622 & \(4.5\) & 3.4 \\... & & & & \\ \hline \end{tabular} Note. – We removed all data points that do not pass our quality cuts, applied a baseline correction, and rescaled flux uncertainties based on the observed scatter of pre-SN observations (see Appendix A). The third column encodes the ZTF field (first three to five digits), CCD ID (fourth and third last digits), quadrant ID (second last digit), and filter (last digit) of the ZTF observations. Fluxes are given in maggies. The full table contains 29 449 observations and is published online.
\end{table}
Table 3: Unbinned ZTF forced-photometry light curves for 29 nearby SNe
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline IAU name & band & FCQFID & \(n_{\rm early}\) & \(n_{\rm late}\) & host flux & scaling & flux & sig. & limmag & abs. limmag \\ & & & & & \((10^{-10}\,{\rm mgy}\,{\rm arcsec}^{-2})\) & & \((10^{-10}\,{\rm mgy})\) & & (mag) & (mag) \\ \hline SN 2020cxd & \(g\) & 8481121 & 564 & 232 & \(18.0\pm 1.6\) & 1.1 & \(8.1\pm 1.5\) & 5.3 & 22.8 & \(-9.1\) \\ & \(r\) & 8481122 & 478 & 234 & \(27.1\pm 1.8\) & 1.0 & \(0.2\pm 0.7\) & 0.3 & 23.6 & \(-8.3\) \\ SN 2020pw & \(g\) & 8510431 & 262 & 24 & \(14.4\pm 0.5\) & 1.0 & \(-0.4\pm 4.7\) & \(-0.1\) & 21.6 & \(-10.5\) \\ & \(r\) & 8510432 & 266 & 33 & \(42.6\pm 1.0\) & 1.0 & \(-2.6\pm 4.3\) & \(-0.6\) & 21.7 & \(-10.4\) \\ SN 2020mj & \(g\) & 4271631 & 130 & 33 & \(0.2\pm 0.3\) & 1.2 & \(-0.7\pm 1.8\) & \(-0.4\) & 22.6 & \(-9.7\) \\ & \(r\) & 4271632 & 341 & 36 & \(0.2\pm 0.4\) & 1.0 & \(-2.7\pm 2.0\) & \(-1.3\) & 22.5 & \(-9.8\) \\ SN 2020mmz & \(g\) & 8161431 & 290 & 39 & \(48.4\pm 2.1\) & 1.0 & \(-2.3\pm 2.1\) & \(-1.1\) & 22.4 & \(-9.9\) \\ & \(r\) & 8161432 & 316 & 58 & \(79.9\pm 1.5\) & 1.2 & \(-0.6\pm 1.9\) & \(-0.3\) & 22.6 & \(-9.7\) \\ SN 2020hvp & \(g\) & 4300941 & 66 & 30 & \(10.1\pm 0.7\) & 1.0 & \(-7.5\pm 3.9\) & \(-1.9\) & 21.8 & \(-10.5\) \\ & \(g\) & 4311231 & 67 & 31 & \(9.4\pm 0.5\) & 1.2 & \(-1.9\pm 4.1\) & \(-0.5\) & 21.7 & \(-10.6\) \\ & \(r\) & 4300942 & 107 & 33 & \(19.9\pm 0.6\) & 1.4 & \(-1.4\pm 4.1\) & \(-0.3\) & 21.7 & \(-10.6\) \\ & \(r\) & 4311232 & 119 & 35 & \(18.9\pm 0.9\) & 1.5 & \(0.1\pm 4.0\) & 0.0 & 21.7 & \(-10.6\) \\ SN 2019yrr & \(g\) & 4231531 & 43 & 29 & \(83.8\pm 1.6\) & 1.3 & \(-3.7\pm 3.2\) & \(-1.2\) & 22.0 & \(-10.3\) \\ & \(r\) & 4231532 & 79 & 22 & \(160.4\pm 3.4\) & 1.0 & \(-0.4\pm 2.7\) & \(-0.1\) & 22.2 & \(-10.2\) \\ SN 2020abs & \(g\) & 3211411 & 49 & 19 & \(9.2\pm 0.7\) & 1.1 & \(-1.5\pm 3.3\) & \(-0.4\) & 21.9 & \(-10.5\) \\ & \(g\) & 3720441 & 66 & 21 & \(8.5\pm 0.9\) & 1.3 & \(-6.7\pm 4.3\) & \(-1.6\) & 21.7 & \(-10.8\) \\ & \(r\) & 3211412 & 50 & 31 & \(10.5\pm 0.9\) & 1.1 & \(0.9\pm 4.5\) & 0.2 & 21.6 & \(-10.8\) \\ SN 2019spk & \(g\) & 3661611 & 45 & 58 & \(0.6\pm 0.3\) & 1.2 & \(2.4\pm 3.1\) & 0.8 & 22.0 & \(-10.5\) \\ & \(g\) & 4170241 & 48 & 98 & \(0.6\pm 0.2\) & 1.4 & \(-0.7\pm 2.1\) & \(-0.3\) & 22.4 & \(-10.1\) \\ & \(r\) & 3661612 & 45 & 54 & \(0.6\pm 0.6\) & 1.3 & \(4.0\pm 3.3\) & 1.2 & 21.9 & \(-10.6\) \\ & \(r\) & 4170242 & 55 & 126 & \(0.9\pm 0.5\) & 1.3 & \(-1.3\pm 2.8\) & \(-0.5\) & 22.1 & \(-10.4\) \\ SN 2021bug & \(g\) & 4730111 & 171 & 69 & \(8.4\pm 0.5\) & 1.2 & \(2.3\pm 1.7\) & 1.4 & 22.7 & \(-9.9\) \\ & \(r\) & 4730112 & 228 & 75 & \(13.3\pm 0.6\) & 1.1 & \(-2.6\pm 1.9\) & \(-1.4\) & 22.6 & \(-10.0\) \\ SN 2021bhd & \(g\) & 8450841 & 555 & 42 & \(20.9\pm 0.7\) & 1.1 & \(-1.3\pm 1.4\) & \(-1.0\) & 22.9 & \(-9.7\) \\ & \(r\) & 8450842 & 506 & 50 & \(48.7\pm 1.4\) & 1.1 & \(2.8\pm 1.5\) & 1.8 & 22.8 & \(-9.8\) \\ SN 2019yz & \(g\) & 4291421 & 25 & 82 & \(7.7\pm 0.5\) & 1.0 & \(-2.3\pm 2.5\) & \(-0.9\) & 22.3 & \(-10.4\) \\ & \(r\) & 4291422 & 40 & 116 & \(14.1\pm 0.5\) & 1.0 & \(-2.6\pm 2.8\) & \(-0.9\) & 22.1 & \(-10.6\) \\ SN 2020fsb & \(g\) & 2770731 & 27 & 29 & \(46.8\pm 1.7\) & 1.0 & \(1.0\pm 6.8\) & 0.2 & 21.2 & \(-11.5\) \\ & \(r\) & 2770732 & 30 & 19 & \(78.5\pm 2.2\) & 1.1 & \(6.7\pm 6.1\) & 1.1 & 21.3 & \(-11.4\) \\ SN 2019bzd & \(g\) & 3250211 & 20 & 64 & \(30.2\pm 0.9\) & 1.6 & \(5.8\pm 5.3\) & 1.1 & 21.4 & \(-11.6\) \\ & \(r\) & 3250212 & 37 & 82 & \(60.0\pm 1.3\) & 1.0 & \(2.2\pm 2.8\) & 0.8 & 22.1 & \(-10.9\) \\ SN 2019ps & 
\(g\) & 5881631 & 159 & 99 & \(0.8\pm 0.3\) & 1.0 & \(-0.1\pm 1.2\) & \(-0.1\) & 23.0 & \(-10.0\) \\ & \(r\) & 5881632 & 487 & 107 & \(1.5\pm 0.3\) & 1.0 & \(2.4\pm 1.1\) & 2.1 & 23.1 & \(-9.9\) \\ SN 2019enr & \(g\) & 5190631 & 39 & 114 & \(28.3\pm 0.8\) & 1.1 & \(-0.3\pm\,\ldots\) \\ \hline \end{tabular}
\end{table}
Table 4: Measured flux residuals _(continued)_
## Appendix A ZTF search
### Forced Photometry Pipeline and Quality Cuts
Instead of implementing our own image subtraction algorithm, we rely on difference images created by IPAC (Masci et al., 2019) based on the ZOGY algorithm (Zackay et al., 2016). IPAC difference images are based on relatively shallow reference images that were produced by coadding 15 to 20 images. We download the IPAC difference images, their associated point spread functions (PSFs), and reference images using the _ztfquery_ package (Rigault, 2018) and calculate forced-photometry light curves for all SNe.
We first determine the exact SN position by calculating the median right ascension and declination of all ZTF alerts (Patterson et al., 2019) that were issued for the SN. Our forced-photometry pipeline is a modified version of the pipeline written by Yao et al. (2019), also used by Strotjohann et al. (2021). As demonstrated by Strotjohann et al. (2021), 50 MCMC walkers yield a sufficiently precise flux measurement. The results are similar to light curves produced by the ZTF forced-photometry service (Masci et al., 2019), but our pipeline returns additional parameters that we use for quality cuts.
After calculating the SN light curves, we reject less reliable data points according to the following criteria: (1) flagged images (i.e. the INFOBITS keyword in the header is not zero), (2) images with large residuals in the background region (a background standard deviation of \(>25\) counts), (3) images with bad pixels within the \(7\times 7\) pixels around the SN position, (4) images with a seeing of \(>4\) arcsec, (5) images with intermittent clouds that are flagged in the IPAC metadata tables,8 (6) images with large residuals at the SN position that have a reduced \(\chi^{2}>1.4\), (7) images based on deleted reference images. A few ZTF reference images were accidentally overwritten, such that later difference images are based on a new, different reference image. To avoid combining observations with different reference images, we discard any difference images that were created earlier than the available reference image. This removes \(3-4\%\) of the data.
Footnote 8: As described in [https://web.ipac.caltech.edu/staff/fmasci/ztf/extended_cautionary_notes.pdf](https://web.ipac.caltech.edu/staff/fmasci/ztf/extended_cautionary_notes.pdf). The flag is only present in the meta tables because the image headers are written earlier.
ZTF \(i\)-band observations are on average one magnitude less sensitive than the \(g\)- and \(r\)-band observations, and the images suffer from fringing. Therefore, we discard them here even though the \(i\)-band fluxes of RSGs are typically \(\sim 1.5\) mag brighter than their \(r\)-band fluxes (see Sect. 2.4).
Like Yao et al. (2019), we use the zeropoints from the image headers to calculate fluxes \(f\) in units of maggies. These are converted to asinh magnitudes (Lupton et al., 1999) via:
\[m_{\rm AB} = -2.5/\ln(10)\left(\mathrm{arcsinh}(f/2b)+\ln(b)\right) \tag{A1}\]
\[\Delta m_{\rm AB} = 2.5/\ln(10)\times\Delta f/f \tag{A2}\]
\[l_{\rm AB} = -2.5\log_{10}(5\times\Delta f), \tag{A3}\]
with a softening parameter \(b=10^{-10}\). If the calculated magnitude is smaller than the limiting magnitude, the source is detected at the \(5\,\sigma\) level. Otherwise, we use the \(5\,\sigma\) upper limit as a non-detection, and the magnitude does not have a physical meaning (Lupton et al., 1999).

\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline IAU name & band & FCQFID & \(n_{\rm early}\) & \(n_{\rm late}\) & host flux & scaling & flux & sig. & limmag & abs. limmag \\ & & & & & \((10^{-10}\,{\rm mgy\,arcsec^{-2}})\) & & \((10^{-10}\,{\rm mgy})\) & & (mag) & (mag) \\ \hline SN 2018gix & \(g\) & 6041331 & 29 & 291 & \(5.7\pm 0.7\) & 1.2 & \(-2.4\pm 1.7\) & \(-1.4\) & 22.7 & \(-10.5\) \\ & \(r\) & 6041332 & 50 & 493 & \(10.0\pm 0.9\) & 1.8 & \(-1.9\pm 2.2\) & \(-0.9\) & 22.4 & \(-10.8\) \\ SN 2019lkx & \(g\) & 6551021 & 59 & 43 & \(1.6\pm 0.5\) & 1.1 & \(7.2\pm 8.0\) & 0.9 & 21.0 & \(-12.3\) \\ & \(r\) & 6551022 & 73 & 156 & \(3.2\pm 0.4\) & 1.1 & \(0.6\pm 4.1\) & 0.2 & 21.7 & \(-11.5\) \\ SN 2020tx & \(g\) & 6841241 & 737 & 37 & \(0.9\pm 0.4\) & 1.4 & \(0.2\pm 2.9\) & 0.1 & 22.1 & \(-11.1\) \\ & \(r\) & 6841242 & 968 & 47 & \(1.4\pm 0.4\) & 1.3 & \(-4.9\pm 1.6\) & \(-3.0\) & 22.7 & \(-10.5\) \\ \hline \end{tabular} Note. – Fields with early and late data. The FCQFID encodes the ZTF field ID (first three digits), the CCDID (following two digits), the quadrant ID (second-last digit), and the filter ID (last digit), where 1 stands for the \(g\) band and 2 for the \(r\) band. Difference images with the same FCQFID have the same reference image. \(n_{\rm early}\) is the number of images obtained more than ten days before the SN discovery at \(t_{0}\) and \(n_{\rm late}\) is the number of images available after the SN has faded to at least 23rd magnitude. The sixth column lists the host magnitude in maggies arcsec\({}^{-2}\). The scaling factor quantifies how much the residuals scatter at different positions in the image and the errors on the flux residuals are scaled up by this factor. The following column lists the flux residuals \(f\) when subtracting late from early observations and its scaled-up error in units of maggies. They can be converted to AB magnitudes via \(m=-2.5\log_{10}(f)\). The third-last column shows the significance of the progenitor detection and the last two columns display the limiting magnitudes that the search reaches.
\end{table}
Table 4: _(continued)_
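For reference, Eqs. A1-A3 translate directly into code; a minimal, vectorized transcription:

```python
import numpy as np

def asinh_mag(f, df, b=1e-10):
    """Asinh magnitudes (Lupton et al., 1999) for fluxes f in maggies."""
    m = -2.5 / np.log(10) * (np.arcsinh(f / (2 * b)) + np.log(b))  # Eq. A1
    dm = 2.5 / np.log(10) * df / f                                 # Eq. A2
    limmag = -2.5 * np.log10(5 * df)                               # Eq. A3
    return m, dm, limmag
```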
In the following, we only consider objects with at least 20 pre-SN observations per field. For these sources, we do a baseline correction to ensure that the weighted mean pre-SN flux is zero. This is done separately for each field or reference image, i.e., for observations with the same combination of ZTF field, CCD, quadrant, and filter. We use the baseline correction only to calculate the SN fading time in Sect. 2.1. When searching for progenitors in Sect. 2.2 we subtract late from early fluxes, such that the baseline correction cancels out.
To check whether the errors of the flux can account for the scatter of the pre-explosion light curve, we calculate the reduced \(\chi^{2}\) relative to the zero-flux level. If the result is larger than one, the flux errors are likely underestimated and we scale them up such that the reduced \(\chi^{2}\) reaches one. If the reduced \(\chi^{2}\) is smaller than one, the error bars are potentially too large and we do not modify them. The corrections are generally small with factors close to one.
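In code, this is a one-sided rescaling; a sketch with toy pre-SN fluxes:

```python
import numpy as np

rng = np.random.default_rng(1)
eflux = np.full(50, 2.0)            # reported flux errors (toy values)
flux = rng.normal(0.0, 3.0, 50)     # pre-SN fluxes that scatter too much

chi2_red = np.sum((flux / eflux) ** 2) / len(flux)  # relative to zero flux
if chi2_red > 1:
    eflux = eflux * np.sqrt(chi2_red)  # enforce a reduced chi^2 of one
```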
For the sample selection, described in Sect. 2.1, we bin light curves in 7-day-long bins and also combine fluxes with different reference images. We use the median observation time as the time of the binned data point. The flux per bin is the mean flux weighted by the flux errors and the error on the flux is calculated via error propagation because many bins do not contain enough data points to calculate bootstrap errors (Efron, 1982). We make sure that observations before and after the estimated explosion date are never combined in the same bin.
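The per-bin statistics are standard inverse-variance weighting; a sketch for a single 7-day bin with toy values:

```python
import numpy as np

f = np.array([1.2, 0.8, 1.5])        # fluxes in one 7-day bin (toy values)
ef = np.array([0.4, 0.5, 0.6])       # their uncertainties
t = np.array([2458200.7, 2458202.8, 2458205.9])

w = 1.0 / ef ** 2                    # inverse-variance weights
f_bin = np.sum(w * f) / np.sum(w)    # weighted mean flux of the bin
ef_bin = np.sqrt(1.0 / np.sum(w))    # error from propagation
t_bin = np.median(t)                 # time stamp of the binned data point
```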
### Additional error sources in ZTF images
One major error source for our analysis is bright host galaxies, which can impact our search in several ways, as image subtractions do not always work well in these environments: one effect is that the higher source noise increases the statistical errors in each pixel (as quantified in Sect. 3.2). This higher background results in larger errors when fitting the flux and is therefore considered automatically by the forced-photometry pipeline. SN 2020oi and SN 2018ivc have the brightest hosts (see Table 4), and Fig. 1 shows that their typical flux upper limits are less constraining compared to the other SNe in the sample.
In addition to increased source noise, bright hosts can cause photometric errors or residuals in difference images, that are harder to quantify. Both registration errors, i.e., misalignments by the fraction of a pixel, and errors on the gain are more severe in such locations as the large fluxes in the science and reference images are subtracted from each other. We find that our quality cuts are to some degree sensitive to these problems: For positions on top of a bright galaxy such as SN 2018ivc, most \(r\)-band observations are rejected, because they have poor PSF fits with a reduced \(\chi^{2}>1.4\).
When searching for progenitor stars, we are comparing observations that were obtained several years apart, and we observe for some sources that the residuals become larger due to long-term changes. This is also observed for SN 2020cxd (see Fig. A1), for which the background flux decreases both before and after the SN. We do not detect similar trends at other positions in the galaxy, and it might be caused by a background source. Another error source is the rather shallow reference images. In the progenitor search, we are trying to detect sources that are potentially fainter than the limiting magnitude of the reference image. In our search, we only compare difference images that were produced with the same reference image, such that the reference's impact cancels out to first order. However, when producing difference images the reference is convolved with the PSF of the science image, which could introduce residuals, e.g., if the average seeing is different in pre-explosion images compared to late-time images. To mitigate the impact of these errors empirically, we repeat the progenitor search for 20 positions in the field where the host is similarly bright, and if the resulting residuals are larger than expected, we scale up the error bars.
First, we measure the host flux at the SN position. Some reference images contain SN light and, therefore, we measure the host flux in late-time science images instead. For each field, we download 25 science images that are not flagged, have a limiting magnitude of more than \(19.5\,\mathrm{mag}\), and a seeing of \(<4\,\mathrm{arcsec}\). Then, we measure the sky background in a region of \(600\times 600\) pixels around the SN position. We reject \(2\,\sigma\) outliers to remove any sources in the image and calculate the median count rate for the remaining pixels. We then determine the host flux by calculating the median flux of the \(7\times 7\) pixels around the SN position, the same pixels that are used for the PSF fit, and we subtract the sky background from it. To obtain a result in physical units, we convert the fluxes to maggies using the zero-point from the image header (Masci et al., 2019) and we normalize it to a flux per arcsec\({}^{2}\). In Table 4 we quote the median host flux of all 25 late-time images as the host flux and use the standard deviation as the uncertainty on the flux.
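A compact version of this measurement might look as follows; the 2D image `img`, its header zeropoint `zp`, and the assumed ZTF-like pixel scale of 1.01 arcsec per pixel are inputs we supply for illustration, and the fixed five-iteration clipping loop is our simplification:

```python
import numpy as np

def host_surface_brightness(img, x, y, zp, pixscale=1.01):
    """Host flux per arcsec^2 (in maggies) at integer pixel position (x, y)."""
    cut = img[y - 300:y + 300, x - 300:x + 300].ravel()
    for _ in range(5):                           # iterative 2-sigma clipping
        med, std = np.median(cut), np.std(cut)
        cut = cut[np.abs(cut - med) < 2 * std]
    sky = np.median(cut)
    host = np.median(img[y - 3:y + 4, x - 3:x + 4]) - sky  # 7x7 PSF-fit pixels
    return host * 10 ** (-0.4 * zp) / pixscale ** 2        # counts -> maggies
```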
Next, we select positions within the same host galaxy that have similar fluxes. For this purpose, we take a Cartesian grid of \(21\times 21\) positions with separations of 7 pixels. We estimate the host flux at each of these positions as described above and select the 20 positions with fluxes closest to the flux at the SN position. Then, we obtain forced-photometry light curves for each position and run the progenitor search.
Each of the 20 background positions yields a flux residual, the early-time minus the late-time flux, and we calculate the standard deviation of these 20 residuals in units of their uncertainties. If it is close to one, the size of the error bars is appropriate, but if it is larger than one, the residuals scatter more than expected and we scale up the error bars by this standard deviation. All scaling factors are given in the seventh column of Table 4 and they are typically close to one. For SN 2018ivc, the scaling factor is close to two due to its bright host, and a few other fields also have relatively high scaling factors, but we cannot identify an obvious reason.
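The empirical error inflation then takes only a few lines; the residuals and errors below are toy placeholders for the 20 background-position measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
err = np.full(20, 2.5)              # formal errors at the 20 positions
resid = rng.normal(0.0, 4.0, 20)    # early-minus-late flux residuals
err_sn = 2.5                        # formal error at the SN position

scale = np.std(resid / err)         # residual scatter in units of the errors
if scale > 1:
    err_sn = err_sn * scale         # inflate the SN-position error bar
```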
### Simulated sources
To verify our sensitivity we inject point sources into the ZTF images and rerun the search to check whether these sources are detected significantly. To save computational time, we do this test only for two fields that include SN 2018hna, 8190141 and 8190142, which yield deep limits in the \(g\) and \(r\) bands (see Table 4).
We simulate progenitors at the same 20 positions that we use to construct the background dataset in Sect. A.2. When simulating a point source with a certain magnitude in a given image, we first calculate the expected number of photons based on the zeropoint and exposure time and simulate a Gaussian with the width of the measured seeing. We also consider atmospheric scintillations which shift the point source by the fraction of a pixel (the root mean squared displacement is given by Eq. 51 by Zackay et al., 2016). We inject sources of various luminosities into the pre-explosion images, run the progenitor search, and check whether the simulated progenitors are detected with a significance of \(5\,\sigma\).
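A stripped-down injector illustrating the procedure; the circular Gaussian PSF and the 0.1-pixel rms jitter are our simplifying assumptions (the search itself uses the rms displacement from Eq. 51 of Zackay et al., 2016):

```python
import numpy as np

def inject_source(img, x, y, mag, zp, fwhm_pix, rng):
    """Add a Gaussian point source of the given magnitude to an image."""
    counts = 10 ** (-0.4 * (mag - zp))       # header zeropoint sets the scale
    sigma = fwhm_pix / 2.355                 # seeing FWHM -> Gaussian sigma
    dx, dy = rng.normal(0.0, 0.1, size=2)    # assumed sub-pixel scintillation
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    psf = np.exp(-((xx - x - dx) ** 2 + (yy - y - dy) ** 2) / (2 * sigma ** 2))
    return img + counts * psf / psf.sum()
```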
Figure 11 shows that we obtain \(5\,\sigma\)-detections if the progenitor is brighter than \(-7.5\,\mathrm{mag}\) to \(-8.25\,\mathrm{mag}\) in the \(g\) or \(r\) band. At the actual SN position, we measured \(5\,\sigma\) limiting magnitudes of \(-8.1\,\mathrm{mag}\) and \(-8.3\,\mathrm{mag}\) in the \(g\) and \(r\) band, respectively (see also Table 4 and Sect. 2.3). The limit that the progenitor search returns is, hence, comparable to the magnitudes at which we can detect simulated sources in the same field. We, therefore, conclude that the quoted limiting magnitudes are realistic. The dashed lines in the figure show that \(3\,\sigma\) limiting magnitudes would be \(\sim 0.7\,\mathrm{mag}\) more constraining. However, we here maintain the \(5\,\sigma\) threshold to avoid false detections.
## Appendix B Converting between different spectral bands
Converting between different spectral bands requires assuming a spectral shape. We use observed stellar spectra from the third data release of the X-shooter spectral library (XSL9; Verro et al., 2022). We only select dereddened spectra that are corrected for slit losses and have data over the entire wavelength range. We interpolate over telluric bands and chip gaps (wavelength ranges given in Fig. A.3 of Lancon et al., 2021) with a quadratic spline function. Supergiant spectra are not available for the coolest RSGs (more evolved than type M5) and we use giant spectra instead. We calculate bolometric magnitudes by integrating the XSL spectra between \(3500\,\mathrm{\AA}\) and \(24800\,\mathrm{\AA}\) and single-band magnitudes by integrating over the spectrum multiplied with the filter profile. Figure B3 shows the resulting bolometric corrections for supergiant or giant stars of different spectral types.
Footnote 9: [http://xsl.u-strasbg.fr](http://xsl.u-strasbg.fr)
We assume that blue supergiant (BSG) progenitors have spectra similar to B2 stars (Walborn et al., 1989). For YSG progenitors of SNe IIb we adopt an A5 star spectrum (Kilpatrick et al., 2022), and Kilpatrick et al. (2021) found that the only detected SN Ic progenitor has a similar \(V-I\) color. The progenitors of SNe Ib are assumed to be similar to F2 stars (Xiang et al., 2019). We caution that the true spectral shapes are largely unknown, such that the resulting bolometric corrections have large uncertainties.
Davies & Beasor (2018) carefully consider the temperatures of RSG progenitors and conclude that they are most similar to the reddest RSGs. Davies & Beasor (2018), therefore, adopt an average M4-M7.5 type spectrum. However, none of the M-type supergiants in the XSL is as red (compare Fig. B3). Instead of picking the reddest RSG in the catalog (shown as a dark red line in Fig. B3), we adopt the spectrum of a slightly hotter M4 star for all RSGs, except the progenitors of SN 2003gd and SN 2013ej, for which we assume M3 and M2 spectra following Davies & Beasor (2018). The spread of the red lines in Fig. B3 illustrates the size of the uncertainty for the entire range of M-type giants. Most RSG progenitors are detected in the F814W band, where the bolometric corrections differ by up to \(0.5\,\mathrm{mag}\) between M supergiants of different types. However, the impact of the unknown progenitor temperatures is much larger in the \(r\) band, where the cooler M9 stars are \(2\,\mathrm{mag}\) fainter than M0 supergiants. As a consequence, RSG observations in the \(r\) band, including the ZTF data analyzed in Sect. 2, have highly uncertain bolometric magnitudes.
|
2309.05133 | Parallel RAM from Cyclic Circuits | Known simulations of random access machines (RAMs) or parallel RAMs (PRAMs)
by Boolean circuits incur significant polynomial blowup, due to the need to
repeatedly simulate accesses to a large main memory.
Consider a single modification to Boolean circuits that removes the
restriction that circuit graphs are acyclic. We call this the cyclic circuit
model. Note, cyclic circuits remain combinational, as they do not allow wire
values to change over time.
We simulate PRAM with a cyclic circuit, and the blowup from our simulation is
only polylogarithmic. Consider a PRAM program $P$ that on a length-$n$ input
uses an arbitrary number of processors to manipulate words of size $\Theta(\log
n)$ bits and then halts within $W(n)$ work. We construct a size-$O(W(n)\cdot
\log^4 n)$ cyclic circuit that simulates $P$. Suppose that on a particular
input, $P$ halts in time $T$; our circuit computes the same output within $T
\cdot O(\log^3 n)$ gate delay.
This implies theoretical feasibility of powerful parallel machines. Cyclic
circuits can be implemented in hardware, and our circuit achieves performance
within polylog factors of PRAM. Our simulated PRAM synchronizes processors via
logical dependencies between wires. | David Heath | 2023-09-10T20:53:18Z | http://arxiv.org/abs/2309.05133v3 | # Parallel RAM from Cyclic Circuits
###### Abstract
Known simulations of random access machines (RAMs) or parallel RAMs (PRAMs) by Boolean circuits incur significant polynomial blow-up, due to the need to repeatedly simulate accesses to a large main memory.
Consider a single modification to Boolean circuits that removes the restriction that circuit graphs are acyclic. We call this the _cyclic circuit_ model. Note, cyclic circuits remain _combinational_, as they do not allow wire values to change over time.
We simulate PRAM with a cyclic circuit, and the blow-up from our simulation is only _polylogarithmic_. Consider a PRAM program \(\mathcal{P}\) that on a length-\(n\) input uses an arbitrary number of processors to manipulate words of size \(\Theta(\log n)\) bits and then halts within \(W(n)\) work. We construct a size-\(O(W(n)\cdot\log^{4}n)\) cyclic circuit that simulates \(\mathcal{P}\). Suppose that on a particular input, \(\mathcal{P}\) halts in time \(T\); our circuit computes the same output within \(T\cdot O(\log^{3}n)\) gate delay.
This implies theoretical feasibility of powerful parallel machines. Cyclic circuits can be implemented in hardware, and our circuit achieves performance within polylog factors of PRAM. Our simulated PRAM synchronizes processors via logical dependencies between wires.
_Keywords--_ Parallel Random Access Machine, Boolean Circuits, Circuit Cycles
###### Contents
* 1 Introduction
* 1.1 Our Contribution: PRAM from Boolean Gates
* 1.2 Intuition and Novelty
* 1.3 Related Work
* 2 Overview
* 2.1 Simulating Random Access with Gates
* 2.2 Dynamic Parallelism
* 2.3 Our PRAM Circuit
* 2.4 Programming Compute Units
* 2.5 Our Asymptotics
* 3 Preliminaries
* 3.1 Word Parallel RAM
* 3.2 Notation
* 4 Cyclic Circuits
* 4.1 Syntax and Semantics
* 4.2 Complexity Measures
* 4.3 Uniformity
* 4.4 Simulating Cyclic Circuits with PRAM
* 5 Parallel Single Access Machines
* 6 PSAM from Cyclic Circuits
* 6.1 Swap
* 6.2 Helper Circuits
* 6.3 Permutations
* 6.4 Simulating PSAM
* 6.5 Compute Units
* 6.6 Uniformity
* 7 PRAM from PSAM
* 7.1 Tree Operations
* 7.2 Simulating PRAM
* 7.3 Step Complexity
* 8 Concluding Remarks
## 1 Introduction
The parallel random access machine (PRAM, [10]) model is ubiquitous in the study of parallel algorithms. The model considers a single machine that hosts an arbitrary number of _processors_, each of which operates in lockstep. At each machine step, each processor is given read/write access to a shared memory, and the machine satisfies these accesses simultaneously and in constant time.
We consider two measures of PRAM complexity: the machine's _work_ measures the total number of instructions executed across all processors, while the machine's _runtime_ (sometimes called its _span_) measures the number of machine steps. Generally speaking, parallel algorithms reduce runtime while keeping work reasonable, which often involves aggressively breaking a problem into pieces and then using different processors to work on different pieces in parallel.
PRAM processors can easily distribute problems and combine solutions due to the model's _synchronous_ nature. One processor can simply write a value to a shared memory cell with the understanding that some other processor will read that cell on the next machine step, achieving a communication channel.
Within PRAM there exists a hierarchy of variants that explain what happens when more than one processor reads/writes the same memory cell in the same step. The most powerful of these that is typically considered is concurrent-read concurrent-write (CRCW) PRAM, which, as the name suggests, allows multiple processors to read/write the same memory cell without error.
While PRAM is central to the analysis of parallel algorithms, the model - and in particular the CRCW variant - is often seen as unrealistic. It seems fundamentally challenging to build machines that implement numerous tightly synchronized processors. Today's multicore machines usually display only relatively low degrees of parallelism, often limited to only dozens or hundreds of processors, and parallelism is usually _asynchronous_, with each processor executing independently of the others. Distributed systems can reach higher levels of parallelism, with the trade-off that synchronous behavior becomes even harder to achieve. This lack of synchronization makes it harder for processors to communicate, which complicates the system. The inherent difficulty of implementing PRAM has led to alternate parallel models that forgo PRAM's power and simplicity in exchange for more accurately capturing real-world machines; see Section 1.3. However, PRAM remains an attractive abstraction for reasoning about parallelism, and implementations of the model are highly desirable.
### 1.1 Our Contribution: PRAM from Boolean Gates
We show theoretical feasibility of machines that implement PRAM by simulating PRAM with a combinational Boolean circuit. Our circuit design has size/delay that closely matches the work/runtime of the PRAM, and it can support any level of parallelism.
Our target model. We consider a PRAM that manipulates words of size \(w=\Theta(\log n)\) bits. The PRAM supports concurrent reads/writes (it is CRCW), and we consider the most powerful accepted strategy for write conflict resolution: when multiple processors write to the same address, the machine combines written values with an arbitrary associative operation \(\star\).
Our target PRAM supports varying _processor activity_; at each step, any of the processors can be inactive. The machine's total work is the sum, over all machine steps, of the number of active processors. We _do not_ limit the maximum number of active processors.
See Section 3.1 for more details of the target model.
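As a concrete illustration of the write-conflict rule (not of our circuit construction), one CRCW write step can be simulated by grouping writes by address and folding each group with \(\star\):

```python
from functools import reduce

def crcw_write_step(writes, memory, star):
    """Apply one step of concurrent writes; `star` is associative."""
    by_addr = {}
    for addr, val in writes:          # (address, value) per active processor
        by_addr.setdefault(addr, []).append(val)
    for addr, vals in by_addr.items():
        memory[addr] = reduce(star, vals)
    return memory

mem = crcw_write_step([(3, 5), (3, 9), (7, 1)], {}, max)  # {3: 9, 7: 1}
```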
Our circuit's properties. We construct a combinational circuit \(C\) that simulates the above PRAM. To construct \(C\), we make only one meaningful change to the classically accepted definition of a Boolean circuit: Boolean circuits are classically _acyclic_, and we remove this constraint. This generalization of circuits has been considered by prior work; see Section 1.3. We emphasize that our circuit remains combinational in that its wires are _stateless_: each of \(C\)'s wire values is a (deterministic) function of \(C\)'s input.
Our main result is as follows:
**Theorem 1** (PRAM from Cyclic Circuits).: _Let \(\mathcal{P}\) be a PRAM program that on length-\(n\) inputs halts within \(W(n)\) work. There exists a cyclic circuit \(C\) with \(O(W(n)\cdot\log^{4}n)\) fan-in-two gates such that for any length-\(n\) input \(\mathbf{x}\), \(C_{n}(\mathbf{x})=\mathcal{P}(\mathbf{x})\). Suppose that on input \(\mathbf{x}\), \(\mathcal{P}(\mathbf{x})\) halts in time \(T\). Then \(C_{n}(\mathbf{x})\) computes \(\mathcal{P}(\mathbf{x})\) within \(T\cdot O(\log^{3}n)\) gate delay._
\(C\) is essentially an _ideal_ PRAM, modulo polylog performance factors. \(C\) closely matches PRAM in terms of both work and runtime, and because \(C\) is built from Boolean gates, it is theoretically feasible that we could build \(C\). Informally speaking, the constant factors in \(C\)'s size and delay are also reasonable, as we _do not_ use concretely expensive components such as expander graphs [1].
Theorem 1 - together with a relatively obvious simulation of cyclic circuits by PRAM (Theorem 2) - implies that cyclic circuits and PRAM are nearly _equivalent_ in power. Indeed, the complexity classes implied by the two models are closely related:
**Corollary 1** (PRAM and Cyclic Circuits - informal).: _Denote by \(\mathsf{PRAM}(W(n),T(n))\) the set of problems solvable by a bounded-word-size PRAM (Section 3.1) within \(O(W(n))\) work and \(O(T(n))\) time. Denote by \(\mathsf{CCKT}(W(n),T(n))\) the set of problems solvable by a cyclic circuit family \(\{C_{n}:n\in\mathbb{N}\}\) where \(C_{n}\) has \(O(W(n))\) size and computes its output within \(O(T(n))\) delay. For all \(W(n)\) and for all \(T(n)\):_
\[\mathsf{PRAM}(W(n)\cdot\mathrm{poly}(\log n),\;T(n)\cdot\mathrm{poly}(\log n))=\mathsf{CCKT}(W(n)\cdot\mathrm{poly}(\log n),\;T(n)\cdot\mathrm{poly}(\log n))\]
(More formally, the above circuit family requires a modest notion of uniformity; see Section 4.) Hence, our work shows that to develop parallel algorithms that are efficient in terms of both work and time (modulo polylog factors), one can work with PRAM, or one can work with Boolean gates. Choose whichever is more convenient.
### 1.2 Intuition and Novelty
At a high level, our combinational circuit design consists of a large number of small _compute units_, each of which is a Boolean subcircuit that executes some single PRAM task, such as reading a memory element, adding two words, branching to a different instruction, etc. The main challenge of our simulation is in properly gluing the units together. Specifically, our simulation addresses the following key problems:
* How can we maintain a main memory such that each compute unit can access an arbitrary memory cell at amortized polylog gate cost?
* How can we coordinate compute units such that they can run in sequence or in parallel?
The main ingredients used to solve these problems are _permutation networks_, which we use as the glue between compute units. Prior works also glue PRAM with permutation networks, but the novelty of our work is in showing that _the entire simulation_ can be achieved by a single combinational Boolean circuit.
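The elementary building block of such permutation networks is the conditional swap, a \(2\times 2\) switch built from Boolean multiplexers; the sketch below illustrates the primitive only and is not our exact wiring (a full network arranges \(O(n\log n)\) such switches, e.g., in a Benes/Waksman topology):

```python
def cond_swap(a, b, c):
    """2x2 switch over bits: straight through when c = 0, crossed when c = 1.
    Each output is a Boolean multiplexer, i.e., a handful of fan-in-two gates."""
    out0 = (a & ~c & 1) | (b & c)
    out1 = (b & ~c & 1) | (a & c)
    return out0, out1

assert cond_swap(1, 0, 0) == (1, 0)
assert cond_swap(1, 0, 1) == (0, 1)
```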
Simulating PRAM with just Boolean gates requires careful handling. In particular, (1) we ensure our permutation networks satisfy a _dynamic_ property, which prevents compute units from 'blocking' execution, and (2) we 'program' compute units to upgrade from the limited memory routing of a permutation to full-fledged CRCW parallel random access memory. Section 2 explains our approach at a high level.
To our knowledge, this polylogarithmic connection between PRAM and combinational circuits has been overlooked. We believe that this may be due in large part to a common misconception that combinational circuits _must_ be acyclic. This is simply false, and Boolean circuits with cycles are well defined, without introducing notions from sequential hardware design such as stateful wires and timing; see examples throughout this work and discussion in [14]. Even with the insight that cycles are well defined and useful, careful handling is required, and this work describes such handling in detail.
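As a toy illustration of why cyclic combinational circuits are well defined, consider two multiplexers whose outputs form a structural cycle: for every setting of the select bit only one direction of the cycle is live, so a monotone fixpoint iteration assigns every wire a unique Boolean value. This sketch is meant only as intuition, not as the formal semantics of [14] or of Section 4:

```python
def mux(s, a, b):
    """Three-valued multiplexer; None marks a not-yet-determined wire."""
    if s == 1: return a
    if s == 0: return b
    return a if (a is not None and a == b) else None

def eval_cyclic(s, x, z):
    y1 = y2 = None
    while True:                                  # monotone fixpoint iteration
        n1, n2 = mux(s, x, y2), mux(s, y1, z)
        if (n1, n2) == (y1, y2):
            return y1, y2
        y1, y2 = n1, n2

assert eval_cyclic(1, x=1, z=0) == (1, 1)   # s=1: y1 = x, y2 = y1
assert eval_cyclic(0, x=1, z=0) == (0, 0)   # s=0: y2 = z, y1 = y2
```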
### 1.3 Related Work
PRAM and Circuits. [25] was the first to simulate PRAM with a poly-size Boolean circuit. Their simulation incurs significant polynomial overhead. They consider a PRAM with \(n\)-word input where each data word is at most \(n\) bits long. Let \(T\) bound the runtime of the PRAM and let \(p\) denote the number of processors. Let \(L=O(n+T+\log p)\). [25] simulate their considered PRAM by a circuit with \(O(pTL(L^{2}+pT))\) unbounded-fan-in Boolean gates.
[25]'s considered machine manipulates larger words than ours, but this discrepancy does not account for the difference in circuit size. The main cost of [25]'s simulation is that the circuit simulates, \(T\) times, each of \(p\) processors scanning each element in a shared memory of size \(O(p\cdot T)\). This immediately imposes \(\Omega(p^{2}\cdot T^{2})\) cost, even before word size is considered. The high cost of acyclic circuit simulation seems inevitable, as it is hard to imagine a correct acyclic circuit that _does not_ scan all of shared memory on each simulated instruction, and to our knowledge no follow-on work improved the simulation. While [25]'s size blow-up is high, they use unbounded fan-in gates to achieve _constant_ blow-up in terms of circuit depth, establishing an important connection between circuits and PRAM. [15] discusses this connection further, in the context of the complexity class NC.
Our PRAM simulation uses fan-in-two gates and incurs only polylog overhead in terms of both size and delay. Thus, our work can be understood as trading in polylog circuit delay in exchange for dramatically improved size. To achieve this size improvement, we must consider circuit cycles.
Cyclic Circuits. Combinational circuits with cycles have been studied in numerous works, e.g. [19, 20, 21, 22, 23] and more. These works use cycles to reduce circuit size, but they obtain only relatively small benefit. For instance, [23] demonstrate a circuit family for which they can improve by only approximately a factor of two. To our knowledge there are no prior results (aside from [1], see next) that demonstrate _asymptotic_ improvement from the inclusion of cycles. Our construction uses cycles to reduce PRAM overhead from a significant polynomial factor [25] to a relatively low _polylogarithmic_ factor. Thus, we feel that our result shows cyclic circuits are far more interesting than previously thought.
Note, our formalization of cyclic circuits is similar to that of some of these prior works, in particular to that of [23] and [24].
To our knowledge, the connection between RAM and circuit cycles went unnoticed until the work of [1]. [1] formalized a cyclic circuit model called _tri-state circuits_ (TSCs). The differences between TSCs and the cyclic circuits considered here are not important in this work; we prefer cyclic Boolean circuits here because the gates are more familiar. Similar to our work, [1] demonstrate circuits that implement (non-parallel) RAM. Both our approach and [1]'s approach leverage _dynamic permutation networks_, a key ingredient for simulating memory accesses. [1] show that for a word RAM program running for \(T=O(\operatorname{poly}(n))\) steps, there exists a (randomized) TSC with \(O(T\cdot\log^{3}T\cdot\log\log T)\) total gates that simulates the word RAM. Our work follows on from [1], taking the interesting and non-trivial step from RAM to PRAM.
Connections to Cryptography.[1] used their TSC-based construction in the context of a cryptographic technique called circuit garbling [26]. By applying TSCs, [1] obtained a new approach to 'Garbled RAM' [1]. This application required [1] to consider _oblivious_ TSCs. Oblivious TSCs are roughly analogous to oblivious Turing Machines [23]. In short, circuits with cycles allow gates to propagate values in data-dependent orders; in an oblivious TSC, this order must 'appear' (in a cryptographic sense) independent of the circuit's input.
Extending our PRAM results to oblivious execution could enable interesting results for 'oblivious parallel garbled RAM' [1]. We do not pursue this result here, and achieving cyclic-circuit-based oblivious PRAM without unacceptably high overhead remains an interesting open problem.
Simulating PRAM; alternative parallel models.We are, of course, not the first to specify a construction that simulates PRAM. However, a key difference between our work and prior works is that most
prior works take as primitive the notion of a standalone processor. Namely, they start from the assumption that there is some collection of processors, and what needs to be achieved is an appropriate coordination of those processors, see e.g. [21, 22].
Achieving PRAM starting from independent processors is challenging, because inherently asynchronous processors must be synchronized. This challenge has led researchers to propose numerous alternative parallel models, e.g. the aggregate model [10], the Bulk Synchronous Parallel model [23], LogP [11], multithreaded models [1], the Massively Parallel Communication model [1], the fork-join model and its variants [1, 2], and more [12]. The common thread of such models is that they embrace the asynchronous nature of independently executing processors. While these models are important for understanding how to control large and/or distributed systems, PRAM arguably remains the central ingredient in the study of parallel algorithms, see e.g. [1, 1].
Our approach circumvents the difficulty of processor synchronization. For us, synchronous behavior is 'free' in the sense that we coordinate simulated processors with simple logical dependencies between wires. This automatically enforces a notion of lockstep behavior between processors, without the need for any additional enforcement.
As an aside, we technically introduce a parallel model, which we call the parallel single access machine (PSAM) model; see Section 5. PSAM is a relatively natural weakening of PRAM. We introduce PSAM simply as an intermediate step of our PRAM simulation, but the model may be of independent interest. To our knowledge, this weakening of PRAM has not been explored.
Sorting networks and permutation networks.The key ingredient in our PRAM is a dynamically routing _permutation network_. Permutation networks are the subject of many works, see e.g. the classic works of [1, 22, 23].
[10] presented a _self-routing_ permutation network using \(O(n\cdot\log^{2}n)\) swap operations. Their network is based on a _binary radix sort_. Our construction also features a self-routing permutation network whose structure is similar to that of [10]. Our network (1) can be constructed from Boolean gates and (2) permutes _dynamically_, by which we mean that even if inputs arrive one by one, each input can be routed to its destination _before_ the next input arrives. [10] showed their network automatically routes all input packets to output destinations, but they do not show their network achieves the above _dynamic_ property where packets pass through the network even when not all packets are available.
[13] also constructed their circuit-based RAM from a permutation network, but our network has better constants, and - as we will see - it supports parallelism. [13]'s network supports sequential routing only.
## 2 Overview
This section sketches the main components of our approach at a high level. Subsequent sections formalize the ideas explained here.
### Simulating Random Access with Gates
Consider the Boolean basis of AND gates, XOR gates, and a distinguished constant wire holding 1. Now, (1) modify this basis such that circuits are allowed to have cycles and (2) ensure AND gates output zero eagerly. Namely, an AND gate where one input wire holds zero outputs zero, regardless of the content of the second input wire. Circuit cycles allow us to run certain parts of circuits before some input wires to those parts are computed. As we will see, this unlocks the ability to execute subcircuits in _data-dependent orders_. By leveraging this, we can simulate PRAM with only a quasilinear number of gates.
Running gates in data-dependent orders.We start with an example that demonstrates this key data-dependence. To explain our example, we need a helper component called a multiplexer. A multiplexer is a standard circuit component that selects between two bits \(x\) and \(y\) based on some selection bit \(s\).
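One natural gate-level sketch of such a multiplexer (ours, mirroring the \(\mathsf{swap}\) gate defined in Section 6) is:

\[\mathsf{MUX}(s,x,y)\triangleq(\neg s\cdot x)\vee(s\cdot y)\]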
Because of AND's eager semantics, the output wire of the MUX _does not depend_ on whichever input is not selected. For instance, if \(s=0\), the output does not depend on \(y\), so the circuit can compute the multiplexer output _before_ it computes \(y\). In other words, this multiplexer inherits the eager behavior of AND.
Now, consider the following cyclic circuit on the left1 which combines three multiplexers and two unspecified subcircuits \(f\) and \(g\):
Footnote 1: This left example was noticed also in prior works, e.g. [12, 13].
Suppose we set the left circuit's selection bit \(s\) to zero. In this case, the two multiplexers on the left select their top argument, and the multiplexer on the right selects its bottom argument. Due to the eager behavior of multiplexers, the top left multiplexer outputs \(x\), even though its bottom input is not yet computed. \(x\) flows into \(f\), and \(f(x)\) flows to the bottom left multiplexer, which passes \(f(x)\) to \(g\). Thus, the final multiplexer - which outputs its bottom input - outputs \(g(f(x))\).
If we instead set \(s\) to one, then the top left multiplexer initially cannot fire, but the bottom left multiplexer can. By tracing execution, we see that the circuit outputs \(f(g(x))\). Thus, the circuit computes the following:
\[y=\begin{cases}g(f(x))&\text{if }s=0\\ f(g(x))&\text{if }s=1\end{cases} \tag{1}\]
Our example uses \(f\) and \(g\) in an order that depends on the runtime value \(s\). This demonstrates a clear advantage over acyclic circuits, because an acyclic circuit cannot in general compute the above function unless we include an extra copy either of \(f\) or of \(g\).
Connecting subcircuits with permutations.We can generalize our example to more than two subcircuits via a _permutation network_. A permutation network is a circuit that _routes_ each of its inputs to a distinct output. The routing of a permutation network can be chosen by runtime values, so if we connect each network output to the input of some subcircuit, and if we cyclically connect the output of each subcircuit to a network input, then we can compose an arbitrary number of subcircuits in a data-dependent order. Importantly, permutation networks can be built from only a quasilinear number of gates; see Section 6.
This generalization becomes interesting when we add more connections between subcircuits; see the above right-hand example. In this example, we connect subcircuits via a permutation \(\pi\), and we also _sequentially_ connect the subcircuits. By properly setting up the subcircuits, we can arrange that \(f_{0},...,f_{3}\) run sequentially, and each \(f_{i}\) can _write_ a wire value by sending it to \(\pi\).
By properly choosing \(\pi\)'s programming string \(\mathbf{s}\), we can arrange that \(\pi\) routes \(f_{i}\)'s written value to some other subcircuit \(f_{j\neq i}\), allowing \(f_{j}\) to _read_ the write. \(f_{j}\)'s output might (or might not, especially if the write is made by a future subcircuit) depend on the read value. This _almost_ simulates a memory access: we write values to memory cells (wires), and then read them at arbitrary later points in the evaluation.
From permutations to memory.There are two gaps between our example and full-fledged memory. The first gap is that in our example the permutation network routing decisions \(\mathbf{s}\) are made _globally_ and independently of the execution of the subcircuits. To achieve random access memory, each subcircuit must _locally_ select its own read address, requiring that the routing of the permutation network be chosen on the fly, as the subcircuits run. The second gap is that our example is limited in that each memory cell can only be read _once_, due to our use of a permutation. We discuss resolution of this second gap in Section 2.4.
To resolve the first gap, Section 6 constructs a permutation network which is _dynamically programmed_. The network uses typical butterfly network configurations of swap elements, and it is similar in structure to existing networks, e.g. [1, 1]. We show that these familiar structures can be made to work properly in the context of a cyclic circuit.
Our emphasis in Section 6 is in showing our network's crucial _dynamic_ property: in our network, each routing decision can be made based only on a _prefix_ of inputs to the network. This dynamic property is merely a result of the order of dependencies between network wires: the routing of the \(i\)-th input simply does not depend on the routing of any subsequent input \(j>i\). Thus, our network can route each of its inputs eagerly, before subsequent inputs to the network are computed. This allows each of our subcircuits to locally choose its own read address while preventing our simulation from becoming 'deadlocked' with two subcircuits each waiting on the choice of the other.
### Dynamic Parallelism
The above discussion sketches how cyclic circuits can achieve sequential RAM, but our goal is to construct _parallel_ RAM. We consider a PRAM whose number of active processors can vary over the program execution. Because of this, PRAM runtime can also vary. Our cyclic circuit faithfully simulates this varying parallelism, matching runtime performance in its _gate delay_, up to polylog factors. If the simulated PRAM is highly parallel, then the circuit has low delay; if the PRAM is highly sequential, then the circuit has higher delay.
The following example circuit on the left illustrates how varying delay can be achieved:
(Figure omitted: example circuits illustrating varying delay.) Figure 1 sketches the overall architecture of our circuit-based PRAM, which strings together a sequence of _compute units_ \(\mu_{0},\mu_{1},\ldots\)
PRAM execution starts at unit \(\mu_{0}\), and it proceeds to the right through subsequent units. For instance, if the PRAM behaves fully sequentially, then \(\mu_{0}\) will compute some small task, then pass its state to \(\mu_{1}\), which will compute some subsequent task, and so on. The complexity of our construction comes from the _coordination_ of compute units. We must arrange that (1) units can read/write a large shared memory and (2) units can run _in parallel_.
Filters.To coordinate units, we apply permutation-like circuits called _filters_:
A filter implements a routing from \(n\) _sources_ to \(n/2\) _targets_. Half of the sources should be tagged with zero and half with one; the sources tagged with one are connected to the targets (preserving order), and those tagged with zero are 'filtered out'. Section 6 constructs a dynamically routing filter with quasilinear size and logarithmic delay.
The role of filters in our PRAM is that they allow our compute units to _opt in_ to particular machine capabilities. For instance, our machine places its input tape behind a filter. By sending a 1 to the filter, a compute unit indicates that it would like to read a word of input, and the filter responds with the next input word; by sending a 0, the unit indicates it does not need an input tape word, and the filter replies with an all-zero string. Requiring units to explicitly opt in/out of each machine capability is tedious, but necessary. This requirement ensures that early compute units do not "block" the operation of later units, since we can use the explicit opt-out to appropriately set up logical dependencies with Boolean gates.
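As a concrete (non-circuit) reference model of this opt-in behavior, the following Python sketch models the input tape behind a filter; the function name and the all-zero default word are our illustration of the semantics described above:

```python
def input_tape_filter(opt_ins, tape):
    """Model of an input tape behind a filter.

    opt_ins: one bit per compute unit, in unit order; 1 means the unit
    requests the next input word, 0 means it opts out.
    Units that opt in receive successive tape words (order preserved);
    units that opt out receive an all-zero word.
    """
    words = iter(tape)
    return [next(words) if bit else 0 for bit in opt_ins]

# Example: units 0 and 2 opt in, unit 1 opts out.
assert input_tape_filter([1, 0, 1], [7, 8, 9]) == [7, 0, 8]
```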
Figure 1: Sketch of our circuit-based PRAM. The circuit has five main components: (1) a collection of compute units \(\mu_{i}\), (2) a permutation network and two filters that jointly implement memory, (3) a filter that acts as a parallel coordination mechanism between compute units, (4) an input tape behind a filter (not depicted), and (5) an output tape behind a filter (not depicted).
Our filters require that precisely half of the units opt in and the other half opt out. This requirement is not hard to achieve, but it requires that we use additional compute units which "clean up" machine execution by consuming unused resources.
Enabling dynamic parallelism via a filter.One important difference between Figure 1 and our earlier example from Section 2.1 is that in Figure 1 we _do not_ connect compute units \(\mu_{i}\) to each other directly. Instead, connections pass through a filter (top of Figure 1). This filter acts as a coordination mechanism, and its operation is key to our dynamic parallelism. A compute unit can send a message through the filter, and this message is routed to some subsequent compute unit. This allows compute units to pass state to successor units, activating those successors and continuing the computation.
Notice that each compute unit \(\mu_{i}\) is connected to the source side of the coordination filter _twice_. This is crucial. By connecting each unit twice, we allow unit \(\mu_{i}\) to activate _up to two_ children. \(\mu_{i}\) can decide dynamically how many children it will activate by tagging modified copies of its state with 0 or 1 before sending them to the filter. Think of \(\mu_{i}\) as representing a single execution step of some parallel process. \(\mu_{i}\)'s number of children represents three possible continuations of this process:
* Zero children: The process terminates.
* One child: The process continues (in parallel with other processes).
* Two children: The process _forks_ into two parallel processes.
A program can quickly increase parallelism by having each unit activate two children. For example,
* \(\mu_{0}\) sends two states through the filter, activating \(\mu_{1}\) and \(\mu_{2}\) in parallel.
* \(\mu_{1}\) and \(\mu_{2}\) each send two states through the filter. \(\mu_{1}\)'s states arrive at \(\mu_{3}\) and \(\mu_{4}\), and \(\mu_{2}\)'s states arrive at \(\mu_{5}\) and \(\mu_{6}\).
* \(\mu_{3},...,\mu_{6}\) each send two states through the filter, and so on.
Note that our circuit is fully synchronous and deterministic. For instance, in this particular execution, \(\mu_{1}\)'s children are _not_ chosen arbitrarily. Each unit \(\mu_{i}\) will always have children of the lowest possible index, with priority given to parents with lower indices (i.e., priority is given to _older_ processes). This determinism comes simply from the fact that our construction is indeed a Boolean circuit, albeit with cycles.
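In the fully doubling pattern above, this lowest-index rule yields standard heap numbering of units; a one-line Python sketch (our illustration):

```python
def children(i):
    """Children of unit i when every unit activates two children,
    with lower-indexed (older) parents served first: heap numbering."""
    return (2 * i + 1, 2 * i + 2)

assert children(0) == (1, 2) and children(1) == (3, 4) and children(2) == (5, 6)
```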
Parallelism in our circuit arises from the low delay of its components. For instance, the coordination filter has logarithmic delay. Hence, even if some huge number of units 'simultaneously' request children, all requests are handled within logarithmic delay.
### Programming Compute Units
By construction, our compute units have access to the various capabilities of our machine, including reading/writing shared memory, reading the input tape, writing the output tape, activating children, and responding to the parent; see Figure 2. These capabilities are sufficient to implement our target PRAM.
However, appropriately 'programming' these compute units \(\mu_{i}\) is nontrivial. The challenge here is that our circuit implements memory via a _permutation_. Each unit can write a memory element by sending it to the permutation network, but because we use a permutation, only _one_ unit can read that write. In other words, our circuit's memory cells are inherently _single-use_. Of course, full-fledged PRAM allows repeated reads/writes to each memory address, and we must account for this discrepancy in our simulation.
The PSAM model.Our observation is that while we cannot use compute units to _directly_ run PRAM instructions, we _can_ use compute units to run simple parallel programs that manipulate binary-tree-based data structures. Single-use memory cells are sufficient for this task because we can store pointers to binary tree child nodes inside parent nodes; each time we read a tree node from single-use memory, we can write back a fresh copy of that node. With some care, we can use a tree-based program to implement PRAM.
We formalize the ability to manipulate binary trees by introducing a model that we call the parallel single access machine (PSAM) model. In short, this model is the same as PRAM (Section 3.1), except that each memory address can only be written to once and read from once. See Section 5 for details.
Thus, we decompose our simulation into two parts.
* First, we show that cyclic circuits can simulate PSAM. Each of our compute units simulates a single PSAM instruction, and we glue our units with permutations and filters.
* Second, we show that PSAM can simulate PRAM. Our PSAM maintains binary trees that store PRAM data, and by repeatedly traversing the trees, our PSAM simulates PRAM.
Plugging these together yields our contribution.
Our PSAM program.Our PRAM simulation uses circuit-based compute units to run a small PSAM program that manipulates two trees. The first _memory tree_ holds at its leaves all words that have been written in the PRAM shared memory; the second _processor tree_ holds at its leaves the state of each active PRAM processor. The high level goal of our PSAM program is to in parallel traverse the two trees together, matching up the state of each processor with the memory element that processor wishes to access. By doing so, we can take one step in that processor's execution. By repeatedly traversing and rebuilding these two trees, we simulate a full PRAM execution.
In more detail, the memory tree is a log-depth binary tree where each leaf along some path \(i\) encodes the memory value written to address \(i\). The processor tree is also a log-depth binary tree, and each of its leaves stores the local state of some active PRAM processor. The processor tree is arranged such that each processor's state _aligns_ with the memory address that processor wishes to access. We sketch an example:
Here, we consider a small memory with three addresses and three processors \(\rho_{0},\rho_{1},\rho_{2}\). In our example, \(\rho_{1}\) wishes to access memory address \(0\) and \(\rho_{0},\rho_{2}\) each wish to access address \(2\). We ensure that \(\rho_{1}\)'s state is in the subtree rooted at position \(0\), and \(\rho_{0}\)'s and \(\rho_{2}\)'s states are each in the subtree rooted at position \(2\). Our memory tree does not have an entry for address \(1\) because no processor has yet accessed that address. We implicitly store the all zeros string in each unaccessed address.
Our PSAM program implements a single PRAM machine step via a recursive procedure. The PSAM procedure is roughly as follows:
Figure 2: Structure of compute units \(\mu_{i}\). Each unit activates when it receives a local state from its parent. It then performs some combination of the following: write a value to memory, read a value from memory, activate up to two children, read a word from the input tape, write a word to the output tape, compute some function of local state. By properly programming compute units, we achieve PRAM.
* Simultaneously and recursively traverse the memory tree and the processor tree. When the current processor tree node has two children, the current PSAM processor forks execution, and we continue in parallel down both branches of the memory/processor tree.
* Suppose we reach a memory leaf storing value \(x_{i}\) (or an empty tree, denoting implicitly that \(x_{i}=0\)). Save \(x_{i}\) in the PSAM processor's local storage, then continue traversing the processor tree. When the current processor tree node has two children, fork execution, broadcasting \(x_{i}\) to each child.
* Ultimately, the PSAM processor arrives at a leaf storing some PRAM processor state \(\rho\). We compute a single PRAM instruction based on state \(\rho\) and value \(x_{i}\). The instruction writes2 back some memory value \(x_{i}^{\prime}\). We encode the written value into a sparse tree with exactly one leaf, and this leaf is stored on path \(i\). Additionally, the instruction can create zero, one, or two subsequent PRAM processor states. We encode these states into a tree with zero, one, or two leaves, each rooted at the memory address they wish to access next. Thus, we create a fresh memory tree and a fresh processor tree. Footnote 2: If the PRAM processor is merely reading a memory value, it can write back whatever it just read.
* The recursion begins to unwind. As it unwinds, we _merge_ together memory trees (resp. processor trees) created by recursive calls. Consider a merge on trees \(t_{0},t_{1}\). Our merge operation \(\uplus\) ensures that each leaf node in \(t_{0}\) (resp. \(t_{1}\)) appears on the same path in the merged tree \(t_{0}\uplus t_{1}\) as it does in \(t_{0}\) (resp. \(t_{1}\)). When two merged trees share some leaf position, we combine those leaves with some binary associative operator \(\star\). We can ensure that no two merged processor trees share leaves, so only memory tree merges use \(\star\), and this is how we resolve write conflicts (see Section 3.1). Computing \(t_{0}\uplus t_{1}\) is _efficient_ because we ensure that \(t_{0},t_{1}\) are sparse and/or \(t_{0},t_{1}\) share few leaf positions.
When the recursion completes, we are left with a fresh memory tree and a fresh processor tree. The fresh memory tree stores the combined writes from each processor, as well as untouched content from the initial memory tree; the fresh processor tree stores all updated active processor states, and those states are rooted at the memory address they wish to access next. Thus, we are back in a state accepted by our procedure, and we can apply the procedure again to implement another machine step. Each call to this procedure takes one step in each processor, and it runs in \(O(\log n)\) time. By calling the procedure until the processor tree is empty, we simulate an end-to-end PRAM execution.
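As a reference model (ours, not the circuit itself) of the merge operation \(\uplus\), the following Python sketch represents a sparse tree by a map from leaf paths to words and combines shared leaves with \(\star\):

```python
def merge(t0, t1, star):
    """Merge two sparse trees, modeled as {leaf path: word} maps.

    Each leaf keeps its path in the merged tree; leaves present in both
    trees are combined with the associative operator star (e.g., this is
    how colliding writes to one memory address are resolved). We apply
    star with t0's leaf first; in the machine, operand order follows
    processor priority.
    """
    out = dict(t0)
    for path, word in t1.items():
        out[path] = star(out[path], word) if path in out else word
    return out

# Example: writes to address 2 collide and are combined by addition.
assert merge({2: 5}, {2: 7, 0: 1}, lambda a, b: a + b) == {2: 12, 0: 1}
```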
### Our Asymptotics
Recall from Theorem 1 that our circuit has size \(O(W(n)\cdot\log^{4}n)\) and delay \(T\cdot O(\log^{3}n)\). We explain the sources of these factors.
Our circuit simulates \(W(n)\) PRAM instructions, which in turn requires that it simulate \(O(W(n)\cdot\log n)\) PSAM instructions. This log factor loss comes from the fact that our PSAM program repeatedly traverses log-depth binary trees (Section 2.4). Hence, each PRAM instruction is simulated by \(O(\log n)\) PSAM instructions.
These PSAM instructions are, in turn, handled by \(O(W(n)\cdot\log n)\) compute units (Figure 2). Recall that these units must read/write memory elements, and this leads to the bottleneck in our circuit's size. In particular, we connect these units by instantiating a permutation network, and to permute \(m\) inputs our network uses \(O(m\cdot\log^{2}m)\) swap operations. Because the network handles \(\Theta(\log n)\)-bit words, each swap requires \(\Theta(\log n)\) Boolean gates. Plugging in \(m=O(W(n)\cdot\log n)\) as the number of permuted inputs, the size of the network is \(O(W(n)\cdot\log^{4}n)\) Boolean gates, dominating our circuit's size.
In terms of delay, our circuit incurs cost from (1) the \(O(\log^{2}n)\) delay of our permutation network, (2) the \(O(\log n)\) overhead of the simulation of PRAM by PSAM, and (3) the inherent \(T\) steps required by the PRAM program itself. Combining these results in our \(T\cdot O(\log^{3}n)\) total delay.
Thus, our cyclic-circuit-based simulation indeed achieves performance within polylog factors of the target PRAM. The following sections expand on discussion given here, presenting our circuits and our simulation in full detail.
## 3 Preliminaries
### Word Parallel RAM
Our target model is a CRCW PRAM with bounded word size. The PRAM allows processors that vary in number across program steps, and the PRAM combines write conflicts with an associative operator \(\star\). The following explains in detail.
Terminology.The PRAM's input length is denoted \(n\). The PRAM manipulates words of size \(w=\Theta(\log n)\) bits. PRAM input is stored on a tape; when a processor reads the input tape, that word is popped from the tape such that when another processor reads the input tape, it obtains the next input word. We similarly store the PRAM output on an output tape.
We place a modest bound on the PRAM's addressable main memory: the highest memory address is at most polynomial in \(n\). This ensures that (1) our log-length words can address the shared memory and (2) the shared memory can be neatly encoded as a binary tree with logarithmic depth.
We place no bound on the machine's maximum number of processors \(p\). Each PRAM processor can be either _active_ or _inactive_. We refer to the number of steps for which a processor has been consecutively active as its _age_. If processor \(\rho_{0}\) activated processor \(\rho_{1}\), then we call \(\rho_{0}\) 'parent' and \(\rho_{1}\) 'child'. If a processor goes inactive, it no longer has a parent (until it is activated again).
Each processor runs the same constant-sized program with instructions indexed by natural numbers, though different processors can run different instructions on the same step (the model is MIMD). Each processor has a small3 local state storing \(O(1)\) words. Local state contains one distinguished word called the _program counter_. The program counter indicates which instruction to run next.
Footnote 3: Our circuit cannot handle large processor local state without harming its asymptotics. This is not a serious limitation, as processors can store local state in shared memory.
Complexity.The machine's _runtime_\(T\) denotes the total number of machine steps before the program halts. The machine's _work_\(W\) denotes the total number of instructions executed across all processors before the machine halts.
Syntax and semantics.At each step, each active processor runs an instruction according to its program counter. Instructions are chosen from the following grammar:
\[\begin{array}{llll}\mathsf{instr}::=&\mathbf{x}\gets f(\mathbf{x})& \text{update local state}\\ &|\ y\leftarrow\mathbf{read}\ x&\text{read address $x$; save the result in $y$}\\ &|\ \mathbf{write}\ x\ y&\text{write word $y$ to address $x$}\\ &|\ x\leftarrow\mathbf{input}&\text{read one word from the input tape}\\ &|\ \mathbf{output}\ x&\text{write one word to the output tape}\\ &|\ \mathbf{fork}\ f(\mathbf{x})&\text{activate a processor with local state $f(\mathbf{x})$}\\ &|\ \mathbf{die}&\text{go inactive}\end{array}\]
Above, \(\mathbf{x}\) refers to the processor's entire local state, and metavariables \(x,y\) refer to individual words in the local state. Metavariable \(f\) ranges over arbitrary functions that transform local state; \(f\) must be expressible as a polylog-uniform (in \(n\)) cyclic circuit (see Section 4) with \(O(\log^{2}n)\) gates and \(O(\log^{2}n)\) delay. This is sufficient for addition, subtraction, comparison, multiplication, etc. Notably, \(f\) may manipulate the program counter, allowing conditional branching.
Machine execution begins with a single active processor in an all zeros local state, and where the shared memory stores all zeros. If every processor is inactive, the machine halts.
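As a tiny illustration (our example, not part of the formal model), a single processor can sum the first two input words:

\[\begin{array}{ll}x\leftarrow\mathbf{input}&\text{pop the first input word}\\ y\leftarrow\mathbf{input}&\text{pop the second input word}\\ \mathbf{x}\gets f(\mathbf{x})&\text{where }f\text{ sets }x\gets x+y\\ \mathbf{output}\ x&\text{emit the sum}\\ \mathbf{die}&\text{go inactive; no processors remain, so the machine halts}\end{array}\]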
Conflict resolution.When more than one processor writes the same address, the machine aggregates written values using an associative operation \(\star\). \(\star\) can be instantiated by a polylog-uniform (in \(n\)) cyclic circuit with at most \(O(\log^{2}n)\) gates and \(O(\log^{2}n)\) delay. This is sufficient to aggregate by, e.g., adding, multiplying, taking the first, taking the maximum, etc.
Since \(\star\) is not necessarily commutative, the order in which the machine combines values matters. Similarly, multiple processors might simultaneously read from the input tape/write to the output tape. The machine resolves such conflicts according to processor age, where older processors receive priority; ties are broken by the age of the processor's parent at the time the child was activated, then by the age of the grandparents, and so on. Since machine execution starts with one processor, and since each processor can only fork one child at a time, this resolution is unambiguous.
When two or more processors read the input tape in the same step, the processor with the highest priority pops the first remaining word of the tape, the processor with the second-highest priority pops the next word, and so on. Writing to the output tape is handled in the same manner, with higher priority processor output appearing first.
### Notation
* All logarithms are base two.
* Vectors are written in bold: \(\mathbf{x}\).
* Vectors are indexed using bracket notation: \(\mathbf{x}[i]\). Indexing starts at zero.
* \(\mathbf{x}[i..]\) denotes the subvector of \(\mathbf{x}\) starting from index \(i\). \(\mathbf{x}[i..j]\) denotes the subvector of \(\mathbf{x}\) starting from index \(i\) and ending at index \(j\), inclusive.
* \([\;]\) denotes an empty vector and \([x]\) denotes a singleton vector holding \(x\).
* \(\mathbf{x}\sqcup\mathbf{y}\) denotes the concatenation of \(\mathbf{x}\) and \(\mathbf{y}\).
* \(x\triangleq y\) denotes that \(x\) is equal to \(y\) by definition.
*'msb' stands for'most significant bit'; 'lsb' stands for 'least significant bit'.
* We view index zero as the msb of a vector, as it is furthest to the left.
## 4 Cyclic Circuits
We simulate PRAM with a cyclic circuit. This section formalizes cyclic circuits, and we explain the semantics and complexity measures of the model.
### Syntax and Semantics
For concreteness, we choose a particular gate set, allowing AND/XOR circuits with cycles:
**Definition 1** (Cyclic Circuit).: _A cyclic circuit \(C\) is a circuit allowing cycles (i.e., its wiring graph need not be acyclic) composed from fan-in two AND/XOR gates. \(C\) has \(n\) input wires and \(m\) output wires. \(C\) may use a distinguished wire \(1\) that holds constant \(1\). Each wire in \(C\) has a distinct identifier from some set \(\mathsf{wire\text{-}id}\)._
The semantics of cyclic circuits are defined by stating which values can appear on each circuit wire. An _assignment_ (defined next) is a _map_ from wires to values such that assigned values satisfy constraints imposed by gates.
**Definition 2** (Assignment).: _Let \(C\) be a cyclic circuit and let \(\mathbf{x}\in\{0,1\}^{n}\) be an input. An **assignment** for \(C,\mathbf{x}\) is a map \(\mathsf{assign}:\mathsf{wire\text{-}id}\to\{0,1\}\) that sends each of \(C\)'s wires to a value. An assignment \(\mathsf{assign}\) is considered **valid** if (1) each \(i\)-th input wire is sent to corresponding input value \(\mathbf{x}[i]\), and (2) the output wire of each gate \(g\) is related to \(g\)'s input wires according to \(g\)'s function:_
\[\begin{array}{c|cc}\oplus&0&1\\\hline 0&0&1\\1&1&0\end{array}\qquad\begin{array}{c|cc}\cdot&0&1\\\hline 0&0&0\\1&0&1\end{array}\]
We emphasize that AND outputs zero if either of its arguments is zero. This captures the eager nature of AND, enabling our constructions.
Legal circuits.Consider a simple circuit \(C\) defined as follows:
\[\textbf{let}\ y=x\cdot y\ \textbf{in}\ y \tag{2}\]
So far, \(C\) is well-defined with respect to Definition 1: it is a single AND gate whose second input wire is its own output.
However, this circuit is problematic. Suppose we consider input \(x=1\). The pair \(C,x\) admits two valid assignments:
\[\{x\mapsto 1,y\mapsto 0\}\qquad\{x\mapsto 1,y\mapsto 1\}\]
Indeed, both settings of \(y\) satisfy the AND gate constraint.
While it is possible to consider cyclic circuits with multiple valid assignments, it is far simpler to only consider those circuits that have exactly one assignment per input. We say that such circuits are _legal_.
**Definition 3** (Legal Cyclic Circuit).: _A cyclic circuit \(C\) is considered **legal** if for any input \(\mathbf{x}\in\{0,1\}^{n}\), the pair \(C,\mathbf{x}\) has exactly one valid assignment (Definition 2). If \(C\) is not legal, it is considered **illegal**._
Henceforth and outside this section, all considered cyclic circuits are legal. For instance, Theorem 1 refers to legal cyclic circuits, and the example circuits we considered in Section 2 were legal.
**Notation 1** (Wire values).: _When \(C\) and \(\mathbf{x}\) are clear from context, we denote the single assignment for \(C,\mathbf{x}\) by \(\mathsf{val}\). We denote by \(C(\mathbf{x})\) the string of values in the image of \(\mathsf{val}\) corresponding to \(C\)'s output wires._
### Complexity Measures
In standard _acyclic_ Boolean circuits, we typically measure circuit complexity via size and depth. In _cyclic_ circuits, we instead measure size and _delay_. The size of a cyclic circuit \(|C|\) is simply its number of gates. The _delay_ of a wire measures the time needed before that wire acquires its value. We assume that each gate takes unit time to propagate input to output. Wire delay _depends on circuit input_:
**Definition 4** (Wire Delay).: _Let \(C\) be a cyclic circuit with input \(\mathbf{x}\in\{0,1\}^{n}\). The **wire delay** of \(C,\mathbf{x}\) is a map \(\mathsf{delay}:\mathsf{wire-id}\rightarrow\mathbb{N}\) that sends each wire to the lowest value satisfying the following constraints:_
\[\begin{aligned}\mathsf{delay}(w_{0}\oplus w_{1})&=1+\mathsf{max}(\mathsf{delay}(w_{0}),\mathsf{delay}(w_{1}))&&\\ \mathsf{delay}(w_{0}\cdot w_{1})&\geq 1+\mathsf{min}(\mathsf{delay}(w_{0}),\mathsf{delay}(w_{1}))&&\\ \mathsf{delay}(w_{0}\cdot w_{1})&\geq 1+\mathsf{delay}(w_{0})&&\text{if }\mathsf{val}(w_{1})\neq 0\\ \mathsf{delay}(w_{0}\cdot w_{1})&\geq 1+\mathsf{delay}(w_{1})&&\text{if }\mathsf{val}(w_{0})\neq 0\end{aligned}\]
As an example, \(\mathsf{delay}\) maps each input to \(0\), as this is the lowest natural number and as Definition 4 places no further constraint. The delay of an AND gate depends on its inputs, reflecting the gate's eager nature. If an AND gate's faster input holds zero, then the gate has low delay; if not, the gate has higher delay, as it must wait until its slower input acquires its Boolean value.
Note that because each wire's value is uniquely determined by the circuit input (Definition 3), it is relatively straightforward that each wire must have finite delay.
Circuit delay.Formally, wire delay is a measure for _wires_, but it is also useful to discuss the delay of a _circuit_. The delay of \(C\) with respect to input \(\mathbf{x}\in\{0,1\}^{n}\) is the maximum delay of \(C\)'s wires. We also consider \(C\)'s delay without a specified input \(\mathbf{x}\), which is defined as the highest delay over all inputs \(\mathbf{x}\in\{0,1\}^{n}\).
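To make these definitions concrete, the following Python sketch (our illustration, not part of the formal model) evaluates a cyclic circuit by synchronous unit-delay rounds: a wire's delay is the first round at which its value becomes determined, and AND fires eagerly as soon as one input is a known zero.

```python
def evaluate(gates, inputs):
    """gates: {out_wire: (op, in0, in1)} with op in {'AND', 'XOR'};
    inputs: {wire: bit}. Returns (val, delay) maps for determined wires."""
    val = dict(inputs)
    val['one'] = 1                        # distinguished constant-1 wire
    delay = {w: 0 for w in val}
    t, changed = 0, True
    while changed:
        t, changed, fired = t + 1, False, {}
        for w, (op, a, b) in gates.items():
            if w in val:
                continue
            if op == 'XOR' and a in val and b in val:
                fired[w] = val[a] ^ val[b]
            elif op == 'AND':
                if (a in val and val[a] == 0) or (b in val and val[b] == 0):
                    fired[w] = 0          # eager zero: other input may be unknown
                elif a in val and b in val:
                    fired[w] = val[a] & val[b]
        for w, v in fired.items():
            val[w], delay[w], changed = v, t, True
    return val, delay

# A cycle that is legal for s = 0: w = AND(s, u) and u = XOR(w, x).
gates = {'w': ('AND', 's', 'u'), 'u': ('XOR', 'w', 'x')}
val, delay = evaluate(gates, {'s': 0, 'x': 1})
assert (val['w'], delay['w']) == (0, 1)   # fires eagerly off s = 0
assert (val['u'], delay['u']) == (1, 2)
```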
### Uniformity
We are interested not only in showing that cyclic circuit families can simulate PRAM, but also the other way around. The former is far more interesting, but the latter is needed to tightly connect the two models. To show a simulation of cyclic circuits by PRAM, we need a notion of uniformity such that a PRAM can efficiently compute the description of a given circuit.
**Notation 2** (Polylog Uniform/Computable).: _Say that a circuit family \(\{C_{n}:n\in\mathbb{N}\}\) is polylog uniform if - upon input \(n\) and \(i\) - a random access Turing machine can compute the description of \(C_{n}\)'s \(i\)-th gate in time \(O(\operatorname{poly}(\log n))\). Similarly, say a quantity \(f(n)\) is polylog computable if the quantity can be computed by a random access Turing machine in time \(O(\operatorname{poly}(\log n))\)._
This uniformity is convenient for connecting PRAM and cyclic circuits, because it means PRAM can work-efficiently simulate cyclic circuits. Note that the more standard logspace uniformity condition is not sufficient for our purposes, because all we can conclude about a logspace program is that it must halt in _polynomial_ time, and our theorems cannot tolerate polynomial work blow-up. Another standard notion is DLOGTIME uniformity, but this notion seems insufficient to describe our circuits without blowing up the circuit size by a polynomial factor.
We can now more formally state Corollary 1.
**Corollary 1** (PRAM and Cyclic Circuits).: _Denote by \(\mathsf{PRAM}(W(n),T(n))\) the set of problems solvable by a bounded-word-size PRAM (Section 3.1) within \(O(W(n))\) work and \(O(T(n))\) time. Denote by \(\mathsf{CCKT}(W(n),T(n))\) the set of problems solvable by a polylog-uniform cyclic circuit family \(\{C_{n}:n\in\mathbb{N}\}\) where \(C_{n}\) has \(O(W(n))\) size and computes its output within \(O(T(n))\) delay. For \(W(n)=O(\operatorname{poly}(n))\) s.t. \(W(n)\) is polylog-computable, and for all \(T(n)\):_
\[\mathsf{PRAM}(W(n)\cdot\operatorname{poly}(\log n),T(n)\cdot \operatorname{poly}(\log n))\] \[=\mathsf{CCKT}(W(n)\cdot\operatorname{poly}(\log n),T(n)\cdot \operatorname{poly}(\log n))\]
### Simulating Cyclic Circuits with PRAM
The goal of this work is to simulate PRAM with a cyclic circuit, but the other direction is also needed to establish a strong connection between the two models. The following is a relatively straightforward fact:
**Theorem 2** (Cyclic Circuits from PRAM).: _Let \(\{C_{n}:n\in\mathbb{N}\}\) be a polylog-uniform cyclic circuit family such that each \(C_{n}\) has size at most \(W(n)=O(\operatorname{poly}(n))\). There exists a PRAM program \(\mathcal{P}\) such that for any length-\(n\) input \(\mathbf{x}\), \(\mathcal{P}(\mathbf{x})\) outputs \(C_{n}(\mathbf{x})\) within \(O(W(n)\cdot\operatorname{poly}(\log n))\) work. If on a particular input \(\mathbf{x}\), \(C_{n}(\mathbf{x})\) outputs within \(T\) delay, then \(\mathcal{P}(\mathbf{x})\) halts within time at most \(T\cdot O(\operatorname{poly}(\log n))\)._
Proof.: Straightforward from the uniformity of the circuit family.
First, \(\mathcal{P}\) computes the description of \(C_{n}\) in \(O(\operatorname{poly}(\log n))\) time and \(O(W(n)\cdot\operatorname{poly}(\log n))\) work. The description can be computed efficiently in parallel due to polylog-uniformity.
Next, \(\mathcal{P}\) simulates each gate. As its invariant, \(\mathcal{P}\) maintains a set of processors, each of which represents a wire whose Boolean value has already been computed. \(\mathcal{P}\) sets up this invariant by assigning one processor to each circuit input wire. Then, each wire processor handles fan-out by (recursively) forking children such that each connected gate has a corresponding processor. This processor attempts to evaluate the gate. If the gate's output is indeed determined by the current configuration of its input wires, then the processor marks the gate as handled and becomes the handler of the gate's output wire. (To avoid two processors handling the same output wire, the processors use gate markers to decide who takes control of the output wire; this
is possible due to PRAM's synchronous nature.) If the gate's output is not yet determined, the processor simply saves its wire value on the gate input and goes inactive.
By handling wire fan-out in a binary tree fashion, every wire value is propagated to each connected gate in \(O(\log n)\) time, and every gate is touched only twice (once per gate input). Thus, the total runtime is \(T\cdot O(\operatorname{poly}(\log n))\) and the total work is \(O(W(n)\cdot\operatorname{poly}(\log n))\).
## 5 Parallel Single Access Machines
There seems to be some natural tension between the PRAM model - which allows arbitrary re-use of stored data values - and the cyclic circuit model - where each wire can only be read by statically connected gates. We accordingly introduce an intermediate model: parallel single access machines (PSAMs). PSAM seems to be more naturally compatible with cyclic circuits because each memory address can be written to/read from only once, a natural fit for our permutation-based memory. We give two simulations:
* We show cyclic circuits can simulate PSAM.
* We show PSAM can simulate PRAM.
The PSAM model is similar to bounded PRAM (Section 3.1), with the notable exceptions that (1) each memory address can be read _only once_, and (2) written memory addresses are chosen by the machine, not the program. These restrictions are quite strong, but it remains possible to construct programs that manipulate and traverse tree-like data structures.
Syntax and semantics.PSAM is identical to PRAM (Section 3.1), except that we change the instruction set. PSAM instructions are specified by the following grammar:
\[\begin{array}{lll}\mathsf{instr}::=&\mathbf{x}\gets f(\mathbf{x})&\text{update local state}\\ &|\ y\leftarrow\mathbf{read}\ x&\text{read the value stored at address }x\text{, consuming it; save the result in }y\\ &|\ y\leftarrow\mathbf{write}\ x&\text{write }x\text{ in a fresh address; save the address in }y\\ &|\ x\leftarrow\mathbf{input}&\text{read one word from the input tape}\\ &|\ \mathbf{output}\ x&\text{write one word to the output tape}\\ &|\ y\leftarrow\mathbf{fork}\ f(\mathbf{x})&\text{activate a processor with local state }f(\mathbf{x})\text{; save a return pointer in }y\\ &|\ \mathbf{ret}\ x&\text{return value }x\text{ to the parent's return pointer and go inactive}\end{array}\]
In PSAM, we use a word-size \(w=\Theta(\log n)\) where the hidden constant is large enough to store two memory addresses, plus extra metadata, sufficient to implement binary tree data structures.
PSAM capabilities; specifying PSAM programs.Rather than tediously writing out PSAM programs in the above instruction format, we specify PSAM programs as simple recursive function definitions. Our PSAM programs manipulate _binary trees_ that store data words at their leaves. We use the following inductive definition:
\[\boxed{\mathsf{Tree}::=\mathsf{Empty}\mid\mathsf{Leaf}(\mathsf{word})\mid \mathsf{Branch}(\mathsf{word},\mathsf{Tree},\mathsf{word},\mathsf{Tree})} \tag{3}\]
Each branch node stores two pointers to its subtrees, as well as two natural numbers denoting the _depth_ of each subtree.
In the following, we argue that we can compile our recursive function definitions to PSAM programs. To aid understanding, we specify an example PSAM program that in parallel sums words stored at the leaves of a binary tree:
\[\begin{array}{l}\mathsf{sum}(t)\triangleq\\ \quad\mathbf{match}\ t\ \mathbf{with}\\ \quad\mid\ \mathsf{Empty}\rightarrow 0\\ \quad\mid\ \mathsf{Leaf}(x)\rightarrow x\\ \quad\mid\ \mathsf{Branch}(d^{\ell},t^{\ell},d^{r},t^{r})\rightarrow\\ \qquad\mathbf{let}\ (t_{\mathsf{shallow}},t_{\mathsf{deep}})\triangleq\text{order}\ (t^{\ell},t^{r})\ \text{by depths}\ (d^{\ell},d^{r})\ \mathbf{in}\\ \qquad\mathbf{let}\ s_{0}\triangleq\mathbf{PAR}(\mathsf{sum}(t_{\mathsf{shallow}}))\ \mathbf{in}\\ \qquad\mathbf{let}\ s_{1}\triangleq\mathsf{sum}(t_{\mathsf{deep}})\ \mathbf{in}\\ \qquad s_{0}+s_{1}\end{array}\]

We compile such specifications to PSAM instructions using the following conventions:

* **Function calls.** Each processor maintains a call stack in (single-use) memory: before making a call, the processor uses \(\mathsf{write}\) to save its local state, keeping the returned address; to return
from a called function, the processor uses \(\mathsf{read}\) to pop its old state and continue where it left off. In our example, the processor saves its local state before the call to \(\mathsf{sum}(t_{\mathsf{deep}})\); after the recursive call concludes, the processor reads its old state before performing the final addition.
* **Parallel execution.** Our specifications include _parallel_ behavior, e.g. traversing two branches of a tree in parallel. We denote this by writing \(\mathbf{PAR}(e)\) where \(e\) is an arbitrary expression to be computed by a child process. A PSAM processor handles parallel expressions by invoking \(\mathsf{fork}\). The value of \(e\) is represented by the pointer returned by \(\mathsf{fork}\). **Important note:** it is _crucial_ that on \(\mathbf{PAR}(e)\), the child must return before the parent uses the value of the parallel expression; else the PSAM will dereference an invalid return pointer and crash. In our example, we delegate the _shallower_ tree to the child process, ensuring the child computes \(\mathsf{sum}(t_{\mathsf{shallow}})\) using fewer instructions than the call to \(\mathsf{sum}(t_{\mathsf{deep}})\). Because PSAM processors operate in lockstep, the child computes its sub-sum before the parent, so the subsequent addition \(s_{0}+s_{1}\) does not dereference an invalid pointer.
* **Destructive behavior.** When our specifications case analyze trees, those trees are destroyed. This is reflected in our PSAM by the fact that reading a memory element invalidates the read address. It is thus important to check that specifications do not use the same tree twice. Notice that in our example, \(t\), \(t^{\ell}\), \(t^{r}\), \(t_{\mathsf{shallow}}\), and \(t_{\mathsf{deep}}\) are each used at most _once_, regardless of the program's execution path. If we wished to adjust \(\mathsf{sum}\) such that \(t\) is not 'erased', we could rebuild \(t\) as the recursion unwinds.
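Returning to the \(\mathsf{sum}\) example above, a plain-Python reference model (ours) of the tree type of (3) and the traversal is below; the \(\mathbf{PAR}\) call is modeled as an ordinary recursive call, since here we only check the computed value:

```python
from dataclasses import dataclass

EMPTY = None  # the Empty tree

@dataclass
class Leaf:
    word: int

@dataclass
class Branch:
    depth_l: int
    left: object    # Tree
    depth_r: int
    right: object   # Tree

def tree_sum(t):
    """Sum the words at the leaves of t. The shallower subtree plays the
    role of the PAR-delegated child; here we just recurse sequentially."""
    if t is EMPTY:
        return 0
    if isinstance(t, Leaf):
        return t.word
    shallow, deep = ((t.left, t.right) if t.depth_l <= t.depth_r
                     else (t.right, t.left))
    return tree_sum(shallow) + tree_sum(deep)

t = Branch(1, Leaf(3), 2, Branch(1, Leaf(4), 0, EMPTY))
assert tree_sum(t) == 7
```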
In Section 6, we simulate PSAM with Boolean gates alone. The key ideas behind this simulation are depicted in Figure 1: we implement main memory with a permutation network, and we implement each PSAM instruction with a compute unit. These units are connected to each other through a filter, allowing for calls to \(\mathsf{fork}/\mathsf{ret}\).
## 6 PSAM from Cyclic Circuits
In this section, we simulate PSAM (Section 5) with a cyclic circuit (Section 4). The goal of this section is to establish the following:
**Lemma 1** (PSAM from Cyclic Circuits).: _Let \(\mathcal{P}\) be a PSAM program (Section 5) that on length-\(n\) inputs halts within \(W(n)\) work, where \(W(n)=O(\operatorname{poly}(n))\) and the quantity \(W(n)\) is polylog-computable (Notation 2). There exists a cyclic circuit \(C_{n}\) of size \(O(W(n)\cdot\log^{3}n)\) that simulates \(\mathcal{P}\) on all length-\(n\) inputs. Suppose that on length-\(n\) input \(\mathbf{x}\), \(\mathcal{P}(\mathbf{x})\) halts in time \(T\). Then \(C_{n}(\mathbf{x})\) computes its output within delay \(T\cdot O(\log^{2}n)\). The family \(\{C_{n}:n\in\mathbb{N}\}\) is polylog-uniform._
The proof of Lemma 1 is by construction of the circuit family \(C_{n}\); the construction is described in the remainder of this section. By combining this result with results from Section 7, we obtain our simulation of PRAM (Theorem 1).
### Swap
Our construction uses a permutation network to route memory elements between subcircuits. The primitive underlying this network is a _swap_:
\[\mathsf{swap}(s,x,y)\triangleq((\neg s\cdot x)\vee(s\cdot y),(\neg s\cdot y) \vee(s\cdot x))\]
Here, \(\vee\) denotes logical OR and \(\neg\) denotes logical NOT; each of these can be implemented from AND/XOR/1. \(\mathsf{swap}\) outputs \((x,y)\) when \(s=0\), and it outputs \((y,x)\) when \(s=1\). For convenience, we generalize \(\mathsf{swap}\) to the following definition, which swaps two length-\(w\) vectors:
\[\boxed{\mathsf{swap}_{w}(s,\mathbf{x},\mathbf{y})\triangleq((\neg s\cdot \mathbf{x})\vee(s\cdot\mathbf{y}),(\neg s\cdot\mathbf{y})\vee(s\cdot\mathbf{ x}))}\]
This definition treats \(\vee\) as the element-wise OR of two vectors and \(\cdot\) as the AND scaling of each vector element by a single scalar. \(\mathsf{swap}_{w}\) uses \(O(w)\) total gates.
Eagerness of swap.While swap may seem simple, its small definition belies a subtle detail. To see this, we propose another strawman definition:
\[\mathsf{bad}(s,x,y)\triangleq(s\cdot(x\oplus y)\oplus x,s\cdot(x\oplus y)\oplus y)\]
This gate seems to compute the same function as swap. One might even attempt to argue that bad is superior to swap, as it can be computed with fewer gates. However, in the context of a cyclic circuit, swap and bad are _not equivalent_. To see this, suppose that input \(y\) is not yet computed at the time we consider the swap, which we denote by setting \(y\) to \(\bot\):
\[\begin{array}{ccccc}s&x&y&\mathsf{swap}(s,x,y)&\mathsf{bad}(s,x,y)\\ \hline 0&x&\bot&(x,\bot)&(x,\bot)\\ 1&x&\bot&(\bot,x)&(\bot,\bot)\end{array}\]
The table shows that swap can _eagerly_ forward \(x\) to its output wires, even before the value of \(y\) is known. (Indeed, swap also eagerly forwards \(y\) if \(x=\bot\).) bad cannot eagerly forward \(x\) because its second output _always_ depends on \(y\). There are two important points to this example.
First, the rules of Boolean algebra do not necessarily apply in a cyclic circuit. bad is not equivalent to swap because, in a cyclic circuit, AND does not distribute over XOR. Replacing \(x\cdot(y\oplus z)\) by \(x\cdot y\oplus x\cdot z\) can _change circuit dependencies_. On the other hand, many Boolean algebra rules _do_ apply. For instance, AND and XOR each remain commutative and associative.
Second, the eager nature of swap is _central_ to our construction. To see why, suppose that bit \(x\) is computed by some step of the PSAM, and suppose \(y\) is computed by some _later_ step. Indeed, suppose that \(y\) _depends on_ \(x\). Even in this situation, swap works, because it can deliver \(x\) to the destination where \(y\) is computed, and then \(y\) can be routed back as an input to swap via a cycle. It is precisely this eager feature of swap that allows us to build a single permutation network that wires together PSAM steps.
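To make the eager semantics concrete, here is a minimal Python model of these gates (our own illustration, not part of the construction): an unset wire is `None`, and AND is given the monotone three-valued semantics in which a 0 input determines the output. Under this model swap forwards \(x\) eagerly, while bad does not:

```python
BOT = None  # models an unset wire (the "bottom" value in the table above)

def AND(a, b):
    # monotone three-valued AND: a 0 on either input determines the output,
    # even if the other input is still unset (this is our modeling assumption)
    if a == 0 or b == 0:
        return 0
    if a is BOT or b is BOT:
        return BOT
    return a & b

def XOR(a, b):
    # XOR cannot produce any output until both inputs are set
    return BOT if a is BOT or b is BOT else a ^ b

def NOT(a):
    return XOR(a, 1)

def OR(a, b):
    # built from AND/XOR/1 via De Morgan, as in the construction
    return NOT(AND(NOT(a), NOT(b)))

def swap(s, x, y):
    return (OR(AND(NOT(s), x), AND(s, y)), OR(AND(NOT(s), y), AND(s, x)))

def bad(s, x, y):
    return (XOR(AND(s, XOR(x, y)), x), XOR(AND(s, XOR(x, y)), y))

assert swap(1, 1, BOT) == (BOT, 1)   # swap forwards x before y is known
assert bad(1, 1, BOT) == (BOT, BOT)  # bad blocks both outputs on y
```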
### Helper Circuits
We construct some subcircuits used in our PSAM simulation.
First, halves takes a vector of wires \(\mathbf{x}\) and splits it into two vectors of half the length. halves has no gates:
\[\boxed{\mathsf{halves}(\mathbf{x})\triangleq\mathbf{let}\ n\triangleq|\mathbf{ x}|\ \mathbf{in}\ (\mathbf{x}[0..n/2-1],\mathbf{x}[n/2..n-1])}\]
Second, we formalize a classic ripple-carry adder.
\[\begin{array}{|ll|}\hline 1&\mathbf{x}+\mathbf{y}\triangleq\\ 2&\quad\mathbf{if}\ |\mathbf{x}|=0\ \mathbf{then}\ [0]\ \mathbf{else}\\ 3&\quad\mathbf{let}\ [\mathsf{carry}]\sqcup\mathbf{lsbs}\triangleq\mathbf{x}[1..]+\mathbf{y}[1..]\\ 4&\quad\mathbf{in}\ [\mathsf{maj}(\mathbf{x}[0],\mathbf{y}[0],\mathsf{carry}),\ \mathbf{x}[0]\oplus\mathbf{y}[0]\oplus\mathsf{carry}]\sqcup\mathbf{lsbs}\\ \hline\end{array}\]
Here \(\mathsf{maj}(a,b,c)\triangleq(a\cdot b)\oplus(c\cdot(a\oplus b))\) denotes three-input majority, implementable from AND/XOR. The adder emits the carry as its leading bit, so its output is one bit longer than its inputs.
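A runnable Python counterpart of this specification (our own illustration; we assume the msb-first bit ordering used throughout):

```python
def add(x, y):
    # x, y: equal-length bit lists, msb first (matching the spec's convention);
    # the result is one bit longer, with the carry as its leading bit
    if not x:
        return [0]
    carry, *lsbs = add(x[1:], y[1:])
    total = x[0] ^ y[0] ^ carry
    out_carry = (x[0] & y[0]) | (carry & (x[0] ^ y[0]))  # three-input majority
    return [out_carry, total] + lsbs

assert add([1, 1], [0, 1]) == [1, 0, 0]  # 3 + 1 = 4
```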
### Permutations
This section formalizes the main circuit components of our construction. Our goal is to construct a _dynamic permutation network_. This network takes as input \(n\) words, each tagged by some distinct target destination. The network automatically routes each word to its destination within polylog delay.
The crucial _dynamic_ property of the network is that even if only some _prefix_ of input words is available, those words are _eagerly routed_ to their destination within polylog delay. This eagerness is central to our handling of PSAM/PRAM, since it allows _sequential_ composition of instructions. At the same time, the network's low delay enables efficient _parallel_ composition of instructions.
Our network is essentially a binary radix sort implemented in hardware, and it is similar in structure to prior permutation/sorting networks [1, 2]. Our emphasis is to show that the network indeed achieves dynamic routing.
Partition at position \(i\). The main component of our permutation network is a subcircuit that we call partition-at. partition-at takes as input (1) a vector of \(n\cdot w\) wires \(\mathbf{x}\), where \(n\) is a power of two, and (2) a vector of \(\log n\) wires \(\mathbf{i}\). The circuit interprets \(\mathbf{x}\) as an array of \(n\) length-\(w\) words, and it interprets \(\mathbf{i}\) as an index of that array. partition-at uses swap gates to _rearrange_ the content of \(\mathbf{x}\) such that those words in \(\mathbf{x}\) with msb 0 occur to the _right_ of position \(i\), and those with msb 1 occur to the _left_ of \(i\), wrapping around as necessary.
Figure 3 sketches an example; this sketch is useful throughout the following explanation. The partition-at circuit is formalized below:
\[\begin{array}{|ll|}\hline 1&\mathsf{partition\mbox{-}at}_{n,w}(\mathbf{i},\mathbf{x})\triangleq\\ 2&\quad\mathbf{if}\ n=1\ \mathbf{then}\ ([1\oplus\mathbf{x}[0]],\mathbf{x})\\ 3&\quad\mathbf{else}\\ 4&\quad\mathbf{let}\ (\mathbf{x}^{\ell},\mathbf{x}^{r})\triangleq\mathsf{halves}(\mathbf{x})\\ 5&\qquad(\mathbf{zeros}^{\ell},\mathbf{y}^{\ell})\triangleq\mathsf{partition\mbox{-}at}_{n/2,w}(\mathbf{i}[1..],\mathbf{x}^{\ell})\\ 6&\qquad(\mathbf{zeros}^{r},\mathbf{y}^{r})\triangleq\mathsf{partition\mbox{-}at}_{n/2,w}((\mathbf{i}+\mathbf{zeros}^{\ell})[2..],\mathbf{x}^{r})\\ 7&\qquad(\mathbf{z}^{\ell},\mathbf{z}^{r})\triangleq\mathsf{merge}_{n,w}(\mathbf{i}[0],0,0,\mathbf{i}[1..],\mathbf{y}^{\ell},\mathbf{y}^{r})\\ 8&\quad\mathbf{in}\ (\mathbf{zeros}^{\ell}+\mathbf{zeros}^{r},\mathbf{z}^{\ell}\sqcup\mathbf{z}^{r})\\ \hline\end{array}\]
Figure 3: An example of partition-at. Inputs are depicted at the top; outputs are at the bottom. Consider eight elements, four of which have msb zero (shaded); the others have msb one (unshaded). partition-at rearranges elements such that shaded elements occur consecutively starting at position \(i\) (here, \(i=3\)). We first recursively partition each half of the array, and then merge conditionally swaps elements, depending on their shading and on which side of \(i\) they are on.
partition-at recursively breaks down the array \(\mathbf{x}\); as the recursion unwinds, it uses a linearly-sized, log-delay subcircuit called merge to combine the two halves; merge is explained later.
The key detail of partition-at is its management of index \(i\). Recall, we wish to place all msb 0 elements to the right of \(i\). Let \(\mathbf{x}^{\ell},\mathbf{x}^{r}\) denote the halves of \(\mathbf{x}\). partition-at uses its first recursive call to place msb 0 elements of \(\mathbf{x}^{\ell}\) to the right of \(i\) (actually, we place elements to the right of \(i\bmod n/2\)). The idea here is to _align_\(\mathbf{x}^{\ell}\) elements with their position in the output vector.
Next, we wish to place each msb 0 element from \(\mathbf{x}^{r}\) to the right of all msb 0 elements from \(\mathbf{x}^{\ell}\). To achieve this, we shift \(i\) to the right before making our second recursive call. This explains the use of adders. Each call to partition-at actually achieves two tasks: (1) it partitions the elements as already described and (2) it counts the number of msb zero elements in \(\mathbf{x}\). The first recursive call thus tells us how many elements are already to the right of \(i\), allowing us to position \(\mathbf{x}^{r}\) elements further to the right.
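As a functional reference for what partition-at computes, the following Python sketch models only its input/output behavior under one consistent reading of the wrap-around rule (our own illustration; the circuit itself is the recursive construction above):

```python
def partition_at_spec(i, xs):
    # reference semantics only, not the circuit: each word is a bit list.
    # msb-0 words are packed rightward from position i in their original
    # order; msb-1 words are packed leftward from i in reverse order.
    # Both directions wrap around modulo n.
    n = len(xs)
    zeros = [w for w in xs if w[0] == 0]
    ones = [w for w in xs if w[0] == 1]
    out = [None] * n
    for k, w in enumerate(zeros):
        out[(i + k) % n] = w
    for k, w in enumerate(ones):
        out[(i - 1 - k) % n] = w
    return out

assert partition_at_spec(1, [[1, 0], [0, 0], [1, 1], [0, 1]]) == \
    [[1, 0], [0, 0], [0, 1], [1, 1]]
```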
Merging recursively partitioned arrays. partition-at wishes to place msb 0 elements to the right of position \(i\), but \(i\) is in the range \([0,n)\). On its recursive calls, partition-at adjusts \(i\) such that each recursive \(i\) is in the _smaller_ range \([0,n/2)\). Suppose we were to take the two recursive output arrays \(\mathbf{y}^{\ell},\mathbf{y}^{r}\) and simply _concatenate_ them. Because the recursive calls operate on arrays of length \(n/2\), not length \(n\), each element in the concatenated array could be in one of two possible array positions: (1) the correct position or (2) a position exactly distance \(n/2\) from the correct position.
These two possibilities must be resolved, motivating the design of merge:
\[\begin{array}{|ll|}\hline 1&\mathsf{merge}_{n,w}(\mathsf{parity},\mathsf{left},\mathsf{right},\mathbf{i},\mathbf{x},\mathbf{y})\triangleq\\ 2&\quad\mathbf{if}\ n=1\ \mathbf{then}\\ 3&\quad\quad\mathbf{let}\ s\triangleq\mathsf{parity}\oplus\mathsf{left}\oplus\mathbf{x}[0]\\ 4&\quad\quad\mathbf{in}\ \mathsf{swap}_{w}(s,\mathbf{x},\mathbf{y})\\ 5&\quad\mathbf{else}\\ 6&\quad\mathbf{let}\ (\mathbf{x}^{\ell},\mathbf{x}^{r})\triangleq\mathsf{halves}(\mathbf{x})\\ 7&\qquad(\mathbf{y}^{\ell},\mathbf{y}^{r})\triangleq\mathsf{halves}(\mathbf{y})\\ 8&\qquad(\mathbf{z}^{\ell},\mathbf{w}^{\ell})\triangleq\mathsf{merge}_{n/2,w}(\mathsf{parity},\mathsf{left}\vee(\mathbf{i}[0]\cdot\neg\mathsf{right}),\mathsf{right},\mathbf{i}[1..],\mathbf{x}^{\ell},\mathbf{y}^{\ell})\\ 9&\qquad(\mathbf{z}^{r},\mathbf{w}^{r})\triangleq\mathsf{merge}_{n/2,w}(\mathsf{parity},\mathsf{left},\mathsf{right}\vee(\neg\mathbf{i}[0]\cdot\neg\mathsf{left}),\mathbf{i}[1..],\mathbf{x}^{r},\mathbf{y}^{r})\\ 10&\quad\mathbf{in}\ (\mathbf{z}^{\ell}\sqcup\mathbf{z}^{r},\mathbf{w}^{\ell}\sqcup\mathbf{w}^{r})\\ \hline\end{array}\]
merge combines two partitioned vectors \(\mathbf{x},\mathbf{y}\) from calls to partition-at. The high-level operation of merge is to element-wise swap entries of \(\mathbf{x}\) and \(\mathbf{y}\) yielding a single partitioned array. merge is conceptually a _half-cleaner_ from the sorting network literature. The challenge of this operation is in deciding _which_ elements should be swapped and which should not.
merge recursively breaks down \(\mathbf{x}\) and \(\mathbf{y}\); once \(\mathbf{x},\mathbf{y}\) each contain exactly one word, it conditionally swaps those two words. The key detail of merge is its management of the position \(i\) as well as variables left and right. The value of left (resp. right) denotes an answer to the following question: does the currently considered subvector of \(\mathbf{x}\) lie entirely to the left (resp. right) of partition-at's value of \(i\)? These bits are useful because once we reach the base case, we know we wish to move the single element in \(\mathbf{x}\) to the right depending on whether we are considering a location that is left of \(i\). The value parity flips this logic if the original value of \(i\) is at least \(n/2\).
As the recursion unwinds, merge concatenates results of its recursive calls, in the end yielding two correctly partitioned halves, which partition-at concatenates together.
Stability. The partition-at circuit is _stable_ in the following sense. Output words with msb 0 appear in their original relative order; output words with msb 1 appear in the _reverse_ of their original relative order. Namely, the output sequence is _bitonic_ [1].
The stability of partition-at can be seen by an inductive argument. In the base case, partitioning a singleton vector is trivially stable. In the general case, we have two stable bitonic sequences \(\mathbf{y}^{\ell}\) and \(\mathbf{y}^{r}\), where each \(0\)-tagged \(\mathbf{y}^{r}\) element appears to the _right_ of all \(0\)-tagged \(\mathbf{y}^{\ell}\) elements (modulo \(n/2\)), and similarly for all \(1\)-tagged elements; by merging, we thus ensure that all \(0\)-tagged elements appear in their original relative order, and similarly for all \(1\)-tagged elements. Thus, partition-at indeed achieves this notion of bitonic stability.
Dynamic behavior. Crucially, merge's base case decision of whether to swap two elements is made only with respect to (1) the index \(i\) and (2) the msb of \(\mathbf{x}\). I.e., the value of \(\mathbf{y}\) is _irrelevant_. Moreover, if we revisit partition-at, we notice that arguments to the first recursive call _do not depend_ on the second recursive call. Together, these two facts mean that partition-at starts sending elements to correct positions even when only an arbitrary prefix of \(\mathbf{x}\) is available.
These properties are crucial to the dynamic behavior of our permutation network, because they mean that decisions about how to route a word depend only on the values of words originally to that element's _left_. Rephrasing this in the context of PSAM/PRAM, we can correctly route a memory element based only on that element's target address and _prior_ routing decisions.
The partition circuit and its complexity. We provide a simple wrapper around partition-at which (1) partitions at position zero, (2) reverses the second half of the vector, and (3) drops the msb of each word in the output. Dropping the msb is convenient for composing partitions into a full permutation.
\[\begin{array}{|ll|}\hline 1&\mathsf{partition}_{n,w}(\mathbf{x})\triangleq\\ 2&\quad\mathbf{let}\ (\_\,,\mathbf{z})\triangleq\mathsf{partition\mbox{-}at}_{n,w}(0^{\log n},\mathbf{x})\\ 3&\qquad(\mathbf{z}^{\ell},\mathbf{z}^{r})\triangleq\mathsf{halves}(\mathbf{z})\\ 4&\quad\mathbf{in}\ (\mathsf{drop\mbox{-}msbs}_{w}(\mathbf{z}^{\ell}),\mathsf{drop\mbox{-}msbs}_{w}(\mathsf{reverse}(\mathbf{z}^{r})))\\ \hline\end{array}\]
Figure 4: (Left) An example partition. Inputs are depicted at the top; outputs are at the bottom. The partition is _stable_: it preserves the relative order of elements. Horizontal bars depict calls to swap. (Right) The circuit begins routing even when the input is only partially available. This dynamic routing is possible because the destination of each element depends only on the destination of elements initially to the left.
Above, drop-msbs denotes a procedure that drops the msb of each length-\(w\) word in the argument vector. Figure 4 depicts an end-to-end call to partition.
partition has the following complexity:
* partition has size at most \(w\cdot O(n\cdot\log n)\).
* partition has delay \(O(\log n)\).
The size of partition can be derived from (1) solutions to basic recurrence equations, (2) the linear size of ripple-carry adders, and (3) the fact that \(\mathtt{swap}_{w}\) has \(\Theta(w)\) gates.
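In more detail, writing \(S(n)\) for the size of partition-at on \(n\) words, the merge layer contributes \(\Theta(w\cdot n)\) gates via \(n/2\) instances of \(\mathsf{swap}_{w}\), and the adders contribute \(O(\log n)\) gates (a sketch, with constants elided):
\[S(n)=2\,S(n/2)+\Theta(w\cdot n)+O(\log n)\qquad\Longrightarrow\qquad S(n)=O(w\cdot n\cdot\log n).\]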
Maximum delay is more nuanced. Indeed, the partition-at circuit involves adders on \(O(\log n)\)-bit strings, and adders have linear delay. Thus, it may _seem_ that partition-at has \(O(\log^{2}n)\) delay, caused by a sequence of \(O(\log n)\) adders on \(O(\log n)\) bit numbers. However, recall from discussion in Section 6.2 that an adder produces its low bits with low delay and high bits with higher delay. Ultimately, this _pipelines_ the adders induced by partition-at's recursion: the adders at the leaves produce their lowest bits within constant delay, allowing adders one level up the recursion tree to compute their low bits only a constant delay later, and so on. In this manner, all adders complete in \(O(\log n)\) delay.
Permutation. We achieve a dynamic permutation network by recursively applying our partition network (see Figure 5):
\[\begin{array}{|ll|}\hline 1&\mathsf{permute}_{n,w}(\mathbf{x})\triangleq\\ 2&\quad\mathbf{if}\ n=1\ \mathbf{then}\ \mathbf{x}\\ 3&\quad\mathbf{else}\\ 4&\quad\mathbf{let}\ (\mathbf{x}^{0},\mathbf{x}^{1})\triangleq\mathsf{partition}_{n,\log n+w}(\mathbf{x})\\ 5&\quad\mathbf{in}\ \mathsf{permute}_{n/2,w}(\mathbf{x}^{0})\sqcup\mathsf{permute}_{n/2,w}(\mathbf{x}^{1})\\ \hline\end{array}\]
permute takes as input \(2^{k}\) words \(\mathbf{x}\) where each entry of \(\mathbf{x}\) is tagged with a distinct target location. It uses partition to split \(\mathbf{x}\) in half such that the resulting left vector \(\mathbf{x}^{0}\) contains those elements intended for locations with msb \(0\), and \(\mathbf{x}^{1}\) contains those elements intended for locations with msb \(1\). Recall that partition drops the msb of each vector input, so the msb of each tag is dropped, allowing us to simply apply permute recursively to each half. permute calls partition with word size \(\log n+w\) to account for the \(\log n\) tag bits. Permuting a singleton vector is trivial. permute can be understood as an implementation of binary radix sort where all inputs are distinct.

Figure 5: (Left) An example permutation. Inputs are depicted at the top; outputs are at the bottom. The circuit implements a binary radix sorting network; it repeatedly partitions input elements based on bits of each element's destination. (Right) The circuit begins routing even when only a prefix of input wires are set: the permutation is _dynamic_. This dynamic nature is inherited from the dynamic nature of our partition network; see Figure 4.
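The routing permute performs can be modeled functionally as follows (our own sketch; this captures only the radix-sort structure, not the circuit-level eagerness):

```python
def permute(words):
    # words: list of (tag, payload) pairs; tag is the destination written as
    # a bit list, msb first, and all tags are distinct
    if len(words) == 1:
        return [words[0][1]]
    lo = [(t[1:], p) for t, p in words if t[0] == 0]  # destined for left half
    hi = [(t[1:], p) for t, p in words if t[0] == 1]  # destined for right half
    return permute(lo) + permute(hi)  # recurse with the msb dropped

xs = [([1, 0], 'c'), ([0, 0], 'a'), ([1, 1], 'd'), ([0, 1], 'b')]
assert permute(xs) == ['a', 'b', 'c', 'd']
```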
permute's size/delay can be calculated using the complexity of partition and by solving simple recurrences:
* **Size.** permute has size at most \(w\cdot O(n\cdot\log^{2}n)\). When \(w=\Theta(\log n)\), permute has \(O(n\cdot\log^{3}n)\) gates.
* **Delay.** permute has \(O(\log^{2}n)\) delay.
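These bounds follow from recurrences analogous to the one for partition (again a sketch with constants elided); each level of permute's recursion applies partition across its whole input, so
\[S(n)=2\,S(n/2)+O(w\cdot n\cdot\log n)\Longrightarrow S(n)=O(w\cdot n\cdot\log^{2}n),\qquad D(n)=D(n/2)+O(\log n)\Longrightarrow D(n)=O(\log^{2}n).\]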
Filters. Section 2.3 describes a _filter_ subcircuit, which takes as input \(2^{k}\) words and keeps those \(2^{k-1}\) words tagged with 1. filter can be built from a single partition:
\[\mathsf{filter}_{n,w}(\mathbf{x})\triangleq\mathbf{let}\ (\_,\mathsf{keep}) \triangleq\mathsf{partition}_{n,w+1}(\mathbf{x})\ \mathbf{in}\ \mathsf{keep}\]
filter inherits partition's size and delay. filter calls partition with word size \(w+1\) to account for the single extra tag bit.
Bidirectional permutations and filters. Our permutation network takes \(n\) tagged input words and sends those words to \(n\) distinct output locations. So far, our network is _one-directional_, in the sense that it sends data from its source side to its target side only. To achieve RAM we need a _bidirectional_ permutation network that allows data to flow both from source to target and from target back to source. Each source _request_ flows through the network in one direction and the corresponding _response_ flows back in the opposite direction. This is needed for memory lookups, where a compute unit inputs a memory address to the source side of a network. From here, the network connects this read to the appropriate memory write, and then the written value flows back through the network to the source side where it was requested.
Suppose we have \(n\) source addresses and \(n\) (untagged) _target_ inputs. Our bipermute circuit sends each source address to the addressed location, and then pulls the corresponding target element back to the source. bipermute is almost identical to permute; just rotate some swap components such that they point from target to source instead of source to target. The asymptotic size and delay of this subcircuit thus matches that of permute. Our formal circuit \(\mathsf{bipermute}_{n,w}\) takes as input two vectors: (1) a length-\(n\) vector of \((\log n)\)-bit tagged source-side inputs and (2) a length-\(n\) vector of \(w\)-bit target-side inputs. It outputs a length-\(n\) vector of \(w\)-bit source-side outputs. As bipermute is a simple extension of permute, we do not specify further.
We assume a similar generalization of filter. Like filter, bifilter connects half of \(n\) sources with \(n/2\) targets. bifilter simply extends filter such that these connections are bidirectional. Formally, \(\mathsf{bifilter}_{n,w,\omega}\) takes as input two vectors: (1) a length-\(n\) vector of \((1+w)\)-bit tagged source-side inputs and (2) a length-\(n/2\) vector of \(\omega\)-bit target-side inputs. It outputs (1) a length-\(n\) vector of \(\omega\)-bit source-side outputs (each source tagged with 0 receives an all zeros output) and (2) a length-\(n/2\) vector of \(w\)-bit target-side outputs.
### Simulating PSAM
We are now ready to construct our full PSAM-simulating circuit. Note, Figure 1 is a relatively faithful depiction of our full circuit. The circuit itself, along with a circuit that implements our PSAM memory, is listed in Figure 6.
First, our circuit includes \(O(W(n))\) _compute units_, where \(W(n)\) is the total work of the simulated PSAM program. We have not yet described the content of these units, but at its interface a compute unit has six input ports and seven output ports (see Figure 2). Each of these ports is \(O(w)\) bits wide. To complete our construction, we must properly connect up these ports.
To do so, we first instantiate a _memory unit_ which consists of one bipermute subcircuit and two bifilter subcircuits. The bipermute subcircuit is placed between the two bifilters; we arrange that each bifilter has as many target ports as the bipermute subcircuit has source/target ports:
\[\begin{array}{|ll|}\hline 1&\mathsf{memory}_{n,w}(\text{write-reqs},\text{read-reqs})\triangleq\\ 2&\quad\mathbf{let}\ (\text{read-resps},\text{addresses})\triangleq\mathsf{bifilter}_{n,\log n-1,w}(\text{read-reqs},\text{reads})\\ 3&\qquad\text{reads}\triangleq\mathsf{bipermute}_{n/2,w}(\text{addresses},\text{writes})\\ 4&\qquad(\text{write-resps},\text{writes})\triangleq\mathsf{bifilter}_{n,w,\log n-1}(\text{write-reqs},[0,1,\ldots,n/2-1])\\ 5&\quad\mathbf{in}\ (\text{write-resps},\text{read-resps})\\ \hline\end{array}\]
\[\begin{array}{|ll|}\hline 1&\mathsf{PSAM}_{\mu,n,w}(\text{input-tape})\triangleq\\ 2&\quad\mathbf{let}\ (\text{input-reqs},\text{output-reqs},\text{write-reqs},\text{read-reqs},\text{to-parent},\text{to-children})\triangleq\\ 3&\qquad\quad\mu^{n}(\text{input-resps},\text{write-resps},\text{read-resps},\text{from-parent},\text{from-children})\\ 4&\qquad(\text{write-resps},\text{read-resps})=\mathsf{memory}_{n/2,w}(\text{write-reqs},\text{read-reqs})\\ 5&\qquad(\text{input-resps},\_)=\mathsf{bifilter}_{n,0,w}(\text{input-reqs},\text{input-tape})\\ 6&\qquad\text{output-tape}=\mathsf{filter}_{n,w}(\text{output-reqs})\\ 7&\qquad(\text{from-children},\text{from-parent})=\mathsf{bifilter}_{n,w,w}(\text{to-children},\text{to-parent})\\ 8&\quad\mathbf{in}\ \text{output-tape}\\ \hline\end{array}\]
**Figure 6: Our circuit-based memory unit (top) connects \(n\) conditional writes with \(n\) conditional reads (exactly \(n/2\) of each pass through a respective filter). The memory responds to each write with the address where the entry was written. Our PSAM circuit (bottom) instantiates \(n\) compute units \(\mu\) (see Figure 2) and connects them to each other, to the memory, and to the input/output tapes. Memory words and messages sent between compute units are each of size \(w=\Theta(\log n)\). We denote by \(\mu^{n}\) the parallel composition of \(n\) copies of \(\mu\). Each vector input to \(\mu^{n}\) is partitioned into \(n\) chunks with each chunk being sent to one compute unit \(\mu\); similarly, outputs of each compute unit are concatenated into the resulting six vectors.**
This memory unit responds to read requests on its source side and to write requests on its target side. It handles these requests by using the permutation to match reads with writes. Each write request receives as response the next available address. In addition to our memory, our PSAM circuit includes an input tape and an output tape, each behind a filter. Finally, we connect our units to one another through a coordination bifilter, allowing each unit to activate up to two subsequent units, and allowing child units to send a word back to their parent. In sum, we connect compute units to an infrastructure that allows each unit to leverage some combination of machine capabilities, which we list next. In this list, 'tag' denotes a leading \(\{0,1\}\) value that indicates whether a payload should pass through a filter or not. Each compute unit can perform some combination of the following:
* Send one tag bit through a bifilter to conditionally read one input tape word.
* Send a tagged word through a filter to conditionally write the output tape.
* Send two tagged words through a bifilter to activate up to two children and receive responses.
* Send one word in response to the parent.
* Send one tagged word to memory to conditionally write a word.
* Send one tagged address to memory to conditionally read a word.
The precise handling done by units is described next.
### Compute Units
We have now set up the full infrastructure in which we embed our compute units. The remaining task is to say what these compute units actually _are_.
Recall that our current goal is to simulate PSAM (Section 5). We now construct a single compute unit that can execute _any_ instruction in a particular PSAM program. To do so, it suffices to _separately_ handle each instruction of the target program. This works because the number of instructions in the program is constant, so we can implement a custom compute unit for each instruction, then glue together a "top level" unit that conditionally behaves as any one of these custom units, depending on the PSAM program counter. This conditional behavior is achieved by executing _each_ instruction of the program, then multiplexing the effect of that instruction using multiplexers controlled by the program counter. Designing our compute unit this way incurs only constant factor overhead; if one wished to refine our approach, it would be useful to consider more carefully designed compute units.
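In software terms, the multiplexing works as in the following sketch (our own illustration; in the circuit, the selection is performed by multiplexers driven by the program counter):

```python
def top_level_unit(pc, state, instruction_units):
    # run the custom unit for *every* program instruction, then select the
    # one effect that takes hold using the program counter pc
    candidate_effects = [unit(state) for unit in instruction_units]
    return candidate_effects[pc]
```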
Figure 7 sketches a compute unit for each of the seven PSAM instruction types. Rather than writing out each of these units as a formal circuit, we find it more direct to simply describe their handling in prose. In the following, if we do not explicitly mention how the unit handles one of the machine capabilities (receiving an input tape word, writing a word, etc.), then this means that the unit opts out of that capability by sending a 0 to the appropriate filter.
Figure 7: The structure of compute units (top left) and the compute unit configuration for each PSAM instruction type.

In each of the following cases except for fork and ret, the unit updates its local state, then activates exactly one child by passing the updated state. Then, the unit receives from its single child a returned word, which it forwards to its own parent (see, e.g., update in Figure 7). Each of the following units appropriately updates the program counter. Instruction-specific details follow:
* \(\mathbf{x}\gets f(\mathbf{x})\): Our first unit describes how to update processor local state. This unit receives from its parent a local state \(\mathbf{x}\), then implements \(f(\mathbf{x})\) as a circuit.
* \(y\leftarrow\mathsf{read}\ x\): To read a memory word at address \(x\), the unit places \(x\) on its outgoing read port. The memory responds via the incoming read port, which the unit then saves in word \(y\) of its local state.
* \(y\leftarrow\mathsf{write}\ x\): To write a memory word \(x\), the unit places \(x\) on its outgoing write port. The memory responds with an address, indicating where \(x\) was written. The unit saves this address in word \(y\) of its local state.
* \(x\leftarrow\mathsf{input}\): To read from the input tape, the unit places a one on its outgoing input port, indicating it wishes to read the tape. The tape responds on the incoming input port, and the unit saves the value in word \(x\) of its local state.
* \(\mathsf{output}\ x\): To write to the output tape, the unit places \(x\) on its output port.
* \(y\leftarrow\mathsf{fork}\ f(\mathbf{x})\): To fork a parallel process, the unit first computes \(f(\mathbf{x})\), yielding the forked process starting state. Then, the unit sets aside word \(y\) of its local state to hold the return pointer of the forked process. The unit then activates two children, passing to its first child its local state (with the program counter incremented) and passing to its second child the state \(f(\mathbf{x})\).
* \(\mathsf{ret}\ x\): The unit sends local word \(x\) to its parent. It activates no children.
Cleaning up execution. At this point, we have shown the core of our simulation of PSAM by cyclic circuits. However, there remains one technical issue: we have not yet ensured that our cyclic circuit is _legal_ (Definition 3). Each of the individual _components_ of our main construction is legal, but we have not yet shown that the composition is legal.
The challenge here is that our circuit may not use up all of its available resources. For instance, we might not use all provisioned memory reads. When this happens, there will be portions of partition networks whose routing decisions are unspecified, leading to ambiguity in the circuit's assignment (Definition 2).
The solution to this problem is relatively straightforward: once the PSAM program output is computed, use additional compute units to burn all remaining resources. These spare units perform the following:
* Write all unwritten memory addresses.
* Read all unread memory addresses.
* Activate all remaining compute units.
* Read any remaining words from the input tape.
* Write blanks to the end of the output tape.
Most of the details here are tedious and unsurprising: our filters are modified to propagate only the first \(n/2\) inputs tagged with \(1\); all subsequent \(1\)'s are filtered out. From here, each burn unit just (1) writes an all zeros word, (2) reads its own write, (3) activates two subsequent burn units, (4) reads an input word, and (5) writes an all zeros word to the output tape. We emphasize that the cost here is at most a constant factor blow-up, and resources can be burned by units that operate in parallel.
There is one detail that must be explored: our burn strategy does not explain how to read memory addresses that were written (and not read) by the PSAM program itself. To burn such values, the machine must be able to _reach_ them, which in general is nontrivial. Indeed, an arbitrary PSAM program could create an 'orphan' memory element, for which there is no sequence of memory addresses the machine can follow to find the orphan. In such cases, it is not clear how to read this element, and hence it is not clear how to complete the routing of the memory permutation network. For this reason, we consider PSAM programs that clean up after themselves:
**Definition 5** (Clean).: _A PSAM program \(\mathcal{P}\) is **clean** if after executing on any input \(\mathbf{x}\), \(\mathcal{P}\) has not written to any address that has not also been read. I.e., \(\mathcal{P}\) is **clean** if each of \(\mathcal{P}\)'s \(\mathsf{write}\) instructions is eventually followed by a \(\mathsf{read}\) to the same address._
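The property can be checked directly on an execution trace, as in this illustrative Python sketch (our own; the paper does not define such a checker):

```python
def is_clean(trace):
    # trace: a sequence of ('write', addr) / ('read', addr) events produced
    # by one execution; clean means every written address is eventually read
    pending = set()
    for op, addr in trace:
        if op == 'write':
            pending.add(addr)
        else:
            pending.discard(addr)
    return not pending

assert is_clean([('write', 3), ('read', 3)])
assert not is_clean([('write', 3), ('write', 4), ('read', 3)])
```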
Our considered PSAM program simulates PRAM by implementing two binary trees; to clean up after itself, the program simply traverses those two trees without writing anything back.
When we consider clean PSAM programs, our burn strategy ensures that all partition networks are fully programmed, and hence we achieve a legal cyclic circuit.
### Uniformity
Recall that our simulation of cyclic circuits by PRAM only works for circuits that are polylog-uniform. The fact that our circuit constructions (see Section 6) are polylog-uniform is immediate from our explicit descriptions of the circuits' components. Namely, in our algorithms we can compute tight bounds on the number of gates in a particular subcircuit just by adding and multiplying a polylog number of times. This makes it easy, based on an integer \(i\), to "search" the circuit description for the \(i\)-th gate.
## 7 PRAM from PSAM
In this section, we simulate PRAM with a specific PSAM program. The goal of this section is to establish the following:
**Lemma 2** (PRAM from PSAM).: _Let \(\mathcal{P}\) be a PRAM program (Section 3.1). There exists a PSAM program \(\mathcal{P}^{\prime}\) such that for all \(\mathbf{x}\), \(\mathcal{P}(\mathbf{x})=\mathcal{P}^{\prime}(\mathbf{x})\). Moreover, suppose that on a particular length-\(n\) input \(\mathbf{x}\), \(\mathcal{P}(\mathbf{x})\) halts within \(W\) work and \(T\) time. Then \(\mathcal{P}^{\prime}(\mathbf{x})\) halts within \(W\cdot O(\log n)\) work and \(T\cdot O(\log n)\) time._
The proof of Lemma 2 is by construction of an appropriate PSAM program, which is described in the remainder of this section. By combining Lemma 2 with Lemma 1, we obtain Theorem 1: an efficient simulation of PRAM by cyclic circuits.
Section 5 showed that PSAM programs can be expressed as simple recursive procedures over tree structures (Equation (3)). This section presents a number of such procedures that, when composed together, achieve our PRAM simulation. Recall from Section 2.4 that our high-level approach is to implement a simple PSAM program that in parallel traverses (1) a memory tree and (2) a processor tree. This section formalizes that procedure and accounts for its complexity.
### Tree Operations
We begin with helper PSAM procedures that manipulate trees. We give a procedure used to traverse trees, a procedure used to merge trees, and a procedure used to construct fresh trees.
Splitting trees. Our first helper PSAM procedure \(\mathsf{split}\) is useful for traversing our memory tree. \(\mathsf{split}\) takes as input a pointer to the root of a tree \(t\). It then reads \(t\) and splits the read tree into two trees:
\[\begin{array}{|ll|}\hline 1&\mathsf{split}(t)\triangleq\\ 2&\mathsf{match}\ t\ \mathsf{with}\\ 3&\mathsf{Empty}\mapsto(\mathsf{Empty},\mathsf{Empty})\\ 4&\mathsf{Leaf}(x)\mapsto(\mathsf{Leaf}(x),\mathsf{Leaf}(x))\\ 5&\mathsf{Branch}(\_,t^{\ell},\_,t^{r})\mapsto(t^{\ell},t^{r})\\ \hline\end{array}\]
If \(t\) stores an empty or singleton tree, then \(\mathsf{split}\) simply outputs two copies of that tree. If \(t\) instead stores an internal tree node, then \(\mathsf{split}\) outputs the two subtrees.
split takes constant time/work on the PSAM, since it simply (1) reads \(t\), (2) rearranges words of PSAM local memory, and (3) performs either zero writes (branch case) or two writes (empty and leaf case) to save the two resulting trees.
Merging trees. Our second helper procedure takes as input two pointers to trees \(t_{0}\) and \(t_{1}\), and it _merges_ those trees (recall discussion in Section 2.4). Namely, the merged tree \(t_{0}\uplus t_{1}\) contains all leaves in both \(t_{0}\) and \(t_{1}\), where we merge overlapping leaves with a binary associative operator \(\star\). Recall, we use \(\star\) to resolve write conflicts in our CRCW PRAM. For example, merging a tree whose only leaf stores \(x_{0}\) with a tree that has a leaf \(x_{1}\) at the same position yields a tree storing \(x_{0}\star x_{1}\) at that position, while leaves at non-overlapping positions are copied unchanged.
Our specification of \(\uplus\) assumes that all Leaf nodes are on the same level of each argument tree.
In words, \(\uplus\) proceeds by case analysis. If either merged tree is empty, then the merge is trivial. If both trees are singleton, then we combine the content of the leaves with \(\star\). In the general case, we break each tree into two trees, then we pairwise merge via recursion and glue the resulting trees together with a Branch node. These cases are exhaustive; we assumed all leaves reside on the same fixed level, so we will never merge Branch with Leaf.
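A minimal Python sketch of this case analysis (our own illustration: trees are modeled as tagged tuples, the delegation of recursive calls to child processes via PAR is elided, and `star` stands in for \(\star\)):

```python
def merge_trees(t0, t1, star):
    # merging with an empty tree is trivial
    if t0[0] == 'Empty':
        return t1
    if t1[0] == 'Empty':
        return t0
    # two singletons: combine the leaf contents with the associative operator
    if t0[0] == 'Leaf' and t1[0] == 'Leaf':
        return ('Leaf', star(t0[1], t1[1]))
    # general case: both are branches (all leaves sit on the same level),
    # so merge pairwise and glue the results with a fresh Branch node
    (_, l0, r0), (_, l1, r1) = t0, t1
    return ('Branch', merge_trees(l0, l1, star), merge_trees(r0, r1, star))
```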
Notice that \(\uplus\) delegates _each_ recursive call to a child process. This is safe because the final step of \(\uplus\) simply writes the pointers created by the recursive calls under a fresh Branch node, and it does not _dereference_ the pointers \(t^{\ell},t^{r}\). Due to the parallel calls, the root pointer of the merged tree is computed in _constant_ time on a PSAM, though it can take time linear in the depth of the tree for the PSAM to compute the full merged tree. The fact that the root is available in constant time allows us to _pipeline_ calls to \(\uplus\), which will be useful in reducing delay imposed by our PRAM simulation.
If \(t_{0}\) has \(n\) total nodes and \(t_{1}\) has \(m\) total nodes, then \(\uplus\) takes PSAM work at most proportional to the _minimum_ of \(n\) and \(m\). However, we emphasize that \(\uplus\) can in certain cases terminate with _much_ less work than this. For instance, suppose all leaves of \(t_{0}\) are in the left branch of the root, and suppose all leaves of \(t_{1}\) are in the right branch of the root. In this case, \(\uplus\) terminates within _constant_ work, because we reach a trivial merge with Empty after one recursive call. Namely, the work required to merge two trees scales only with the amount those trees _overlap_ with one another.
Encoding strings as trees. Our final helper procedure is used to build new binary trees. encode-path takes as input two arguments: (1) a length-\(O(\log n)\) binary string \(\mathbf{i}\) that represents an index and (2) a data value \(x\) to store. encode-path interprets \(\mathbf{i}\) as the name of a path through a tree. It then builds a tree with a single leaf that stores \(x\) and that lies along path \(\mathbf{i}\); every subtree hanging off that path is Empty.
The specification for encode-path follows:
\[\begin{array}{|ll|}\hline 1&\mathsf{encode\mbox{-}path}(\mathbf{i},x)\triangleq\\ 2&\quad\mathbf{if}\ |\mathbf{i}|=0\ \mathbf{then}\ \mathsf{Leaf}(x)\\ 3&\quad\mathbf{else\ if}\ \mathbf{i}[0]=0\ \mathbf{then}\ \mathsf{Branch}(\mathsf{encode\mbox{-}path}(\mathbf{i}[1..],x),\mathsf{Empty})\\ 4&\quad\mathbf{else}\ \mathsf{Branch}(\mathsf{Empty},\mathsf{encode\mbox{-}path}(\mathbf{i}[1..],x))\\ \hline\end{array}\]
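The same procedure in the Python tree model used above (again our own illustration, assuming bit 0 selects the left branch):

```python
def encode_path(i, x):
    # i: destination bits, msb first; returns a tree with one leaf storing x
    if not i:
        return ('Leaf', x)
    child = encode_path(i[1:], x)
    return ('Branch', child, ('Empty',)) if i[0] == 0 \
        else ('Branch', ('Empty',), child)

# the leaf for path 0,1 sits left, then right, at depth two
assert encode_path([0, 1], 'x') == \
    ('Branch', ('Branch', ('Empty',), ('Leaf', 'x')), ('Empty',))
```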
Figure 8: Our PSAM program that simulates a single step of a PRAM program. step takes as input a binary tree encoding of the current PRAM memory/current PRAM processors (see Invariant 1), and it traverses these trees simultaneously, pairing each processor state with the appropriate memory element. This allows each PRAM processor to perform one instruction. As step’s recursion unwinds, it builds a fresh memory/processor tree, ready for the next PRAM step.
Our step procedure (Figure 8) is parameterized over a procedure handle-pram-instruction, which takes as input a processor state and the memory element that processor accesses, and which returns (1) an element to be written back to memory and (2) two optional processor states. I.e., the process can (1) halt by returning two empty processor states, (2) continue by returning one non-empty processor state, or (3) fork by returning two non-empty processor states. For simplicity, we assume that these optional processor states are encoded as either (1) an empty tree or (2) a singleton tree. We assume the details of handling an actual PRAM instruction, e.g. adding/multiplying/comparing/branching, are handled by handle-pram-instruction. We also assume that handle-pram-instruction runs in some fixed number of PSAM instructions that is at most \(O(\log n)\). Parameterizing over handle-pram-instruction provides flexibility in the capabilities of the PRAM. We emphasize that all the difficult parts of simulating PRAM, e.g. routing memory and appropriately parallelizing behavior, are handled by step.
step takes as input two pointers to trees: memory and processors. These trees inductively satisfy Invariant 1. step returns as output a pointer to a fresh memory tree and a fresh processor tree which also satisfy Invariant 1. In this manner, step can be repeatedly applied. To achieve full-fledged PRAM, we compute a fixed-point by applying step until the processor tree is empty.
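The fixed-point computation amounts to the following driver loop (a sketch in the same Python tree model; `step` is passed in abstractly rather than implemented here):

```python
def run_pram(step, memory, processors):
    # repeatedly apply one parallel PRAM step until every processor has
    # halted, i.e., until the processor tree is empty; return the memory tree
    while processors[0] != 'Empty':
        memory, processors = step(memory, processors)
    return memory
```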
step proceeds by reading processors and then case analyzing the tree. If the processor tree is empty, then no work needs to be done. Other cases are more detailed.
**If the processor tree is an internal node**, then we recursively handle both subtrees. More specifically, we first split the memory tree, then recurse pairwise on the left subtrees and on the right subtrees. This handling ensures that we can indeed match each processor with its desired memory element, because Invariant 1 ensures that each processor state is in a subtree that matches the subtree of the memory element it wishes to read.
As an important detail, our recursion proceeds in parallel, delegating the shallower processor tree to a child PSAM process. This ensures that the child process terminates before the parent process completes its own recursive call. This, in turn, ensures that the dereference of \(m_{\textsf{shallow}},p_{\textsf{shallow}}\) (line 12) does not illegally dereference the return pointer of an incomplete child process (see Section 5).
Once both recursive calls are complete, we have two memory trees and two processor trees, which we pairwise merge with calls to \(\uplus\).
**If the processor tree is a leaf node**, then we are ready to dispatch a single PRAM instruction. To do so, we first case analyze the memory tree. Note that it is impossible that the memory tree is a Branch node, because by Invariant 1 the processor tree is always deeper than the memory tree. Thus, the memory tree can either hold the single RAM element required by the current processor \(\rho\), or it can be empty. The latter case corresponds to a case where the processor \(\rho\) wishes to access a memory cell that has not yet been accessed by any processor. In this latter case, the processor will read the all zeros word.
From here, we call handle-pram-instruction with the processor state and the accessed data element. The processor returns an element to be written back to memory and up to two subsequent processor states. We encode these values as trees, then we return the fresh memory/processor trees.
Cleaning up. Once we compute the fixed point of step, the PRAM simulation is finished. However, recall that to achieve a _clean_ PSAM procedure (Definition 5), our program must read all unread data.
Cleaning up is straightforward. Our processor tree is, by the fact that we have finished the simulation, trivially empty, and to clean the memory tree, we simply traverse it from root to leaves in a recursive procedure, without writing back any data. This cleanup imposes at most constant factor overhead, as the PRAM memory cannot have accessed more than \(W(n)\) leaves.
### Step Complexity
In the following, we assume that the processor tree has \(p\) leaves; i.e., there are \(p\) active processors.
Work. We claim that step uses at most \(p\cdot O(\log n)\) total PSAM work. To see this, first note that both the memory tree and the processor tree have \(O(\log n)\) depth; this is ensured by the fact that the number of processors and the highest memory address are each at most polynomial in \(n\). If we for now ignore calls to \(\uplus\), we see that each time we call step on an internal node, we expend constant work (and time) before recursing
down both subtrees. Our recursion only follows paths through the tree to locations where processors reside, so the total number of recursive calls is at most \(p\cdot O(\log n)\).
The calls to \(\uplus\) are more difficult to analyze, but they similarly can be seen to consume at most \(p\cdot O(\log n)\) total work. One way to see this is as follows: For the processor tree, we merge together \(O(p)\) trees resulting from calls to encode-path, where each merged tree is a "path tree" with one leaf and \(O(\log n)\) internal nodes. Our step procedure merges these trees together in some order resulting from step's recursion tree. However, suppose we were instead to merge these trees together one at a time, maintaining an accumulator tree and merging each path tree into this accumulator. Each such merge would cost at most \(O(\log n)\) work, simply due to the small size of the path tree. Thus, this strategy uses at most \(p\cdot O(\log n)\) work.
Now, step does not merge path trees into an accumulator in this way; it merges trees at each recursive call site. However, we claim that step uses _strictly less work_ than the above accumulator approach. Indeed, the merge of two trees is strictly smaller than the two trees taken together, since the merged tree joins together some internal nodes. Hence, merging the processor tree consumes at most \(p\cdot O(\log n)\) work. This same argument holds for the memory tree.
Hence, step consumes at most \(p\cdot O(\log n)\) work.
Time. We claim that step computes the root pointers of its output trees within \(O(\log n)\) PSAM time. This follows almost immediately from the fact that both the memory and processor tree have logarithmic depth.
There is one non-trivial component to this argument: we must argue that calls to \(\uplus\) can be pipelined. Indeed, recall that \(\uplus\) takes \(O(\log n)\) time to completely compute its output, but it computes its root pointer within _constant_ time.
Formally, we can establish an inductive hypothesis that step outputs its root pointers within \(O(\log n)\) time. In the base case this holds, since handle-pram-instruction is assumed to take at most \(O(\log n)\) time and since it takes \(O(\log n)\) steps to encode a tree path (indeed, the root of the encoded path is available within \(O(1)\) time). In the inductive case, we recursively call step twice in parallel, taking \(O(\log n)\) time by the inductive hypothesis. We conclude with calls to \(\uplus\), which return their root in \(O(1)\) time. Hence, the general case incurs only constant additive time on top of its recursive calls. As the depth of the tree is at most logarithmic, the induction holds. Hence, step computes its output within \(O(\log n)\) time.
## 8 Concluding Remarks
By combining our results from Sections 6 and 7, we attain a simulation of CRCW PRAM, and our simulation is achieved using cyclic circuits, a simple and easily implemented model. Surprisingly, the simulation incurs only polylogarithmic overhead in terms of both work and time.
This demonstrates the feasibility of powerful parallel machines, at least in theory. Of course, substantial effort is needed to determine whether this theoretical feasibility can translate to practical outcomes. At the least, we believe our result shows that cyclic circuits are far more interesting than previously thought. While prior works investigated cyclic circuits, none showed a connection between this simple model and PRAM.
An open question. Can our simulation of PRAM by cyclic circuits be improved? Our simulation achieves \(O(\log^{4}n)\) work overhead and \(O(\log^{3}n)\) runtime overhead, and it is not obvious how to do significantly better. We incur \(O(\log n)\) overhead from word size, \(O(\log^{2}n)\) overhead from the dynamic permutation network, and \(O(\log n)\) overhead from the simulation of PRAM by PSAM. Thus, improving our result in a modular way would require either (1) changing the circuit model, (2) improving dynamic permutation networks, or (3) improving the simulation of PRAM by PSAM; none of these improvements are obvious. It might also be possible to mix the concerns of the dynamic permutation network with the PRAM-by-PSAM simulation, or to apply some completely different approach; these avenues are also unclear.
In terms of negative results, our only current insight is that there is a trivial \(\Omega(\log n)\) lower bound on work overhead. This bound comes simply from "unfairness" of the comparison between the considered PRAM and cyclic circuits: circuit wires hold individual bits while PRAM manipulates \(\Theta(\log n)\)-bit words. To show
an \(\Omega(\log n)\) lower bound, we can consider simulating a PRAM program that, e.g., forces the cyclic circuit to take as input \(O(n)\) elements and write them out in some input-specified order. To achieve generality, the circuit must fully read the \(O(n\cdot\log n)\)-bit input.
One might expect that this permutation problem would allow us to strengthen the lower bound to \(\Omega(\log^{2}n)\) work overhead, since it is well known that comparison-based sorting requires \(\Theta(n\cdot\log n)\) comparisons. However, trying to apply this insight immediately runs into well-known open questions regarding the complexity of sorting, see e.g. [11, 1]. Namely, it might be possible to sort with \(o(n\cdot\log n)\) Boolean gates, and this prevents us from establishing this stronger lower bound.
|
2309.13751 | Separable effects for adherence | Comparing different medications is complicated when adherence to these
medications differs. We can overcome the adherence issue by assessing
effectiveness under sustained use, as in the usual causal `per-protocol'
estimand. However, when sustained use is challenging to satisfy in practice,
the usefulness of this estimand can be limited. Here we propose a different
class of estimands: separable effects for adherence. These estimands compare
modified medications, holding fixed a component responsible for non-adherence.
Under assumptions about treatment components' mechanisms of effect, the
separable effects estimand can eliminate differences in adherence. These
assumptions are amenable to interrogation by subject-matter experts and can be
evaluated using causal graphs. We describe an algorithm for constructing causal
graphs for separable effects, illustrate how these graphs can be used to reason
about assumptions required for identification, and provide semi-parametric
weighted estimators. | Kerollos Nashat Wanis, Mats Julius Stensrud, Aaron Leor Sarvet | 2023-09-24T21:00:08Z | http://arxiv.org/abs/2309.13751v1 | # Separable effects for adherence
###### Abstract
Comparing different medications is complicated when adherence to these medications differs. We can overcome the adherence issue by assessing effectiveness under sustained use, as in the usual causal 'per-protocol' estimand. However, when sustained use is challenging to satisfy in practice, the usefulness of this estimand can be limited. Here we propose a different class of estimands: _separable effects for adherence_. These estimands compare modified medications, holding fixed a component responsible for non-adherence. Under assumptions about treatment components' mechanisms of effect, the separable effects estimand can eliminate differences in adherence. These assumptions are amenable to interrogation by subject-matter experts and can be evaluated using causal graphs. We describe an algorithm for constructing causal graphs for separable effects, illustrate how these graphs can be used to reason about assumptions required for identification, and provide semi-parametric weighted estimators.
keywords: Pharmacoepidemiology, Causal Inference, Comparative Effectiveness Research, Lifetime and Survival Analysis
## 1 Introduction
Comparing different medications is a core objective in pharmacoepidemiologic studies [(1)]. In these studies, the term efficacy is defined as "performance of a treatment under ideal and controlled circumstances," while effectiveness refers to "the performance of a treatment under usual or'real world' circumstances" [(2)]. For sustained pharmacologic treatments, adherence, the continuous utilization of a prescribed medication, is often central to the distinction between efficacy and effectiveness. While a range of adherence strategies can be defined, two types have received special attention: the causal 'per-protocol' strategy, which requires continuous adherence, and the 'intention-to-treat' strategy, which permits adherence to vary naturally for each individual.
When continuous adherence is expected for treatments, the usual 'per-protocol' strategy is suitable for studying their effectiveness. But in some settings, continuous adherence may be too demanding in practice [(3, 4)]. In observed data, non-adherence manifests as a lack of support for certain conditional distributions, known as positivity violations [(5, 6, 7)]. These violations preclude identification of usual 'per-protocol' parameters under standard assumptions for causal inference. Further, when non-adherence is prevalent, studying treatments in an idealized setting where everybody adheres has limited relevance to investigators interested in effectiveness.
Conversely, when medication adherence is allowed to vary naturally as in the 'intention-to-treat' strategy, a difference in the effectiveness of two initiated medications might arise simply due to a difference in adherence processes [(8, 9, 10)]. Even when strict utilization strategies are not of primary interest, perhaps because non-adherence is widespread for the treatments under study, investigators often want to compare strategies with equivalent adherence. In other words, in settings with non-adherence, there is interest in a trade-off between the'real-world' adherence that defines effectiveness and the idealized adherence that characterizes efficacy.
These considerations are evident in the comparison of medications for hypertension monotherapy, which we will use as a running example. International and US guidelines recommend initial monotherapy for individuals with low risk grade 1 hypertension to reduce the risk of adverse cardiovascular outcomes [(11, 12)]. Investigators interested in antihypertensive evaluation and refinement will evaluate the comparative effectiveness of available agents. But antihypertensive effectiveness depends on adherence, which is low and varies from agent to agent [(13, 14)].
In this article, we introduce a new estimand for comparative effectiveness research (CER): _separable effects for adherence_. The separable effects for adherence build on a generalized theory of separable effects [(15, 16, 17, 18, 19)]. Their definition requires that investigators conceptualize modifying the medications under study into independently manipulable components, e.g., corresponding to a hypothetical pharmacologic refinement, and consider the components' separate effects on the outcome through adherence pathways, and through other causal pathways. Hypothetical refinements could be considered at the policy level, e.g., a reduction in the cost of the drug, or at the pharmaceutical level, e.g., a change in the size or taste of the drug. In subsequent sections, we elaborate on examples of hypothetical treatment modifications and discuss the identification and interpretation of the separable effect under different assumptions encoded on causal graphs. In some cases, the separable effect quantifies the effectiveness of medication initiation strategies on an outcome of interest under the adherence process of one of the medications.
In Section 2 we define separable effects, contrasting them with total effects; in Section 3 we describe the construction of causal graphs for separable effects estimands; in Sections 4 and 5 we discuss how different assumptions about adherence mechanisms impact the interpretation of the separable effects estimand; in Section 6 we detail the assumptions required for identification of separable effects using observed data; in Sections 7 and 8 we discuss algorithms for estimation; and in Section 9 we discuss the role of separable effects estimands
in studies of efficacy.
## 2 Total and separable effects
Consider a study where individuals initiate one of two medications. Let \(Z\) denote the medication initiated, e.g., let \(Z=0\) denote initiation of an angiotensin-converting enzyme inhibitor (ACEI) and \(Z=1\) a thiazide diuretic. \(Z\) may be randomly assigned, as in an experiment, or selected naturally, as in an observational study. In each interval \(k=1,\ldots,K\), variables are measured: \(A_{k}\), an indicator of adherence to the initiated medication; \(L_{k}\), covariates (e.g., diagnoses or symptoms); and \(Y_{k}\), an indicator of failure (e.g., adverse cardiovascular events). Overlines represent an individual's history through interval \(k\) (e.g., \(\overline{L}_{k}\)).
Let \(Y_{k}^{z}\) denote failure status had, possibly contrary to fact, \(Z=z\). The comparative effectiveness of the two medication initiation strategies on the outcome risk by \(k\) is defined as
\[\Pr[Y_{k}^{z=1}=1]\ \text{vs.}\ \ \Pr[Y_{k}^{z=0}=1], \tag{1}\]
which we refer to as the _total_ effect of medication initiation (the 'intention-to-treat' effect).
The total effect of medication initiation depends on adherence, a characteristic that, at least sometimes, is undesirable to investigators, even when comparing'real-world' pharmacologic effectiveness. Suppose that \(Z\) can be modified such that it is represented by two binary components, \(Z_{A}\) and \(Z_{Y}\), i.e., that \(Y_{k}^{z}=Y_{k}^{z_{A}=z_{Y}=z}\). With the modified medication, an investigator may consider the effect on an outcome risk of the \(Z_{Y}\) component, fixing the \(Z_{A}\) component to the value \(z_{A}\),
\[\Pr[Y_{k}^{z_{A},z_{Y}=1}=1]\ \text{vs.}\ \ \Pr[Y_{k}^{z_{A},z_{Y}=0}=1], \tag{2}\]
which is the _separable_ effect of \(Z_{Y}\) initiation under \(Z_{A}=z_{A}\).
An investigator will consider a modification of \(Z\) where \(Z_{A}\) is particularly relevant for its effect on adherence. Interventions defining the separable effects correspond to medication modifications that could actually be evaluated in a real-world clinical trial (at least in principle). Arguably, when the \(Z_{A}\) component is fixed to a value representing a medication with a more favorable effect on adherence (e.g., a placebo or another well tolerated medication), separable effects estimands can correspond precisely to the parameters that might be observed in experiments conducted for drug development and refinement. As such, they would appeal to investigators interested in emulating this process with data.
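To illustrate the distinction numerically, here is a toy Monte Carlo sketch (entirely our own construction with invented parameters): the total effect in (1) mixes the adherence difference into the contrast, while the separable effect in (2) holds the adherence-driving component fixed at \(z_{A}=0\):

```python
import numpy as np

# Toy data-generating process: Z_A drives adherence, and Z_Y drives the
# failure risk among adherers, mirroring Y^z = Y^{z_A = z_Y = z}.
rng = np.random.default_rng(0)
n = 500_000

def risk(z_a, z_y):
    # Z_A = 1 worsens adherence; Z_Y changes the on-treatment failure risk
    adherent = rng.random(n) < (0.9 if z_a == 0 else 0.6)
    p_fail = np.where(adherent, 0.15 if z_y == 1 else 0.10, 0.25)
    return (rng.random(n) < p_fail).mean()

total_effect = risk(1, 1) - risk(0, 0)      # contrast (1): mixes adherence
separable_effect = risk(0, 1) - risk(0, 0)  # contrast (2) under z_A = 0
print(total_effect, separable_effect)
```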
Although the choice of an estimand needs to be context specific, separable effects for adherence are useful for pharmacoepidemiologic CER because of the following properties: (a) the resulting adherence patterns are plausible because they arise naturally (that is, investigators do not directly intervene on adherence); (b) their assumptions are amenable to interrogation by subject-matter experts and can be tested (in principle) by experiment; and (c) under assumptions on the mechanisms of \(Z_{A}\) and \(Z_{Y}\), modified medications will be compared under equivalent adherence. Table 1 contrasts separable effects for adherence with other estimands for pharmacoepidemiologic CER.
Property (a) follows immediately because separable effects do not consider interventions under which adherence is directly controlled. In subsequent sections, we will illustrate properties (b) and (c) using causal graphs as tools for reasoning and model representation.
## 3 Causal graphs for separable effects
Causal inference requires background knowledge. Causal graphs can be constructed based on this knowledge, allowing the use of simple graphical rules to evaluate conditions for identification using data.
Investigators interested in the _total_ effect (equation 1) can construct a causal Directed Acyclic Graph (DAG) that includes a single treatment node \(Z\) and a single outcome \(Y_{k}\).
**Table 1: Properties of selected estimands that compare the effectiveness of pharmacologic treatments.**

| Estimand | Description | Natural adherence\({}^{a}\) | Testable\({}^{b}\) | Equivalent adherence\({}^{c}\) |
| --- | --- | --- | --- | --- |
| 'Intention-to-treat' effect (20) | Comparison of risks under the natural adherence processes of the medications | ✓ | ✓ | ✗ |
| 'Per-protocol' effect (8) | Comparison of risks under interventions that enforce continuous adherence | ✗ | ✓ | ✓ |
| Principal stratum direct effect (21) | Comparison of risks in the subgroup of the population who would continuously adhere under either medication | ✓ | ✗ | ✓ |
| Natural direct effect\({}^{d}\) (22) | Comparison of risks under interventions that set adherence for each individual to its counterfactual value under one of the medications | ✗ | ✗ | ✓ |
| Randomized interventional analogue of the natural direct effect (22) | Comparison of risks under the counterfactual distribution of adherence that would arise under one of the medications | ✗ | ✓ | ✓ |
| Stochastic adherence effect (10) | Comparison of risks under interventions that randomly determine adherence according to an investigator-specified distribution | ✗ | ✓ | ✓ |
| Separable effect | Comparison of risks under the natural adherence processes of modified medications, holding fixed a component responsible for non-adherence | ✓ | ✓ | ✓\({}^{e}\) |

\({}^{a}\) Investigators do not directly intervene on adherence.
\({}^{b}\) Assumptions can be tested (in principle) by experiment.
\({}^{c}\) Medications are compared under equivalent adherence.
\({}^{d}\) Natural direct effects are not identified when there are time-varying confounders affected by prior exposure and mediator (22).
\({}^{e}\) The equivalent adherence expected under the interventions comprising the separable effects contrast does not follow by definition of the parameter but is rather contingent on assumptions, which may be encoded in causal DAGs, as in Section 4.
Covariates that are direct causes of any two variables on the graph are then iteratively included, and directed arrows are added between any two variables when a causal effect is supposed to exist. When considering (joint) interventions on more than one variable, the algorithm for constructing the DAG is identical, except that the graph is initialized with all treatment and outcome nodes, with all common causes again iteratively added.
When a separable effects estimand is understood as a total effect of a joint intervention, a DAG can be constructed according to the above classical algorithm. However, even when all common causes of the modified medication components and outcomes are measured, identification by usual strategies will typically fail because the effect involves a combination of components \(Z_{A}\) and \(Z_{Y}\) that have not been implemented in the observed data: whenever \(z_{A}\neq z_{Y}\), a positivity condition, which is necessary for identification by conventional approaches, will be violated.
An important contribution of the general literature on separable effects is to clarify graphical conditions for identification via _other_ strategies (see, e.g., Robins et al (15) and Stensrud et al (17)). However, these alternative conditions cannot generally be evaluated in graphs constructed by the classical algorithm, because they will involve variables that investigators would not typically be prompted to include on a DAG. Fortunately, the classical algorithm is not the _only_ approach for constructing valid causal DAGs. A valid DAG may instead be constructed by initializing a graph with not only the treatments and outcome (here \(Z_{A}\), \(Z_{Y}\), and \(Y_{k}\)) but also with a set of intervening (mediating) variables that, minimally, are involved in the causal mechanism of the effect of \(Z_{A}\) on \(Y_{k}\). With separable effects for adherence, this set will include the adherence indicators \(\overline{A}_{k}\). Furthermore, this initial set of intervening variables should be expanded if there still remain direct paths from \(Z_{A}\) into the outcome \(Y_{k}\) or direct paths from \(Z_{Y}\) into any adherence indicator in \(\overline{A}_{k}\). Ultimately, identification via the strategies in (17) requires independencies encoded by a causal DAG where the effects of the modified medication components are completely intersected by non-overlapping (separate) sets of intervening variables, except for \(Z_{Y}\), which may have a direct effect on \(Y_{k}\). A minimal sketch of this construction follows.
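The sketch below (Python + networkx, with a two-interval structure and illustrative node names, not taken from the paper's code) initializes a DAG with the treatment components, the outcome, and the adherence indicators, and then checks the two kinds of direct paths described above.

```python
# A minimal sketch of constructing a causal DAG for a separable effects
# estimand by initializing with the treatment components, the outcome, and
# the intervening adherence indicators. The structure mirrors Figure 1.
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(["Z_A", "Z_Y", "A1", "A2", "Y2"])
G.add_edges_from([
    ("Z_A", "A1"), ("Z_A", "A2"),  # Z_A affects adherence
    ("A1", "A2"),                  # adherence tracks over time
    ("A1", "Y2"), ("A2", "Y2"),    # adherence affects the outcome
    ("Z_Y", "Y2"),                 # Z_Y may affect the outcome directly
])
assert nx.is_directed_acyclic_graph(G)

def reaches_avoiding(G, source, target, avoid):
    """Directed path from source to target that avoids the given nodes?"""
    H = G.subgraph(n for n in G if n not in avoid)
    return nx.has_path(H, source, target)

adherence = {"A1", "A2"}
# Z_A should reach the outcome only through adherence:
print(reaches_avoiding(G, "Z_A", "Y2", adherence))        # False
# Z_Y should have no directed path into any adherence indicator:
print(any(nx.has_path(G, "Z_Y", a) for a in adherence))   # False
```

If either check returned True, the set of intervening variables would need to be expanded, as described above.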
In Sections 4 and 5, we review several classes of graphs for separable effects and discuss their implications for identification and interpretation. To avoid clutter, we restrict the graphs to two time points and assume the initiated medication, \(Z\), was randomly assigned, as in a clinical trial. Likewise, we suppose that no outcome events occur in the first interval, so that \(Y_{1}\) is omitted from the graphs.
## 4 Structures where separable effects balance adherence
Suppose that the \(Z_{Y}\) component of an individual's initiated medication exerts no effect on adherence, either directly or through any covariates, except via past survival. This condition balances adherence among survivors under the separable effect of \(Z_{Y}\) under \(Z_{A}=z_{A}\). The condition holds in the causal graphs of Figures 1, 2, and 3, which encode different assumptions about the mechanisms through which the outcome is affected by the modified medication.
Figure 1 represents a setting where differences in adherence are entirely due to the modified medication component \(Z_{A}\), which exerts its effect on the outcome only through adherence. This setting can also be represented by Figure 2, which introduces \(V_{k}\), a vector of _non-prognostic covariates_. The term _non-prognostic_ is used to denote covariates not associated with the outcome except through the initiated medication or adherence. When the \(Z_{Y}\) component exerts no effect on adherence, except via past survival, adjustment for non-prognostic variables is not required for identification, but their inclusion on graphs is helpful for reasoning about the underlying assumptions.
Building on our example, suppose \(Z\) indicates initiation of an ACEI versus a thiazide diuretic, and investigators consider a modified medication where \(Z_{A}\) represents the medication's out-of-pocket cost (23), and \(Z_{Y}\) represents the remaining features of the medication.
Thiazide diuretics are less costly than ACEIs, and these costs are known to affect adherence in healthcare settings without universal coverage (24, 25, 26, 27, 28). This example is represented by Figure 1 if out-of-pocket costs were entirely responsible for adherence differences (alternatively, by Figure 2 with current wealth included in the covariate vector \(V_{k}\)). For this modified medication example, an experiment comparing the effectiveness of the \(Z_{Y}\) components of a thiazide diuretic versus an ACEI while setting \(Z_{A}=1\) could be conducted if the out-of-pocket costs of ACEIs were lowered to that of thiazides through pharmaceutical innovation or health policy interventions.
Alternatively, consider a setting with no effect of differential drug costs on adherence, which, for example, is plausible in a clinical trial where investigators cover drug costs. Suppose the investigators consider a \(V_{k}\) representing a history of cough, headache, or angioedema, which are considered to be non-prognostic adverse effects of ACEIs (29), but not generally of thiazides. Instead, thiazides can cause other non-prognostic adverse effects, which are also included in \(V_{k}\): urinary frequency, erectile dysfunction, fatigue, and muscle cramps (30). A version of an initiated medication, modified to affect non-prognostic adverse effects in the same way as the other medication, can be represented by Figure 2. For example, if instead of choosing initiation of a thiazide as the referent drug, investigators chose a drug with minimal adverse effects, they would arguably be emulating the type of innovation that has already been attempted in the form of angiotensin II receptor blockers, which were developed to overcome some deficiencies of ACEIs (31).
So far, we have only considered settings where non-prognostic covariates affect adherence.
Figure 1: Causal DAG representing a setting in which the \(Z_{A}\) component of a modified treatment exerts an effect on \(Y_{2}\) only through adherence, and the \(Z_{Y}\) component exerts no effect on adherence.
However, some adverse effects of antihypertensives are prognostic. ACEIs are thought to cause acute kidney injury (AKI), while the relationship between thiazides and AKI is less clear (32, 33, 34). Individuals who develop an AKI are likely to discontinue their initiated medication. Unlike non-prognostic covariates, AKI may increase the risk of cardiovascular disease (35). Now, suppose that Figure 3 represents another separable effect setting, with \(L_{k}\) representing a history of AKI. Consider an investigator who wants to study the outcome risk under a modification of ACEIs that leads to a similar distribution of AKI as thiazides. In this context, the effect of \(Z_{A}\) on the outcome is not mediated entirely by adherence: there is an effect mediated by \(L_{2}\), as shown in Figure 3. Nevertheless, the separable effect will contrast treatments under an equivalent distribution of adherence.
Figure 3: Causal DAG representing a setting in which the \(Z_{A}\) component of a modified treatment exerts an effect on \(Y_{2}\) through adherence and prognostic covariates, \(L_{2}\), and the \(Z_{Y}\) component exerts no effect on adherence.
Figure 2: Causal DAG representing a setting in which the \(Z_{A}\) component of a modified treatment exerts an effect on \(Y_{2}\) only through adherence, mediated by non-prognostic covariates, \(V_{2}\), and the \(Z_{Y}\) component exerts no effect on adherence.
## 5 Structures where separable effects do not balance adherence
Now consider a setting where the investigators believe that the \(Z_{Y}\) component of the modified medication affects adherence. For example, investigators considering a modification of ACEIs equalizing their out-of-pocket costs relative to thiazides will have this belief if prognostic covariates, such as AKI, affect adherence, as represented in Figure 4.
Suppose these same investigators study a modified medication that not only equalizes out-of-pocket costs, but also the distribution of AKI. This setting can be represented by Figure 3. However, even in this setting, the investigators may not accept that adherence is unaffected by \(Z_{Y}\) if there are other prognostic covariates that could affect adherence. In some cases, the effect of \(Z_{Y}\) on the outcome will be mediated by measured intermediaries whose values will be known to the individuals under treatment and might therefore impact adherence (e.g., measured blood pressure (36)). In these cases, both components of a modified medication \(Z_{A}\) and \(Z_{Y}\) will exert effects on adherence, through different sets of prognostic covariates. The investigators can represent this setting using Figure 5. The prognostic covariates \(L_{k}\) have been divided into two subvectors \(L_{A,k}\) (which includes AKI) and \(L_{Y,k}\) (which includes systolic and diastolic blood pressure), and both \(Z_{Y}\) and \(Z_{A}\) exert effects on the outcome through their effects on adherence.
In both Figures 4 and 5 the separable effect (equation 2) considers a contrast that eliminates some, but not all, differences in adherence. However, the extent to which adherence to antihypertensives depends on blood pressure rather than the convenience, cost, and tolerability of the initiated medication is open to debate by subject-matter experts (37). Whether our investigators are correct about the impact of blood pressure on adherence is important for deciding whether their modified medication that balances the distribution of cost and AKI is representable by Figure 3 or must be represented by Figure 5, and thus whether their estimand balances adherence.
Figure 4: Causal DAG representing a setting in which the \(Z_{A}\) component of a modified treatment exerts an effect on \(Y_{2}\) only through adherence, while the \(Z_{Y}\) component exerts an effect on adherence through its effect on \(L_{2}\).
Figure 5: Causal DAG representing a setting in which both \(Z_{Y}\) and \(Z_{A}\) exert effects on adherence through different sets of prognostic covariates. The \(Z_{A}\) component exerts an effect on \(Y_{2}\) through adherence and \(L_{A,2}\), while the \(Z_{Y}\) component exerts an effect on adherence through its effect on \(L_{Y,2}\).
## 6 Identification of separable effects using observed data
Graphical conditions for identification of \(\Pr[Y_{K}^{z_{A},z_{Y}}=1]\) differ from those classically used for identification of \(\Pr[Y_{K}^{z}=1]\). In the identification strategies we review for each of these estimands, we require unconfoundedness for the initiated medication \(Z\), which can be read from a DAG as the absence of backdoor paths connecting \(Z\) and \(Y_{K}\). This condition is expected to hold in data from a study where \(Z\) is randomly assigned, or may hold conditional on baseline covariates in an observational study.
Identification of \(\Pr[Y_{K}^{z_{A},z_{Y}}=1]\) further requires a set of conditions not required for identification of \(\Pr[Y_{K}^{z}=1]\) (17). Specifically, we prohibit any directed paths connecting \(Z_{A}\) and \(Y_{k}\) or \(L_{Y,k}\) conditional on the intervening treatment, covariate, and outcome history from \(k\), for each \(k=1,\ldots,K\). Analogously, we prohibit directed paths (38) connecting \(Z_{Y}\) and \(A_{k}\) or \(L_{A,k}\) conditional on the intervening treatment, covariate, and outcome history from \(k\), for each \(k=1,\ldots,K\). It is possible that these conditions are violated even when the data arise from a study where \(Z\) is randomly assigned. Section 6.1 provides examples where these assumptions would not hold.
Investigators should be mindful that features of the causal structure represented by the graph for the observed data can change in the contexts considered by separable effects estimands, that is, when \(z_{A}\neq z_{Y}\) (15, 19). For example, a variable that is not a confounder in the observed data setting, because it is a cause of the initiated medication, \(Z\), but not of an outcome \(Y_{k}\), might cause the outcome in a setting where \(z_{A}\neq z_{Y}\), violating unconfoundedness for the initiated medication. Similarly, a variable that was solely a cause of adherence in the observed data setting may also be a cause of the outcome under \(z_{A}\neq z_{Y}\), violating an independence condition not required for classical identification strategies but necessary for separable effects. These theoretical issues will not arise if the causal structure between variables in the graph is correctly assumed to be invariant from the observed data context to the separable effect contexts considered.
### When do independence conditions for separable effects not hold?
To make transparent the assumptions required for identification of the separable effect, we now give three examples of their violation.
First, suppose, as we did in Figure 3, that \(\overline{L}_{k}\) includes a history of prognostic variables that have an effect on adherence. But also suppose, unlike in Figure 3, that there are unmeasured common causes of prognostic variables and outcomes. This is depicted in Figure 6(a). The path \(Z_{A}\rightarrow\boxed{L_{A,2}}\gets U\to Y_{2}\), which is unblocked due to conditioning (denoted by enclosing a variable in a square) on the collider \(L_{A,2}\) (38), violates an independence assumption required for identification. Similarly, identification will fail whenever there is an unmeasured common cause of \(L_{Y,k}\) and any future treatment \(A_{k^{\prime}}\), \(k^{\prime}>k\). More generally, identification will fail if there are any unmeasured common causes of an element in \(\{\overline{A}_{K},\overline{L}_{A,K}\}\) and an element in \(\{\overline{Y}_{K},\overline{L}_{Y,K}\}\).
Independence assumptions can still be violated even in the absence of unmeasured common causes. Figure 6(b) illustrates a violation due to an unmeasured intermediate on the causal path from \(Z_{A}\) to \(Y_{2}\), and Figure 6(c) shows an analogous violation due to an unmeasured intermediate on the causal path from \(Z_{Y}\) to \(L_{A,2}\). These latter two examples illustrate the importance of collecting information on potential mediating variables whenever separable effects estimands are targeted. More generally, identification will fail if there is any unblocked path from \(Z_{Y}\) into \(\{\overline{A}_{K},\overline{L}_{A,K}\}\) or if there is any unblocked path from \(Z_{A}\) into \(\{\overline{Y}_{K},\overline{L}_{Y,K}\}\).
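The collider argument for Figure 6(a) can also be checked mechanically. The following is a minimal sketch assuming a networkx release that provides `nx.is_d_separator` (named `nx.d_separated` before version 3.3); the graph keeps only the biasing structure for brevity, and node names are illustrative.

```python
# Checking the Figure 6(a) violation with d-separation utilities.
import networkx as nx

G = nx.DiGraph([
    ("Z_A", "L_A2"),  # Z_A affects the prognostic covariate
    ("U", "L_A2"),    # unmeasured common cause of the covariate ...
    ("U", "Y2"),      # ... and of the outcome
])

# Marginally, the collider L_A2 blocks Z_A -> L_A2 <- U -> Y2:
print(nx.is_d_separator(G, {"Z_A"}, {"Y2"}, set()))     # True
# Conditioning on L_A2 opens the collider, creating the biasing path:
print(nx.is_d_separator(G, {"Z_A"}, {"Y2"}, {"L_A2"}))  # False
```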
## 7 The g-formula
Suppose that the investigator articulates a realistic modified medication with components \(Z_{A}\) and \(Z_{Y}\), and that consistency, which allows linking counterfactual to factual random variables, holds. Suppose also that the distribution of variables on the causal DAG is positive. When suitable independence conditions hold (implied by the absence of biasing
Figure 6: Causal DAGs representing settings where independence conditions required for separable effects do not hold due to an unmeasured variable, \(U\).
paths in a DAG, as discussed in Section 6), the counterfactual mean of \(Y_{K}\) under an intervention that sets \(Z_{Y}=z_{Y}\) and \(Z_{A}=z_{A}\) is given by the g-formula (17),
\[\sum_{k=1}^{K}\sum_{\overline{a}_{k}}\sum_{\overline{l}_{k}}\Pr[Y_{k}=1\mid\overline{A}_{k}=\overline{a}_{k},\overline{L}_{Y,k}=\overline{l}_{Y,k},\overline{L}_{A,k}=\overline{l}_{A,k},\overline{Y}_{k-1}=0,Z=z_{Y}]\times\\ \prod_{j=1}^{k}\{\Pr[Y_{j-1}=0\mid\overline{A}_{j-1}=\overline{a}_{j-1},\overline{L}_{Y,j-1}=\overline{l}_{Y,j-1},\overline{L}_{A,j-1}=\overline{l}_{A,j-1},\overline{Y}_{j-2}=0,Z=z_{Y}]\times\\ f(l_{Y,j}\mid\overline{a}_{j-1},\overline{l}_{Y,j-1},\overline{l}_{A,j},\overline{Y}_{j-1}=0,Z=z_{Y})\times\\ f(l_{A,j}\mid\overline{a}_{j-1},\overline{l}_{Y,j-1},\overline{l}_{A,j-1},\overline{Y}_{j-1}=0,Z=z_{A})\times\\ f(a_{j}\mid\overline{a}_{j-1},\overline{l}_{Y,j},\overline{l}_{A,j},\overline{Y}_{j-1}=0,Z=z_{A})\}. \tag{3}\]
This g-formula can be re-expressed as a weighted representation,
\[\sum_{k=1}^{K}\lambda_{A,k}\prod_{j=1}^{k-1}[1-\lambda_{A,j}]\text{ or }\sum_{k=1}^{K}\lambda_{Y,k}\prod_{j=1}^{k-1}[1-\lambda_{Y,j}]\]
where
\[\lambda_{A,k}=\frac{\mathrm{E}[Y_{k}(1-Y_{k-1})W_{A,k}\mid Z=z_{Y}]}{\mathrm{ E}[(1-Y_{k-1})W_{A,k}\mid Z=z_{Y}]}\text{ and }\lambda_{Y,k}=\frac{\mathrm{E}[Y_{k}(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}]}{ \mathrm{E}[(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}]}\]
and the weights \(W_{A,k}\) and \(W_{Y,k}\) are defined as
\[W_{A,k}=\prod_{j=1}^{k}\Bigg{[}\frac{f(A_{j}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{f(A_{j}\mid \overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z= z_{Y})}\times\] \[\frac{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\times\] \[\frac{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j-1},\overline{A}_{j-1})}{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}\Bigg{]}\]
and
\[W_{Y,k}= \prod_{j=1}^{k}\frac{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{ L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{f(Y_{j}\mid \overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z= z_{A})}\times\] \[\prod_{j=1}^{k}\Bigg{[}\frac{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{A}\mid \overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1})}\times\] \[\frac{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\Bigg{]}.\]
The equivalence of these expressions is shown in Supplementary Materials Sections S1 and S2.1. Weighted representations motivate inverse probability weighted (IPW) estimators. Choosing between the two weighted representations should not be done arbitrarily, and should be guided by assumptions about the data generating mechanism producing the study data, which can be represented on causal graphs. When \(L_{k}=(L_{Y,k},\emptyset),k=1,\ldots,K\), then \(W_{A,k}\) simplifies to
\[\prod_{j=1}^{k}\frac{f(A_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j}, \overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{f(A_{j}\mid\overline{Y}_{j-1}= 0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{Y})}\]
because there are no variables in \(L_{A,k}\) needed for identification (e.g., Figure 4).
Similarly, when \(L_{k}=(\emptyset,L_{A,k}),k=1,\ldots,K\), then \(W_{Y,k}\) simplifies to

\[\prod_{j=1}^{k}\frac{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}\]
because, likewise, there are no variables in \(L_{Y,k}\) needed for identification (e.g., Figure 3).
The weighted representation that includes the most simplified expression will be preferred, because estimation based on this representation will require the fewest correctly specified models. When neither expression can be simplified, the choice of the representation should be justified using substantive reasoning about which models can be specified correctly. For example, investigators may feel more confident specifying parametric models for adherence rather than for outcomes.
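Whichever representation is chosen, the outer expression converts weighted discrete-time hazards into a risk. The following is a minimal numeric sketch, assuming the \(\lambda_{k}\) have already been estimated; the hazard values are illustrative.

```python
# Sketch of sum_k lambda_k * prod_{j<k} (1 - lambda_j).
import numpy as np

def cumulative_risk(hazards):
    """Convert discrete-time hazards into the risk of failure by interval K."""
    hazards = np.asarray(hazards, dtype=float)
    # Probability of remaining event-free entering each interval.
    survival = np.concatenate(([1.0], np.cumprod(1.0 - hazards)[:-1]))
    return float(np.sum(hazards * survival))

print(cumulative_risk([0.03, 0.04, 0.05]))  # 0.11536
```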
For technical articulations of analogous identification conditions, see (17).
## 8 Inverse probability weighted estimation
If the terms in \(W_{Y,k}\) can be modelled correctly using parametric methods, then the following IPW algorithm yields a consistent estimator of \(\Pr[Y_{K}^{z_{A},z_{Y}}=1]\), with \(z_{A}\neq z_{Y}\):
1. Using the entire data set, fit a pooled (over time) parametric regression model for
   * \(\Pr(Y_{k}=y_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k},\overline{l}_{A,k},\overline{a}_{k},z)\),
   * \(\Pr(Z=z\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k},\overline{l}_{A,k},\overline{a}_{k-1})\), and
   * \(\Pr(Z=z\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k-1},\overline{l}_{A,k},\overline{a}_{k-1})\).

   For example, we might assume pooled logistic regression models.

2. For each row in the data set for each individual with \(Z=z_{A}\), at each time interval \(k\) in \(1,\ldots,K\):
   1. Obtain predicted values
      * \(\widehat{f}(Y_{k}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k},\overline{L}_{A,k},\overline{A}_{k},Z=z_{A})\),
      * \(\widehat{f}(Y_{k}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k},\overline{L}_{A,k},\overline{A}_{k},Z=z_{Y})\),
      * \(\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k},\overline{L}_{A,k},\overline{A}_{k-1})\),
      * \(\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k},\overline{L}_{A,k},\overline{A}_{k-1})\),
      * \(\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k-1},\overline{L}_{A,k},\overline{A}_{k-1})\), and
      * \(\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k-1},\overline{L}_{A,k},\overline{A}_{k-1})\).
   2. Evaluate
      i. \[\widehat{W}_{Y,k}=\prod_{j=1}^{k}\frac{\widehat{f}(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{\widehat{f}(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}\times\prod_{j=1}^{k}\left[\frac{\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1})}{\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1})}\times\frac{\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}{\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\right]\]
      or, when \(L_{k}=(\emptyset,L_{A,k}),k=1,\ldots,K\) (e.g., Figure 3), the simplified weight expression
      ii. \[\widehat{W}_{Y,k}=\prod_{j=1}^{k}\frac{\widehat{f}(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{\widehat{f}(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}.\]
3. Compute the risk of failure by the end of interval \(K\) as \(\sum_{k=1}^{K}\widehat{\lambda}_{Y,k}\prod_{j=1}^{k-1}[1-\widehat{\lambda}_{Y,j}]\) with the estimated weights, \(\widehat{W}_{Y,k}\), in \(\widehat{\lambda}_{Y,k}=\frac{\widehat{\mathrm{E}}[Y_{k}(1-Y_{k-1})\widehat{W}_{Y,k}\mid Z=z_{A}]}{\widehat{\mathrm{E}}[(1-Y_{k-1})\widehat{W}_{Y,k}\mid Z=z_{A}]}\).

A schematic implementation of this algorithm is sketched below.
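The following Python sketch (pandas + statsmodels) implements steps 1-3, specialized to the simplified weights of step 2(b)(ii) (the setting of Figure 3, where \(L_{k}=(\emptyset,L_{A,k})\)). The long-format layout and column names (`id`, `k`, `Z`, `A`, `L_A`, `Y`) are illustrative assumptions; this is not the paper's code (see the repository linked in Supplementary Materials Section S3 for that).

```python
# Schematic IPW estimator for Pr[Y_K^{z_A, z_Y} = 1] with simplified weights.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def ipw_risk(df, z_A, z_Y, K):
    """df has one row per person-interval while still at risk, with Y the
    failure indicator at the end of interval k."""
    # Step 1: pooled (over time) outcome model, fit on the entire data set.
    outcome_model = smf.logit("Y ~ k + Z + A + L_A", data=df).fit(disp=0)

    # Step 2: among individuals with Z = z_A, predict the outcome density
    # under each treatment value and accumulate the ratio over intervals.
    arm = df[df["Z"] == z_A].sort_values(["id", "k"]).copy()

    def dens(z):
        p = outcome_model.predict(arm.assign(Z=z))
        return np.where(arm["Y"] == 1, p, 1 - p)

    arm["ratio"] = dens(z_Y) / dens(z_A)
    arm["W"] = arm.groupby("id")["ratio"].cumprod()

    # Step 3: weighted discrete hazards, then cumulative risk.
    risk, surv = 0.0, 1.0
    for k in range(1, K + 1):
        rows = arm[arm["k"] == k]  # rows exist only while still at risk
        if len(rows) == 0:
            break
        lam = np.average(rows["Y"], weights=rows["W"])
        risk += surv * lam
        surv *= 1.0 - lam
    return risk
```

The returned risk would then be contrasted with the 'intention-to-treat' risk among initiators of \(z_{A}\), as described next.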
In a study of two medications, to compute the separable effect of \(Z_{Y}\) setting \(Z_{A}=z_{A}\), the analyst would compute an estimate of the outcome risk under \(z_{Y}\neq z_{A}\) using the IPW algorithm and compare it to the 'intention-to-treat' estimate of the outcome risk under \(Z=z_{A}\), which can be computed as a simple empirical mean of \(Y_{K}\) among individuals with \(Z=z_{A}\).

An analogous algorithm using \(W_{A,k}\) is detailed in Supplementary Materials Section S2. These algorithms assume \(Z\) was randomly assigned; when the data arise from an observational study, further adjustment for baseline confounding will be necessary.
Valid 95% confidence intervals can be obtained using a non-parametric bootstrap.
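A sketch of that bootstrap follows, resampling individuals (not person-intervals) with replacement; it assumes the `ipw_risk` function and data layout from the previous sketch and favors clarity over speed.

```python
# Non-parametric bootstrap of the IPW risk estimator at the individual level.
import numpy as np
import pandas as pd

def bootstrap_ci(df, z_A, z_Y, K, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    ids = df["id"].unique()
    groups = dict(tuple(df.groupby("id")))  # person-level trajectories
    estimates = []
    for _ in range(n_boot):
        draw = rng.choice(ids, size=len(ids), replace=True)
        # Relabel ids so that repeated draws remain distinct individuals.
        boot = pd.concat(
            (groups[i].assign(id=j) for j, i in enumerate(draw)),
            ignore_index=True,
        )
        estimates.append(ipw_risk(boot, z_A, z_Y, K))
    return np.percentile(estimates, [2.5, 97.5])
```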
In Supplementary Materials Section S3 we implement the algorithm in a simulated data example.
## 9 Separable effects and 'efficacy'
In Section 6, we discussed assumptions under which the g-formula (equation 3) identifies terms of the separable effect (equation 2) of \(Z_{Y}\) under \(Z_{A}=z_{A}\). An advantage of this estimand, in contrast to the total effect (equation 1) of \(Z\), is that its value cannot be due to differential adherence to the initiated medications whenever certain conditions hold, which can be encoded on causal graphs (Section 4). This is useful when investigators are interested in medication effectiveness comparisons that preclude certain adherence-based mechanisms, and the estimand can intuitively be understood as occupying a middle ground between the usual 'intention-to-treat' effectiveness comparison and an efficacy comparison specifying perfect adherence.
Efficacy refers to "biomedical end-points... under optimal and highly controlled experimental conditions" and is achieved by studying medications "under highly unusual and structured protocols by very motivated clinicians with careful monitoring" (2). Despite balancing adherence, separable effects will generally not be well suited to studying efficacy, because they will not satisfy all optimal conditions, even those related to adherence-based _mechanisms_. Specifically, consider a setting where all individuals who do not adhere under \(Z_{Y}=1,Z_{A}=z_{A}\) are simply untreated for their hypertension during those time-points, whereas all individuals who do not adhere under \(Z_{Y}=0,Z_{A}=z_{A}\) instead switch (or 'cross-over') to the pharmacologic treatment consistent with \(Z_{Y}=1\). Clearly, even if adherence patterns are nominally balanced (according to the definition of the vector \(\overline{A}_{K}\)), an investigator who is purely interested in the _efficacy_ of the \(Z_{Y}\) component will be less interested in the separable effect of \(Z_{Y}\) under \(Z_{A}=z_{A}\) when such data generating mechanisms are possible.
To preclude such data generating mechanisms, we could restrict ourselves to studies
where individuals are unable to take any of the study medications except the one they were initially assigned. A randomized trial where participants only have access to their assigned medication, and not to any of the other study drugs, is an example of such a study. But, in many cases, investigators will be interested in analyzing data arising from an observational study, or from a 'pragmatic' randomized trial, where individuals have access to medications other than the one they initiated at baseline. In such settings, a separable effects estimand under a simultaneous hypothetical intervention to prohibit 'crossing-over' can be considered. This alternative estimand is discussed in Supplementary Materials Section S4.
Even when crossovers to non-initiated medications are prohibited, separable effects estimands may still not satisfy investigators seeking to emulate the optimal conditions required for studying efficacy. Investigators will need to be precise about the particular mediating pathways (adherence and crossover being just two of potentially many) that ought to be excluded in order for the effect estimate to have the desired efficacy interpretation. For example, suppose that the investigators define the relative efficacy of two medications as a contrast of protocols that not only prohibit co-medications and rescue treatments, but also specify the lifestyle choices allowable under the treatment strategies. These various specifications imply that efficacy can refer to a variety of estimands that prohibit different types of mechanisms, depending on the particular interpretation desired.
## 10 Discussion
We described an estimand, based on the generalized theory of separable effects, that can compare medication initiation strategies while eliminating differences in adherence under assumptions that are well suited to interrogation by subject-matter experts.
Investigators who are unfamiliar with separable effects estimands might have considered different estimands based on the property of balanced adherence. Natural direct effects would balance adherence by assigning an adherence pattern to each individual that is precisely equal to the pattern that would arise for that individual under \(Z=z_{A}\) (22). Randomized interventional analogues assign an adherence pattern from the distribution of patterns under \(Z=z_{A}\) (22). Controlled direct effects can balance adherence via interventions, either through enforcement of continuous treatment adherence (as in 'per-protocol' estimands) or by imposing an investigator-selected distribution of adherence (10) via a stochastic intervention. Lastly, principal stratum direct effects can balance adherence by considering the subgroup of the population who would have (counterfactually) continuously adhered regardless of initiated medication (21). Each of these other estimands has some, but not all, of the properties of separable effects (see Table 1). In contrast to these other estimands, the separable effects contrast does not guarantee equivalent adherence by its definition. Instead, equivalent adherence depends on assumptions about the treatment components' mechanisms of effect, which may be encoded in causal DAGs. Ultimately, choosing the appropriate estimand will require deep interdisciplinary dialogue, informed by subject-matter expertise specific to the particular treatments under comparison.
We used an example of antihypertensive medications to illustrate important assumptions for identification of separable effects, but these assumptions may hold in many other settings. For example, antiretroviral drugs (39), antiepileptics (40), and antihyperglycemics (41, 42) are each sustained-use medication classes with substantial variability in out-of-pocket costs. A comparison of medications within one of these classes using a separable effects estimand could be consistent with the assumptions discussed in Section 4 if differences in adherence are driven by differences in costs. Further, DAGs similar to Figures 1 and 2 could arise in settings where investigators consider a modified medication, manipulating a \(Z_{A}\) denoting the complexity of the dosing, the taste or size of the drug, the logistics of its dispensing, or the component that exerts effects on non-prognostic covariates. Investigators can reason about whether assumptions allowing identification of the separable effect hold by constructing causal graphs, guided by pharmacoepidemiologic expertise.
## References
* (1) Lash TL, VanderWeele TJ, Haneuse S, Rothman KJ. _Modern epidemiology_. 4th ed. Philadelphia: Wolters Kluwer / Lippincott Williams & Wilkins 2021.
* (2) Revicki DA, Frank L. Pharmacoeconomic evaluation in the real world: effectiveness versus efficacy studies _Pharmacoeconomics_. 1999;15:423-434.
* (3) Osterberg L, Blaschke T. Adherence to medication _New England Journal of Medicine_. 2005;353:487-497.
* (4) World Health Organization. _Adherence to long-term therapies: evidence for action_. World Health Organization 2003.
* (5) Petersen ML, Porter KE, Gruber S, Wang Y, Van Der Laan MJ. Diagnosing and responding to violations in the positivity assumption _Statistical Methods in Medical Research_. 2012;21:31-54.
* (6) Kennedy EH. Nonparametric causal effects based on incremental propensity score interventions _Journal of the American Statistical Association_. 2019;114:645-656.
* (7) Robins J. A new approach to causal inference in mortality studies with a sustained exposure period--application to control of the healthy worker survivor effect _Mathematical Modelling_. 1986;7:1393-1512.
* (8) Hernan MA, Hernandez-Diaz S. Beyond the intention-to-treat in comparative effectiveness research _Clinical Trials_. 2012;9:48-55.
* (9) Murray EJ, Caniglia EC, Swanson SA, Hernandez-Diaz S, Hernan MA. Patients and investigators prefer measures of absolute risk in subgroups for pragmatic randomized trials _Journal of Clinical Epidemiology_. 2018;103:10-21.
* (10) Wanis KN, Sarvet AL, Wen L, et al. The role of grace periods in comparative effectiveness studies of different medications _arXiv preprint arXiv:2212.11398_. 2022.
* (11) Unger T, Borghi C, Charchar F, et al. 2020 International Society of Hypertension global hypertension practice guidelines _Hypertension_. 2020;75:1334-1357.
* (12) Whelton PK, Carey RM, Aronow WS, et al. 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA guideline for the prevention, detection, evaluation, and management of high blood pressure in adults: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines _Journal of the American College of Cardiology_. 2018;71:e127-e248.
* (13) Ishida T, Oh A, Hiroi S, Shimasaki Y, Nishigaki N, Tsuchihashi T. Treatment patterns and adherence to antihypertensive combination therapies in Japan using a claims database _Hypertension Research_. 2019;42:249-256.
* (14) Elliott WJ, Plauschinat CA, Skrepnek GH, Gause D. Persistence, adherence, and risk of discontinuation associated with commonly prescribed antihypertensive drug monotherapies _The Journal of the American Board of Family Medicine_. 2007;20:72-80.
* (15) Robins JM, Richardson TS, Shpitser I. An interventionist approach to mediation analysis in _Probabilistic and Causal Inference: The Works of Judea Pearl_:713-764. Association for Computing Machinery 2022.
* (16) Robins JM, Richardson TS. Alternative graphical causal models and the identification of direct effects _Causality and Psychopathology: Finding the Determinants of Disorders and their Cures_. 2010;84:103-158.
* (17) Stensrud MJ, Hernan MA, Tchetgen Tchetgen EJ, Robins JM, Didelez V, Young JG. A generalized theory of separable effects in competing event settings _Lifetime Data Analysis_. 2021;27:588-631.
* (18) Stensrud MJ, Young JG, Didelez V, Robins JM, Hernan MA. Separable effects for causal inference in the presence of competing events _Journal of the American Statistical Association_. 2022;117:175-183.
* (19) Stensrud MJ, Robins JM, Sarvet A, Tchetgen Tchetgen EJ, Young JG. Conditional separable effects _Journal of the American Statistical Association_. 2022:1-13.
* (20) Gupta SK. Intention-to-treat concept: a review _Perspectives in Clinical Research_. 2011;2:109.
* (21) Bornkamp B, Rufibach K, Lin J, et al. Principal stratum strategy: potential role in drug development _Pharmaceutical Statistics_. 2021;20:737-751.
* (22) VanderWeele TJ, Tchetgen Tchetgen EJ. Mediation analysis with time varying exposures and mediators _Journal of the Royal Statistical Society Series B: Statistical Methodology_. 2017;79:917-938.
* (23) Hassan N, Hasanah C, Foong K, et al. Identification of psychosocial factors of noncompliance in hypertensive patients _Journal of Human Hypertension_. 2006;20:23-29.
* (24) Fischer MA, Avorn J. Economic implications of evidence-based prescribing for hypertension: can better care cost less? _JAMA_. 2004;291:1850-1856.
* (25) Fretheim A, Aaserud M, Oxman AD. The potential savings of using thiazides as the first choice anti-hypertensive drug: cost-minimisation analysis _BMC Health Services Research_. 2003;3:1-9.
* (26) Park C, Wang G, Ng BP, Fang J, Durthaler JM, Ayala C. The uses and expenses of antihypertensive medications among hypertensive adults _Research in Social and Administrative Pharmacy_. 2020;16:183-189.
* (27) Johansen ME, Byrd JB. Total and out-of-pocket expenditures on antihypertensive medications in the United States, 2007-2019 _Hypertension_. 2021;78:1662-1664.
* (28) Khera R, Valero-Elizondo J, Das SR, et al. Cost-related medication nonadherence in adults with atherosclerotic cardiovascular disease in the United States, 2013 to 2017 _Circulation_. 2019;140:2067-2075.
* (29) Gregoire JP, Moisan J, Guibert R, et al. Tolerability of antihypertensive drugs in a community-based setting _Clinical Therapeutics_. 2001;23:715-726.
* (30) Kronish IM, Woodward M, Sergie Z, Ogedegbe G, Falzon L, Mann DM. Meta-analysis: impact of drug class on adherence to antihypertensives _Circulation_. 2011;123:1611-1621.
* (31) Barreras A, Gurk-Turner C. Angiotensin II receptor blockers in _Baylor University Medical Center Proceedings_;16:123-126. Taylor & Francis 2003.
* (32) Albasri A, Hattle M, Koshiaris C, et al. Association between antihypertensive treatment and adverse events: systematic review and meta-analysis _BMJ_. 2021;372.
* (33) Ejaz AA, Mohandas R. Are diuretics harmful in the management of acute kidney injury? _Current Opinion in Nephrology and Hypertension_. 2014;23:155-160.
* (34) Nigwekar SU, Waikar SS. Diuretics in acute kidney injury _Seminars in Nephrology_. 2011;31:523-534.
* (35) Legrand M, Rossignol P. Cardiovascular consequences of acute kidney injury _New England Journal of Medicine_. 2020;382:2238-2247.
* (36) Rahimi K, Bidel Z, Nazarzadeh M, et al. Pharmacological blood pressure lowering for primary and secondary prevention of cardiovascular disease across different levels of blood pressure: an individual participant-level data meta-analysis _The Lancet_. 2021;397:1625-1636.
* (37) Burnier M, Egan BM. Adherence in hypertension: a review of prevalence, risk factors, impact, and management _Circulation Research_. 2019;124:1124-1140.
* (38) Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research _Epidemiology_. 1999:37-48.
* (39) Tseng CW, Dudley RA, Chen R, Walensky RP. Medicare Part D and cost-sharing for antiretroviral therapy and preexposure prophylaxis _JAMA Network Open_. 2020;3:e202739-e202739.
* (40) Callaghan BC, Reynolds E, Banerjee M, et al. Out-of-pocket costs are on the rise for commonly prescribed neurologic medications _Neurology_. 2019;92:e2604-e2613.
* (41) Bibeau WS, Fu H, Taylor AD, Kwan AY. Impact of out-of-pocket pharmacy costs on branded medication adherence among patients with type 2 diabetes _Journal of Managed Care & Specialty Pharmacy_. 2016;22:1338-1347.
* (42) DeJong C, Masuda C, Chen R, Kazi DS, Dudley RA, Tseng CW. Out-of-pocket costs for novel guideline-directed diabetes therapies under Medicare Part D _JAMA Internal Medicine_. 2020;180:1696-1699.
**Supplemental Materials for "Separable effects for adherence"**
## S1 Equivalence of the g-formula and the inverse probability weighted expression
In this section we prove the equivalence of the g-formula and its inverse probability weighted expression. The weighted representation is
\[\sum_{k=1}^{K}\lambda_{Y,k}\prod_{j=1}^{k-1}[1-\lambda_{Y,j}]\] (S1)
where
\[\lambda_{Y,k}=\frac{\mathrm{E}[Y_{k}(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}]}{\mathrm{ E}[(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}]}\] (S2)
with weights, \(W_{Y,k}\), defined as
\[W_{Y,k}= \prod_{j=1}^{k}\frac{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L }_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{f(Y_{j}\mid\overline{Y}_ {j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}\times\] \[\prod_{j=1}^{k}\Bigg{[}\frac{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{A}\mid \overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1} )}\times\] \[\frac{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\Bigg{]},\]
which can be reexpressed in the following way,

\[W_{Y,k}=\prod_{j=1}^{k}\frac{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}\times\\ \prod_{j=1}^{k}\left[\frac{f(Z=z_{Y},L_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})\,/\,f(L_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}{f(Z=z_{A},L_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})\,/\,f(L_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\times\right.\\ \left.\frac{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\right]\\ =\prod_{j=1}^{k}\frac{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{f(Y_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}\times\\ \prod_{j=1}^{k}\frac{f(L_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{Y})}{f(L_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}.\]
First, we use (equation S2) to rewrite (equation S1) as
\[\sum_{k=1}^{K} \lambda_{Y,k}\prod_{j=1}^{k-1}[1-\lambda_{Y,j}]\] \[= \sum_{k=1}^{K}\mathrm{E}[Y_{k}(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}]\times\] \[\prod_{j=1}^{k}\frac{\mathrm{E}[(1-Y_{j-2})W_{Y,j-1}\mid Z=z_{A}]- \mathrm{E}[Y_{j-1}(1-Y_{j-2})W_{Y,j-1}\mid Z=z_{A}]}{\mathrm{E}[(1-Y_{j-1})W_ {Y,j}\mid Z=z_{A}]}\] \[= \sum_{k=1}^{K}\mathrm{E}[Y_{k}(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}],\]
where any variables with \(k\leq 0\) are set to an arbitrary constant by convention, and in the second step we use that
\[\mathrm{E}[(1-Y_{j-1})W_{Y,j}\mid Z=z_{A}]=\mathrm{E}[(1-Y_{j-2})W_{Y,j-1}\mid Z=z _{A}]-\mathrm{E}[Y_{j-1}(1-Y_{j-2})W_{Y,j-1}\mid Z=z_{A}]\]
because, using the definition of an expectation,
\[\mathrm{E}[(1-Y_{j-1})W_{Y,j}\mid Z=z_{A}]\] \[=\sum_{\overline{l}_{j},\overline{a}_{j},y_{j}}W_{Y,j}f(y_{j}, \overline{l}_{j},\overline{a}_{j},\overline{Y}_{j-1}=0\mid Z=z_{A})\] \[=\sum_{\overline{l}_{j-1},\overline{a}_{j-1}}\prod_{s=1}^{j-1} \left[\frac{f(y_{s}\mid\overline{Y}_{s-1}=0,\overline{l}_{Y,s},\overline{l}_{A,s},\overline{a}_{s},Z=z_{Y})}{f(y_{s}\mid\overline{Y}_{s-1}=0,\overline{l}_{ Y,s},\overline{l}_{A,s},\overline{a}_{s},Z=z_{A})}\times\right.\] \[\left.\frac{f(l_{Y,s}\mid\overline{Y}_{s-1}=0,\overline{l}_{Y,s-1 },\overline{l}_{A,s},\overline{a}_{s-1},Z=z_{Y})}{f(l_{Y,s}\mid\overline{Y}_{ s-1}=0,\overline{l}_{Y,s-1},\overline{l}_{A,s},\overline{a}_{s-1},Z=z_{A})}\right]\times\] \[\sum_{l_{j},a_{j},y_{j}}\left[\frac{f(y_{j}\mid\overline{Y}_{j-1 }=0,\overline{l}_{Y,j},\overline{l}_{A,j},\overline{a}_{j},Z=z_{Y})}{f(y_{j} \mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j},\overline{l}_{A,j},\overline{a}_{j },Z=z_{A})}\times\right.\] \[\left.\frac{f(l_{Y,j}\mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1 },\overline{l}_{A,j},\overline{a}_{j-1},Z=z_{Y})}{f(l_{Y,j}\mid\overline{Y}_{ j-1}=0,\overline{l}_{Y,j-1},\overline{l}_{A,j},\overline{a}_{j-1},Z=z_{A})}\right] f(y_{j},\overline{l}_{j},\overline{a}_{j},\overline{Y}_{j-1}=0\mid Z=z_{A})\] \[=\sum_{\overline{l}_{j-1},\overline{a}_{j-1}}W_{Y,j-1}f(\overline {l}_{j-1},\overline{a}_{j-1},\overline{Y}_{j-1}=0\mid Z=z_{A})\] \[=\mathrm{E}[(1-Y_{j-1})W_{Y,j-1}\mid Z=z_{A}]\]
and, using linearity of expectations and that \(Y_{j-1}=0\) implies \(\overline{Y}_{j-2}=0\),
\[\mathrm{E}[(1-Y_{j-1})W_{Y,j-1}\mid Z=z_{A}] =\mathrm{E}[(1-Y_{j-1})(1-Y_{j-2})W_{Y,j-1}\mid Z=z_{A}]\] \[=\mathrm{E}[(1-Y_{j-2})W_{Y,j-1}-Y_{j-1}(1-Y_{j-2})W_{Y,j-1}\mid Z =z_{A}]\] \[=\mathrm{E}[(1-Y_{j-2})W_{Y,j-1}\mid Z=z_{A}]-\mathrm{E}[Y_{j-1} (1-Y_{j-2})W_{Y,j-1}\mid Z=z_{A}].\]
With the reexpressed weighted representation, we use the definition of an expectation to write
\[\sum_{k=1}^{K}\mathrm{E}[Y_{k}(1-Y_{k-1})W_{Y,k}\mid Z=z_{A}]\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k},\overline{ y}_{k}}y_{k}(1-y_{k-1})W_{Y,k}f(\overline{l}_{k},\overline{a}_{k},\overline{y}_{k} \mid Z=z_{A})\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k},y_{k}}y_{k }W_{Y,k}f(y_{k},\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k}\mid Z=z_ {A})\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k},y_{k}}y_{k }W_{Y,k}f(y_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k},Z=z_ {A})f(a_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k-1},Z=z_{A })\times\] \[\qquad\qquad f(l_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{k-1}, \overline{a}_{k-1},Z=z_{A})f(\overline{Y}_{k-1}=0,\overline{l}_{k-1},\overline {a}_{k-1}\mid Z=z_{A})\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k},y_{k}}y_{ k}W_{Y,k}f(y_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k},Z=z_ {A})\times\] \[\qquad\Bigg{[}\prod_{j=1}^{k}\mathrm{Pr}(Y_{j-1}=0\mid\overline{Y }_{j-2}=0,\overline{l}_{j-1},\overline{a}_{j-1},Z=z_{A})\times\] \[\qquad\qquad f(l_{j}\mid\overline{Y}_{j-1}=0,\overline{l}_{j-1}, \overline{a}_{j-1},Z=z_{A})f(a_{j}\mid\overline{Y}_{j-1}=0,\overline{l}_{j}, \overline{a}_{j-1},Z=z_{A})\Bigg{]}\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k},y_{k}}y_{ k}W_{Y,k}f(y_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k},\overline{l}_{A,k}, \overline{a}_{k},Z=z_{A})\times\] \[\Bigg{[}\prod_{j=1}^{k}\mathrm{Pr}(Y_{j-1}=0\mid\overline{Y}_{j- 2}=0,\overline{l}_{Y,j-1},\overline{l}_{A,j-1},\overline{a}_{j-1},Z=z_{A})f(l_ {Y,j}\mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1},\overline{l}_{A,j},\overline {a}_{j-1},Z=z_{A})\times\] \[f(l_{A,j}\mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1},\overline{ l}_{A,j-1},\overline{a}_{j-1},Z=z_{A})f(a_{j}\mid\overline{Y}_{j-1}=0, \overline{l}_{Y,j},\overline{l}_{A,j},\overline{a}_{j-1},Z=z_{A})\Bigg{]},\]
where we use the definition of conditional probability and that \(L_{j}=(L_{Y,j},L_{A,j})\) in the last
two steps. Then, plugging in the expression for \(W_{Y,k}\), we have
\[\sum_{k=1}^{K}\sum_{\tilde{l}_{k},\overline{a}_{k}}\Pr(Y_{k}=1\mid \overline{Y}_{k-1}=0,\overline{l}_{Y,k},\overline{l}_{A,k},\overline{a}_{k},Z=z _{Y})\times\] \[\Bigg{[}\prod_{j=1}^{k}\Pr(Y_{j-1}=0\mid\overline{Y}_{j-2}=0, \overline{l}_{Y,j-1},\overline{l}_{A,j-1},\overline{a}_{j-1},Z=z_{Y})f(l_{Y,j} \mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1},\overline{l}_{A,j},\overline{a}_ {j-1},Z=z_{Y})\times\] \[f(l_{A,j}\mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1},\overline {l}_{A,j-1},\overline{a}_{j-1},Z=z_{A})f(a_{j}\mid\overline{Y}_{j-1}=0, \overline{l}_{Y,j},\overline{l}_{A,j},\overline{a}_{j-1},Z=z_{A})\Bigg{]},\]
which is the g-formula for the separable effect of \(Z_{Y}\) under \(Z_{A}=z_{A}\).
## S2 An alternative inverse probability weighted expression
In Supplementary Materials Section S1, we considered one inverse probability weighted representation of the g-formula for the separable effect of \(Z_{Y}\) under \(Z_{A}=z_{A}\). In this section, we consider an alternative expression characterized by defining
\[\lambda_{A,k}=\frac{\mathrm{E}[Y_{k}(1-Y_{k-1})W_{A,k}\mid Z=z_{Y}]}{\mathrm{ E}[(1-Y_{k-1})W_{A,k}\mid Z=z_{Y}]}\]
and weights, \(W_{A,k}\), defined as
\[W_{A,k}=\prod_{j=1}^{k}\Bigg{[}\frac{f(A_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{ Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{f(A_{j}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{Y})}\times\]
\[\frac{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{ A,j},\overline{A}_{j-1})}{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j},\overline{A}_{j-1})}\times\]
\[\frac{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{ A,j-1},\overline{A}_{j-1})}{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j-1},\overline{A}_{j-1})}\Bigg{]}\]
which can be reexpressed as
\[W_{A,k}=\prod_{j=1}^{k}\Bigg{[}\frac{f(A_{j}\mid\overline{Y}_{j-1 }=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{f(A_{ j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1 },Z=z_{Y})}\times\] \[\frac{\frac{f(Z=z_{A},L_{A,j}|\overline{Y}_{j-1}=0,\overline{L}_{ Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}{f(L_{A,j}|\overline{Y}_{j-1}=0, \overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}}{\frac{f(Z=z_{Y },L_{A,j}|\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j-1}, \overline{A}_{j-1})}{f(L_{A,j}|\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j-1},\overline{A}_{j-1})}}\times\] \[\frac{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j-1},\overline{A}_{j-1})}{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}\Bigg{]}\] \[=\prod_{j=1}^{k}\Bigg{[}\frac{f(A_{j}\mid\overline{Y}_{j-1}=0, \overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{f(A_{j}\mid \overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1}, Z=z_{Y})}\times\] \[\frac{f(L_{A,j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j-1},\overline{A}_{j-1},Z=z_{A})}{f(L_{A,j}\mid\overline{Y}_{ j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1},Z=z_{Y})} \Bigg{]}.\]
### S2.1 Equivalence of the g-formula and the alternative inverse probability weighted expression
\[\sum_{k=1}^{K}\mathrm{E}[Y_{k}(1-Y_{k-1})W_{A,k}\mid Z=z_{Y}]\] \[=\sum_{k=1}^{K}\sum_{l_{k},\overline{a}_{k},\overline{y}_{k}}y_{k }(1-y_{k-1})W_{A,k}f(\overline{l}_{k},\overline{a}_{k},\overline{y}_{k}\mid Z=z _{Y})\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k}}W_{A,k}f(Y _{k}=1,\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k}\mid Z=z_{Y})\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k}}W_{A,k} \Pr(Y_{k}=1\mid\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k},Z=z_{Y}) f(a_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k-1},Z=z_{Y})\times\] \[f(l_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{k-1},\overline{a} _{k-1},Z=z_{Y})f(\overline{Y}_{k-1}=0,\overline{l}_{k-1},\overline{a}_{k-1} \mid Z=z_{Y})\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k}}W_{A,k} \Pr(Y_{k}=1\mid\overline{Y}_{k-1}=0,\overline{l}_{k},\overline{a}_{k},Z=z_{Y})\times\] \[\qquad\Bigg{[}\prod_{j=1}^{k}\Pr(Y_{j-1}=0\mid\overline{Y}_{j-2}=0,\overline{l}_{j-1},\overline{a}_{j-1},Z=z_{Y})\times\] \[\qquad\qquad f(l_{j}\mid\overline{Y}_{j-1}=0,\overline{l}_{j-1}, \overline{a}_{j-1},Z=z_{Y})f(a_{j}\mid\overline{Y}_{j-1}=0,\overline{l}_{j}, \overline{a}_{j-1},Z=z_{Y})\Bigg{]}\] \[=\sum_{k=1}^{K}\sum_{\overline{l}_{k},\overline{a}_{k}}W_{A,k} \Pr(Y_{k}=1\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k},\overline{l}_{A,k}, \overline{a}_{k},Z=z_{Y})\times\] \[\Bigg{[}\prod_{j=1}^{k}\Pr(Y_{j-1}=0\mid\overline{Y}_{j-2}=0, \overline{l}_{Y,j-1},\overline{l}_{A,j-1},\overline{a}_{j-1},Z=z_{Y})f(l_{Y,j} \mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1},\overline{l}_{A,j},\overline{a}_ {j-1},Z=z_{Y})\times\] \[f(l_{A,j}\mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j-1},\overline{ l}_{A,j-1},\overline{a}_{j-1},Z=z_{Y})f(a_{j}\mid\overline{Y}_{j-1}=0,\overline{l}_{Y,j}, \overline{l}_{A,j},\overline{a}_{j-1},Z=z_{Y})\Bigg{]},\]
which is equivalent to the g-formula after the expression for \(W_{A,k}\) is plugged in.
### S2.2 Inverse probability weighted estimation
1. Using the entire data set, fit a pooled (over time) parametric regression model for
   * \(f(A_{k}=a_{k}\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k},\overline{l}_{A,k},\overline{a}_{k-1},z)\),
   * \(\Pr(Z=z\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k-1},\overline{l}_{A,k},\overline{a}_{k-1})\), and
   * \(\Pr(Z=z\mid\overline{Y}_{k-1}=0,\overline{l}_{Y,k-1},\overline{l}_{A,k-1},\overline{a}_{k-1})\).

   For example, we might assume pooled logistic regression models.

2. For each row in the data set for each individual with \(Z=z_{Y}\), at each time interval \(k\) in \(1,\ldots,K\):
   1. Obtain predicted values
      * \(\widehat{f}(A_{k}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k},\overline{L}_{A,k},\overline{A}_{k-1},Z=z_{A})\),
      * \(\widehat{f}(A_{k}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k},\overline{L}_{A,k},\overline{A}_{k-1},Z=z_{Y})\),
      * \(\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k-1},\overline{L}_{A,k},\overline{A}_{k-1})\),
      * \(\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k-1},\overline{L}_{A,k},\overline{A}_{k-1})\),
      * \(\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k-1},\overline{L}_{A,k-1},\overline{A}_{k-1})\), and
      * \(\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{k-1}=0,\overline{L}_{Y,k-1},\overline{L}_{A,k-1},\overline{A}_{k-1})\).
   2. Evaluate
      i. \[\widehat{W}_{A,k}=\prod_{j=1}^{k}\left[\frac{\widehat{f}(A_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{\widehat{f}(A_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{Y})}\times\frac{\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}{\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\times\frac{\widehat{\Pr}(Z=z_{Y}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}{\widehat{\Pr}(Z=z_{A}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}\right]\]
      or, when \(L_{k}=(L_{Y,k},\emptyset),k=1,\ldots,K\) (e.g., Figure 4), the simplified weight expression
      ii. \[\widehat{W}_{A,k}=\prod_{j=1}^{k}\frac{\widehat{f}(A_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{A})}{\widehat{f}(A_{j}\mid\overline{Y}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{Y})}.\]
3. Compute the risk of failure by the end of interval \(K\) as \(\sum_{k=1}^{K}\widehat{\lambda}_{A,k}\prod_{j=1}^{k-1}[1-\widehat{\lambda}_{A,j}]\) with the estimated weights, \(\widehat{W}_{A,k}\), in \(\widehat{\lambda}_{A,k}=\frac{\widehat{\mathrm{E}}[Y_{k}(1-Y_{k-1})\widehat{W}_{A,k}\mid Z=z_{Y}]}{\widehat{\mathrm{E}}[(1-Y_{k-1})\widehat{W}_{A,k}\mid Z=z_{Y}]}\).

A brief companion sketch of the simplified weight computation follows.
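This sketch computes the simplified \(\widehat{W}_{A,k}\) of step 2(b)(ii) as a cumulative ratio of adherence densities, under the same illustrative long-format layout as the Section 8 sketch; the columns `A_lag` (prior adherence) and `L_Y` are assumptions, not the paper's code.

```python
# Companion sketch: simplified adherence-density weights W_{A,k}.
import numpy as np
import statsmodels.formula.api as smf

def add_adherence_weights(df, z_A, z_Y):
    """Return the Z = z_Y rows with the cumulative weight column W_A."""
    # Pooled (over time) adherence model, fit on the entire data set.
    adherence_model = smf.logit("A ~ k + Z + A_lag + L_Y", data=df).fit(disp=0)
    arm = df[df["Z"] == z_Y].sort_values(["id", "k"]).copy()

    def dens(z):
        p = adherence_model.predict(arm.assign(Z=z))
        return np.where(arm["A"] == 1, p, 1 - p)

    arm["ratio"] = dens(z_A) / dens(z_Y)
    arm["W_A"] = arm.groupby("id")["ratio"].cumprod()
    return arm
```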
## S3 Example using simulated data
### Design
To illustrate an implementation of the separable effects for adherence, we simulated data from a hypothetical randomized trial in which investigators compare two different medication treatment strategies. Each individual is assigned to receive an ACEI (\(Z=0\)) or a thiazide diuretic (\(Z=1\)) for the duration of follow-up. Individuals are followed until death (the outcome) or the administrative end of the study (24 months following randomization).
We generated data with \(500,000\) individuals assigned to each treatment arm. We considered \(L_{A,k}\) as an indicator of acute kidney injury (AKI), and \(L_{Y,k}\) as an indicator of abnormal blood pressure in interval \(k\). The parameters used to generate the data were inspired by literature on the impact of cost, AKI, and blood pressure on adherence to antihypertensive agents (1-4). Covariates were generated among those surviving to interval \(k\) according to the following models:
\[L_{A,k} \sim\text{Bernoulli}\left(0.05+0.035*A_{k-1}-0.035*Z_{A}*A_{k-1} \right),\] \[L_{Y,k} \sim\text{Bernoulli}\left(0.95-0.60*A_{k-1}-0.15*Z_{Y}*A_{k-1} \right).\]
Adherence was generated among those surviving to interval \(k\) using three different models, which are comparable to the causal models depicted in main text Figures 1, 3, and 5, respectively:
_Adherence model 1 (Figure S1)._
\[A_{k}\sim\text{Bernoulli}\left(0.6+0.2*Z_{A}*A_{k-1}\right)\]
_Adherence model 2 (Figure S2)._
\[A_{k}\sim\text{Bernoulli}\left(0.6+0.2*Z_{A}*A_{k-1}-0.5*L_{A,k}\right)\]
_Adherence model 3 (Figure S3)._
\[A_{k}\sim\text{Bernoulli}\left(0.6+0.2*Z_{A}*A_{k-1}-0.5*L_{A,k}+0.2*L_{Y,k}\right)\]
The outcome was generated among those surviving to interval \(k\) according to the following model:
\[Y_{k}\sim\text{Bernoulli}\left(0.035+0.01*L_{Y,k}-0.03*A_{k}+0.01*L_{A,k}\right)\]
In words, the data was generated with the risk of AKI and poorly controlled blood pressure being greater for individuals adherent to ACEIs compared to those adherent to thiazides.
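For reference, a minimal Python sketch of this data-generating process under adherence model 3 is given below; the baseline convention \(A_{0}=0\) and the function name are our assumptions, and the clipping of probabilities to \([0,1]\) is purely a numerical safeguard.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_individual(z_a: int, z_y: int, K: int = 24):
    """Sketch of the data-generating process (adherence model 3).

    z_a, z_y are the components of the assigned medication
    (ACEI: z_a = z_y = 0; thiazide: z_a = z_y = 1).
    Returns the interval of death, or None if alive at end of follow-up.
    """
    a_prev = 0  # A_0; the baseline adherence convention is our assumption
    for k in range(1, K + 1):
        l_a = rng.binomial(1, 0.05 + 0.035 * a_prev - 0.035 * z_a * a_prev)
        l_y = rng.binomial(1, 0.95 - 0.60 * a_prev - 0.15 * z_y * a_prev)
        a = rng.binomial(1, np.clip(
            0.6 + 0.2 * z_a * a_prev - 0.5 * l_a + 0.2 * l_y, 0.0, 1.0))
        y = rng.binomial(1, 0.035 + 0.01 * l_y - 0.03 * a + 0.01 * l_a)
        if y:
            return k
        a_prev = a
    return None
```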
Figure S2: Causal DAG representing the causal structure of adherence model 2 at each interval \(k=1,2,3\) for survivors to interval \(k\).
Adherence was generated using three different models: as a function of the \(Z_{A}\) component of the initiated medication and past adherence (analogous to the main text example using medication cost); the \(Z_{A}\) component of the initiated medication, past adherence, and AKI; and the \(Z_{A}\) component of the initiated medication, past adherence, AKI, and poorly controlled blood pressure. The outcome risk was determined by adherence, poorly controlled blood pressure, and AKI incidence. Unlike Figures 1, 3, and 5, the effect of the initiated medication on the outcome risk is entirely mediated through adherence and covariates. In other words, the data was generated as though there was no directed arrow from \(Z_{Y}\) to \(Y_{k}\) when \(L_{Y,k}\) is included in the graphs (see Figures S1, S2, and S3).
The code to reproduce the simulated data example is provided at: [https://github.com/KerollosWanis/separable_effects_for_adherence](https://github.com/KerollosWanis/separable_effects_for_adherence).
### Simulation example results
In Figures S4, S5, and S6 we present the distribution of the outcome, of adherence, and of the covariates (AKI and abnormal blood pressure) for the total ('intention-to-treat') effect comparison and the separable effect comparison. The latter compares outcomes under initiation of a modified version of ACEI to initiation of thiazide diuretics, with the adherence causing component of ACEI set to the value of the adherence causing component of thiazides (i.e. \(\Pr[Y_{k}^{z_{A}=1,z_{Y}=0}=1]\) vs. \(\Pr[Y_{k}^{z_{A}=1,z_{Y}=1}=1]\)). The separable effect was computed using the algorithm detailed in Section 8.
To give intuition about how the interpretation of the separable effect varies for different data generating mechanisms and the causal graphs that represent them, we elaborate on the findings under each adherence model.
_Adherence model 1: a causal structure where the separable effect balances adherence_

In data generated using the first adherence model, only the \(Z_{A}\) component of the initiated medication and past adherence exert effects on adherence. Figure S4 shows a difference in adherence under ACEI initiation versus thiazide initiation for the total effect comparison. As expected, consistent with the adherence model, adherence to thiazides is higher. Further, the probability of abnormal blood pressure is lower for thiazides, both because adherence to thiazides is higher and because adherence to thiazides is more effective than adherence to ACEIs. The probability of AKI is also lower for thiazides for identical reasons.
Under the separable effect comparison, the probability of adherence is identical because \(Z_{A}\) is set to the value it takes for thiazides. Further, the probability of AKI is identical because AKI depends only on the \(Z_{A}\) component of the initiated medication and on past adherence. Lastly, the probability of abnormal blood pressure is more similar under the separable effect comparison than under the total effect comparison because the distribution of adherence is balanced. However, the probability of abnormal blood pressure is still lower for thiazides because the \(Z_{Y}\) component exerts an effect on abnormal blood pressure causing thiazide adherence to be more effective than adherence to ACEIs.
_Adherence model 2: another causal structure where the separable effect balances adherence_
In data generated using the second adherence model, the \(Z_{A}\) component of the initiated medication, past adherence, and AKI exert effects on adherence. Under this data generating mechanism, the difference in adherence is even greater under ACEI initiation versus thiazide initiation because having an AKI reduces the probability of adherence and AKI is more likely for individuals adherent to an ACEI. But even though AKI exerts an effect on adherence, the separable effect still balances adherence because only \(Z_{A}\), not \(Z_{Y}\), affects the risk of AKI and, for both initiated medications, \(Z_{A}\) is set to the value it takes for thiazides. As before, the probability of AKI is also identical for the separable effect because AKI depends only on the \(Z_{A}\) component of the initiated medication and on past adherence.
Figure S4: Cumulative incidence of death and probabilities of adherence, AKI, and abnormal blood pressure over 24 months of follow-up for a comparison of ACEI versus thiazide diuretic under the total (‘intention-to-treat’) effect in the left panels and under the separable effect (comparing a modified version of ACEI to thiazide diuretic) in the right panels. Data were generated using a simulated example with adherence depending only on the \(Z_{A}\) component of the initiated medication and past adherence.
Figure S5: Cumulative incidence of death and probabilities of adherence, AKI, and abnormal blood pressure over 24 months of follow-up for a comparison of ACEI versus thiazide diuretic under the total (‘intention-to-treat’) effect in the left panels and under the separable effect (comparing a modified version of ACEI to thiazide diuretic) in the right panels. Data were generated using a simulated example with adherence depending only on the \(Z_{A}\) component of the initiated medication, past adherence, and AKI.
_Adherence model 3: a causal structure where the separable effect does not balance adherence_
In data generated using the third adherence model, the \(Z_{A}\) component of the initiated medication, past adherence, AKI, and poorly controlled blood pressure exert effects on adherence. Under this data generating mechanism, adherence to thiazides is still greater than adherence to ACEIs; but the difference is smaller than in the prior data generating mechanisms because abnormal blood pressure is more likely under ACEI initiation and adherence is higher for those with abnormal blood pressure. Because \(Z_{Y}\) exerts an effect on abnormal blood pressure, the separable effect does not balance adherence. Figure S6 shows a small difference in adherence under the separable effect, which is a consequence of the effect that the \(Z_{Y}\) component has on adherence through its effect on abnormal blood pressure.
## S4 An estimand that prohibits 'crossover'
In the main text, we argued that a necessary, but perhaps not sufficient, condition for interpretation of the separable effect of \(Z_{Y}\) under \(Z_{A}=z_{A}\) as the biological effectiveness of the medications under comparison is that individuals in the study be prohibited from taking a study medication except the one they were assigned at baseline. In some studies, assuming that treatment 'crossovers' do not occur might be reasonable. For example, in some randomized trials, individuals do not have access to study medications other than the one they were assigned to at baseline. In observational studies or pragmatic randomized trials where 'crossover' is permitted, this assumption will not be plausible. In this section, we describe how the inverse probability weighted representation used to motivate the estimation algorithm in the main text can be extended to identify the separable effect of \(Z_{Y}\) under \(Z_{A}=z_{A}\) with 'crossover' eliminated.
Let \(C_{k}\) be an indicator for whether an individual takes a study medication in interval \(k\) that they were not assigned to at baseline (e.g. \(C_{k}=1\) if an individual assigned to a thiazide diuretic at baseline takes an angiotensin-converting enzyme inhibitor during interval \(k\)).
Figure S6: Cumulative incidence of death and probabilities of adherence, AKI, and abnormal blood pressure over 24 months of follow-up for a comparison of ACEI versus thiazide diuretic under the total (‘intention-to-treat’) effect in the left panels and under the separable effect (comparing a modified version of ACEI to thiazide diuretic) in the right panels. Data were generated using a simulated example with adherence depending only on the \(Z_{A}\) component of the initiated medication, past adherence, AKI, and poorly controlled blood pressure.
An intervention that eliminates 'crossover' is analogous to an intervention that eliminates censoring due to loss to follow-up, and the identification strategy for the latter is given by Stensrud et al (5). The results in this section on an estimand that prohibits 'crossover' can easily be modified to consider estimands that prohibit censoring due to loss to follow-up or that abolish competing events by re-defining the indicator \(C_{k}\) to be an indicator of loss to follow-up or an indicator of the competing event, respectively.
Estimation using inverse probability weighting is motivated by the following weighted representations of the g-formula for the expected counterfactual mean \(\mathrm{E}[Y_{K}^{z_{A},z_{Y},\overline{c}_{K}=0}]\):
\[\sum_{k=1}^{K}\lambda_{A,k}^{C}\prod_{j=1}^{k-1}[1-\lambda_{A,j}^{C}]\text{ or }\sum_{k=1}^{K}\lambda_{Y,k}^{C}\prod_{j=1}^{k-1}[1-\lambda_{Y,j}^{C}]\]
where
\[\lambda_{A,k}^{C}=\frac{\mathrm{E}[Y_{k}(1-Y_{k-1})W_{A,k}^{C}W_{C,k}^{A}\mid Z =z_{Y}]}{\mathrm{E}[(1-Y_{k-1})W_{A,k}^{C}W_{C,k}^{A}\mid Z=z_{Y}]}\text{ and }\lambda_{Y,k}^{C}=\frac{\mathrm{E}[Y_{k}(1-Y_{k-1})W_{Y,k}^{C}W_{C,k}^{Y} \mid Z=z_{A}]}{\mathrm{E}[(1-Y_{k-1})W_{Y,k}^{C}W_{C,k}^{Y}\mid Z=z_{A}]}\]
with weights
\[W_{A,k}^{C}=\prod_{j=1}^{k}\Bigg{[}\frac{f(A_{j}\mid\overline{Y} _{j-1}=\overline{C}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A }_{j-1},Z=z_{A})}{f(A_{j}\mid\overline{Y}_{j-1}=\overline{C}_{j-1}=0,\overline {L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1},Z=z_{Y})}\times\] \[\frac{\mathrm{Pr}(Z=z_{A}\mid\overline{Y}_{j-1}=\overline{C}_{j-1 }=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}{\mathrm{Pr}(Z =z_{Y}\mid\overline{Y}_{j-1}=\overline{C}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j},\overline{A}_{j-1})}\times\] \[\frac{\mathrm{Pr}(Z=z_{Y}\mid\overline{Y}_{j-1}=\overline{C}_{j-1 }=0,\overline{L}_{Y,j-1},\overline{L}_{A,j-1},\overline{A}_{j-1})}{\mathrm{Pr} (Z=z_{A}\mid\overline{Y}_{j-1}=\overline{C}_{j-1}=0,\overline{L}_{Y,j-1}, \overline{L}_{A,j-1},\overline{A}_{j-1})}\Bigg{]},\]
\[W_{Y,k}^{C}= \prod_{j=1}^{k}\frac{f(Y_{j}\mid\overline{Y}_{j-1}=\overline{C}_{j- 1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{Y})}{f(Y_{j} \mid\overline{Y}_{j-1}=\overline{C}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z=z_{A})}\times\] \[\prod_{j=1}^{k}\Bigg{[}\frac{\Pr(Z=z_{Y}\mid\overline{Y}_{j-1}= \overline{C}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j-1} )}{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=\overline{C}_{j-1}=0,\overline{L}_{Y,j}, \overline{L}_{A,j},\overline{A}_{j-1})}\times\] \[\frac{\Pr(Z=z_{A}\mid\overline{Y}_{j-1}=\overline{C}_{j-1}=0, \overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}{\Pr(Z=z_{Y}\mid \overline{Y}_{j-1}=\overline{C}_{j-1}=0,\overline{L}_{Y,j-1},\overline{L}_{A,j},\overline{A}_{j-1})}\Bigg{]},\]
\[W_{C,k}^{A}= \prod_{j=1}^{k}\frac{1(C_{j}=0)}{f(C_{j}\mid\overline{Y}_{j-1}= \overline{C}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z =z_{Y})},\]
and
\[W_{C,k}^{Y}= \prod_{j=1}^{k}\frac{1(C_{j}=0)}{f(C_{j}\mid\overline{Y}_{j-1}= \overline{C}_{j-1}=0,\overline{L}_{Y,j},\overline{L}_{A,j},\overline{A}_{j},Z =z_{A})}.\]
Estimates can be obtained using algorithms comparable to those given in the main text and supplement.
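As an illustration, a hedged NumPy sketch of computing the 'crossover' weights \(W_{C,k}^{A}\) from fitted probabilities is given below; the array layout is an assumption, and the fitted probabilities would come from a pooled model for \(C_{k}\), analogous to the algorithms above.

```python
import numpy as np

def crossover_weights(c: np.ndarray, p_c0: np.ndarray) -> np.ndarray:
    """Sketch of W^A_{C,k} = prod_{j<=k} 1(C_j = 0) / f(C_j | past).

    c:    (n, K) array of crossover indicators C_k,
    p_c0: (n, K) array of fitted probabilities f(C_k = 0 | past, Z = z_Y).
    Individuals are zero-weighted from the first interval with C_j = 1.
    """
    numer = np.cumprod(c == 0, axis=1)   # indicator of no crossover through k
    denom = np.cumprod(p_c0, axis=1)     # cumulative product of fitted probs
    return numer / denom
```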
|
2303.00004 | Parameter Optimization of LLC-Converter with multiple operation points
using Reinforcement Learning | The optimization of electrical circuits is a difficult and time-consuming
process performed by experts, but also increasingly by sophisticated
algorithms. In this paper, a reinforcement learning (RL) approach is adapted to
optimize a LLC converter at multiple operation points corresponding to
different output powers at high converter efficiency at different switching
frequencies. During a training period, the RL agent learns a problem specific
optimization policy enabling optimizations for any objective and boundary
condition within a pre-defined range. The results show, that the trained RL
agent is able to solve new optimization problems based on LLC converter
simulations using Fundamental Harmonic Approximation (FHA) within 50 tuning
steps for two operation points with power efficiencies greater than 90%.
Therefore, this AI technique provides the potential to augment expert-driven
design processes with data-driven strategy extraction in the field of power
electronics and beyond. | Georg Kruse, Dominik Happel, Stefan Ditze, Stefan Ehrlich, Andreas Rosskopf | 2023-02-28T14:23:09Z | http://arxiv.org/abs/2303.00004v1 | # Parameter Optimization of LLC-Converter with multiple operation points using Reinforcement Learning
###### Abstract
The optimization of electrical circuits is a difficult and time-consuming process performed by experts, but also increasingly by sophisticated algorithms. In this paper, a reinforcement learning (RL) approach is adapted to optimize a LLC converter at multiple operation points corresponding to different output powers at high converter efficiency at different switching frequencies. During a training period, the RL agent learns a problem specific optimization policy enabling optimizations for any objective and boundary condition within a pre-defined range. The results show, that the trained RL agent is able to solve new optimization problems based on LLC converter simulations using Fundamental Harmonic Approximation (FHA) within 50 tuning steps for two operation points with power efficiencies greater than 90%. Therefore, this AI technique provides the potential to augment expert-driven design processes with data-driven strategy extraction in the field of power electronics and beyond.
Power Electronics, LLC Converter, Reinforcement Learning, PPO
## I Introduction
During the last decade, reinforcement learning (RL) has outperformed humans in strategy games like Chess, Go [1] or Starcraft [2] and has also demonstrated first impressive results in engineering and related fields. In [3] new protein structures were designed with this method, while in [4] and [5] the positioning of plasma in a magnetic field and the component routing on a PCB were optimized. In all cases, this special machine learning approach learns the policy of a so-called RL agent, which solves engineering problems based on several thousands of iterations and interactions in a simulation environment.
In this paper we first demonstrate the potential of a Proximal Policy Optimization (PPO) [10] RL agent for the optimization of an electric circuit at multiple operation points. This approach outperforms current multi-objective optimization methods using genetic algorithms in terms of the number of evaluations [7] and state-of-the-art RL agents in the domain of power electronics in the amount and range of the input parameters as well as objectives [8].
The described use case targets resonant converters using LLC resonant circuits, which are important in the design of galvanically isolated switch-mode power supplies and inductive power transfer systems in industrial, automotive, and avionic applications. The LLC resonant converter depicted in Fig. 1 uses sinusoidal oscillations between the inductances \(L_{r}\), \(L_{m}\) and the resonant capacitance \(C_{r}\), offering the advantages of zero-voltage switching in the applied semiconductors, size reduction of passive components by selecting a high switching frequency, and advantageous utilization of parasitic elements (stray inductances).
## II Problem Statement
LLC circuit simulations using accurate models of the semiconductor devices are always very CPU-intensive. To provide engineers with a first intuition of a good starting point for a detailed design process, we will use a fast, static approximation of the electrical behavior of the LLC converter based on analytic equations using Fundamental Harmonic Approximation (FHA) [9].
This approximation algorithm takes the input voltage \(V_{in}\), the equivalent load resistance \(R_{L}\), the switching frequency \(f_{i}\), the inductances \(L_{r}\) and \(L_{m}\), the resonant capacitance \(C_{r}\) and the coupling factor \(k\) as input and computes the output power and the corresponding efficiency at the load resistance \(R_{L}\). In applications such as electric vehicle charging, the system needs to work efficiently at multiple operation points corresponding to a set of frequencies for one fixed hardware configuration, which generates losses at specific power transfer rates. The objective function of this optimization problem can be written generally as
\[f(L_{r},L_{m},C_{r},k,f_{i})=min\frac{1}{n}\sum_{i=1}^{n}(1-e_{i})+\triangle P _{i} \tag{1}\]
Fig. 1: Topology of the LLC resonant converter
where \(e_{i}\) denotes the circuit's efficiency and \(\triangle P_{i}\) describes the deviation between the real output power \(p_{r_{i}}\) and the target output power \(p_{t_{i}}\) for all \(i\) out of \(n\) operation points.
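For illustration, a hedged Python sketch of evaluating this objective is shown below; `fha_simulate` is a hypothetical stand-in for the FHA routine described above, and the normalization of \(\triangle P_{i}\) by the target power is our assumption, since the paper does not spell out that definition.

```python
import numpy as np

def objective(params, operating_points, fha_simulate):
    """Sketch of Eq. (1): mean of (1 - efficiency) + power deviation
    over all operation points.

    params: dict with 'Lr', 'Lm', 'Cr', 'k' (fixed hardware).
    operating_points: list of (f_i, p_target_i) pairs.
    fha_simulate: hypothetical FHA routine returning
                  (output_power, efficiency) for one frequency.
    """
    costs = []
    for f_i, p_target in operating_points:
        p_out, eff = fha_simulate(params, f_i)
        # normalized power deviation; the exact Delta-P_i definition
        # is not given in the paper, so this normalization is assumed
        delta_p = abs(p_out - p_target) / p_target
        costs.append((1.0 - eff) + delta_p)
    return float(np.mean(costs))
```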
## III Reinforcement Learning
In RL an agent interacts with an environment (see Fig. 2). At every step of interaction, the agent receives a state from the environment and takes an action depending on its policy. This policy is optimized through a reward signal which the agent receives after a predefined number of steps in the environment (also referred to as an episode). The goal of the agent is to converge towards a policy which maximizes the reward signal at the end of an episode in the given environment.
### _RL Agent_
In the field of RL, the Proximal Policy Optimization (PPO) algorithm belongs to the class of policy gradient model-free reinforcement learning agents using a special gradient clipping approach for the stabilization of the training process [10]. Because of its training robustness and its capability of using continuous actions, we used PPO for our experiments. The implementation of the algorithm is taken from the RLlib Ray framework [6].
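For readers unfamiliar with PPO, the clipped surrogate objective of [10] can be written in a few lines of NumPy; this is an illustrative sketch of the objective only, not the RLlib implementation used in the experiments.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO [10] for a batch of samples.

    ratio: pi_theta(a|s) / pi_theta_old(a|s) for each sample,
    advantage: estimated advantages; eps is the clipping parameter.
    The agent maximizes the mean of the element-wise minimum.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))
```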
### _RL Environment_
The optimization problem of Section II is reformulated as a RL environment (OpenAI gym standard), where the goal of the agent is to update the LLC converter's fixed hardware configuration \(L_{r},L_{m},C_{r}\) and \(k\) as well as all frequencies \(f_{i}\) in a predefined number of steps. An episode in this environment consists of \(N\) parameter update steps by the agent. For our experiments we use two operation points and 50 update steps for each episode (_N_=50).
_Action shaping._ To map the output of the PPO to the necessary size for the parameter updates, the agent's actions, which lie in the range \([-1,1]\) for each parameter, need to be scaled such that they fit the magnitude of the parameter ranges in Tab. I. For each update step, the maximal update step size is chosen to be 10% of the parameter range of the respective parameter. Additionally, instead of linearly mapping the agent's output to this step size by setting 1 \(\rightarrow\) 10% and -1 \(\rightarrow\) -10%, we introduce a logarithmic scaling between 0 and +/- 10% to favor smaller step sizes by the agent.
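A minimal sketch of this action shaping is given below; since the exact logarithmic curve is not specified in the paper, the exponential-in-\(|a|\) mapping (hence logarithmically scaled step sizes) and its base are our assumptions.

```python
import numpy as np

def shape_action(a, p_min, p_max, max_frac=0.10, base=10.0):
    """Map a raw agent action a in [-1, 1] to a parameter update.

    The maximum step is max_frac (10%) of the parameter range. The
    exact logarithmic mapping is not given in the paper, so the curve
    below is one assumption that favors small updates near a = 0.
    """
    span = max_frac * (p_max - p_min)
    magnitude = span * (base ** np.abs(a) - 1.0) / (base - 1.0)
    return np.sign(a) * magnitude
```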
_Reward shaping._ Generally, in RL the agent receives a reward from the environment at the end of each episode which is either 0 (for losing) or 1 (for winning). Since losing or winning is difficult to specify in the case of optimizing the parameters of a LLC converter, we introduce a continuous reward between 0 and 1. As stated in Eq. 1, the goal is to maximize the efficiency and minimize the output power deviation from our specified output power targets. Since the efficiency naturally scales between 0 and 1, it can be directly converted into a part of the reward. We therefore set the reward for the efficiency equal to the efficiency itself: \(e_{i}\to r_{e_{i}}\).
To convert the output power deviation into a reward signal, the Canberra distance between the output powers and the target output powers is calculated. Since the goal is to minimize this deviation, we set the output power reward to
\[r_{p_{i}}=\Big{(}1-\frac{|p_{r_{i}}-p_{t_{i}}|}{p_{r_{i}}+p_{t_{i}}}\Big{)}. \tag{2}\]
This output power reward \(r_{p_{i}}\) is additionally scaled by introducing a scaling threshold \(t_{s}\): If \(r_{p_{i}}\) is smaller than \(t_{s}\), then the total reward \(r_{total}\) is set to 0, and the episode is considered as lost. If this criterion is met, then the \(r_{p_{i}}\) is scaled between 0 and 1 according to \(r_{p_{i}}=(r_{p_{i}}-t_{s})\cdot(1/(1-t_{s}))\) and the total reward of the agent at the end of an episode for two operation points is calculated by
\[r_{total}=\frac{1}{4}\Big{[}r_{e_{1}}+r_{p_{1}}+r_{e_{2}}+r_{p_{2}}\Big{]}. \tag{3}\]
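Putting Eqs. (2) and (3) together with the threshold rule, the episode reward for two operation points can be sketched as follows; the function signature is illustrative.

```python
def episode_reward(efficiencies, p_real, p_target, t_s=0.815):
    """Sketch of the reward of Eqs. (2)-(3) for two operation points.

    efficiencies: (e_1, e_2); p_real, p_target: real and target output
    powers; t_s is the scaling threshold (0.815 in the experiments).
    """
    r = list(efficiencies)                    # efficiency rewards r_e
    for p_r, p_t in zip(p_real, p_target):
        r_p = 1.0 - abs(p_r - p_t) / (p_r + p_t)   # Eq. (2)
        if r_p < t_s:                          # episode counts as lost
            return 0.0
        r.append((r_p - t_s) / (1.0 - t_s))    # rescale to [0, 1]
    return sum(r) / 4.0                        # Eq. (3)
```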
### _Training_
The flow of an episode during training can be seen in Fig. 2. At the beginning of an episode, the parameters \(L_{r},L_{m},C_{r},k\) and \(f_{1}\) and \(f_{2}\) as well as the constant target values for the output powers (see Tab. I) are sampled randomly within the predefined ranges. The output power and efficiency values for these values are computed by the LLC FHA simulation environment and fed back together with the target output powers and the current deviations and parameter values as input state to the agent. The parameters are then updated continuously by the agent for all \(N\) steps according to its current policy. At the end of each episode, a reward is calculated according to Eq. 3 and fed back to the agent. The agent updates its policy in order to maximize this reward for each episode over the course of training and extracts a policy in the latent space of its neural network that enables a generic optimization of circuits for multiple and varying target output powers. The agent is trained for 80,000 episodes with varying target output powers and randomly sampled parameter configurations.
### _RL based Optimization Strategy_
After the agent is trained it can be used to optimize specified LLC Converter configurations within one episode. The user can specify the configuration of interest as part of the initial state (see Fig. 2) and the trained agent optimizes for \(N\) iterations the randomly initialized parameters for this configuration. The final parameter configuration can then be
used by the user to start a detailed design process for the specified LLC Converter.
## IV Results
We trained 10 PPO-agents with two fully connected neural network layers of size 256 and different random weight initializations. We chose a learning rate of 1e-5 and a reward scaling threshold \(t_{s}\) of 0.815. The mean (solid lines) and the standard deviation (shaded areas) of this training can be seen in Fig. 3. At the beginning of training, the agent quickly finds good configurations for some of the randomly initialized target LLC converter configurations (blue line), but fails to do so for others (red line). On average, the agent finds output powers and efficiencies for the randomly sampled LLC configurations corresponding to a reward of 0.91 (Eq. 3).
Fig. 4 shows an episode of a trained agent tackling an exemplary optimization problem with \(p_{t_{1}}=140W\) and \(p_{t_{2}}=4700W\). One can see in the upper plot that the agent-driven tuning process of manipulating the set of input parameters converges step-by-step to a good solution of the optimization problem. The two target operation points, marked by the dashed lines, are reached within the scope of one episode (50 parameter update steps) with minor deviations. In the lower plot, the correlation between the increasing values of the output power rewards, the efficiency rewards, and the overall reward can be seen. The resulting parameter configuration has output power deviations from the specified target values of less than 5% and efficiencies greater than 90%. These optimized parameters can be used by the user to start a detailed design process.
To further analyze the optimization capabilities of the agent, we performed a grid search of 25 different LLC Converter configurations (out of the ranges of Tab. I bottom) for one of the 10 trained agents. For each of the 25 configurations we tested the agent for five different and randomly initialized starting parameters for \(L_{r},L_{m},C_{r},k\) and \(f_{1}\) and \(f_{2}\). We then averaged the performance over these five runs. In Fig. 5 the matrix of this grid search can be seen. The average of the power deviation for the two operation points is plotted in white and has a maximum value of 4.2 %. The efficiencies in black have a minimal value of 93.2 %.
In Fig. 6 all tested configurations are plotted: In the upper plot, the output power deviations and efficiencies in percent for the 125 total samples can be seen. For the first operation point (in the range from 100W to 300W), the maximal deviation lies
Fig. 3: Training curves of 10 PPO-agents with different initializations of the weights of their neural networks. The means of these trainings are shown as solid lines and the standard deviations as shaded areas.
Fig. 2: Training (blue+black): In the beginning of each episode, a LLC Converter target configuration is initialized randomly within the ranges of Tab. I (1). Then the efficiencies and output powers are calculated for this parameter configuration by FHA (2a) and are fed back as part of the state to the PPO-agent (2b). The PPO-agent then updates the parameters according to its policy (2c) and this loop is repeated 50 times. At the end, a reward is calculated (Eq. 3) and fed back to the agent. Utilization (green+black): The trained agent receives a LLC Converter target configuration specified by the user (1) and then optimizes the randomly initialized parameters for 50 steps (2a-2c). The final optimized parameter configuration can then be used by the user (3).
between 5-6%, while the second operation point (in the range from 4kW to 5kW) shows deviations of up to 12-13%.
While the average efficiencies of the two operation points lie around 94%, the lower plot of Fig. 6 shows large differences between the two operation points: while all tested configurations for the second operation point show efficiency values greater than 98%, the efficiency values of the first operation point never reach values greater than 93% and seem to be primarily distributed around 89%.
## V Conclusion and Outlook
We showed that the parameters of the LLC converter can be optimized at multiple operation points by a pre-trained RL agent within 50 steps, corresponding to a calculation time of less than a second. In contrast to existing GA strategies, which require several tens of thousands of evaluations for each optimization setup, the RL approach requires a one-time cost of four million training evaluations and thereafter solves any optimization within the pre-defined optimization range within 50 steps. This provides great opportunities for solving computationally expensive optimization problems in engineering domains with similar, yet different requirements. The capability of such RL agents to solve similar optimization problems, but with different objective values and constraints, sets this approach apart from established optimization methods. Future research will focus on wider parameter ranges, more complex LLC models and shorter training time of the agent, thus increasing its sample efficiency.
## VI Acknowledgments
This work was supported by the Federal Ministry of Education and Research in the CODAPE project ("Kollaborative Entwicklungsumgebung für die Leistungselektronik", collaborative development environment for power electronics) via grant 16ME0356.
|
2309.05127 | Learning Personalized User Preference from Cold Start in Multi-turn
Conversations | This paper presents a novel teachable conversation interaction system that is
capable of learning users preferences from cold start by gradually adapting to
personal preferences. In particular, the TAI system is able to automatically
identify and label user preference in live interactions, manage dialogue flows
for interactive teaching sessions, and reuse learned preference for preference
elicitation. We develop the TAI system by leveraging BERT encoder models to
encode both dialogue and relevant context information, and build action
prediction (AP), argument filling (AF) and named entity recognition (NER)
models to understand the teaching session. We adopt a seeker-provider
interaction loop mechanism to generate diverse dialogues from cold-start. TAI
is capable of learning user preference, which achieves 0.9122 turn level
accuracy on out-of-sample dataset, and has been successfully adopted in
production. | Deguang Kong, Abhay Jha, Lei Yun | 2023-09-10T20:22:56Z | http://arxiv.org/abs/2309.05127v1 | # Learning Personalized User Preference from Cold Start in Multi-turn Conversations
###### Abstract.
This paper presents a novel teachable conversation interaction system (TAI for short) that is capable of learning users' preferences from cold start by gradually adapting to personal preferences. In particular, the TAI system is able to automatically identify and label users' preference in live interactions, manage dialogue flows for interactive teaching sessions, and reuse learned preference for preference elicitation. We develop the TAI system by leveraging BERT encoder models to encode both dialogue and relevant context information, and build action prediction (AP), argument filling (AF) and named entity recognition (NER) models to understand the teaching session. We adopt a seeker-provider interaction loop mechanism to generate diverse dialogues from cold-start. TAI is capable of learning users' preference, which achieves 91.22% turn level accuracy on out-of-sample dataset, and has been successfully adopted in production.
## 1. Introduction
In conversation AI (Deguang Kong et al., 2020), the interaction between users and conversation agents is helpful to guide users to achieve certain tasks. Customers may desire and expect the conversation agent to provide proactive personalized experiences in order to fulfill an advertised promise of a helpful, personal assistant (Jha and Lei, 2020). One significant effort would be accelerating personalization by harmonizing disparate preference learning experiences across the conversation agent.
In this paper, we present a teachable conversation agent system that augments the task oriented AI agents with an interactive teaching capability. In particular, the _challenges_ we are facing here include _(C1)_ how to allow users to naturally start with describing the preferences and its conditionals at a high-level, and then recursively teach the agent through the conversations, _(C2)_ how to understand diverse related entities and conversation flow users may have for their preferences in a multi-turn dialog, _(C3)_ how to train the conversation model given scare training data in cold-start environment, _(C4)_ how to reuse the previously taught user preferences across different domains, _etc_.
Despite some works (Beng et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019) about learning from user interactions, they are not sufficient to address the above challenges. For example, interaction learning (Kang et al., 2019) is widely used in learning visual scene concept (Kang et al., 2019) about attributes (such as color, shape, etc), learning semantic concept (e.g., "set my room to cozy") (Kang et al., 2019)
using concept parser, as well as games [10] using the symbolic execution for state progression and scene understanding. However, the method is not directly applicable for preference teaching. While reinforcement learning approach [14] views the dialogue conversation as a sequential decision making process that learns optimal policy, it is over-complicated and difficult to customize for different domain teaching sessions.
To our knowledge, this work is the _first_ work that builds a conversation agent which allows the users to share their preferences across different topics and get more relevant and personalized results. The _contribution_ of this paper is summarized as follows.
\(\bullet\) To address _challenge C1_, a multi-turn domain agnostic dialog system is designed to support user initialized teaching, allowing users to describe the preference, clarify ambiguities from explicit instructions.
\(\bullet\) To address _challenge C2_, the conversation agent performs a multi-turn action prediction loop upon named entity recognition, action prediction and argument filling models by encoding dialog context using BERT and Seq2Seq model pipeline.
\(\bullet\) To address _challenge C3_, we designed a dialogue simulator module to generate synthetic training data with adequate annotations guided by the seeker-provider interaction loop to achieve the seeker goals from cold-start.
\(\bullet\) To address _challenge C4_, we enable centralized preference management, which stores the taught preference in a persistent knowledge base, facilitating effective reusability and generalizability of learned knowledge.
Figure 1: The working flow of learning user preferences conversational system (including teaching stage and reuse stage).
Figure 2: Overview of conversation understanding and dialogue management
## 2. System Overview
TAI consists of two stages (i.e., teaching stage and reuse stage). In the teaching stage, given user utterance, TAI will call a dialogue management module to learn users' preferences from interaction. Users' preferences will be stored at the knowledge base after standardizing the schema for future use. After enabling reuse, when a user asks the specific questions, the domain will retrieve the relevant preference (stored in the knowledge base) and further infer additional latent customer preferences to curate more proactive personalized experiences to users. Fig. 1 depicts how the TAI system works. The purpose of dialogue management (Fig. 2) is to conduct interactive sessions with users for learning preferences and drive the conversations based on the seeker goals to complete the objectives. Essentially, it consists of several components:
\(\bullet\)**Preference parser**: Conversational agent will bridge the prediction gap to understand and interpret the preference for the given utterance from the user. For example, it will understand the slot relevant to preference in utterance such as I love Giants [sportsteam]. This is mainly achieved by NER illustrated in Section 4.1.
\(\bullet\)**Dialogue management**: After the session has been initiated by the user in the conversation, the dialog management module will predict the right clarification question to ask the user, based on the context of utterance and the interactions. Then the user will respond for clarification (if needed) and continue the multi-turn conversations until the respective preferences are grounded before storage of them for future re-use. This is achieved by a multi-turn action prediction loop illustrated in Section 4.2.
\(\bullet\)**Dialogue simulation from cold start**: we build the simulator based on seeker provider interaction loop to generate diverse dialogues that are beyond these seed examples by enriching them with surface form (i.e. utterance) variations, other dialogue flows and possible unhappy cases. This is mainly achieved by simulation in Section 3.
\(\bullet\)**Centralized Knowledge Storage**: The taught preference is stored in a persistent knowledge graph, which facilitates effective reusability and generalizability of learned knowledge. In particular, it supports the following functionalities:
```
Retrieve_KB(User_Id, &PrefList)
Update_KB(User_Id, &PrefList)
```
which enables the TAI to store, update and retrieve the [user_id, preference_list] pairs in the knowledge base. Through these interactions, it leads to more complex personal interactions and action requests that surface more nuanced gaps in conversation understanding.
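A minimal in-memory sketch of this interface is shown below; the class name and the tuple-based preference encoding are illustrative assumptions, and a production system would presumably back this with a durable store.

```python
from collections import defaultdict

class PreferenceKB:
    """Hypothetical in-memory stand-in for the persistent knowledge
    base behind Retrieve_KB / Update_KB."""

    def __init__(self):
        self._store = defaultdict(list)  # user_id -> preference list

    def retrieve(self, user_id):
        # Retrieve_KB(User_Id, &PrefList)
        return list(self._store[user_id])

    def update(self, user_id, pref_list):
        # Update_KB(User_Id, &PrefList)
        self._store[user_id] = list(pref_list)

kb = PreferenceKB()
kb.update("user_42", [("sports", "like", "Yankees")])
assert kb.retrieve("user_42") == [("sports", "like", "Yankees")]
```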
### Utterance Examples
Sampled utterance examples are shown in Table 1.
## 3. Simulating Dialogues to Address Cold Start Issues
We design and implement the dialogue simulator module ("simulator" for short) to generate synthetic training dialogues with adequate annotations, since we do not have any user data before product launch. For this purpose, the **input** to the simulator module is a small number of annotated seed dialogues with descriptions, and the **output** is a set of diverse dialogues that are generated beyond these seed examples by enriching them with surface form (i.e., utterance) variations, other dialogue flows and possible unhappy cases. In the conversation, the user is annotated as a seeker who
seeks to accomplish a goal. The conversation agent (annotated as a provider) provides information or a service to help the seeker complete the goal. The simulation is mainly achieved by **seeker-provider interaction**.
**Stage 1: Seeker goal** In this stage, a fixed seeker goal is sampled from the goal sampler. To describe the seeker goal of a conversation, which consists of a sequence of API calls by the provider to fulfill the seeker's goal, we adopt an _entity transfer graph_, i.e., a _directed acyclic graph (DAG)_ whose vertices consist of the entities supplied by the seeker and the API calls of the provider, and whose edges encode the way that seeker entities and entities returned by API calls are transferred (i.e., carried over), such that the arguments of each API call in the goal are encoded as parents in the graph. Fig. 3 gives an example of an entity transfer graph. An API is drawn by following a Markov chain distribution (denoted as \(A_{1:i}\) for the API sequence \([1:i]\)) estimated from the seed conversations, i.e.,

\[\Pr(A_{1:i})=\prod_{j\leq i}\Pr(A_{j}\mid A_{j-1}) \tag{1}\]

where the probability of the goal \(\Pr(A_{1:i})\) is obtained by sampling a sequence of intents (via APIs) in sequential order (with indices \(i\), \(j\)).
Then, starting with an empty entity transfer graph, seeker goals are sampled incrementally by repeatedly determining if a new vertex should be added. In particular, an API is drawn by following a Markov chain distribution estimated from the seed conversations, i.e.,

\[\Pr(\text{goal})=\prod_{i}\Pr(\text{API}_{i}\mid\text{API}_{i-1}) \tag{2}\]

where the probability of the goal, \(\Pr(\text{goal})\), is obtained by sampling a sequence of intents (via APIs) in sequential order; the conditioning index is restricted to \(i-1\) for Markov sampling. For each API argument, the sampler randomly transfers existing entities or elicits new entities from the seeker, and then adds the entities elicited (if any) and the new API vertex with parent edges from the transferred entities.
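A hedged sketch of this goal sampling is given below; the transition table, the start/end tokens, and the API names are illustrative stand-ins for the statistics estimated from seed dialogues.

```python
import random

def sample_goal(transitions, start="<BOS>", end="<EOS>", max_len=10):
    """Sketch of goal sampling: draw a sequence of APIs from a
    first-order Markov chain estimated from the seed dialogues.

    transitions: dict mapping an API (or start token) to a dict of
    next-API probabilities; all token names here are hypothetical.
    """
    goal, current = [], start
    for _ in range(max_len):
        apis, probs = zip(*transitions[current].items())
        current = random.choices(apis, weights=probs)[0]
        if current == end:
            break
        goal.append(current)
    return goal

transitions = {
    "<BOS>": {"setSportAffinity": 0.6, "setDietOrCuisineAffinity": 0.4},
    "setSportAffinity": {"getAllPreference": 0.5, "<EOS>": 0.5},
    "setDietOrCuisineAffinity": {"getAllPreference": 0.3, "<EOS>": 0.7},
    "getAllPreference": {"<EOS>": 1.0},
}
print(sample_goal(transitions))
```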
**Stage 2: Seeker-Provider Interaction Loop** The provider and seeker agents exchange a series of messages composed of their respective actions and utterances. On one hand, the seeker incrementally reveals the goal to the provider with actions corresponding to dialogue acts. On the other hand, the provider collects all the arguments necessary to make API calls, makes those calls and informs the seeker of the results with NLG template actions and API calls. Starting with the seeker, the conversation agent executes at each turn for a) action sampling (i.e., a sequence of
\begin{table}
\begin{tabular}{l c c} \hline \hline Domain & Goal & Utterances \\ \hline sports & like sport team & The Yankees are my favorite baseball team \\ sports & like sport team & My favorite teams are the Yankees and The Knicks \\ sports & like sport team & Add the Warriors to my favorites \\ sports & like sport team & I am a new York giant fan. \\ sports & dislike sport team & Remove the Yankees from my favorite sports teams \\ sports & dislike sport team & Unfollow San Francisco giant. \\ restaurant & restaurant/cuisine one loves & I like to dine at Thai restaurant \\ restaurant & restaurant/cuisine one loves & I enjoy seafood restaurant. \\ preference management & reset preference & Forget everything you’ve learned about my preferences \\ preference management & reset preference & Delete all my preferences \\ preference management & delete preference & Remove Mexican food from our food preferences. \\ preference management & delete preference & I want you to forget my preference for vegetarian food. \\ weather provider & conditional preference & I prefer big sky for weather update. \\ weather provider & conditional preference & Always use RADAR for weather forecasting. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sample golden utterances
actions is drawn according to a distribution of possible actions), b) NLG: an utterance corresponding to the dialogue acts associated to the sampled actions is drawn. Both parts provide the necessary annotations.
### Design principle
To dive deeper into the simulator mechanics, the design is guided by the goal of **generating diverse dialogues**, which can be achieved in the following ways.
First, dialogue variations can be introduced by resampling the entities of the conversation using catalogs and using alternative utterance wordings. For example, the slot sport_team in "I love sport_team" would be substituted by other entities in the catalog. Second, the user could change the information provided in prompts in each turn. For example, when a user says "I don't like the Giants", the generated clarification would be "Which Giants do you mean, the New York Giants or the San Francisco Giants?" The benefit is to proactively suggest users' preferences for feature discovery. Third, it is important to simulate cases with sufficient randomness or even errors, since neither the conversation agent nor the user will behave optimally. For example, when a user says "What's my sports update?", the conversation agent may encounter the situation that the user has not set a preference yet, and ask "Please set up your sport preference before getting notifications." The clarification sub-dialogues are intended to reduce uncertainty in the conversation agent's estimation of the dialogue state. Finally, to avoid overfitting during model training, it is important to generate previously unseen conversations to realistically simulate potential errors or misunderstandings.
### Architecture
Based on the above design principle, we illustrate the components of the dialogue simulator and how they interact with the modeling component.
The simulator adopts the concept of dialogue acts to express the semantics associated to utterances. For example, inform is the act used by the seeker to communicate their intent and by the provider to describe the result of their actions. Dialogue acts are used at the interface between the natural language generation (NLG) unit, which converts the dialogue content and direction to a corresponding language realization, and the conversation agent.
**Procedure** The Natural Language Generation component is currently based on templates and is responsible for the generation of the seeker's utterances from dialogue acts; seeker utterances are first categorized with a Dialogue Act classifier and then converted into NLG templates using simple heuristics. In particular, the procedure consists of three key steps:
**Step 1:** A small batch of 100 dialogues is simulated with the extracted seeker NLG templates.
Figure 3. An example of entity transfer graph for the utterance I prefer the weather app for daily updating and rain forecasting
**Step 2:** A Mechanical Turk task is submitted to obtain variations of the extracted seeker NLG templates by crowdsourcing, where paraphrasing is done both at the utterance level and conversation level.
**Step 3:** A larger set (e.g., 50k) of annotated dialogues is simulated and used for supervised training of the conversation model.
### Simulated dialogues for addressing cold-start issues
An example of a simulated dialogue addressing the cold-start issue is shown below.
**U-1**: "type": "User_intent", "[String] User side utterance": "forget everything I have taught you", "[List of string] UserNLGs associated with the user utterance": preferenceteaching.events_DeleteAllAffinityEvent();
**U-2**:"type": "User_intent", "partially_normalized_value": "[let it beSINGLESITE.Confirmation -> confirmation1]",
"fully_normalized_value": "["com_cu_preference teaching.events.ConfirmationEvent (confirmation=confirmation1)"];
**S-1**: "type": "api", "[String] User side utterance": "getAllAffinityAction() -> getAllPreferenceResult1", "[List of string] UserNLGs associated with the user utterance": getAllAffinityAction() -> getAllPreferenceResult1";
**S-2**: "type": "nlg", "notify_com_cu_preference teaching.actions.getAllAffinityAction_success
(getAllAffinityResult=getAllPreferenceResult1);
**S-3**: "type": "sys", "normalized_value": "wait_for_user_input()";
**S-4**: "type": "api", "normalized_value": "com_cu_preference teaching.actions.deleteAllAffinityAction (confirmAction=confirmation1) -> deleteAllPreferenceResult1";
**S-5**: "type": "nlg", "fully_normalized_value": "notify_com_cu_preference teaching.actions.deleteAllAffinityAction_success
(deleteAllAffinityResult=deleteAllPreferenceResult1).
### Preference teaching use case
In the context of preference teaching, the user goal in the goal-oriented dialogue system is a sequence of APIs (such as setDietOrCuisineAffinity, setSportAffinity) and their corresponding arguments and values. One can sample a dialogue based on predefined templates and perform sampling on the entities (e.g., I love {entity}). The arguments and their values of APIs are sampled as well for more variation. We perform sampling of API transitions in dialogue flows. For example, when computing API transition weights, we consider factors such as API transitions observed in dialogue examples provided by the developer, shared entities amongst APIs, input-output relations between APIs, _etc_. We then combine these weights using a mixing ratio, treated as a hyperparameter tuned for simulated dialogue generation. An example of a simulated dialogue annotated with markup language (where U- gives the user side utterances and S- gives the conversation agent side actions) is illustrated in Section 3.3.
## 4. Multi-Turn action prediction loop
### Overview
**Action Prediction** The responsibility of prediction modeling is to (a) understand the dialogue context so that it can decide which actions should be taken next; and (b) fill the arguments of the selected actions with entities that occurred in the dialogue context, with the purpose of guiding the conversation in the loop. In particular, the model predicts actions of several types, including intent APIs with arguments (e.g., SetMyPreference()), NLG services with arguments (e.g., the corresponding natural language response based on an NLG template and its argument, such as what is a restaurant type), and Sys system actions (e.g., wait for user input via wait_for_user() or end the dialogue via end_dialogue), by leveraging the stored dialogue context together with the input utterance as inputs to the conversational model. Another important module
is the named entity recognition (NER) model on user utterances, which restricts the number of entities the model needs to consider in argument filling for action prediction.
**Multi-turn Prediction Loop** Since the model continuously interacts with the context storage in the conversation, the context storage is updated at the dialogue turn level when the model makes a new prediction for the action, before handing control between seeker and provider. For example, the user may request an intent/API call triggered by forget everything I have taught you; the agent model will perform an API call for retrieving all of the user's preferences via getAllPreference(), and then generate an NLG call that informs the user of successfully obtaining all preferences via getAllAffinityAction_success, before waiting for user input via a sys call like wait_for_user_input(). Then, after prompting the user to confirm the deletion operation via a ConfirmationEvent() API call, the agent will call the intent API deleteAllAffinityAction(confirmAction), followed by NLG generation for the successful deletion of all preferences via deleteAllAffinityAction_success. Fig. 4 illustrates the model prediction operation loop.
### Model breakdown
As is illustrated before, the conversation model is not a single monolithic model, but consists of several key models instead.
**Dialogue Context Encoder** It encodes the dialogue context into vector embeddings used by the other models. In particular, from dialogue contexts we extract (1) the current utterance (denoted as \(E_{cu}\)), (2) past utterances (denoted as \(E_{pu}\)), (3) current entities (denoted as \(E_{ce}\)), (4) past entities (denoted as \(E_{pe}\)), and (5) past actions (denoted as \(E_{pa}\)) as features, and adopt a BERT encoder [4] to embed them before concatenating them into a unified context embedding \(C\), i.e.,
\[C=[E_{cu},E_{pu},E_{ce},E_{pe},E_{pa}]. \tag{3}\]
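A minimal PyTorch sketch of Eq. (3) is shown below, assuming each feature has already been pooled into a fixed-size BERT embedding; the dimension 768 is an illustrative choice.

```python
import torch

def encode_context(e_cu, e_pu, e_ce, e_pe, e_pa):
    """Sketch of Eq. (3): concatenate the five BERT-derived feature
    embeddings into one context vector. Each input is assumed to be a
    pooled embedding of shape (batch, d)."""
    return torch.cat([e_cu, e_pu, e_ce, e_pe, e_pa], dim=-1)

# toy shapes, assuming d = 768 for each BERT-pooled feature
batch, d = 2, 768
parts = [torch.randn(batch, d) for _ in range(5)]
context = encode_context(*parts)  # shape (2, 3840)
```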
**Action Prediction (AP) Model** Given the embedding of the context history, it predicts an \(n\)-best list of actions (API/NLG/SYS)1 from the encoded dialogue context by computing a distribution over possible actions (denoted as \(A\)) with a multi-layer feed-forward network using ReLU activation and a softmax over possible actions, i.e.,
Footnote 1: API denotes standard api to achieve user goals, e.g., get_all_preference, set_all_preference, NLG denotes the process of producing natural language output as the response, SYS denotes system related operations, such as wait_for_user_input(), end the conversation, etc
\[A=\text{Softmax}(\text{ReLu}(WC+b)), \tag{4}\]
where \(W\) and \(b\) are the weight matrix and bias vector learned for action prediction, respectively. An example of action prediction output is shown below: [(API: get_all_preference, 0.75), (API: set_all_preference, 0.25)].
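A hedged PyTorch sketch of the action prediction head of Eq. (4) follows; the hidden size, action count, and two-layer depth are illustrative choices consistent with the multi-layer feed-forward description.

```python
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    """Sketch of Eq. (4): a feed-forward head with ReLU and a softmax
    over the action vocabulary (APIs, NLG templates, SYS actions).
    The sizes below are illustrative assumptions."""

    def __init__(self, context_dim=3840, hidden=512, n_actions=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, context):
        return torch.softmax(self.net(context), dim=-1)

probs = ActionPredictor()(torch.randn(2, 3840))
top_actions = probs.topk(2, dim=-1)  # n-best list per example
```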
Figure 4. The model prediction flow.
The responsibility of the **Argument Filling (AF) Model** is to fill the arguments of predicted actions with entities from the dialogue, whereas the **NER model** recognizes entities in the utterance, which restricts the search space for the argument filling model.
### AF model and NER model
**Argument Filling (AF) Model** It fills the arguments of predicted actions with entities in the dialogue context. A complete argument-filled action should be consistent with the API/NLG constraints defined in the conversation goal descriptions given the encoded dialogue context. Given \(c_{i}\), the representation of the \(i\)-th token from the context encoder, the prediction is a scalar \(p_{i}\) between 0 and 1 indicating the probability of the \(i\)-th token being the argument value. To make a better prediction, the embeddings of _argument_name_, _argument_type_, and _action_name_ are fed into the model as part of the dialogue context, and attention scores are calculated over all hidden states of the dialogue context, with entities that violate API/NLG constraints masked out. Finally, the entity with the highest score \(p_{i}\) over all tokens \(x_{i}\) is chosen as the argument, i.e.,
\[p_{i}=\text{sigmoid}(\text{Attention}(C,x_{i})). \tag{5}\]
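A minimal sketch of this masked attention scoring is given below; the dot-product scoring and the interface are assumptions, since the paper does not specify the attention variant.

```python
import torch

def fill_argument(token_states, query, mask):
    """Sketch of Eq. (5): score each context token as the argument
    value via attention, masking out entities that violate API/NLG
    constraints.

    token_states: (T, d) hidden states of the dialogue context,
    query: (d,) embedding of argument name/type and action name,
    mask: (T,) boolean, True where a token is an admissible entity.
    """
    scores = token_states @ query                       # attention logits
    scores = scores.masked_fill(~mask, float("-inf"))   # mask violations
    p = torch.sigmoid(scores)                           # p_i in [0, 1]
    return int(torch.argmax(p))                         # best-scoring token

T, d = 6, 8
states, q = torch.randn(T, d), torch.randn(d)
mask = torch.tensor([False, False, True, True, True, False])
idx = fill_argument(states, q, mask)  # index of the chosen argument token
```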
**NER model** It recognizes the entity in the utterance, which restricts search space for argument filling model. Given the encoded context history, and the embedding of each (e.g., \(i\)-th) token in current utterance using BERT, a bidirectional LSTM-CRF (Chen et al., 2017) is employed to predict the probability vector over all possible entities for inside, outside, beginning tagging by optimizing the objectives:
\[\Pr(s\mid x;w)=\frac{\exp(w\cdot\phi(x,s))}{\sum_{s^{\prime}}\exp(w\cdot\phi(x,s^{\prime}))}, \tag{6}\]
where the function \(\phi(\cdot)\) maps the input sequence \(x\) to the state sequence \(s\) with model parameters \(w\), and \(s^{\prime}\) ranges over all possible output sequences, with the scoring function \(w\cdot\phi(x,s)\) indicating how well the state sequence fits the given input sequence; this is modeled using a bidirectional LSTM with parameters learned by backpropagation.
### NER model optimization via N-gram catalog feature
Suppose we have the entity label space {sport_team, city}, with the sport_team catalog = {San Francisco Giant} and the city catalog = {San Francisco}. Corresponding to each token in the user utterance "I follow San Francisco Giant", to identify the boundary and type of the entity, we generate the 1-gram catalog feature, i.e.,
\[[[0,0],[0,0],[0,0],[0,0],[0,0]],\]
2-gram catalog feature, i.e.,
\[[[0,0],[0,0],[0,1],[0,0],[0,0]],\]
and 3-gram catalog feature, i.e.,
\[[[0,0],[0,0],[1,0],[0,0],[0,0]].\]
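A short Python sketch reproducing these features is shown below; the representation of catalogs as sets of surface strings is an assumption.

```python
def catalog_features(tokens, catalogs, n):
    """Sketch of the n-gram catalog feature from Section 4.4: for each
    token position, emit one binary flag per entity type indicating
    whether the n-gram starting there matches a catalog entry."""
    types = list(catalogs)  # e.g. ["sport_team", "city"]
    feats = []
    for i in range(len(tokens)):
        ngram = " ".join(tokens[i:i + n])
        row = [int(len(tokens[i:i + n]) == n and ngram in catalogs[t])
               for t in types]
        feats.append(row)
    return feats

tokens = ["I", "follow", "San", "Francisco", "Giant"]
catalogs = {"sport_team": {"San Francisco Giant"},
            "city": {"San Francisco"}}
print(catalog_features(tokens, catalogs, 2))
# [[0, 0], [0, 0], [0, 1], [0, 0], [0, 0]]
print(catalog_features(tokens, catalogs, 3))
# [[0, 0], [0, 0], [1, 0], [0, 0], [0, 0]]
```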
## 5. Performance evaluations
### Model Accuracy
**Dialogues for training model** To train the model, we generate a training dataset by sampling user goals (a sequence of APIs with their arguments and values) and then generating dialogues as extensions of these goals. Table 1 (Section 2.1) lists examples of the goals we would like to support together with corresponding utterances in the dialogue flow. Using dialogue simulation (see Section 3), this yields 50,000 dialogues, 546,509 actions, and 151,208 turns (Table 2).
**Model performance test on in-sample dialogues** To evaluate the predictive capabilities of the models, we generate an in-sample testing dataset using the dialogue simulator, which gives 500 dialogues, 5673 actions and 1507 turns (Table 3).
**Model Robustness testing** To evaluate the robustness of our model, we also collect how users will describe these preferences by starting AMT2 task to collect the variations of those goal-oriented dialogues that would produce utterance level variance (i.e., different utterance for the same meaning) and dialog level variance (i.e., diverse dialogue working flow). Then we manually inspect each dialogue, and remove the redundant and unreasonable ones.
Footnote 2: [https://www.mturk.com/](https://www.mturk.com/)
After collecting these dialogues, to abide by the annotation contract, we manually label the AMT dialogues by annotating them according to dialogue definitions that clearly give the entity boundaries and action signatures with corresponding parameters, consistent with the dialogue generation contract. After data cleaning, this finally gives 500 dialogues and 1480 turns (Table 4). We call it the "out-of-sample" evaluation dataset since it consists of utterance-level and dialogue-level variations that are _not_ covered in the training dialogues. Compared to the in-sample dataset, this dialogue dataset has more variation at both the utterance level and the dialogue level, and may not match the training dialogues well. The annotated dialogue flows are in the same format and can be processed by the conversation understanding models.
**Metrics** We use _turn level accuracy_ to measure the performance of the models for the multi-turn dialog system. One turn includes the user's utterance plus the conversation agent's response, which measures a two-sided conversation in a dialog. Each turn is considered as correct when all predictions of models (including NER, AP, AF) in the turn are correct. For example, for the NER model, we can compare the predictions of all entities against the ground-truth, and define precision, recall, accuracy and F1-score based on the percentage of correctly predicted entities against the ground-truth of entities. The turn level accuracy for AP and AF are similarly defined. We also show the model performance at each action level (i.e., at API, NLG, SYS levels), and the accuracy is defined based on the percentage of correctly predicted ones against the ground-truth at each-action level. We show model prediction results for each model (such as NER, AP
\begin{table}
\begin{tabular}{l c c c} \hline \hline \#API & \# Testing dialogues & \# Action & \# turns \\ \hline
12 & 500 & 5355 & 1480 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Statistics of out-of-sample evaluation dataset
\begin{table}
\begin{tabular}{l c c c} \hline \hline \#API & \# Training dialogues & \# Action & \# turns \\ \hline
12 & 50,000 & 546,509 & 151,208 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of training dataset
\begin{table}
\begin{tabular}{l c c c} \hline \hline \#API & \# Testing dialogues & \# Action & \# turns \\ \hline
12 & 500 & 5673 & 1507 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics of in-sample evaluation dataset
and AF). We also show the end-to-end evaluation result (denoted as NER+AP+AF), which counts an action (or a turn) as correct if and only if the NER, AP, and AF predictions are all correct.
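The metric can be stated precisely with a small sketch (illustrative only; the per-action boolean correctness flags for NER, AP, and AF are an assumed data structure, not our evaluation harness):

```python
# Turn-level and action-level accuracy: a turn (or action) counts as
# correct only if all of its NER/AP/AF predictions are correct.
def accuracies(turns):
    n_actions = sum(len(t) for t in turns)
    action_correct = sum(
        all(a[m] for m in ("ner", "ap", "af")) for t in turns for a in t
    )
    turn_correct = sum(
        all(a[m] for m in ("ner", "ap", "af") for a in t) for t in turns
    )
    return action_correct / n_actions, turn_correct / len(turns)

turns = [
    [{"ner": True, "ap": True, "af": True}],                  # fully correct turn
    [{"ner": True, "ap": True, "af": True},
     {"ner": False, "ap": True, "af": True}],                 # one bad action
]
per_action, per_turn = accuracies(turns)
print(f"ACC per-action: {per_action:.2%}, ACC per-turn: {per_turn:.2%}")
# ACC per-action: 66.67%, ACC per-turn: 50.00%
```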
**Result** Tables 5 and 6 show the per-turn and per-action accuracy on the in-sample and out-of-sample evaluation datasets, respectively. The results are as expected: the model performs well on the in-sample dataset, whereas on the out-of-sample dataset it mishandles some entities, which in turn hurts argument filling (AF) performance, since these entities may be out of the dictionary.
### Lessons Learned
**Increasing simulated dialogues to address cold-start issues** Thanks to the dialogue simulator component that generates many training dialogues, we are able to overcome the difficulty of insufficient training dialogues at cold start. However, when we evaluated on the AMT data collected from the user study, we still observed dialogue samples that are _not_ covered by the simulator. We may leverage recent advances in generative Q&A (e.g., GPT-3 (Beng et al., 2017), the OPT-175B model (Zheng et al., 2018)) to complement the current simulator-generated training dialogues. The training dialogues are also augmented with online user dialogues collected after the product is deployed. All these dialogues are put into a _feedback loop_ to update the current model.
**More open and personalized preference suggestions** When we train the model, the entity type is given when setting the user preference. Moving forward, some users are interested in receiving more accurate suggestions after setting their preferences. We would like to provide more personalized suggestions based on the preferences learned from current users.
\begin{table}
\begin{tabular}{l c c} \hline \hline model & ACC per-turn & ACC per-action \\ \hline NER & 94.48\% & 95.89\% \\ AP & 97.18\% & 97.18\% \\ AF & 93.92\% & 95.24\% \\ AP + AF & 92.57\% & 94.92\% \\ NER + AP + AF & 91.22\% & 91.22\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Accuracy (ACC) at per-turn level and per-action level for different models on the _out-of-sample_ evaluation dataset
\begin{table}
\begin{tabular}{l c c} \hline \hline model & ACC per-turn & ACC per-action \\ \hline NER w/ CF & 96.79\% & 98.30\% \\ NER w/o CF & 94.48\% & 95.89\% \\ \hline \hline \end{tabular}
\end{table}
Table 7: Accuracy (ACC) at per-turn level and per-action level for NER models on the _out-of-sample_ evaluation dataset after adding catalog-specific features (denoted as CF)
\begin{table}
\begin{tabular}{l c c} \hline \hline model & ACC per-turn & ACC per-action \\ \hline NER & 99.85\% & 99.85\% \\ AP & 100\% & 100\% \\ AF & 97.89\% & 98.72\% \\ AP + AF & 97.89\% & 98.72\% \\ NER + AP + AF & 97.40\% & 97.65\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Accuracy (ACC) at per-turn level and per-action level for different models on the _in-sample_ evaluation dataset
We may even support open conversations such as "What are my favorite ____?" to infer which entity list should be added, rather than relying on fixed entity types.
**Expanding catalogs** In offline evaluation, the NER model performs well on the in-sample evaluation dataset but drops on the out-of-sample dataset. In the user study, some users responded that they wish to add more options to their different categories of preferences. For example, we may expand the sports team options to include college sports teams and minor league teams as well. We enabled the _n-gram catalog feature_ to obtain catalog-specific representations, particularly for entity types with free-form catalog values that may not be well captured by BERT models pre-trained on Wikipedia data. These catalog features (Appendix 4.4) are concatenated into the embedding in the LSTM-CRF models, and Table 7 shows the resulting performance improvement on the out-of-sample dataset for the NER task.
### User satisfaction
We also conducted a user study to test the performance of the developed feature. Overall, participants expressed satisfaction when teaching their preferences for sports, restaurants, and weather providers in a single session, emphasizing how easy and seamless the experience was when setting different types of preferences all at once. These findings are in line with the results observed in the model evaluations.
In particular, we notice several reasons for user dissatisfaction (Table 8), including ASR recognition errors, issues with loops/crashes, and incorrect or missing responses, which motivates us to further improve the end-to-end modeling performance. Fig. 5 demonstrates how the user interacts with the conversation agent in the product.
\begin{table}
\begin{tabular}{l l} \hline \hline Reason & Ratio (\%) \\ \hline Issues with ASR recognition & 25\% \\ Issues with loops/errors/crashes & 20\% \\ Incorrect/no response & 25\% \\ Some entities are incorrectly recognized & 20\% \\ Others & 10\% \\ \hline \hline \end{tabular}
\end{table}
Table 8: Top reasons for user dissatisfaction
## 6. Conclusion
This paper presents a conversational dialog system capable of learning users' preferences to bridge gaps in conversation understanding and dialog management. The teachable dialogue system improves language understanding capabilities and provides more natural conversations to users. TAI demonstrates good performance in learning users' preferences in terms of both model accuracy and user satisfaction. We are extending the application to more domains by supporting conversation-based reasoning and personalized recommendation.
|
2309.08403 | The stratification of ISM properties in the edge-on galaxy NGC 891
revealed by NIKA2 | As the millimeter wavelength range remains a largely unexplored spectral
region for galaxies, the IMEGIN large program aims to map the millimeter
continuum emission of 22 nearby galaxies at 1.15 and 2 mm. Using the
high-resolution maps produced by the NIKA2 camera, we explore the existence of
very cold dust and take possible contamination by free-free and synchrotron
emission into account. We study the IR-to-radio emission coming from different
regions along the galactic plane and at large vertical distances. New
observations of NGC 891, using the NIKA2 camera on the IRAM 30m telescope,
along with a suite of observations at other wavelengths were used to perform a
multiwavelength study of the spectral energy distribution in the interstellar
medium in this galaxy. This analysis was performed globally and locally, using
the advanced hierarchical Bayesian fitting code, HerBIE, coupled with the
THEMIS dust model. Our dust modeling is able to reproduce the near-IR to
millimeter emission of NGC 891, with the exception of an excess at a level of
25% obtained by the NIKA2 observations in the outermost parts of the disk. The
radio continuum and thermal dust emission are distributed differently in the
disk and galaxy halo. Different dusty environments are also revealed by a
multiwavelength investigation of the emission features. Our detailed
decomposition at millimeter and centimeter wavelengths shows that emission at 1
mm originates purely from dust. Radio components become progressively more
important with increasing wavelengths. Finally, we find that emission arising
from small dust grains accounts for ~ 9.5% of the total dust mass, reaching up
to 20% at large galactic latitudes. Shock waves in the outflows that shatter
the dust grains might explain this higher fraction of small grains in the halo. | S. Katsioli, E. M. Xilouris, C. Kramer, R. Adam, P. Ade, H. Ajeddig, P. André, E. Artis, H. Aussel, M. Baes, A. Beelen, A. Benoît, S. Berta, L. Bing, O. Bourrion, M. Calvo, A. Catalano, C. J. R. Clark, I. De Looze, M. De Petris, F. -X. Désert, S. Doyle, E. F. C. Driessen, G. Ejlali, M. Galametz, F. Galliano, A. Gomez, J. Goupy, C. Hanser, A. Hughes, A. P. Jones, F. Kéruzoré, B. Ladjelate, G. Lagache, S. Leclercq, J. -F. Lestrade, J. -F. Macías-Pérez, S. C. Madden, A. Maury, P. Mauskopf, F. Mayet, A. Monfardini, M. Muñoz-Echeverría, A. Nersesian, L. Pantoni, D. Paradis, L. Perotto, G. Pisano, N. Ponthieu, V. Revéret, A. J. Rigby, A. Ritacco, C. Romero, H. Roussel, F. Ruppin, K. Schuster, A. Sievers, M. W. L. Smith, J. Tedros, F. Tabatabaei, C. Tucker, N. Ysard, R. Zylka | 2023-09-15T13:57:17Z | http://arxiv.org/abs/2309.08403v1 | # The stratification of ISM properties in the edge-on galaxy NGC 891 revealed by NIKA2
###### Abstract
Context: As the millimeter wavelength range remains a largely unexplored spectral region for galaxies, the IMEGIN large program aims to map the millimeter continuum emission of 22 nearby galaxies at 1.15 and 2 mm.
Aims: Using the high-resolution maps produced by the NIKA2 camera, we explore the existence of very cold dust and take possible contamination by free-free and synchrotron emission into account. We study the IR-to-radio emission coming from different regions along the galactic plane and at large vertical distances.
Methods: New observations of NGC 891, using the NIKA2 camera on the IRAM 30m telescope, along with a suite of observations at other wavelengths, were used to perform a multiwavelength study of the spectral energy distribution in the interstellar medium in this galaxy. This analysis was performed globally and locally, using the advanced hierarchical Bayesian fitting code, HerBIE, coupled with the THEMIS dust model.
Results: Our dust modeling is able to reproduce the near-IR to millimeter emission of NGC 891, with the exception of an excess at a level of 25% obtained by the NIKA2 observations in the outermost parts of the disk. The radio continuum and thermal dust emission are distributed differently in the disk and galaxy halo. Different dusty environments are also revealed by a multiwavelength investigation of the emission features. Our detailed decomposition at millimeter and centimeter wavelengths shows that emission at 1 mm originates purely from dust. Radio components become progressively more important with increasing wavelengths. Finally, we find that emission arising from small dust grains accounts for \(\sim\) 9.5% of the total dust mass, reaching up to 20% at large galactic latitudes. Shock waves in the outflows that shatter the dust grains might explain this higher fraction of small grains in the halo.
Conclusions: NIKA2 observations have proven essential for a complete characterization of the interstellar medium in NGC 891. They have been critical to separate the dust, free-free, and synchrotron emission in the various emitting regions within the galaxy.
## 1 Introduction
Observations of galaxies at different wavelengths reveal the different ingredients of which these objects consist and the different physical mechanisms that are taking place within them. The millimeter (mm) to centimeter (cm) part of the spectral energy distribution (SED) of a galaxy is a very important but still vastly unexplored wavelength range in which many physical mechanisms manifest their presence.
Many studies have detected excess emission in the submillimeter (submm) to mm that was higher than expected from several current dust models, including contamination by radio continuum, molecular lines, and cosmic microwave background (CMB) fluctuations (e.g., Galliano et al., 2003, 2005, 2018; Zhu et al., 2009; Paradis et al., 2011; Galametz et al., 2012; Remy-Ruyer et al., 2013; Hermelo et al., 2016). In one of the proposed scenarios, this emission originates from the heating of very cold dust grains (T \(<\) 10 K), but other physical mechanisms have also been proposed, such as temperature dependent emissivity and/or magnetic dust grains (see Galliano et al., 2018, for a review). Chang et al. (2020) have concluded that spiral galaxies with a low mass and metallicity are more likely to show submm and/or mm excess. Because high-resolution observations of galaxies at millimeter wavelengths are limited, the interstellar medium (ISM) must be mapped at these wavelengths to detect this excess in different environments and to better constrain its origin.
The radio continuum emission, decomposed into free-free and synchrotron emission, is also present at mm wavelengths. It carries information on the energetics of electrons accelerated by electrostatic or magnetic forces. The synchrotron emission in galaxies is known to be most prominent at radio wavelengths down to 3 cm, with an average slope ranging between 0.7 and 0.75 (see, e.g., Klein et al., 2018, and references therein). This emission dominates the halos of galaxies, where cosmic-ray electrons are transported via diffusion along the extraplanar magnetic fields or via streaming and winds (e.g., Schmidt et al., 2019; Tabatabaei et al., 2022). Free-free emission, on the
other hand, is an ideal tracer of ionized hydrogen clouds (H II regions) in which free electrons scatter off the ions produced in these regions. The lack of measurements at this wavelength range (especially in spatially resolved observations) has limited our knowledge about the coexistence of these emission mechanisms in different environments within galaxies (see, e.g., Mulcahy et al., 2018; Fatigoni et al., 2021).
In addition to the well-known thermal dust and free-free and synchrotron emission components, another peculiar component seen in excess is found to be present in some galaxies at this wavelength range and peaks at \(\lambda\approx 1\) cm (see Bianchi et al. (2022); Ysard et al. (2022), and references therein). This anomalous microwave emission (AME) is detected in both the diffuse medium and in the more compact clouds and can be explained as emission from spinning dust grains (Draine & Lazarian, 1998; Ali-Haimoud et al., 2009; Galliano et al., 2018).
The way in which dust is distributed within the galaxies with respect to the stars has been a topic of research for many years. Radiative transfer models have revealed that the way in which the bulk of the dust is distributed in spiral galaxies can be approximated by an exponential disk with one to two times the stellar disk scale length and half of the stellar disk scale height (Xilouris et al., 1999; Smith et al., 2016; Casasola et al., 2017; Popescu et al., 2000; Nersesian et al., 2020, 2021; Mosenkov et al., 2022). In addition to this component of the diffuse dust, warmer dust that is mostly concentrated in H II regions has to be taken into account. In radiative transfer models, this is often approximated with a thinner dust disk (Popescu et al., 2000; Dasyra et al., 2005; Bianchi, 2008; Nersesian et al., 2020, 2021). Furthermore, far-infrared (FIR) observations (e.g., Bianchi & Xilouris, 2011; Mosenkov et al., 2022) and also ultraviolet (UV) maps of edge-on galaxies (Seon et al., 2014) reveal large amounts of extraplanar dust at distances of up to a few kiloparsecs above the plane of the galaxies.
The geometry of nearby edge-on galaxies is ideal for studying the properties of the ISM not only in their disks, but also at large distances above the galactic plane (e.g., Xilouris et al., 1998; Bianchi, 2008; Baes et al., 2010; De Looze et al., 2012, 2012; Mosenkov et al., 2016). It is very important to quantify how dust grains of different sizes are distributed in the different environments within the galaxies. Dust has a strong impact on galactic observables and substantially affects the star formation activity of a galaxy. Therefore, a detailed knowledge of the dust grain properties (e.g., the size and temperature distribution and the composition) as well as their evolution is crucial in order to model the ISM in galaxies. The general picture is that smaller grains (with typical sizes smaller than 15 Å) are distributed around H II regions, while large grains, which make up the bulk of the dust material, are distributed more evenly in the disk (Galliano, 2022, and references therein). Davies et al. (1998) produced a simple analytical model for assessing the evolution of dust grains within galaxies of various sizes. It was found that large grains \(>0.1\)\(\mu\)m can travel larger distances within the galaxies (within a period of \(10^{9}\) yr) and can be recycled through the halo back to the disk, while small grains do not travel far from the location in which they were formed.
At a distance of 9.6 Mpc (Bianchi & Xilouris, 2011), NGC 891 is the closest edge-on spiral galaxy. It has been extensively observed over a wide range of wavelengths and was interpreted with a variety of models. It was observed in the X-rays using both _Chandra_ and _XMM-Newton_ observatories (Hodges-Kluck & Bregman, 2013; Hodges-Kluck et al., 2018). These observations revealed a hot-gas halo around the galaxy with a lower metallicity (\(Z\sim 0.1\) Z\({}_{\odot}\)) than the disk metallicity. The disk metallicity is thought to be similar to that of our own Galaxy (Alton et al., 2000). This suggests accretion from the intergalactic medium as the origin of the hot halo. FUV and NUV observations of NGC 891 reveal extended emission above the galactic plane with scale heights of \(\approx 1.2-2.0\) kpc (Seon et al., 2014). These observations were interpreted as dust-scattered starlight, indicating the existence of dust grain material at large distances above the plane. Many studies support the scenario that dust is being transported at large distances above the plane through a network of dust chimneys (Howk & Savage, 1997; Alton et al., 2000), some of which are associated with ionized gas structures. Deep optical observations have revealed an extended optical halo in NGC 891 that extends up to a few kiloparsecs above the galactic plane (Bahcall & Kylafis, 1985), as well as very low surface brightness features extending up to \(\sim 40\) kpc from the main body of the disk (Mouhcine et al., 2010).
Guelin et al. (1993) presented the first map of NGC 891 at mm wavelengths (1.3 mm) using the MPIfR 7-channel bolometer array at the _Institut de Radio Astronomie Millimetrique_ (IRAM) 30m telescope at \(12^{\prime\prime}\) resolution, with an RMS of 4 - 6 mJy/beam and observations along the major axis out to \(\pm 10.3\) kpc. These observations were conducted by scanning in azimuth while switching the subreflector/wobbler in azimuth. The resulting map shows a strong correlation of emission between the cold dust material that emits at 1.3 mm and the emission of molecular gas traced by the CO(2-1) line (Garcia-Burillo et al., 1992), but only a weak correlation with H i emission. The strong spatial correlation between dust surface mass densities and molecular gas that is traced by the CO(3-2) line was confirmed more recently by Hughes et al. (2014). They fit modified blackbody (MBB) models to pixel-by-pixel SEDs comprising _Herschel_ (Pilbratt et al., 2010) and JCMT - SCUBA 850 \(\mu\)m (Israel et al., 1999) maps. The derived dust temperatures range between about 17 and 24 K, with an average emissivity index of \(\beta=1.9\). An extended dust halo was discussed by Bianchi & Xilouris (2011), Alton et al. (1998, 2000), and Israel et al. (1999). _Planck_ detected the galaxy at 350, 550, and 850 \(\mu\)m and at 1.38 mm at resolutions \(\gtrsim 5^{\prime}\) (Planck Collaboration et al., 2011).
In this paper, we profit from the _New IRAM Kid Arrays 2_ (NIKA2) camera on the IRAM 30m telescope, based on which we map the critical 1.15 mm and 2 mm emission throughout NGC 891 to study the variations in the dust properties within the disk and beyond. The NIKA2 wavelengths bridge the gap between dust emission at _Herschel_ FIR, submm, and the radio emission extending beyond to cm wavelengths. The observations are part of the _Interpreting the Millimetre Emission of Galaxies with IRAM and NIKA2_ (IMEGIN) large program (PI: S. Madden), a guaranteed-time large program of 200 hours targeting nearby galaxies (distance smaller than 25 Mpc). In Sect. 2 we present the NIKA2 observations and data reduction as well as the ancillary data that are used in the subsequent analysis. Sect. 3 describes the treatment of all the available maps and additional corrections that have to be included so that the maps can be used in the SED modeling analysis, as described in Sect. 4. In the discussion that follows in Sect. 5, we examine the decomposition of the mm/cm wavelength range into dust thermal emission, free-free emission, and synchrotron emission, the spatial distribution of these components and of the various dust components, as well as the properties and the distributions of the small and large dust grains. Sect. 6 summarizes our conclusions.
## 2 Data
### NIKA2 observations and data reduction
The observations were conducted with the NIKA2 camera at the IRAM 30m telescope at Pico Veleta in the Spanish Sierra Nevada at an altitude of 2850 m. NIKA2 (Adam et al. 2018; Calvo et al. 2016; Bourrion et al. 2016) is a focal-plane camera that observes simultaneously in the two continuum wavebands at 1.15 and 2 mm by means of a dichroic mirror, with angular resolutions of 11.1\({}^{\prime\prime}\) and 17.6\({}^{\prime\prime}\), respectively. There are two 1.15 mm arrays, each made up of 1140 superconducting kinetic inductance detectors (KIDs). The 2 mm array is made of 616 KIDs. They fill the 6.5\({}^{\prime}\) diameter instantaneous field of view (FoV) in each case. The transmission bands of the detectors are broad, with a \(\sim 50\) GHz full width at half maximum (FWHM) transmission at both wavelengths. The noise-equivalent flux densities (NEFDs) are 33 mJy s\({}^{1/2}\) at 1.15 mm and 8 mJy s\({}^{1/2}\) at 2 mm. The commissioning campaign was completed in April 2017; the NIKA2 calibration and its performance were presented in Perotto et al. (2020).
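As a rough guide to what these NEFDs imply for map depth, the standard scaling rms = NEFD/\(\sqrt{t}\) can be evaluated directly (the 1 h per-beam integration time below is an illustrative placeholder, not a value from this campaign):

```python
# Map noise from the NEFD, assuming the standard rms = NEFD / sqrt(t)
# scaling; the integration time per beam is an illustrative placeholder.
import math

def map_rms_mjy(nefd_mjy_sqrt_s, t_per_beam_s):
    return nefd_mjy_sqrt_s / math.sqrt(t_per_beam_s)

for band, nefd in (("1.15 mm", 33.0), ("2 mm", 8.0)):
    print(f"{band}: {map_rms_mjy(nefd, 3600.0):.2f} mJy/beam for 1 h/beam")
```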
NGC 891 was observed on 21 October 2019, 11 December 2019, 15 January 2020, and 17 January 2020. A total of 7 hours of telescope time were used to conduct 12 on-the-fly (OTF) scans on NGC 891, alternating the scanning directions of each map between \(\pm 45^{\circ}\) relative to the major axis of the galaxy to minimize residual striping patterns in the maps. The central detectors of each array covered an area of \(23.4^{\prime}\times 10.4^{\prime}\). Scanning was performed with a speed of 40\({}^{\prime\prime}\)s\({}^{-1}\) and a spacing of 20\({}^{\prime\prime}\) between the subscans. The observations were taken under stable weather conditions, with the 225 GHz zenith opacities varying between 0.1 and 0.25 with a mean of 0.16, corresponding to 2.7 mm of precipitable water vapor (pwv). Pointing and focus observations and corrections were carried out on nearby quasars about every one or two hours. Observations of primary and secondary calibrators were conducted throughout the pool observing campaign. Dedicated observations of the sky opacity by scanning in elevations were also conducted, but were not used in the data reduction presented here.
Half of the observations were obtained between 20:00 and 04:00 UT, indicating that the main beams are likely to be stable at their nominal values measured in the commissioning campaigns (Perotto et al. 2020, see their Fig. 12), with half power beam widths (HPBWs) of 11.1\({}^{\prime\prime}\) and 17.6\({}^{\prime\prime}\). The remainder of the scans were observed in the afternoon between 15:00 and 19:00 UT, when the HPBWs often tend to degrade slightly to \(\sim 12.5^{\prime\prime}\) and 18\({}^{\prime\prime}\), respectively.
The observations were coadded and calibrated using the piic/gildas1 software in the final version of the calibration database with the data associated files (DAFs) (Zylka 2013; Berta & Zylka 2022). piic allows for several free parameters to guide the data reduction. They were varied to find the optimum setting, trying to avoid biases and minimize map artifacts. For NGC 891, we ran piic (version of 3/2021) with 40 iterations, using third-order polynomials to fit the subscan timelines, and a threshold for the signal-to-noise ratio of 2 and 4 for 1.15 mm and 2 mm, respectively, above which a pixel was considered as a source and was protected in each iteration. To guide the algorithm, the inner part of the source was masked using an ellipse of \(180^{\prime\prime}\times 20^{\prime\prime}\) rotated by the position angle of NGC 891 and centered on the nucleus. The opacities that were measured every 5 minutes by an on-site taumeter working at 225 GHz were interpolated and used to correct for atmospheric absorption at the NIKA2 wavelengths. The extended emission seen by the 30m error beams at 1.15 mm and 2 mm was filtered out in the process.
Figure 1: NIKA2 maps of NGC 891 at 1.15 mm (top panel) and 2 mm (bottom panel) with a beam size of 11.1\({}^{\prime\prime}\) and 17.6\({}^{\prime\prime}\) (\(\sim 0.5\) kpc and \(\sim 0.8\) kpc, respectively; see the white circles in the bottom-left corner in each panel). The maps are centered at RA\({}_{J2000}=2^{h}22^{m}33^{s}\), DEC\({}_{J2000}=+42^{\circ}20^{\prime}53^{\prime\prime}\) and have been rotated by 67.1\({}^{\circ}\) counterclockwise (the position angle of the galaxy) for illustration purposes. The surface brightness contours correspond to 3.5, 8, 15, and 30 times the RMS. The RMS values are 1.0 mJy beam\({}^{-1}\) and 0.3 mJy beam\({}^{-1}\) at 1.15 mm and 2 mm, respectively.
To estimate the total absolute flux uncertainties of the NGC 891 observations, we took an absolute flux uncertainty of the primary planets of 5% into account (see Perotto et al., 2020, and references therein), added in quadrature with the observed relative RMS scatter on the calibrators as reduced with piic1. The average RMS over the three pool weeks of observations, day and night, weighted with the number of scans per week, was 5.5% at 1.15 mm and 2.1% at 2 mm. This resulted in a total absolute flux uncertainty of 8% at 1.15 mm and 6% at 2 mm (see Table 1).
Footnote 1: [http://www.astro.caltech.edu/](http://www.astro.caltech.edu/)\(\sim\)m/
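The quadrature combination described above can be written compactly (values taken from the text; the small offsets from the quoted 8% and 6% reflect rounding):

```python
# Total absolute flux uncertainty: planet-model uncertainty added in
# quadrature with the observed RMS scatter on the calibrators.
import math

def total_flux_uncertainty(planet_unc, calibrator_rms):
    return math.hypot(planet_unc, calibrator_rms)

print(f"1.15 mm: {total_flux_uncertainty(0.05, 0.055):.1%}")  # ~7.4%, quoted as 8%
print(f"2 mm:    {total_flux_uncertainty(0.05, 0.021):.1%}")  # ~5.4%, quoted as 6%
```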
The final data were projected onto a grid with pixel sizes of \(3^{\prime\prime}\) and \(4^{\prime\prime}\) for 1.15 mm and 2 mm, respectively, to create the final maps (Fig. 1). Both maps show a similar morphology in which the disk extends about \(\pm 13\) kpc along the major axis from the center of the galaxy and about \(\pm 1\) kpc above and below the plane at the 3.5 RMS level. The disk is hardly resolved in the vertical direction. The inner disk shows regions of enhanced emission. The most pronounced region lies in the direction of the bulge of the galaxy, and two secondary fainter regions lie at \(\sim\pm 3\) kpc galacto-centric distance along the major axis. Furthermore, fainter blobs of emission are present at various positions in the disk. The outermost regions of the disk at these wavelengths show indications of a warped morphology. This morphology is also evident in the H i map at the same spatial scales as the NIKA2 maps, and it becomes more prominent at larger radial distances up to \(\sim 24\) kpc from the center of the galaxy. In general, the emission at the NIKA2 wavelengths agrees well with other wavelengths at FIR and submm, even in the outermost regions of the galaxy. A more detailed presentation of the
\begin{table}
Table 1: Properties of the multiwavelength dataset used in this work: telescope (band), central wavelength (\(\mu\)m), resolution, pixel size, luminosity \(\pm\) RMS (L\({}_{\odot}\)), and calibration uncertainty for each band, including WISE (W1, W2, W4), _Spitzer_ (IRAC1, IRAC2, IRAC3, MIPS1, MIPS2), and _Herschel_ (PACS-Blue, PACS-Green, PACS-Red), among others.
\end{table}
morphology of the mm maps and a comparison with other wavelengths follows in Sect. 5.
The observing strategy (fast scanning at \(\pm 45^{\circ}\) relative to the major axis of NGC 891) and data reduction (low-order polynomials fit to the timelines, etc.) were optimized to retrieve the extended mm emission of NGC 891 in the best possible way. The global SED, constructed with the addition of _Planck_ (and _Herschel_) data (discussed in detail in Sect. 4.3), shows that, indeed, NIKA2 retrieves the full emission (see Fig. 2). This is made possible by the edge-on orientation of a relatively thin disk. The FWHMs perpendicular to the major axis are only about \(2^{\prime}\) at the Herschel wavelengths (Bianchi & Xilouris 2011). On the other hand, the spatial scales at which NIKA2 starts to miss extended emission, which is caused by residual drifts of the electronics and atmosphere, are larger than about \(4^{\prime}\) (Ruppin et al. 2018; Keruzore et al. 2020). A detailed discussion of the NIKA2 transfer function as observed in the nearby face-on galaxy NGC 6946 will be presented in Ejlali et al. (in prep.).
### Ancillary data
A set of high-resolution maps of the galaxy is necessary for a resolved analysis. Modern instrumentation at infrared (IR) and submm wavelengths renders the resolved analysis of the dusty ISM feasible. High-resolution radio maps are needed to constrain the SED of the radio continuum emission locally. For the purposes of the current analysis, we compiled data ranging from the near-infrared (NIR) up to cm wavelengths. We used photometric data received from space infrared and submm telescopes: the _Spitzer Space Telescope_ (SST), the _Wide-field Infrared Survey Explorer_ (WISE), _Herschel_, and the _Planck Space Telescope_, as well as ground-based telescopes: the _James Clerk Maxwell Telescope_ (JCMT), the _Very Large Array_ (VLA), the _Arcminute Microkelvin Imager_ (AMI), the _100m Effelsberg Telescope_, the _Owens Valley Radio Observatory_ (OVRO), the _Westerbork Synthesis Radio Telescope_ (WSRT), and the _Green Bank Telescope_ (GBT). A complete list of the telescopes and the detectors is given in Table 1.
Enriching the available dataset with the NIKA2 maps presented in this work (see Sect. 2.1) allows for a high-resolution study of NGC 891 at approximately kiloparsec scales. In order to perform a multiwavelength study, all the maps were convolved to a common spatial resolution. Given the need to keep the resolution as high as possible, we only used maps with a resolution of \(25^{\prime\prime}\) (1.17 kpc) or finer. This excluded the SPIRE - 500 \(\mu\)m map of \(36^{\prime\prime}\). Because the submm part of the spectrum is well represented at many wavelengths, omitting the SPIRE - 500 \(\mu\)m map from the resolved analysis is not expected to introduce any major uncertainties in constraining the SED.
Infrared and submm maps were collected from the DustPedia database2, and radio maps were compiled from the NASA/IPAC Extragalactic Database (NED)3. The basic properties of the maps we used are given in Table 1. In this table, entries with an indication of the resolution and the pixel size refer to the resolved maps that were used in the global and spatially resolved SED analysis of the galaxy. The rest were only used as global measurements in the SED analysis. Along with the broadband multiwavelength measurements, we also used the CO(3-2) line emission map with a resolution of \(\sim 14^{\prime\prime}\) and a pixel size of \(7.3^{\prime\prime}\) that was obtained with the JCMT (Hughes et al. 2014). Last, we used the atomic hydrogen (H i) map at \(\sim 20^{\prime\prime}\) resolution and \(4^{\prime\prime}\) pixel size that was observed with the WSRT telescope (Oosterloo et al. 2007).
Footnote 2: [http://dustpedia.astro.noa.gr/](http://dustpedia.astro.noa.gr/)
Footnote 3: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
## 3 Data processing
We corrected all maps for background emission using the Python Toolkit for SKIRT (PTS; Verstocken et al. 2020) framework. First, we masked all pixels belonging to NGC 891 within an elliptical aperture. Then, we used the mesh-based background estimation from photutils (Bradley et al. 2018) to estimate the large-scale pixel variations around the galaxy. In this method, a particular image is divided into a grid of rectangular regions in which the background is estimated. Following the method of Verstocken et al. (2020), we defined square boxes with a side of six times the FWHM of the image. The background emission and its standard deviation were calculated by interpolating the photutils maps within the central ellipse of the galaxy. Then, the background emission was subtracted from the original image. For the NIKA2 maps, only a small offset was fit and subtracted, accounting for less than 0.6% of the total emission at 1.15 mm and less than 0.1% at 2 mm. After the background was subtracted, error maps were generated for each waveband. For the error maps, we calculated the pixel-to-pixel noise by measuring the deviation of the pixel values from the smooth background. Maps of the galaxy with a signal-to-noise ratio higher than three in each pixel were then created in order to be used in the subsequent SED modeling analysis. Pixels with fewer than two measurements between 60 and 500 \(\mu\)m were masked out in order to ensure that there are sufficient data to constrain the dust component.
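A minimal sketch of this mesh-based background estimation with photutils is given below; the input file name, beam FWHM, and the non-rotated source ellipse are illustrative placeholders rather than values from the pipeline described above (which also rotates the mask by the position angle of the galaxy):

```python
# Mesh-based background subtraction in the spirit of the PTS procedure,
# with boxes of six times the beam FWHM; names and values are illustrative.
import numpy as np
from astropy.io import fits
from photutils.background import Background2D, MedianBackground

data = fits.getdata("ngc891_band.fits")   # hypothetical input map
fwhm_pix = 8                              # beam FWHM in pixels (assumed)
box = 6 * fwhm_pix                        # mesh box size: six times the FWHM

# Mask an ellipse around the galaxy so it does not bias the background mesh.
yy, xx = np.indices(data.shape)
yc, xc = data.shape[0] / 2, data.shape[1] / 2
a, b = 120.0, 30.0                        # semi-axes in pixels (assumed)
mask = ((xx - xc) / a) ** 2 + ((yy - yc) / b) ** 2 <= 1.0

bkg = Background2D(data, box_size=box, mask=mask,
                   bkg_estimator=MedianBackground())
clean = data - bkg.background             # background-subtracted map
noise = bkg.background_rms                # pixel-wise background RMS estimate
```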
The observed NIKA2 data may be contaminated by line emission. We corrected the 1.15 mm NIKA2 emission for the strongest possible contaminant, the CO(2-1) line, following the method described in Drabek et al. (2012) using the observed CO(3-2) map. To do this, we assumed a CO(3-2)/CO(2-1) line ratio of 0.43 making use of the CO(3-2)/CO(1-0) ratio of about 0.3 found by Wilson et al. (2009) in the diffuse ISM of other nearby galaxies and the CO(2-1)/CO(1-0) ratio of 0.7 found in molecular clouds (e.g., Penaloza et al. 2018). The NIKA2 transmission curves are given in Perotto et al. (2020). The CO(2-1) line accounts for 1.8% of the total flux for the whole galaxy, while on local scales, the contamination varies. The highest values are encountered at the center of the galaxy (\(2.5-3.5\%\)) and at the emission peaks at either side of the center (\(1-3\%\)). The CO(2-1) contamination in the rest of the galactic disk ranges from 0.1% to 1.5%.
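The line-ratio bookkeeping behind the adopted CO(3-2)/CO(2-1) value is simply the quotient of the two quoted literature ratios; the full correction additionally folds the line flux through the NIKA2 bandpass following Drabek et al. (2012), which is not reproduced here:

```python
# CO(3-2)/CO(2-1) from the two literature ratios quoted in the text.
r31 = 0.3          # CO(3-2)/CO(1-0), Wilson et al. (2009)
r21 = 0.7          # CO(2-1)/CO(1-0), e.g., Penaloza et al. (2018)
print(f"CO(3-2)/CO(2-1) = {r31 / r21:.2f}")   # 0.43

# With the line flux expressed in continuum units (here a hypothetical
# array `co21_equiv`), the corrected map would be:
#   nika_1mm_corrected = nika_1mm - co21_equiv
```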
Last, because a pixel-by-pixel modeling requires that all the galaxy maps are homogenized in terms of units, resolution, and pixel size, the flux densities were converted into units of monochromatic luminosity (L\({}_{\odot}\) Hz\({}^{-1}\)pix\({}^{-1}\)), assuming a distance of 9.6 Mpc (Bianchi & Xilouris 2011) and the corresponding pixel sizes (see Table 1). Then, we degraded all maps to the 350 \(\mu\)m map resolution of \(25^{\prime\prime}\) and rebinned them to a common grid with a pixel size of \(8^{\prime\prime}\).
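The unit homogenization step can be sketched with astropy (the function name is illustrative; the subsequent convolution to \(25^{\prime\prime}\) resolution and regridding to \(8^{\prime\prime}\) pixels are not shown):

```python
# Converting a map from Jy per pixel to monochromatic luminosity
# (L_sun / Hz / pixel) at the adopted distance of 9.6 Mpc.
import numpy as np
import astropy.units as u
from astropy.constants import L_sun

D = 9.6 * u.Mpc

def jy_per_pix_to_lsun_per_hz(flux_map_jy):
    """L_nu = 4 pi D^2 S_nu, returned in units of L_sun per Hz."""
    s_nu = np.asarray(flux_map_jy) * u.Jy
    l_nu = (4 * np.pi * D.to(u.m) ** 2 * s_nu).to(u.W / u.Hz)
    return (l_nu / L_sun).to(1 / u.Hz)

print(jy_per_pix_to_lsun_per_hz(1.0))   # ~2.9e-5 L_sun / Hz for 1 Jy
```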
## 4 SED modeling
### HerBIE SED fitting tool
In order to infer the physical parameters of the dust content and also of the radio emission, we used the hierarchical Bayesian inference for dust emission (HerBIE) SED fitting code. HerBIE was described in Galliano (2018) and Galliano et al. (2021). Using this code, we were able to reveal the integrated galaxy properties, but also to derive the properties of the galaxy on local scales along the line of sight.
The HerBIE SED fitting code is able to extract information about the basic properties of galaxies using a hierarchical Bayesian approach. This means that the prior distributions are not set before running, but are inferred from the data. This method is more robust than the least-squares approach, for example, in deriving parameters close to the true values and in computing realistic uncertainties. In addition, it is able to eliminate the noise-induced scatter and correlation of the parameters in order to recover the intrinsic scatter and correlation, in contrast to the least-squares method or any other nonhierarchical Bayesian approach. The code samples the probability distribution of the physical parameters by applying a prior distribution controlled by hyperparameters. The probability density function (PDF) of the parameters, which are poorly constrained by the observations, is largely determined by the prior. In contrast to nonhierarchical Bayesian models, the prior we used is itself constrained by the whole distribution of parameters. Poorly constrained parameters are thus inferred from the average distribution of the ensemble of pixels. In total, ten freely varied parameters were considered in our analysis.
### Themis dust model
HerBIE incorporates the advanced dust model THEMIS (Jones et al. 2013, 2017), which consists of core-mantle carbon and silicate grains. The mantles on all grains are photoprocessed, H-poor, aromatic-rich carbon. The size-dependent optical properties of this model were constrained as much as possible by laboratory data. HerBIE offers the opportunity of modeling different physical conditions by taking into account realistic optical properties, sizes, starlight intensity distributions, and stochastic heating (Guhathakurta & Draine 1989). The code takes the color correction and the calibration uncertainties of each band into account (see Table 1). For this study, we used the powerU module, which includes the formulation for a nonuniformly illuminated dust mixture; the starBB module, which computes the direct or scattered starlight; and the radio module, which combines the free-free and the synchrotron emission.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Parameters** & **Global values** \\ \hline M\({}_{dust}\) [M\({}_{\odot}\)] & (\(3.48\pm 0.22\)) \(\times\) 10\({}^{7}\) \\ M\({}_{small\ grains}\) & (\(3.32\pm 0.37\)) \(\times\) 10\({}^{6}\) \\ M\({}_{large\ grains}\) & (\(3.15\pm 0.24\)) \(\times\) 10\({}^{7}\) \\ M\({}_{gas}\)/M\({}_{dust}\) & 261 \(\pm\) 26 \\ T\({}_{dust}\) & (24.03 \(\pm\) 0.34) K \\ L\({}_{star}\) [L\({}_{\odot}\)] & (\(6.80\pm 0.37\)) \(\times\) 10\({}^{10}\) \\ L\({}_{dust}\) [L\({}_{\odot}\)] & (\(3.39\pm 0.13\)) \(\times\) 10\({}^{10}\) \\ \(<\)U\(>\) [\(2.2\times 10^{-5}\) W m\({}^{-2}\)] & 2.63 \(\pm\) 0.95 \\ q\({}_{AF}\) & 0.10 \(\pm\) 0.01 \\ f\({}_{ion}\) & 0.21 \(\pm\) 0.11 \\ \(\beta\) & 1.88 \(\pm\) 0.47 \\ \(\alpha_{s}\) & 0.79 \(\pm\) 0.07 \\ \hline \end{tabular}
\end{table}
Table 2: Global parameters of NGC 891 as inferred by the HerBIE SED fitting code (see Sect. 4). In adapting the THEMIS model to HerBIE, q\({}_{AF}\), the mass fraction of a-C(:H) grains smaller than 15 Å, is an analog to the q\({}_{PAH}\) in the model of Draine & Li (2007), and f\({}_{ion}\), the mass of a-C(:H) grains smaller than 7 Å divided by the mass of those smaller than 15 Å, is an analog to the fraction of ionized PAHs. HerBIE incorporates the ISRF of Mathis et al. (1983) with a mean intensity \(<\)U\(>\), normalized in the solar neighborhood, and it parameterizes the Rayleigh-Jeans spectral index \(\beta\) and the synchrotron spectral index \(\alpha_{s}\).
Figure 2: Spectral energy distributions at different positions throughout the galaxy. The top SED shows the global SED (with HerBIE fitted to the integrated luminosities), and the remaining six SEDs are at the positions A to F that are indicated in the top panel and refer to an area of \(0.37\times 0.37\) kpc\({}^{2}\) each. These positions are centered at \(-5.5,-3.4\), 0.0, 3.0, 7.5, and 11 kpc along the major axis of the galaxy (A, B, C, D, E, and F, respectively) and represent the regions of interest discussed in various places in our analysis. The observed luminosities are indicated with the different symbols for each SED. The respective model (and its uncertainty) is presented with the continuous line. NIKA2 luminosities are indicated in red in all the SEDs. The luminosity values are correct only for the global SED, and the rest of the SEDs are scaled by the number indicated next to each model. The vertical dashed line at 98 \(\mu\)m, the peak wavelength of the IR SED in region A, indicates how the peaks of the rest of the SEDs are placed with respect to this SED. This is an indication of the relative dust temperature difference in the various positions throughout the galaxy.
Hereafter, the parameters of each module are subscripted with these names. Their mathematical formalism is reported extensively in Sect. 2.2 of Galliano (2018) (see also Sect. 3 in Galliano et al. 2021).
### Global SED
HerBIE allows for the use of external parameters as prior knowledge (e.g., gas mass and metallicity) in the SED fitting. Galliano (2018) showed that including external parameters in the prior distribution improves the recovery of potential correlations between the external parameters and the dust properties. In the current study, the maps of the atomic and the molecular hydrogen as traced by the H i and CO(3-2) emission lines were included in the prior distribution. In addition, after several SED trial fits, it was evident that the abundance of the carbonaceous nanoparticles (smaller than 10 nm) had to be reduced by a factor of two in order to achieve a better fit at MIPS - 24 \(\mu\)m and PACS - 70 \(\mu\)m (see Appendix A.2 of Galliano et al. 2011). As a first step, the SED of the whole galaxy was fit using available integrated luminosities from the NIR to radio wavelengths. In this way, the stellar emission, the emission from interstellar dust, but also the radio emission (taking into account the contributions from both the free-free and the synchrotron radiation) were constrained on global scales.
The photometry data (luminosity values) for the global SED fitting are listed in Table 1. The luminosities derived in the current study (labeled "b" in Table 1) were computed within an ellipse centered at the center of the galaxy (see Fig. 1), with major and minor axes of 14 and 2.2 kpc, respectively. This configuration ensured that the bulk of the emission originating from the disk of the galaxy was measured. The median background level and its RMS value were calculated within an elliptical ring of inner major and minor axes of 18.2 and 7 kpc, and outer major and minor axes of 23 and 12 kpc, respectively.
For the NIKA2 bands, we computed \((1.56\pm 0.03)\times 10^{7}\) L\({}_{\odot}\) and \((1.15\pm 0.19)\times 10^{6}\) L\({}_{\odot}\) at 1.15 mm and 2 mm, respectively. Guelin et al. (1993) used the MPIfR 7-channel bolometer array at the IRAM 30m telescope and measured \(5.62\times 10^{6}\) L\({}_{\odot}\) at 1.3 mm, while the _Planck_ observation at 1.38 mm measured galactic emission of \((8.68\pm 0.84)\times 10^{6}\) L\({}_{\odot}\). The total 1 mm luminosities are difficult to compare because they were not obtained at exactly the same effective frequency and with the same bandwidth. The NIKA2 observations detect more of the faint extended disk because their FoV is larger and the RMS is better by a factor of 4 - 6 than in Guelin et al. (1993). With these caveats, the three 1 mm measurements agree reasonably well.
The global SED of NGC 891 is presented in Fig. 2 (the first SED from top; the luminosity values are plotted as open squares). The width of the model SED is representative of the uncertainty of HerBIE when it derives the SED. Overall, the global SED is well constrained by the observations, which is also indicated by the relatively low value of 2.3 of the reduced \(\chi^{2}\) (24 degrees of freedom are considered in this case for a total number of 34 measurements). The best-fit parameters derived with the HerBIE code are presented in Table 2. The quest for a correct determination of the dust mass in a galaxy is a long-standing problem and is mostly related to the availability of the appropriate measurements. First attempts to measure the dust mass of a galaxy were made in the _Infrared Astronomical Satellite_ (IRAS) era, in which observations were only sensitive to the warm dust. This resulted in a strong underestimation of this parameter. For NGC 891, the IRAS-derived dust mass was calculated to be \(8.8\times 10^{6}\) M\({}_{\odot}\) (Devereux & Young 1990), scaled to the adopted distance of 9.6 Mpc. The bulk of the dust could be detected with the advent of higher sensitivity to the cold dust at mm to submm wavelengths through space observatories such as the _Infrared Space Observatory_ (ISO) and _Herschel_, but also the ground-based telescope, JCMT. This resulted in dust masses that were much higher than previously thought. Using SCUBA observations, Alton et al. (1998) computed a dust mass of \(5.1\times 10^{7}\) M\({}_{\odot}\) that was later supported by the inclusion of ISO observations at FIR wavelengths (\(7.2\times 10^{7}\) M\({}_{\odot}\); Popescu et al. 2004). Radiative transfer models that only account for the extinction effects of the stellar light by the dust (and do not take the dust emission into account) have been successful in determining the dust mass quite accurately (\(5.7\times 10^{7}\) M\({}_{\odot}\); Xilouris et al. 1999). Recent studies that mostly exploited the _Herschel_ capabilities (e.g., Hughes et al. 2014; Yoon et al. 2021) reported dust masses for NGC 891 ranging between \(8.5\times 10^{7}\) and \(1.1\times 10^{8}\) M\({}_{\odot}\)
Figure 3: Observed and modeled maps (top and middle panels, respectively) at 1.15 and 2 mm (left and right panels, respectively). The maps are displayed in linear brightness scale. The brightness levels are indicated in the color bars on the right in each panel in units of mJy per 25\({}^{\prime\prime}\) beam. The bottom panels indicate the percentage residuals between observation and model at both wavelengths. All maps are at a common resolution of 25\({}^{\prime\prime}\), as indicated by the circle in the bottom left corner in each panel.
depending on the regions considered and the assumptions for the dust emissivity index when fitting an MBB, with dust temperatures ranging between 17 K and 24 K. With our analysis, which includes the NIKA2 observations, adopts the THEMIS dust model, and uses the hierarchical Bayesian approach implemented in HerBIE, we derive a dust mass of \(3.48\times 10^{7}\) M\({}_{\odot}\) and a dust temperature of 24 K. The derived dust temperature agrees well with previous studies. The dust mass is lower by a factor of \(\sim 2-3\) than the values derived with other methods (e.g., MBBs) that only considered the _Herschel_ data. This can be explained by the different adopted codes and dust grain models (see, e.g., Chastenet et al., 2021).
In order to calculate the total gas mass of NGC 891, we used the WSRT 21 cm line measurements (Oosterloo et al., 2007) for the atomic hydrogen mass (M\({}_{\rm HI}\)) and the JCMT CO(3-2) line measurements (Hughes et al., 2014). We adopted a line ratio of CO(3-2)/CO(1-0) = 0.3 (Wilson et al., 2009) to convert the CO(3-2) line measurements into molecular hydrogen mass (M\({}_{\rm H_{2}}\)). We found \(3.05\times 10^{9}\) and \(3.81\times 10^{9}\) M\({}_{\odot}\) for M\({}_{\rm H_{2}}\) and M\({}_{\rm HI}\), respectively. Multiplying the sum of the hydrogen mass by a factor of 1.36 to account for the contribution of helium and heavy elements, we find a total gas mass of \((9.08~{}\pm~{}0.60)\times 10^{9}\) M\({}_{\odot}\) and a gas-to-dust mass ratio of 261 \(\pm\) 26.
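This bookkeeping can be reproduced directly from the rounded values quoted above (the result differs marginally from the quoted \((9.08\pm 0.60)\times 10^{9}\) M\({}_{\odot}\), presumably because the latter is computed from the unrounded maps):

```python
# Total gas mass and gas-to-dust ratio from the quoted component masses.
M_H2 = 3.05e9    # molecular hydrogen mass [M_sun], from CO(3-2)
M_HI = 3.81e9    # atomic hydrogen mass [M_sun], from the 21 cm line
M_dust = 3.48e7  # dust mass [M_sun], from the HerBIE fit (Table 2)

M_gas = 1.36 * (M_H2 + M_HI)   # 1.36 accounts for helium and heavy elements
print(f"M_gas = {M_gas:.2e} M_sun, gas-to-dust = {M_gas / M_dust:.0f}")
# M_gas = 9.33e+09 M_sun, gas-to-dust = 268 (vs. 261 +/- 26 quoted)
```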
### Spatially resolved SED
In addition to the global SED, we computed the spatially resolved SEDs of individual areas throughout the galaxy. This provided us with information about the variation in the physical parameters in different environments within the galaxy. Examples of the SED models are given at six positions in the galaxy in Fig. 2, and the typical parameters derived with our analysis for these regions are provided in Table A.1 (see Appendix A for details). The pixels were chosen in such a way as to represent different environments, such as the very center of the galaxy (C), the secondary peaks around the center (B, and D), the positions of enhanced brightness farther out in the disk (A, and E), and a position at the very faint end of the disk (F). The reduced \(\chi^{2}\) values for the regions A to F are 8.85, 6.72, 3.16, 12.61, 8.59, and 2.53, respectively, considering seven degrees of freedom (17 measurements). Although the observations compare well with the model, the high residuals found in the outermost regions at mm wavelengths (at 11 kpc from the center; see, e.g., the residuals in region F in Fig. 2) need further investigation. At this position, the data seem to indicate an excess of emission, and the dust emission appears to be underestimated by HerBIE and THEMIS. In order to better visualize areas with excess emission at mm wavelengths, we compared the observed and modeled maps at 1.15 and 2 mm and computed the respective residual maps, which are shown in Fig. 3. The residual maps clearly show that although the observed and modeled maps agree well at the two wavelengths (the residuals are lower than 10% in most places in the galactic disk), in regions in the outer part of the disk (between 10 and 12 kpc), the observation is higher than the model prediction by more than \(\sim 25\%\). This may indicate an additional component of very cold dust in the outer parts of the galaxy. Another obvious point from the SEDs in Fig. 2 is that the dust gradually becomes colder while moving from the center of the galaxy to the edges. The vertical dashed line in the plot, centered at the peak of the dust emission (occurring at 98 \(\mu\)m) at the center of the galaxy (region C), shows the relative displacement of the peaks of the SEDs of the different regions. It indicates that the SED peaks shift farther away from the center peak at longer wavelengths (colder dust temperatures).
## 5 Discussion
### Decomposing the emission in the mm to cm wavelength range
The mm to cm wavelength range is a very complex region of the galactic SED in which emission may originate from a variety of mechanisms. The main emission sources are the thermal emission from dust grains (mainly from large dust grains emitting at low temperatures) and the emission of the ionized gas comprising free-free and synchrotron emission. In addition, other secondary less obvious mechanisms that are difficult to detect may take place, revealing their existence at these wavelengths. These can be very cold dust grains that produce excess emission at mm wavelengths as well as anomalous microwave emission (AME) emitting at cm wavelengths (see Galliano, 2022, for a review). We have carried out a detailed decomposition of the emitting mechanisms to evaluate their relative importance, both spatially and at different wavelengths.
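A schematic version of such a three-component model, with the dust emissivity index and synchrotron slope fixed to the global values of Table 2, is sketched below. This illustrates the functional forms only and is not the HerBIE model itself; the canonical \(-0.1\) optically thin free-free slope and the 250 \(\mu\)m emissivity pivot are assumptions of the sketch, and the three amplitudes would be fit to the observed fluxes:

```python
# Schematic mm-to-cm SED: modified blackbody (dust) + free-free +
# synchrotron, with T = 24 K, beta = 1.88, alpha_s = 0.79 from Table 2.
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def model(nu, A_dust, A_ff, A_sync, T=24.0, beta=1.88, alpha_s=0.79):
    nu0 = c / 250e-6                       # emissivity pivot (assumed)
    dust = A_dust * (nu / nu0) ** beta * planck(nu, T)
    freefree = A_ff * (nu / 1e9) ** -0.1   # optically thin free-free slope
    synchrotron = A_sync * (nu / 1e9) ** -alpha_s
    return dust + freefree + synchrotron

# Frequencies corresponding to 1.15 mm, 2 mm, 5 mm, 2 cm, and 20 cm:
nu = c / np.array([1.15e-3, 2e-3, 5e-3, 2e-2, 0.2])
# A least-squares fit of (A_dust, A_ff, A_sync) at these frequencies then
# yields the fractional contribution of each mechanism per band.
```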
Fig. 4 presents the modeled emission map at 2 mm. The emission was decomposed using the HerBIE model into dust emission, free-free emission, and synchrotron emission. It is worth noticing here that the modeled map at 2 mm is more extended than the observed emission (Fig. 1).
Figure 4: Decomposition of the modeled emission at 2 mm into dust, free-free, and synchrotron emission. The top panel shows the total modeled emission, and the next three panels show the dust, the free-free, and the synchrotron emission maps from top to bottom, respectively. The color bars are in mJy per 25\({}^{\prime\prime}\) beam. The vertical dotted lines indicate the positions of regions A to F (see Fig. 2).
This is because, as described in Sect. 3, the produced modeled maps depend on all the available maps used in the HerBIE fitting code, which include the more extended emission detected at other wavelengths.
On the global scale of the galaxy, 91% of the total 2 mm emission arises from cold dust, 5% from free-free emission, and 4% from synchrotron emission. The bulk of the emission is concentrated along the disk of the galaxy, but a prominent halo component is also present. The dust disk is composed of four prominent components: a central peak (C), two secondary peaks (B and D), and a diffuse halo component that extends to vertical distances of up to 5 kpc beyond the plane of the disk. The enhanced dust emission feature above the center of the galaxy (region C) at a distance of \(\sim 4\) kpc is also interesting. Yoon et al. (2021) also indicated dusty filaments such as this one, rising up to \(\sim 4\) kpc above the galactic plane, using image-sharpening techniques on FIR observations (see their Fig. 9). We find that this feature carries 1.95\(\times 10^{5}\) M\({}_{\odot}\) of dust, 14% of which is in small dust grains (see also Sect. 5.3). The dust halo shows a symmetric elliptical distribution that keeps its shape even at large distances away from the disk of the galaxy.
The free-free emission map shows enhanced emission in the disk, but no obvious emission is detected at the center of the galaxy (region C). In contrast, enhanced free-free emission is seen in regions B and D, but also in regions A and E. The free-free emission at high galactic latitudes seems to follow the general shape of the features in the disk (i.e., a deficit in the central region C and enhancement in regions A, B, D, and E), forming a peanut-shaped halo. The synchrotron emission map, on the other hand, shows the opposite topology with respect to that of the free-free emission, with enhanced emission at the center of the galaxy (region C) that decreases toward regions A, B, D, and E. The synchrotron emission halo maintains the same peanut-shaped distribution as in the case of the free-free emission.
In Fig. 5 we show the relative contribution of the three main mechanisms at five wavelengths (1 mm, 2 mm, 5 mm, 2 cm, and 20 cm), bracketing the cases where dust thermal emission and radio emission dominate the SED (at 1 mm and 20 cm, respectively). The leftmost panels show the fractions of the different emission mechanisms in the disk of the galaxy (see the central lane indicated in the top map), and the rightmost panels indicate the fractions in the halo of the galaxy. These were calculated as the median values within two parallel lanes above and below the disk of the galaxy centered at 3 kpc from the plane of the disk.
In the disk of the galaxy (leftmost panels in Fig. 5), the emission at 1 mm clearly arises mainly from thermal dust (at a \(98-99\)% level), with negligible contamination from free-free emission (\(\sim 1-2\)%) and practically no synchrotron emission. At 2 mm, the free-free emission begins to increase and is more prominent at the secondary peaks around the center (regions B and D; with a contribution of \(\sim 15\)%), while the synchrotron emission begins to be detected at the center (with a contribution of only \(\sim 2-3\)%). At 5 mm, the radio emission becomes the prominent component, and emission from thermal dust is only \(\sim 20-30\)%, depending on the position inside the galaxy. Of the radio emission, the free-free component is the most important, with up to \(\sim 70\)% in the secondary peaks, dropping to \(\sim 39\)% in the center. The interplay between the two radio emission mechanisms is interesting, with synchrotron emission filling in the gaps where free-free emission shows a deficit (e.g., in the center of the galaxy). This becomes more obvious at longer wavelengths (\(\simeq\) cm), where the dust emission is negligible (lower than 1%). As a result, the center of the galaxy is synchrotron dominated (\(\sim\)70%), while the secondary peaks are high in free-free emission (\(\sim 60\) to 70%). At much longer wavelengths (20 cm), synchrotron emission dominates the disk with up to \(\sim 93\)% in the center and with \(\sim 75\) to 80% in the rest of the disk.
Figure 5: Emission components that contribute to the total flux at 1, 2, and 5 mm, 2 and 20 cm (top to bottom panels) in the galactic plane (left panels) and in the halo (right panels). The emission percentages for the dust, free-free, and synchrotron emissions are shown as orange, red, and blue bars, respectively. The actual percentage for each emission mechanism at each position is given in numbers in the plots. The top two panels indicate the positions along the major axis of the galaxy where the decomposition was made. For the halo at \(|\mathrm{z}|\sim 3\) kpc (right panel), the mean values in the two horizontal lanes indicated in the plot were taken into account.
The galaxy halo (rightmost panels in Fig. 5) shows significant differences in the way the different emission mechanisms are distributed compared to the galactic disk. At 1 mm, the difference is negligible because dust emission dominates both regions at \(\sim 99\%\). At 2 mm, the halo of the galaxy is still dominated by dust emission at \(\sim 93\%\), and the free-free and synchrotron emission begin to increase, but at a lower level than in the disk (by \(\sim 4-6\%\) and \(\sim 2-4\%\) for the free-free and the synchrotron emission, respectively). The difference compared to the disk in the distribution along the major axis is also notable: it is flatter in the halo, without obvious central and secondary peaks. At 5 mm, the dust emission in the halo is the weakest component, at a somewhat higher fraction compared to the disk (\(\sim 30-40\%\)). The free-free emission is at the level of \(\sim 30\%\), much lower than in the disk, and its distribution is flat, while the synchrotron emission starts to become the dominant emission and contributes up to \(\sim 47\%\). At 2 cm and 20 cm, synchrotron emission has become the dominant emission in the halo of the galaxy and contributes up to \(\sim 84\%\) and \(\sim 97\%\), respectively. The difference compared to the disk of the galaxy is evident: the contribution of free-free emission is far lower, and the distribution along the major axis is flatter.
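The fractions quoted above follow directly from the modeled component maps. A minimal sketch of this bookkeeping is given below; the arrays, lane positions, and map sizes are placeholders, not the actual HerBIE products.

```python
import numpy as np

# Minimal sketch: per-pixel fractional contribution of each modeled component
# at a given wavelength. The arrays below are random placeholders standing in
# for the HerBIE dust, free-free, and synchrotron maps (same units, e.g. mJy/beam).
rng = np.random.default_rng(0)
dust = rng.random((64, 256))
freefree = 0.10 * rng.random((64, 256))
sync = 0.05 * rng.random((64, 256))

total = dust + freefree + sync
fractions = {"dust": dust / total, "free-free": freefree / total,
             "synchrotron": sync / total}

disk = slice(28, 36)   # placeholder rows around the midplane
halo = slice(0, 8)     # placeholder rows ~3 kpc above the plane
for name, f in fractions.items():
    print(f"{name}: disk {np.median(f[disk]):.1%}, halo {np.median(f[halo]):.1%}")
```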
### Distribution of the warm and cold dust
In the general picture of the dust distribution in spiral galaxies, the cold dust material is distributed diffusely throughout the disk of the galaxy and along the spiral arms, and the warm dust is mainly found near the Hii regions. This is directly visible in the face-on geometry because the disk, spiral arms, and Hii regions can be easily spotted, but they are hard to distinguish in the edge-on configuration. Kramer et al. (2010) and Xilouris et al. (2012) have examined this by studying the distribution of the warm and cold dust in the face-on Local Group galaxy M33, but recent radiative transfer modeling of face-on galaxies also examined the detailed geometry of dust that is diffusely distributed in the disk and also concentrated in compact Hii regions (see, e.g., Verstocken et al., 2020; Nersesian et al., 2020; Viaene et al., 2020).
The edge-on configuration of NGC 891 reveals a dust morphology that is not smoothly distributed, but has obvious enhancements at several positions throughout the disk of the galaxy. This is evident in Fig. 6, in which we plot the MIPS - 24 \(\mu\)m, the PACS - 70 \(\mu\)m, the PACS - 160 \(\mu\)m, and the NIKA2 - 1.15 mm maps. The maps here were convolved to an FWHM of 12'', the limiting resolution of PACS - 160 \(\mu\)m (see Table 1), and they are meant to reveal the morphology of the central plane of the disk and not the extended halo. The emission along the central spine shows the six substructures introduced in Fig. 2, which are marked with vertical dotted lines in the four maps. The diffuse dust emission is present in all bands, and the different wavelengths show varying relative intensities of the disk substructures. Structure C, at the very center of the galaxy, appears in all bands and seems to dominate the bulge region. Structures B and D are very prominent at PACS - 160 \(\mu\)m and NIKA2 - 1.15 mm and become dimmer at shorter wavelengths (especially B, which is very faint at MIPS - 24 \(\mu\)m).
Figure 6: Distribution of different dust components as traced by the MIPS - 24 \(\mu\)m, the PACS - 70 \(\mu\)m, the PACS - 160 \(\mu\)m, and the NIKA2 - 1.15 mm (top to bottom panels, respectively) convolved at an FWHM of 12′′. The profiles along the major axis are plotted in the bottom panel. All profiles are normalized at 6 kpc. The vertical dotted lines indicate the positions of interest introduced in Fig. 2. The colors were scaled in such a way as to detail the disk morphology and not the extended emission above the plane.
Figure 7: Graphical illustration of the different dust emitting regions in NGC 891 in a face-on and an edge-on configuration (top and bottom, respectively). This illustration presents the diffusely distributed dust along the dust disk (blue), the cold dust along the spiral arms and the bulge (yellow), and warm dust emitting from individual Hii regions (orange).
On the other hand, structures A and E are bright at MIPS - 24 \(\mu\)m and become progressively fainter at increasing wavelengths. Because MIPS - 24 \(\mu\)m is mostly sensitive to warmer dust, while PACS - 160 \(\mu\)m and NIKA2 - 1.15 mm trace the cool dust, it might be argued that these are three different types of dust environments, with C, the central bulge region, hosting both warm and cold dust grain material, B and D being dominated by cold dust, and A and E mostly composed of warm dust material.
One scenario that explains the different regions at different wavelengths is presented in Fig. 7 with the schematic of the edge-on view of the galaxy with the predominant dust components and the corresponding face-on orientation of the galaxy. Here, the blue color represents the dust that is smoothly distributed in the disk of the galaxy, yellow is the mostly cold dust distributed along the spiral arms, and orange indicates warm dust in Hii regions. This sketch is only meant to describe the dust emission distribution and is kept in a very simplistic form, even though it is known that more complicated structures (e.g., a central bar; Garcia-Burillo & Guelin 1995) are present. According to this scenario, B and D in Fig. 6 could be the projection of the dust in the spiral arms that we see along the line of sight in the edge-on orientation, while A and E are the accumulation of Hii regions dominating the line of sight, harboring warm dust emitting at MIPS - 24 \(\mu\)m (hardly visible at FIR and mm wavelengths; see Fig. 7). This picture agrees well with the decomposition reported by Xilouris et al. (2012) for M33. These differences can be better visualized in the bottom panel of Fig. 6, where the profiles along the major axis of the galaxy are overplotted, normalized at their values at 6 kpc (this region is free of any obvious enhanced emission in all bands). This plot clearly shows that region A stands out prominently at 24 and 70 \(\mu\)m, but is not significant at longer wavelengths (similar to region E, but much fainter at all wavelengths). Regions B and D are visible at all wavelengths (dimmer at mm wavelengths), while the central region, C, is prominent at all wavelengths, but is especially bright at 70 \(\mu\)m. Finally, region F at the extremes of the disk (11 kpc) clearly shows the relative dominance of very cold dust in this area. The NIKA2 - 1.15 mm emission is stronger than at other wavelengths, resulting in the emission excess relative to the fitted HerBIE model that we discussed in Sect. 4.4 (see also Fig. 3).
### Distribution of small and large dust grains
The properties of the dust grains in HerBIE are described using the THEMIS model. THEMIS is based on laboratory data accounting for the aromatic and aliphatic MIR features with a single population of small partially hydrogenated amorphous carbons, noted a-C(:H). Although largely dehydrogenated, small a-C(:H) are very similar to polycyclic aromatic hydrocarbons (PAHs). The other main component of THEMIS is a population of large a-C(:H)-coated amorphous silicates, with Fe and FeS nano-inclusions. The distribution of aromatic-feature-emitting grains is parameterized in such a way as to distinguish between very small a-C(:H) (VSAC; smaller than 7 Å) and small a-C(:H) (SAC; radius between 7 Å and 15 Å) and between medium and large a-C(:H) (MLAC; with a radius larger than 15 Å). Hereafter, the population of VSAC and SAC grains are referred to as small grains, and MLAC as large grains. The free parameter that controls the mass fraction of small grains is \(q_{AF}=q_{VSAC}+q_{SAC}\), which, multiplied by the total dust mass, provides the dust mass of small grains. The mass of the small dust grains emitting in the NIR/MIR wavelengths is largely constrained by the _Spitzer_ and WISE observations, while the _Herschel_ and IRAM - NIKA2 measurements are necessary to constrain the emission from large grains that constitute the bulk of the dust mass (see the emitting spectral regions of each component in Fig. 1 of Galliano et al. 2021). In the case of NGC 891, the dust mass of small grains is calculated to be M\({}_{small\ grains}=3.32\times 10^{6}\) M\({}_{\odot}\) (see Table 2), which accounts for 9.5% of the total dust mass.
The map of the mass fraction of small grains is presented in Fig. 8. The mass fraction of small grains in most of the disk plane of the galaxy is \(\sim 10\)% (blue), with lower values found in the region of the bulge (region C) of the galaxy (\(\sim 6\)%), while regions of the disk with enhanced emission (regions A, B, D, and E) show increased abundances of small grains with mass fractions reaching up to \(\sim 15\)%. At large distances above and below the plane of the disk (\(>2\) kpc), the mass fraction of small dust grains increases and reaches up to \(\sim 20\)%. For comparison, the mass fraction of small a-C(:H) in the solar neighborhood from the THEMIS model is 17% (Galliano et al. 2021).
Fig. 8 indicates that the small dust grain abundance is low where the interstellar radiation field (ISRF) is intense and vice versa. In order to better quantify this anticorrelation, we plot in Fig. 9 the mass fraction of small grains against the mean ISRF (\(<\)U\(>\)) from the HerBIE fit for every pixel. We focus on the A-F disk regions. Region A seems to be a particular outlier with respect to the rest of the disk regions: the mass fraction of the small dust grains is higher than expected for the average ISRF it is exposed to. This agrees with the previous finding in Sect. 5.2, according to which this region is extremely bright at MIR wavelengths (indicating a large abundance of small grains) with very low cold dust emission at FIR/submm wavelengths. The enhancement of the fraction of the small grains at high ISRF, such as in region A, might thus originate from the fact that some of these grains are shielded in the molecular cocoon of this giant HII region. In this plot, the dust feature at \(\sim 4\) kpc above the center of the galaxy (see Fig. 4) is in the locus between the disk and the halo regions (gray square and error bars), indicating that although it is located in the galactic halo, its properties are more similar to those of the disk. In this area, however, both the mass fraction of small grains and the mean ISRF (\(<\)U\(>\)) have large uncertainties, which are also reflected in the relatively large error bars of this point compared to those calculated for regions A to F.
Fig. 9 clearly shows that although the abundance of small dust grains is anticorrelated with \(<\)U\(>\), the two distinct populations (the disk and the halo) occupy two different loci in this plot. Small a-C(:H) could be destroyed within strong radiation fields.
Figure 8: Mass fraction of the small dust grains (VSAC and SAC) over the total dust mass for NGC 891. The vertical dotted lines indicate the positions of interest introduced in Fig. 2.
A negative correlation between the small grain abundance and \(<\)U\(>\) has already been reported (e.g., Draine et al., 2007; Galliano et al., 2008; Khramtsova et al., 2013; Remy-Ruyer et al., 2015; Galliano et al., 2021). Therefore, a different processing mechanism may be active in the halo of the galaxy. Yamagishi et al. (2012) detected the 3.3 \(\mu\)m feature in the halo of M82, indicating the presence of small PAHs. It is possible that we obtain an excess of small a-C(:H) due to the shattering of larger carbon grains by shocks. The disk population, shown as cyan crosses in Fig. 9, is spread over at least one order of magnitude in \(<\)U\(>\) and a mass fraction of small grains ranging from \(\sim 0.06\) to \(\sim 0.2\), while the halo population, shown as red crosses in Fig. 9, is very localized in the parameter space, with \(<\)U\(>\) having values within a relatively small range and a mass fraction of small grains ranging between \(\sim 0.125\) and \(\sim 0.2\).
The increase in the abundance of small dust grains with galactic latitude (compared to large grains) is shown in Fig. 10, where the vertical profiles crossing the regions A, B, C, D, E, and F are plotted. The profiles of the small dust grains are scaled so that they match the profiles of the large dust grains at their central peaks. These profiles show that the small dust grains at \(|z|>2-2.5\) kpc (depending on the position) follow a flatter distribution compared to the large grains. This difference is most prominent in region C, where the large grains decrease more steeply than the small grains, which is evident already at \(\sim 1\) kpc.
In the disk, the negative correlation between the small grain fraction and the ISRF intensity (Fig. 9) probably results from the progressive destruction of these small grains by UV photons. This is indeed a well-known relation in the nearby Universe. Above the plane, the grains might be expelled from the galaxy via outflows. In these outflows, the shock waves could shatter the grains and thereby replenish the small grain reservoir. The contribution of this outflow to the total emission is negligible, but the edge-on orientation of NGC 891 allows us to see it precisely.
Seon et al. (2014) reported a non-negligible amount of dust (\(3-5\%\) of the total dust mass) located at distances beyond 2 kpc from the midplane, in accordance with Bocchio et al. (2016), who reported \(2-3.3\%\). We find a higher mass fraction of small dust grains (\(8\%\)) for \(|z|>2\) kpc. Furthermore, Bocchio et al. (2016) reported that the relative abundance of small grains with respect to the large grains (m\({}_{SG}\)/m\({}_{LG}\)) varied from 0.06 in the disk to 0.03 at a vertical distance of \(z=2.5\) kpc. With our analysis, we find higher values of m\({}_{SG}\)/m\({}_{LG}\) ranging from \(\sim 0.07\) to 0.15 in the disk and from \(\sim 0.1\) to 0.2 in the halo (at distances larger than 2 kpc from the midplane). Our result that the ratio of m\({}_{SG}\)/m\({}_{LG}\) increases with distance from the midplane differs from previous studies. It is probable that the different modeling techniques used or the different wavelength coverage (e.g., in Bocchio et al., 2016, the large grain emission is constrained by wavelengths only up to 250 \(\mu\)m) can explain these discrepancies.
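The comparison above rests on converting the fitted small-grain mass fraction \(q\) (= M\({}_{small}\)/M\({}_{dust}\)) into the small-to-large grain mass ratio, m\({}_{SG}\)/m\({}_{LG}\) = \(q/(1-q)\). A minimal sketch of this conversion, using the mass fractions quoted in this section, is given below.

```python
# Convert a small-grain mass fraction q = M_small / M_dust into the
# small-to-large grain mass ratio m_SG / m_LG = q / (1 - q).
def small_to_large_ratio(q):
    return q / (1.0 - q)

# Mass fractions quoted in the text for different environments of NGC 891.
for label, q in [("global", 0.095), ("disk (low)", 0.06), ("disk (high)", 0.15),
                 ("halo (low)", 0.125), ("halo (high)", 0.20)]:
    print(f"{label:12s} q = {q:.3f} -> m_SG/m_LG = {small_to_large_ratio(q):.2f}")
```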
## 6 Conclusions
We explored the complex mm wavelength range of the SED of the edge-on nearby galaxy NGC 891. We presented new observations of the galaxy at 1.15 and 2 mm obtained with the NIKA2 camera on the IRAM 30m telescope. Making use of these unique mm data combined with a set of multi-wavelength ancillary data and the HerBIE SED fitting code, we conclude the following:
- By comparing the observed and modeled maps at 1.15 and 2 mm, we find significant evidence of submm/mm dust emission in excess of the dust model in the outermost regions of the galaxy (between 10 and 12 kpc). The NIKA2 fluxes are up to 25% higher than the model-predicted emission.
- Decomposing the emission at mm wavelengths into dust, free-free, and synchrotron, we find different morphologies of each component in the disk and in the halo of the galaxy. Specifically, at 2 mm, the disk emission shows prominent dust and synchrotron emission in the bulge region without obvious free-free emission. Regions of enhanced cold dust emission away from the center of the galaxy do not seem to correlate well with free-free and synchrotron emission features.
Figure 10: Vertical profiles of the mass surface density of small (VSAC and SAC; brown) and large grains (green) at different positions throughout the galactic disk. The profiles of the small dust grains are normalized so that they match the profiles of the large dust grains at z = 0 kpc.
Figure 9: Mass fraction of the small dust grains (VSAC and SAC) as a function of the mean interstellar radiation field \(<\)U\(>\). The cyan and red points are measurements for the individual pixels (along with their uncertainties) for the disk and the halo, respectively (with a crude separation at \(|z|=2\) kpc). The purple symbols are median values and uncertainties within circular apertures with a radius of 0.8 kpc centered at the regions from A to F. The gray square in this plot shows the median value (and the respective uncertainty) of the dust emission feature located at \(\sim 4\) kpc above the center of the galaxy (see Fig. 4) within an elliptical aperture encompassing the feature.
Bright free-free emission features in the disk show deficits in the synchrotron map. On the other hand, the dust halo of the galaxy maintains an elliptical shape similar to that of the disk, even at high galactic latitudes, while the synchrotron and free-free maps show a peanut-shaped halo.
- The emission at mm/cm wavelengths was decomposed in detail, and the relative contributions of the dust, the free-free, and the synchrotron emissions were presented for regions in the disk and in the halo of the galaxy. At 1 mm, the emission comes from the dust (with negligible free-free contamination), and the free-free emission begins to increase at 2 mm, at levels of \(\sim 5-15\%\) depending on the location in the disk or halo. At 5 mm, the radio emission becomes the primary component: the free-free emission dominates in the disk, where it reaches levels of up to \(\sim 70\%\), while both free-free and synchrotron emission dominate in the halo. Dust at 5 mm is still a significant emission source, with a surprisingly larger contribution in the halo than in the disk (\(\sim 30-40\%\) in the halo compared to \(\sim 20-30\%\) in the disk). At 2 cm, only radio emission remains, with the free-free emission dominating the disk and the synchrotron emission the halo, while at 20 cm most of the emission in both the disk and the halo of the galaxy is synchrotron emission.
- Comparing the dust morphology seen in warm and cold dust tracers, we detected enhanced emission features in the galactic disk seen at all wavelengths (e.g., the central bulge region), only at MIR wavelengths, or only at FIR/mm wavelengths. We explained this by a simple scenario in which the cold regions are the product of the projection, in the edge-on geometry, of the cold dust situated along the spiral arms of the galaxy, while the warm dust emission arises from dust in large compact Hii regions.
- Taking advantage of the analysis performed with the HerBIE code, we decomposed the dust emission into two components, one component originating from small dust grains (smaller than 15 Å), and the other originating from the larger grains. We concluded that the mass fraction of the small grains accounts for \(\sim 9.5\%\) of the total dust mass. In the disk of the galaxy, the mass fraction of small dust grains varies from \(\sim 6\%\) in the center of the galaxy to \(\sim 10\%\) in other regions of the disk, reaching up to \(\sim 15\%\) in regions of enhanced dust emission. At large distances above and below the disk (\(>2\) kpc), the mass fraction of the small dust grains increases, reaching up to \(20\%\). The distribution of small grains from the disk into the halo is flatter than the distribution of the large grains, indicating the increase in the relative abundance of this dust population at high galactic latitudes. We speculate that the grains are expelled from the galaxy via outflows in which the shock waves shatter the grains and thereby replenish the small grain reservoir.
The NIKA2 observations at 1.15 and 2 mm of the nearby galaxy NGC 891 have proven to be very important in constraining the SED of the galaxy at mm wavelengths and to investigate the morphology of the cold dust with respect to warmer dust as well as the radio emission. Similar studies, exploiting the NIKA2 observations of nearby galaxies observed within the framework of the IMEGIN program, are expected to further advance our understanding of the ISM properties and of the thermal/radio emission mechanisms that are taking place in galaxies.
###### Acknowledgements.
We would like to thank the referee, Santiago Garcia-Burillo, for the useful comments and suggestions, which helped improve the quality of the manuscript. The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the 3rd Call for HFRI PhD Fellowships (Fellowship Number: 5357). We would like to thank the IRAM staff for their support during the campaigns. The NIKA2 dilution cryostat has been designed and built at the Institut Neel. In particular, we acknowledge the crucial contribution of the Cryogenics Group, and in particular Gregory Garde, Henri Rodenas, Jean Paul Leggeri, Philippe Camus. The NIKA2 data were processed using the Pointing and Imaging In Continuum (PIIC) software, developed by Robert Zylka at the Institut de Radioastronomie Millimetrique (IRAM) and distributed by IRAM via the GILDAS pages; PIIC is the extension of the MOPSIC data reduction software to the case of NIKA2 data. This work has been partially funded by the Foundation Nanoscience Grenoble and the LabEx FOCUS ANR-11-LABX-0013. This work is supported by the French National Research Agency under the contracts "MKIDS", "NIKA" and ANR-15-CE31-0017 and in the framework of the "Investissements d'avenir" program (ANR-15-IDEX-02). This work has benefited from the support of the European Research Council Advanced Grant ORISTARS under the European Union's Seventh Framework Programme (Grant Agreement no. 291294). F.R. acknowledges financial support provided by NASA through SAO Award Number SV2-8203 issued by the Chandra X-Ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. This work was supported by the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP and the Programme National Cosmology et Galaxies (PNCG) of CNRS/INSU with INP and IN2P3, co-funded by CEA and CNES. M.B., A.N., and S.C.M. acknowledge support from the Flemish Fund for Scientific Research (FWO-Vlaanderen, research project GOC4723N).
|
2309.07730 | AIDPS:Adaptive Intrusion Detection and Prevention System for Underwater
Acoustic Sensor Networks | Underwater Acoustic Sensor Networks (UW-ASNs) are predominantly used for
underwater environments and find applications in many areas. However, a lack of
security considerations, the unstable and challenging nature of the underwater
environment, and the resource-constrained nature of the sensor nodes used for
UW-ASNs (which makes them incapable of adopting security primitives) make the
UW-ASN prone to vulnerabilities. This paper proposes an Adaptive decentralised
Intrusion Detection and Prevention System called AIDPS for UW-ASNs. The
proposed AIDPS can improve the security of the UW-ASNs so that they can
efficiently detect underwater-related attacks (e.g., blackhole, grayhole and
flooding attacks). To determine the most effective configuration of the
proposed construction, we conduct a number of experiments using several
state-of-the-art machine learning algorithms (e.g., Adaptive Random Forest
(ARF), light gradient-boosting machine, and K-nearest neighbours) and concept
drift detection algorithms (e.g., ADWIN, kdqTree, and Page-Hinkley). Our
experimental results show that incremental ARF using ADWIN provides optimal
performance when implemented with One-class support vector machine (SVM)
anomaly-based detectors. Furthermore, our extensive evaluation results also
show that the proposed scheme outperforms state-of-the-art bench-marking
methods while providing a wider range of desirable features such as scalability
and complexity. | Soumadeep Das, Aryan Mohammadi Pasikhani, Prosanta Gope, John A. Clark, Chintan Patel, Biplab Sikdar | 2023-09-14T14:07:11Z | http://arxiv.org/abs/2309.07730v1 | # AIDPS:Adaptive Intrusion Detection and Prevention System for Underwater Acoustic Sensor Networks
###### Abstract
Underwater Acoustic Sensor Networks (UW-ASNs) are predominantly used for underwater environments and find applications in many areas. However, a lack of security considerations, the unstable and challenging nature of the underwater environment, and the resource-constrained nature of the sensor nodes used for UW-ASNs (which makes them incapable of adopting security primitives) make the UW-ASN prone to vulnerabilities. This paper proposes an Adaptive decentralised Intrusion Detection and Prevention System called AIDPS for UW-ASNs. The proposed AIDPS can improve the security of the UW-ASNs so that they can efficiently detect underwater-related attacks (e.g., blackhole, grayhole and flooding attacks). To determine the most effective configuration of the proposed construction, we conduct a number of experiments using several state-of-the-art machine learning algorithms (e.g., Adaptive Random Forest (ARF), light gradient-boosting machine, and K-nearest neighbours) and concept drift detection algorithms (e.g., ADWIN, kdqTree, and Page-Hinkley). Our experimental results show that incremental ARF using ADWIN provides optimal performance when implemented with One-class support vector machine (SVM) anomaly-based detectors. Furthermore, our extensive evaluation results also show that the proposed scheme outperforms state-of-the-art benchmarking methods while providing a wider range of desirable features such as scalability and complexity.
Underwater Acoustic Sensor Networks, Intrusion Detection System, Incremental Machine Learning, Concept-drift Detection.
## I Introduction
Water covers more than 70% of the earth's surface and is also home to many natural resources. Most of these natural resources are inaccessible and unexplored. Hence, many countries have invested in monitoring and analysing sensing data observed from underwater environments (deep and shallow water) [1]. In this regard, Underwater Wireless Acoustic Sensor Networks (UW-ASNs) are an emerging technology for underwater exploration [2]. UW-ASNs have various applications, namely, habitat and natural resource exploration, border surveillance, disaster forecasting, navigation control, and safety-and-control. The components of UW-ASNs comprise of several sensor nodes, underwater sink, surface station, and surface sink (a.k.a. buoy) [3]. Each component coordinates and shares information to carry out its tasks.
UW-ASNs can vary in architecture depending on the use case, such as static two-dimensional UW-ASNs, static three-dimensional UW-ASNs, and three-dimensional networks. The mode of communication used for UW-ASNs is acoustic waves, which modulate a carrier wave (e.g., in amplitude or frequency) to transmit data. Acoustic waves can cover long distances underwater (more than 100 km) [4]. Figure 1 depicts an UW-ASN. UW-ASNs face challenges due to hardware limitations of nodes, acoustic propagation, and the unstable underwater environment. UW-ASNs are deployed in constantly evolving data environments (e.g., due to sensor ageing, underwater currents, etc.). Sensors and actuators in UW-ASNs are resource constrained (e.g., limited energy resources, computational power, and storage capacity). The acoustic waves are affected by high and variable path loss, Doppler spread, latency due to the propagation delay, limited bandwidth, low data rates, noise, and the low propagation speed.
The existing routing protocols [5] for UW-ASNs, which help the nodes communicate and share information, are Hop-by-Hop dynamic addressing-based (H2-DAB) [6], geographic and opportunistic routing with depth adjustment-based topology control for communication recovery (GEDAR), energy-optimised path unaware layered routing protocol (E-PULRP) [7], power-efficient routing (PER) and vector-based forward (VBF). The H2-DAB routing protocol uses a dynamic addressing scheme among the sensor nodes to communicate and does not require localisation information. GEDAR implements a greedy, opportunistic mechanism to route data packets for communication. E-PULRP is a hybrid routing protocol that consists of layering and communication phases. In the PER protocol, a fuzzy logic inference system is used for forwarding packets towards the sink node, and a forwarding tree-trimming approach is adopted to limit the spread of forwarded packets.
The routing protocols of UW-ASNs are exposed to various attacks related to confidentiality, integrity, and availability (CIA) of actuated and sensed information [8]. The nature of these routing threats can be categorized as passive or active [9]. The passive routing attacks include eavesdropping attacks and traffic analysis attacks. The active attacks include denial of service (DoS) attacks, repudiation attacks and routing attacks. DoS attacks are the most dangerous and challenging to detect [10]. Some existing DoS-based attacks can be classified as blackhole, grayhole, flooding, scheduling, Sybil/wormhole, and low-rate flooding attacks [11]. In case of a blackhole attack, the compromised sensor node, which acts as the forwarding agent, drops the collected packets, increasing packet loss significantly. A grayhole attack is a variant of the blackhole attack, where the compromised
node strategically forwards or drops the packets to minimise the chance of getting exposed. A flooding attack floods the child node with packets sent from a malicious node or group of malicious nodes (Distributed Denial of Service attack) to reduce the bandwidth and exhaust energy.
Several defence mechanisms have been developed to maintain the CIA in UW-ASNs. The system's confidentiality is achieved by implementing the Cipher Text Stealing (CTS) encryption technique [12]. Intrusion Detection Systems (IDSs) and Intrusion Prevention Systems (IPSs) achieve the integrity and availability of the system. Although an IDS aims to detect and identify abnormal activities, it does not mitigate detected anomalous activities. Hence, researchers have developed IPSs to not only detect intrusions but also prevent the compromised nodes from taking any further actions. IDSs are mainly classified along four axes: data source (host-based or network-based); detection technique (signature-based or anomaly-based); architecture (centralised or de-centralised); and environment (wired, wireless or ad-hoc network). Network-based IDSs gather data by analysing the network traffic, whereas host-based IDSs deploy a host in the form of agents, which runs on a monitored device. Signature-based IDSs match the pattern of the monitored network's data to a database of known attack signatures for classification. However, this approach fails when a zero-day (unseen) attack occurs. An anomaly-based IDS establishes a normal baseline set dynamically and monitors the network to compare observations against the baseline for anomalous behaviour. This type of IDS can handle unknown attacks better; however, it increases the chances of false positives or alarms. Some IDSs use machine learning/deep learning and statistical methodologies to train a model to classify and detect attacks by processing and analysing the data from a network [13]. Such IDSs lack adaptivity to the changes in the evolving underwater environment. In this paper, we develop a new adaptive IPS for UW-ASNs to secure them against blackhole, grayhole, and flooding attacks. We propose an adaptive IDS and IPS suitable for the targeted UW-ASNs using a hybrid model that combines an adaptive random forest (ARF) with concept drift detection and a one-class support vector machine (OCSVM). This hybrid adaptive model outperforms the existing standard ML-based IDS and IPS solutions.
### _Desirable Properties_
Considering the discussed challenges in UW-ASN, any defensive system should address the following Desirable Properties (DPs):
* **DP1 (Zero-day intrusion detection):** The defense system is expected to detect known and previously unseen intrusions accurately.
* **DP2 (Adaptive):** The defense system is expected to be adaptive against the evolving underwater environment to efficiently manage the imbalanced streaming data.
* **DP3 (Out-Of-Distribution data detection):** The defence system should be able to detect the time and place (when and where) of shifts in data distribution (a.k.a. concept-drift) in the evolving data stream.
* **DP4 (Scalable):** The defence system is expected to be generalised and maintain its performance against various scaled underwater network infrastructures (when the UW-ASN is scaled up with more sensor nodes).
* **DP5 (On-the-fly detection):** The defence system is expected to detect threats on the fly (because detecting the threats in real-time makes the system efficient in preventing the adversary from taking further actions).
Fig. 1: Decentralised architecture of the proposed solution.
* **DP6 (Lightweight):** The defence system is expected to be lightweight and computationally efficient since the UW-ASN sensor nodes are resource constrained.
* **DP7 (Intrusion Prevention System):** The defence system should be integrated with a self-threat prevention system to prevent the adversary from taking further actions.
### _Motivation and Contribution_
To the best of our knowledge (as shown in Section II), existing IDSs in the literature cannot ensure all of the above-mentioned desirable properties (_DP1-DP7_) for UW-ASNs. Moreover, the occurrence of an intrusion is, in general, a rare incident (with respect to the volume of normal observations over the entire monitoring period), which makes the streaming data imbalanced and skewed toward the majority class. Hence, such an imbalanced streaming data environment poses an additional challenge for any learning and monitoring agents.
In order to mitigate the existing challenges discussed above, an incremental security system is required to adapt to changes in data distribution (a.k.a. concept drifts) on-the-fly. Since it is not feasible for any security system to obtain and accommodate the entire set of normal and malicious activities, the development of an incremental and generalised security system is required to accurately classify out-of-distribution (OOD) data. In this context, due to the lack of adaptivity in the existing IDSs for the evolving environment of UW-ASNs, we propose a robust and adaptive IPS to protect UW-ASNs against blackhole, grayhole and flooding attacks. The proposed IPS aims to achieve all desirable properties.
This paper makes the following contributions:
* A new robust hybrid incremental IDS to detect UW-ASN routing attacks. The proposed scheme can detect and adapt to shifts in data streams (a.k.a. concept drifts) on-the-fly to maintain its detection performance.
* The _first_ incremental cryptography-based IPS, which is lightweight and isolated against an external adversary. The proposed IPS can avoid the negative impacts of false positives.
* The _first_ solution to identify and mitigate grayhole, flooding, and blackhole routing attacks in UW-ASN environments.
* A generated dataset† with 16\(\sim\)64 nodes for UW-ASNs, for the research community to use as a benchmark (UW-ASN dataset). Footnote †: Our datasets and source codes are available in the link below: drive.google.com/drive/folders/11d6f2AOkqqdrj57A00FAlg7V10gzTzz
* Benchmarked the performance of the proposed scheme against most of the state-of-the-art machine learning classifiers.
### _Organisation_
The rest of the paper is organised as follows. In Section II, we discuss the related works. In Section III, we present the preliminaries. Section IV presents the proposed scheme. Section V describes our implementation and evaluation details. Section VI concludes the paper and lists possible future work in this area. The organization of the paper is illustrated in Fig. 2.
## II Related Work
The lack of an efficient IDS, along with external challenges such as the unstable underwater environment and the resource constraints of UW-ASN sensors, makes UW-ASNs prone to vulnerabilities. Various IDSs have been proposed to secure UW-ASNs. However, most fail to achieve all the desirable properties required to build an efficient IDS for UW-ASNs. Table I compares related works to our approach with respect to the desirable properties.
The use of mobile agents to detect sinkhole attacks in wireless sensor networks (WSNs) is proposed in [14]. The proposed mechanism uses mobile agents to make sensor nodes aware of their trusted neighbours so that they do not listen to traffic coming from malicious sensor nodes. Instead of the traditional client-server-based processing, it leverages mobile agents that traverse the network either periodically or on demand, bringing the functionality to the data rather than the other way around. The proposed mechanism increases scalability and keeps track of sensor network resource constraints, making it lightweight and energy efficient. This scheme can ensure the scalability (_DP4_) property and is resource efficient, but it is limited to sinkhole attacks and built for WSNs. Also, this scheme fails to cover all the desirable properties required by an efficient IDS. It fails to detect zero-day intrusions (_DP1_) and increases the detection time of a malicious node (_DP5_), as the agent needs to reach the sensor nodes to detect an intrusion.
To address the properties lacking in [14], researchers in [15] leverage deep hybrid learning (DL) models across benchmarked datasets and analyse their performance. Individual DL classifiers implemented were Multi-layer Perceptron, CNN, and LSTM. Hybrid classifiers implemented were Autoencoder and Temporal Convolutional Network, Autoencoder and Bi-Directional Recurrent Neural Network, Autoencoder and Bidirectional LSTM, and CNN and LSTM.
Fig. 2: Organization of the paper.
Even though the DL-based security mechanism is adaptive (_DP2_), the models cannot detect intrusions on-the-fly (_DP5_), as the DL-based models require substantial training time. The model needs to be retrained on the entire dataset every time there is a change in the data distribution, thereby increasing the training time. In addition, the models are not lightweight (_DP6_). Also, the datasets considered are specific to IoT networks, on which the models have been trained and evaluated. The datasets considered are also dated and thus do not cover recent attacks. Since the model does not implement out-of-distribution data detection (_DP3_), it would require continuous monitoring and re-training whenever the underlying data distribution changes, thereby increasing the cost of implementing such models in an evolving data stream.
To handle the problem of Out-Of-Distribution data detection (_DP3_), a network-based IDS (NIDS) has been proposed in [16] using Spark's master-slave node architecture. The data nodes containing the data perform feature selection in different slave nodes using RV coefficient-based hybrid feature fusion, which is designed by incorporating the wrapper, class-wise information gain, and Canberra distance. The unique features selected then undergo a process of data augmentation using oversampling on the slave node. The intrusion detection classification and training are done in the master node, which uses the DRN classifier. This approach has the potential to handle out-of-distribution data detection (_DP3_) and is specific to Internet applications. The proposed security mechanism can achieve on-the-fly detection (_DP5_) of intrusions. However, the datasets used are not specific to UW-ASNs, and no intrusion prevention mechanism has been proposed.
Another approach for designing an efficient IDS combines packet-based and flow-based intrusion detection techniques, which makes the IDS hybrid by considering both the traffic flow and packet analysis. The authors in [17] propose an IDS that uses Dempster-Shafer theory (DST-IDS). DST-IDS is an ensemble method that takes traffic flow information and the first \(N\) packets as input. Both the traffic flow predictions and packet-based IDS are fused to get the final detection result. A data collection and processing tool was proposed to reduce the processing time for massive data volumes. In addition, it was designed to work with heterogeneous data distributions to provide scalability to the DST-IDS. Though this technique stands out well regarding scalability (_DP4_) and on-the-fly detection (_DP5_), it fails to achieve other desirable properties.
In summary, the papers mentioned above introduce different techniques for developing an effective IDS. However, each system lacks some desirable properties, as depicted in Table I. Also, these IDSs are explicitly built for wireless terrestrial networks. They do not consider the challenges of UW-ASNs and the acoustic mode of communication, making them unsuitable for UW-ASN environments. To the best of our knowledge (as shown in Table I), there is no IDS with all the above-mentioned desirable properties (_DP1-DP7_) available for UW-ASNs. We believe that our proposed scheme will contribute to filling this gap and be used by the research community for future works related to UW-ASNs.
## III Preliminaries
This section introduces the background concepts relevant to the paper. To begin with, we provide an introduction to the various routing protocols employed in UW-ASN. Specifically, we focus on the Vector Based Forward protocol, which we utilized in our experimental setup. Subsequently, we outline the diverse attacks conducted against UW-ASN. Then, we present a comprehensive overview of different types of IDS and highlight their significance in safeguarding UW-ASNs. Additionally, we introduce the concept of incremental machine learning, which serves as a fundamental component of our proposed system. Lastly, we introduce various techniques for detecting concept drift and underscore their importance in the context of our research.
#### Iii-1 **Routing Protocols**
A routing protocol selects a suitable route or path for the data to travel from its source sensor node to its destination sensor node. In an UW-ASN environment, the data must be sent from the sensor node to the surface node. The surface node sends the data to the surface station or base station. This connects the underwater sensor nodes to other networks. To achieve reliable communication, the design of a routing protocol becomes critical. Among the existing routing protocols for UW-ASN (e.g., H2-DAB, E-PULRP, GEDAR, VBF and PER) [5], VBF is an efficient routing protocol in underwater environments as it considers the energy constraints and node mobility issues. VBF is a position-based routing approach [18], ensuring robust, scalable and energy-efficient routing while addressing the node mobility issue. Only the nodes close to the routing vector from the source node to the destination (a.k.a. the vector) forward the packets (as shown in Fig. 3). Therefore, only a tiny fraction of the nodes communicate, preserving their energy resources and reducing network overhead. The VBF protocol also implements a self-adaptation algorithm that adjusts the forwarding policy based on local information, taking the density of the neighbourhood nodes into account for energy efficiency. This paper uses VBF as the routing protocol [18].
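To make the "routing pipe" idea concrete, the sketch below tests whether a candidate node lies within a given distance of the source-to-sink vector, which is the criterion VBF uses to decide whether a node forwards a packet. The coordinates and pipe radius are hypothetical illustration values, not parameters from the paper's simulations.

```python
import numpy as np

def in_routing_pipe(node, source, sink, pipe_radius):
    """Return True if `node` lies within `pipe_radius` of the source->sink
    routing vector (the VBF forwarding criterion). Positions are 3D
    coordinates in meters (hypothetical units)."""
    p, a, b = map(np.asarray, (node, source, sink))
    ab = b - a
    # Project the node onto the routing vector, clamped to the segment.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return np.linalg.norm(p - closest) <= pipe_radius

# Example: a node 50 m off-axis forwards only if the pipe is wide enough.
src, dst = (0, 0, -1000), (2000, 0, -1000)
print(in_routing_pipe((1000, 50, -1000), src, dst, pipe_radius=100))   # True
print(in_routing_pipe((1000, 500, -1000), src, dst, pipe_radius=100))  # False
```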
#### Iii-2 **Attacks against UW-ASNs**
UW-ASNs have many applications, such as marine ecosystem monitoring, international water border surveillance, underwater equipment monitoring, and natural calamity detection. They are often deployed in unprotected and hostile environments, which makes them vulnerable to attacks.
Fig. 3: Vector-based forward (VBF) routing.
DoS attacks are a common and dangerous threat for UW-ASNs; they interrupt the service and void its functionality, negatively impacting the network's availability. UW-ASNs inherit various types of routing attacks from Wireless Sensor Networks (WSNs), namely blackhole attacks and grayhole attacks. Furthermore, the adversary can generate various flooding attacks (e.g. Low-Rate DoS and Distributed DoS) in this network [11]. Attacks that have been considered as part of our experiments are:
* Blackhole attack: A blackhole attack occurs when an intermediary re-programs a sensor node or a set of sensor nodes to drop the packets instead of forwarding them to its neighbouring node [11].
* Grayhole attack: Grayhole attacks are a type of DoS attack which implements selective forwarding. The compromised sensor node selectively drops some packets and forwards the remaining packets to the destination nodes [11].
* Flooding attack: Flooding, as the name suggests, is a type of DoS attack which targets a sensor node and increases the traffic in that node. The high traffic volume can be sent by a single malicious node or a group of nodes to decrease the overall performance of the UW-ASN [11].
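A compact sketch of these three forwarding behaviours is given below; the drop probability and amplification factor are illustrative values only, not the attack parameters used in the NS2 simulations.

```python
import random

def forward_decision(attack, drop_prob=0.5):
    """Illustrative per-packet behaviour of a compromised forwarder.
    `drop_prob` for the grayhole case is a made-up illustration value."""
    if attack == "blackhole":
        return False                          # drop every packet
    if attack == "grayhole":
        return random.random() >= drop_prob   # drop selectively
    return True                               # benign node: always forward

def flooding_rate(base_rate_pps, amplification=20):
    """Flooding attack: malicious node(s) inflate traffic toward the
    victim; the amplification factor is illustrative."""
    return base_rate_pps * amplification

print(sum(forward_decision("grayhole") for _ in range(1000)))  # ~500 forwarded
print(flooding_rate(base_rate_pps=5))                          # 100 pps at the victim
```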
To generate the dataset, these attacks were implemented by changing the vector-based-forward routing protocol provided by the underwater-sensor patch in Network Simulator 2 (NS2). 16 sensor nodes were considered as part of the UW-ASN for generating the dataset, and 64 sensor nodes for generating the out-of-distribution dataset used to test desirable property DP3.
#### Iii-B3 **Intrusion Detection System**
An IDS is used to detect intrusions or threats. An IDS can either be Host-Based IDS or NIDS.
* Host-Based IDS: A host-based IDS protects a particular sensor node against internal and external threats. This type of IDS is capable of monitoring network traffic to and from the sensor node. It gets insight into the host's internal state. However, its visibility is limited to only the host sensor node.
* Network-Based IDS: A NIDS solution is designed to monitor an entire network, giving it visibility into all traffic flowing through it. This broader viewpoint provides the ability to detect threats in a wider area. However, they lack visibility inside the sensor nodes.
IDSs' detection techniques can also be classified as signature-based detection, anomaly-based detection, and hybrid detection.
* Signature-based detection: Signature-based IDS (SIDS) recognise threats with the help of signatures of known threats. This technique reduces false positives but is vulnerable to zero-day vulnerabilities.
* Anomaly-based detection: Anomaly-based IDSs (AIDS) build a boundary around normal behaviour. All future behaviour is compared against this 'normal' baseline. This technique helps detect zero-day vulnerabilities but increases false positives.
* Hybrid detection: A Hybrid IDS (HIDS) uses a combination of both signature-based and anomaly-based detection techniques. This helps the HIDS to reduce false positives and detect zero-day vulnerabilities.
#### Iii-B4 **Incremental Machine Learning**
Due to the evolving data environment of UW-ASN, the dataset does not remain static. Such a dataset involves data streams that can change over time. A model trained over such a dataset will yield poor results whenever the characteristics of the dataset change. Incremental machine learning can continuously learn from the changing stream of data while maintaining previously learned knowledge. Real-world applications use it to learn from data as it arrives over time in ever-changing environments [19]. This enables our proposed IDS to adapt (DP2) and to accurately identify known and previously unseen intrusions. Incremental machine learning techniques used in our experiment are:
* **Adaptive Random Forest (ARF) Classifier**: Adaptive Random Forest Classifier is an ensemble algorithm which implements a group of Hoeffding trees. The final classification is computed by taking the votes from all the Hoeffding trees, where the class with the most votes becomes the final classification result. To handle drifts in the evolving data environment, a concept drift detection algorithm is coupled with the adaptive ensemble algorithm [20]. We connected the ADWIN concept drift detection algorithm with ARF for our experiments. The concept drift detection algorithm provides the algorithm with a 'warning' signal when the drift is initially detected and a 'flag' signal when the drift becomes significant. As soon as a 'warning' is detected, the ARF trains a set of background Hoeffding trees, replaces the foreground forest when the signal changes to 'flag', and stores the existing forest to be used in case the current scenario in the data environment reappears. ARF induces diversity through re-sampling and randomly selecting subsets of features for node splits, which effectively helps the algorithm to handle class imbalance.
* **Hoeffding Adaptive Tree (HAT) Classifier**: HAT is an incremental decision tree that uses ADWIN concept drift detection to monitor the performance of branches on the tree. HAT adaptively learns from data streams that change over time and replaces the branches with new branches when their accuracy decreases [21].
TABLE I: Related Works

| Scheme | Approach | Threat | DP1 | DP2 | DP3 | DP4 | DP5 | DP6 | DP7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [14] | Novel agent-based approach to detect sinkhole attacks in wireless sensor networks (WSNs) | SA | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| [15] | Deep learning benchmark for IoT IDS | DN | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| [16] | RV coefficient-Exponential Sea Lion Optimisation-enabled Deep Residual Network (HOSLO) | DN | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| [17] | Hybrid IDS based on Dempster-Shafer evidence theory | DDoS | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Proposed scheme | Hybrid IDS using one-class SVM and incremental adaptive random forest with ADWIN | BA, GA and FA | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

DN: Different Network-technology. DP1: Zero-day intrusion detection. DP2: Adaptive. DP3: Out-Of-Distribution data detection. DP4: Scalable. DP5: On-the-fly detection. DP6: Lightweight. DP7: Intrusion Prevention System. BA: Blackhole Attack. GA: Grayhole Attack. FA: Flooding Attack. SA: Sinkhole Attack. DDoS: Distributed Denial of Service Attack.
#### Iii-B5 **Concept Drift**
Due to the evolving data environment of UW-ASN, the properties of the dependent variables change over time. A model built using these dependent variables will decay in accuracy if the change becomes significant. The changes can be classified as sudden/abrupt, incremental/gradual, or recurring/seasonal. An example of concept drift is when the underlying data distribution changes over time due to some external influence. If the data engineering process is not strictly static, the changing underlying patterns in the data can be captured over time [22]. To handle concept drift effectively (DP3), different drift detection algorithms can be employed [23]\({}^{1}\):
Footnote 1: The other concept drift detection algorithms are discussed in Appendix D of the Supplementary Material.
* **Adaptive Windowing (ADWIN) [24]**: ADWIN maintains a window of size \(W\) that grows dynamically while the data pattern remains stable and shrinks when a change is detected. Based on the distribution of the data, the algorithm then attempts to find two subwindows of \(W\) (\(w_{0}\) and \(w_{1}\)) whose averages differ significantly.
* **Drift Detection Method (DDM) [25]**: This method is based on the Probably Approximately Correct (PAC) learning model premise that a learner's error rates decrease as more samples are analysed, as long as the data distribution remains stationary. Changes are detected if the algorithm detects an increase in error rate that exceeds a calculated threshold.
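As a concrete illustration, the snippet below sketches how a stream of 0/1 prediction errors can be monitored with ADWIN. It uses the `river` library; the exact class and attribute names (`drift.ADWIN`, `drift_detected`) are assumptions that differ slightly across `river`/`scikit-multiflow` versions, and `error_stream` is a hypothetical iterable of prediction errors.

```python
# Hedged sketch: monitoring a stream of 0/1 prediction errors with ADWIN.
from river import drift

adwin = drift.ADWIN(delta=0.001)  # delta matches the setting used in Sec. V-B
for i, err in enumerate(error_stream):  # error_stream: hypothetical 0/1 errors
    adwin.update(err)
    if adwin.drift_detected:
        print(f"ADWIN flagged a drift at instance {i}")
```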
## IV Proposed Scheme
The proposed scheme employs a hybrid adaptive IDS (Section IV-A) and a cryptographically secured IPS (Section IV-B) for UW-ASN. Together, this forms the Adaptive Intrusion Detection and Prevention System (AIDPS). Algorithm 1 shows our proposed scheme.
### _Adaptive Intrusion Detection System_
The proposed IDS combines an anomaly-based IDS with a signature-based IDS, which makes it hybrid.
#### IV-A1 Anomaly-based IDS
Due to their resource-constrained nature, underwater nodes cannot accommodate computationally complex algorithms. Hence, in the proposed scheme, we develop an anomaly-based IDS monitoring agent to detect anomalous activities. A one-class support vector machine (OCSVM) is used to detect abnormal behaviour in the incoming data. OCSVM learns a decision function using its semi-supervised algorithm and classifies new data as similar or different to the training set [26]. The anomaly-based IDS classifies the new data stream as normal if it lies within the decision boundary, or anomalous if it lies outside. Figure 5 shows the OCSVM classification of normal and abnormal behaviour on the data points; the dataset was projected onto two dimensions using t-distributed Stochastic Neighbor Embedding (t-SNE) dimensionality reduction [27]. The parameters used by OCSVM are \(\nu\), \(\gamma\) and the _kernel_. The parameter \(\nu\) specifies the expected fraction of anomalies. The kernel parameter selects the kernel type, which maps the data to a higher-dimensional space where the SVM draws the decision boundary. The parameter \(\gamma\) sets the kernel coefficient. For our experiment, the decision boundary for OCSVM has been set with \(\nu=0.01\) and \(\gamma=0.3\). The outcome of OCSVM is bipolar, where -1 denotes the outliers (shown in red) and +1 the inliers (shown in white), predicted using the OCSVM's decision boundary. Section V-B (Experiment 1) discusses the evaluation of the OCSVM model on the UW-ASN dataset that we have generated in this article. The details of the UW-ASN dataset generation and feature engineering have been discussed in Section V-A.
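A minimal sketch of this stage is shown below, using scikit-learn's `OneClassSVM` with the \(\nu\) and \(\gamma\) values quoted above; `X_train_normal` and `X_stream` are hypothetical feature matrices, and the t-SNE call mirrors the projection used for Fig. 5.

```python
# Hedged sketch of the anomaly-based IDS stage (semi-supervised OCSVM).
from sklearn.svm import OneClassSVM
from sklearn.manifold import TSNE

ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma=0.3)
ocsvm.fit(X_train_normal)           # train on normal traffic only
labels = ocsvm.predict(X_stream)    # +1 = inlier (normal), -1 = outlier (anomaly)

X_2d = TSNE(n_components=2).fit_transform(X_stream)  # 2-D view as in Fig. 5
```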
#### IV-A2 Hybrid IDS
To accurately detect previously known attacks, it is essential to have a signature-based IDS. The signature-based IDS, along with the concept drift detection algorithm, can handle the incoming streaming underwater data and accurately find known signatures of attacks. When coupled with an anomaly-based IDS, this also helps to detect unknown or previously unseen attacks. The entire system works in tandem as a hybrid IDS, which can achieve the desirable properties DP1, DP2 and DP3.
The anomalous data points from the first (OCSVM-based) anomaly-based IDS are sent to the ARF classifier, an ensemble of Hoeffding trees (HT). ARF is an incremental machine learning algorithm widely used for evolving data streams. ARF, in turn, uses the ADWIN and kdqTree concept drift detection algorithms, which implement error-rate-based and unsupervised detection techniques, respectively. The algorithm is implemented with the prequential (test-then-train) evaluation technique, where ARF is trained over a subset of the data and then predicts while the ADWIN drift detector reports no drift. ARF also uses ADWIN for the drift warning signal; if a drift warning is seen, a new ARF (ARF1), comprising an ensemble of new Hoeffding trees, is trained in the background. ARF1 replaces ARF as soon as ADWIN provides a drift signal, and predictions are then taken from ARF1. This makes the proposed IDS adaptive (DP2). Section V-B (Experiment 2) discusses the evaluation of the ARF model on the UW-ASN dataset.
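The fragment below sketches this prequential loop with an adaptive random forest; it is written against the `river` API, whose class and parameter names (`forest.ARFClassifier`, `drift_detector`, `warning_detector`) are assumptions that differ across library versions, and `stream` is a hypothetical iterable of (features, label) pairs.

```python
# Hedged sketch of the signature-based IDS: ARF with ADWIN, evaluated prequentially.
from river import drift, forest, metrics

model = forest.ARFClassifier(
    n_models=50,                                # ensemble of Hoeffding trees
    max_features="log2",                        # features considered per split
    drift_detector=drift.ADWIN(delta=0.001),    # drift ("flag") signal
    warning_detector=drift.ADWIN(delta=0.001),  # drift warning signal
)
acc = metrics.Accuracy()
for x, y in stream:                  # stream: hypothetical (features, label) pairs
    y_pred = model.predict_one(x)    # test first ...
    if y_pred is not None:
        acc.update(y, y_pred)
    model.learn_one(x, y)            # ... then train
```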
Figure 4 shows the architecture of our adaptive IDS. The OCSVM anomaly detector takes the incoming data stream as input and gives its predictions. The data stream showing normal behaviour is allowed to pass and considered part of the final prediction. Data streams showing anomalous behaviour are then passed through an ARF classifier, which implements an ensemble of Hoeffding trees. In our proposed scheme, ARF is coupled with two drift detector algorithms: ADWIN and kdqTree. If no drift is detected, the ARF algorithm's predictions are sent to OCSVM for the final prediction. However, when ADWIN raises a drift warning signal, ARF is updated by rebuilding its ensemble of trees in the background. During this stage, the predictions from the older ARF algorithm are still considered part of the final prediction. When ADWIN raises a clear drift signal, the old ARF is replaced with the newly updated ARF, whose predictions are then used as part of the final prediction. The normal instances are further sent to a second anomaly-based detector, which implements an ensemble of OCSVMs (bagging of OCSVM) to check for abnormality [28]. The estimated boundary is sensitive in practice as it needs to uncover zero-day (unseen) attacks. To achieve this, we use an ensemble of eleven OCSVM anomaly detectors, and the bagging concept is used to decide whether the instance is an outlier. The advantage of the bagging OCSVM is that it tightens the decision boundary and reduces false positives. The final decision algorithm implements simple logic to compute the final predictions, sketched below. It takes the inputs from the initial OCSVM for instances showing normal behaviour, together with the predictions from the signature-based adaptive random forest classifier and the final ensemble of OCSVMs, and outputs the final prediction as either normal or attack. If the final prediction classifies an instance of the incoming data stream as normal, no further action is taken. However, if an instance is classified as an attack, the system notifies the cryptographically secured IPS to take further action. Section V-B (Experiment 4) discusses the evaluation of the proposed hybrid solution on the UW-ASN dataset.
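One plausible reading of this decision logic is sketched below; the function name and the majority-vote threshold are illustrative assumptions, not the exact implementation.

```python
# Hedged sketch of the final decision algorithm of the hybrid IDS.
def final_decision(ocsvm_pred, arf_pred, bag_preds):
    """ocsvm_pred: +1 (normal) / -1 (anomalous) from the first OCSVM;
    arf_pred: class label from ARF (0 = normal, otherwise a known attack);
    bag_preds: +1/-1 votes from the ensemble of eleven OCSVMs."""
    if ocsvm_pred == 1:        # first OCSVM says normal -> pass through
        return "normal"
    if arf_pred != 0:          # ARF matched a known attack signature
        return "attack"
    # ARF says normal: double-check with the bagging OCSVM ensemble
    outlier_votes = sum(1 for v in bag_preds if v == -1)
    return "attack" if outlier_votes > len(bag_preds) // 2 else "normal"
```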
### _Proposed Intrusion Prevention System_
In the proposed system, after the detection of anomalous activity by the central IDS (CIDS) on the surface buoy (discussed in Section IV-A), the proposed intrusion prevention mechanism is triggered. In this context, the surface buoy (border router) finds its distance from the suspicious nodes using the Received Signal Strength Indicator (RSSI). Upon detection of anomalous activity, the CIDS shares a new key with the suspicious node whose RSSI lies within the registered range. Hence, the malicious node cannot impersonate a legitimate node (e.g., by conducting a clone ID or Sybil attack) and will be isolated. Next, our proposed IPS (placed on the surface buoy) isolates potential malicious nodes and then establishes a new session key (\(SK_{new}\)) by replacing the old session key (\(SK_{old}\)) between a node and the surface buoy, as shown in Fig. 6. For the establishment of the session key, we consider that each node has a pair of keys generated through the Elliptic Curve Diffie-Hellman Key Exchange (ECDHE). Consider that the surface buoy holds \(\{K_{Pub}^{BR},K_{Pr}^{BR},K_{Pub}^{N}\}\) and a node holds \(\{K_{Pub}^{BR},K_{Pr}^{N},K_{Pub}^{N}\}\), where \(K_{Pub}^{X}\) and \(K_{Pr}^{X}\) represent the public key and private key of entity \(X\). As shown in Fig. 6, the surface buoy uses a _zero-signal_ (\(Z_{signal}\)) to inform the node that an intrusion is detected and there is a need to reset the key. In this regard, the surface buoy generates a nonce \(\Psi_{1}\) and a timestamp \(T1\), and computes \(M_{1}=Enc(Sign(Z_{signal})_{Pr}^{BR},\Psi_{1},T1)_{Pub}^{N}\). Finally, the surface buoy constructs a message \(\{M_{1}\}\) and sends it to the node. Upon receiving the message \(\{M_{1}\}\), the node first decrypts \(M_{1}\), checks the timestamp
Fig. 4: Adaptive intrusion detection system: proposed scheme.
Fig. 5: One-class support vector machine (OCSVM) on a t-SNE projection (\(\nu\) and \(\gamma\) assigned to 0.01 and 0.3 respectively).
\(T1-T1^{*}\stackrel{?}{\leq}\Delta T\), and then verifies the zero signal (\(Z_{signal}\)). Next, the node generates a random seed (\(\varepsilon\)), a random number \(\Psi_{2}\), a timestamp \(T2\) and a confirmation message \(M_{Conf}\), and computes \(\Delta=Hash(Enc(\varepsilon,\Psi_{2},T2,RSSI)_{Pub}^{BR},SK_{Old})\) and \(Sign(M_{Conf},\Delta)_{Pr}^{N}\). The node then constructs a message \(M_{2}=(Enc(\varepsilon,\Psi_{2},T2,RSSI)_{Pub}^{BR},Sign(M_{Conf},\Delta)_{Pr}^{N})\) and sends it to the surface buoy. After receiving the message \(\{M_{2}\}\), the buoy decrypts \(M_{2}\), verifies the timestamp \(T2-T2^{*}\stackrel{?}{\leq}\Delta T\) and obtains \((\varepsilon,\Psi_{2},RSSI)\). Next, the surface buoy verifies the decrypted parameters such as \(RSSI\) and \(M_{Conf}\), and then verifies the signature. If the verification succeeds, both the surface buoy and the node compute the new secret key \(SK_{new}=KDF(\Psi_{1},\Psi_{2},\varepsilon)\). Upon successful generation of \(SK_{new}\), the node and the surface buoy resume secure communication with each other. This helps to achieve DP7 of the desired properties listed in Section I-A.
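For illustration, the sketch below derives \(SK_{new}=KDF(\Psi_{1},\Psi_{2},\varepsilon)\) from an ECDHE shared secret with Python's `cryptography` package. The choice of curve (SECP256R1) and of HKDF-SHA256 as the KDF are assumptions, since the scheme above does not fix concrete primitives.

```python
# Hedged sketch of session-key re-establishment: ECDHE shared secret + HKDF.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

buoy_priv = ec.generate_private_key(ec.SECP256R1())   # {K_Pr^BR, K_Pub^BR}
node_priv = ec.generate_private_key(ec.SECP256R1())   # {K_Pr^N,  K_Pub^N}

psi1, psi2, eps = os.urandom(16), os.urandom(16), os.urandom(16)  # nonces + seed

# Both sides compute the same ECDH shared secret from the exchanged public keys.
shared = node_priv.exchange(ec.ECDH(), buoy_priv.public_key())
sk_new = HKDF(algorithm=hashes.SHA256(), length=32,
              salt=psi1 + psi2, info=eps).derive(shared)  # SK_new = KDF(Psi1, Psi2, eps)
```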
## V Implementation and Evaluation
In this section, we present the results of experiments conducted to evaluate the performance and effectiveness of the proposed scheme.
### _Dataset Generation and Feature Engineering_
This paper used the Network Simulator (NS-2) and Aqua-Sim to generate the dataset [29]. We considered different network topologies (e.g., 16\(\sim\)64 nodes, different numbers of malicious nodes and positions of these malicious nodes). Different topologies of the UW-ASN were created, and the vector-based forwarding (VBF) routing protocol was used. Table II shows the simulation parameters and their values as part of the experiment. The simulation script was then modified to accommodate attacks like blackhole, grayhole and flooding. The blackhole attack was generated by targeting a forwarding sensor node in the network to drop its packets. For the grayhole attack, a selected forwarding node was made to drop a randomly chosen percentage of packets. Three malicious nodes were chosen for the flooding attack to flood a parent node in the network. The execution of the simulation scripts results in the generation of a trace file. Four scripts were used to generate the dataset. The first script (T1) contained a 16-node topology, and the second script (T2) was used for the flooding attack on the 16-node topology. The third script (T3) was used to generate the OOD dataset, which has a 64-node topology, and the fourth script (T4) was used for the flooding attack on the 64-node topology. For the generation of the normal dataset, T1 was used. T1, along with the blackhole attack's algorithm in the vector-based routing script, was used to generate the dataset for the blackhole attack. T1, along with the grayhole attack's algorithm in the vector-based routing script, was used to generate the dataset for the grayhole attack. T2 was used to create the dataset for the flooding attack. The trace files generated were converted to a structured CSV format using Python regular expressions. Individual datasets were then generated for the normal scenario, blackhole attack, grayhole attack and flooding attack, and merged to form a master dataset.
Two master datasets, d1 and d2, were generated. The d1 dataset has the independent variables along with class labels
Fig. 6: Intrusion prevention system.
as:
* **0**: Belonging to the normal class.
* **1**: Belonging to the blackhole class.
* **2**: Belonging to the grayhole class.
* **3**: Belonging to the flooding class.
Blackhole instances were labelled 1 when the receiver was a malicious node. Grayhole instances were labelled 2 when either the sender or the receiver was malicious. Flooding instances were labelled 3 when the sender was a malicious node. All other scenarios were labelled 0, i.e., normal.
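A hypothetical pandas sketch of these labelling rules is given below; the column names (`attack_type`, `sender`, `receiver`) and the `malicious_nodes` set are illustrative placeholders, not the dataset's actual schema.

```python
# Hedged sketch of the d1 labelling rules (0 normal, 1 blackhole, 2 grayhole, 3 flooding).
def label_row(row, malicious):
    if row["attack_type"] == "blackhole" and row["receiver"] in malicious:
        return 1
    if row["attack_type"] == "grayhole" and (
        row["sender"] in malicious or row["receiver"] in malicious
    ):
        return 2
    if row["attack_type"] == "flooding" and row["sender"] in malicious:
        return 3
    return 0  # normal

# df: hypothetical DataFrame built from the parsed trace files.
df["Attack_Cat"] = df.apply(label_row, axis=1, malicious=set(malicious_nodes))
```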
The d2 dataset has the independent variables along with class labels as:
* **0**: Belonging to the normal class.
* **1**: Belonging to all the attack classes clubbed together, i.e., blackhole, grayhole and flooding attacks.
As part of the feature engineering process, we first derived five features: Sender RTR (Response Time Reporter), Sender MAC, cumulative count, RTR ratio, and MAC ratio. Here, the RTR feature is used to monitor the network performance and resources by measuring response times and the availability of the UW-ASN devices. The RTR ratio is the ratio of the Sender RTR to the Cumulative_Count when the trace type is 'RTR'. The MAC ratio is the ratio of the Sender MAC to the cumulative count when the trace type is 'MAC'. The cumulative count is the incremental count for each trace type, i.e., 'RTR' and 'MAC'. To check the overall feature importance, we used a random forest. Figure 7 shows the feature importance given by the random forest [31]. The Variance Inflation Factor (VIF) was also used to check for multi-collinearity. Both techniques were used to remove the features that did not add value. Packet Information3 and Col have high VIF values and minor feature importance; thus, these features were removed. The final features are given in Table III. Our developed dataset contains 29157 instances, 16 independent features, and one dependent feature (a.k.a. target or label).
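The sketch below illustrates how the derived ratio features and the VIF screen could be computed; the raw-trace column names follow Table III, but their exact spellings in the generated CSV are assumptions.

```python
# Hedged sketch of feature derivation and the multi-collinearity (VIF) check.
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# df: hypothetical DataFrame parsed from the NS-2/Aqua-Sim trace files.
df["Cumulative_Count"] = df.groupby("Trace_Type_Cat").cumcount() + 1
df["RTR_Ratio"] = (df["Sender_RTR"] / df["Cumulative_Count"]).where(df["Trace_Type_Cat"] == "RTR")
df["MAC_Ratio"] = (df["Sender_MAC"] / df["Cumulative_Count"]).where(df["Trace_Type_Cat"] == "MAC")

X = df.select_dtypes("number").dropna()
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
# Features with high VIF and low random-forest importance were dropped.
```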
### _Performance Evaluation and Discussion_
To measure the effectiveness of the proposed scheme, we conducted eleven experiments to justify the contributions and to evaluate the extent to which the proposed scheme is capable of addressing each of the desirable properties discussed in Section I-A.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Parameters** & **Values** \\ \hline Simulator & Aqua-Sim [30] \\ \hline Number of nodes & 16\(\sim\)64 \\ \hline Number of malicious nodes & 5 \\ \hline Channel & UnderwaterChannel \\ \hline Propagation & UnderwaterPropagation \\ \hline MAC & BroadcastMac \\ \hline Initial Energy & 10000 Watt \\ \hline Antenna & OmniAntenna \\ \hline Filters & GradientFilter \\ \hline Max packet in ifq & 50 \\ \hline X dimension of the topography & 100 meters \\ \hline Y dimension of the topography & 100 meters \\ \hline Z dimension of the topography & 100 meters \\ \hline Datarate & 0.1 (1 packet/100 milliseconds) \\ \hline AdhocRouting & Vector Based Forwarding (VBF) \\ \hline Hop by Hop & 1 \\ \hline Frequency & 25 kHz \\ \hline Simulation Time & 600 seconds \\ \hline
\end{table} TABLE II: Simulation Parameters
#### V-B1 **Experiment 1** (Anomaly Detectors)
As discussed in Section IV-A1, we deploy the anomaly-based IDS on the underwater monitoring sensor nodes to detect anomalous network communications. In this regard, we benchmark the performance of the proposed scheme under different one-class classifiers (a.k.a. outlier detectors), such as OCSVM, LOF and isolation forest (IF), which are compared in the sketch after this list:
* **One-class support vector machine (OCSVM):** OCSVM is a semi-supervised (it trains on normal behaviours) anomaly or outlier detector which creates a decision boundary, inside which it classifies data points as normal or inliers and outside which it classifies data points as abnormal or outliers. For hyper-parameter tuning of OCSVM, we took different values of \(\gamma\) ranging from 0.1 to 0.5 and different values of \(\nu\) ranging from 0.004 to 0.05. \(\gamma\) decides how much curvature is needed in the decision boundary. The parameter \(\nu\) fine-tunes the trade-off between overfitting and generalisation. The optimal parameter value for \(\nu\) is 0.01 (meaning that at most 1% of the training samples are treated as outliers by the decision boundary) and for \(\gamma\) is 0.3 [32].
* **Local outlier factor (LOF):** LOF is also a semi-supervised anomaly/outlier detector and can detect novelties in the dataset. It compares the local density of each data point to that of its neighbours. Data points with higher densities are classified as normal, whereas those with lower densities are classified as anomalies or outliers. For hyper-parameter tuning of LOF, we used different values for contamination ranging from 0.0001 to 0.5.
* **Isolation forest (IF):** IF is also a semi-supervised anomaly detection algorithm similar to random forest, as it is built using decision trees. It works on the principle that data points that are easy for the trees to isolate are considered anomalies, whereas data points that are relatively difficult to isolate are normal. It considers a random subset of the data and a random subset of the features. For hyper-parameter tuning of IF, we used different values for contamination ranging from 0.0001 to 0.5. The contamination value is a float in the range (0, 0.5] [33]. The contamination parameter simply controls the threshold for the decision function when a scored data point should be considered an outlier.
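A side-by-side sketch of the three detectors is given below; the 0.01 contamination value for LOF and IF is one point from the tuning ranges above, chosen here only for illustration, and the feature matrices are hypothetical.

```python
# Hedged comparison sketch of the semi-supervised outlier detectors.
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest

detectors = {
    "OCSVM": OneClassSVM(kernel="rbf", nu=0.01, gamma=0.3),
    "LOF": LocalOutlierFactor(novelty=True, contamination=0.01),
    "IF": IsolationForest(contamination=0.01, random_state=0),
}
for name, det in detectors.items():
    det.fit(X_train_normal)            # fit on normal behaviour only
    preds = det.predict(X_test)        # +1 inlier / -1 outlier
    print(name, (preds == -1).mean())  # fraction flagged as anomalous
```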
We also evaluated our proposed framework under the Adaptive One-Class Support Vector Machine (AOCSVM) and QuantileFilter. AOCSVM performed well in true positive and false negative ratios, scoring 0.959 and 0.04, respectively. However, it had a low true negative ratio and a high false positive ratio. The evaluation result of QuantileFilter was almost comparable to that of AOCSVM. OCSVM outperforms all the other anomaly detectors with the RBF kernel, \(\nu=0.01\) and \(\gamma=0.3\). The AIDS using OCSVM was evaluated, and it showed an accuracy of 0.9374, recall or true positive ratio (TPR) of 1.0, F1-score of 0.9662, AUC of 0.9374, false negative ratio (FNR) of 0.0, true negative ratio (TNR) of 0.9374, precision of 0.935 and false positive ratio (FPR) of 0.0626. Though LOF and IF show good TPR and FNR, they have a low TNR and a high FPR. OCSVM, on the other hand, shows promising results across all the metrics. The specific network characteristics, data patterns, and the nature of the underwater environment in the UW-ASN influence the performance of the different anomaly detectors.
To decentralise the proposed scheme, we suggest placing
\begin{table}
\begin{tabular}{|c|p{142.3pt}|} \hline
**Feature** & **Description** \\ \hline Packet\_Status\_Cat & Categorical column to show the packet category (r: receive, s: send, d: drop) \\ \hline \multirow{2}{*}{
\begin{tabular}{c} Sender\_MAC \\ \end{tabular} } & Numerical column which calculates the Sender MAC value \\ \hline ET & Numerical column which gives the value of ET at different instances. \\ \hline Packet\_Information2\_Cat & Categorical column which contains the application packet’s informations. \\ \hline Cumulative\_Count & Numerical column which calculates the incremental count for each trace type i.e., ’RTR’ and ’MAC’. \\ \hline Sender\_RTR & Numerical column which calculates the Sender RTR value \\ \hline MAC\_Ratio & Numerical column which computes the ratio of the Sender MAC to the Cumulative Count when the trace type is ’MAC’. \\ \hline ER & Numerical column which gives the value of ER at different instances. \\ \hline RTR\_Ratio & Numerical column which computes the ratio of the Sender RTR to the Cumulative Count when the trace type is ’RTR’. \\ \hline Energy & Numerical column which gives the value of the energy at different instances. \\ \hline Time & Time at which the application packet was sent \\ \hline Sent\_Packet\_Number & Numerical column which tells the packet number sent at different instances. \\ \hline Dst\_Port\_Cat & Categorical column which represents the destination port \\ \hline Src\_Port\_Cat & Categorical column which represents the source port \\ \hline Flag\_Cat & Categorical column which descelves the flag type. \\ \hline Trace\_Type\_Cat & Categorical column which descelves the trace type (RTR and MAC) \\ \hline Attack\_Cat & Categorical column which represents different scenarios (0: Normal, 1: Blackhole attack, 2: Grayhole attack, 3: Flooding attack) \\ \hline \end{tabular}
\end{table} TABLE III: Engineered Features
Fig. 7: Feature importance by random forest.
the AIDS in specific underwater sensor nodes, preferably on parent sensor nodes. Figure 1 shows the decentralised architecture of the proposed scheme. HIDS in the figure represents the hybrid intrusion detection system, a combination of anomaly- and signature-based IDS. Our proposed solution combines OCSVM for the AIDS and adaptive random forest for the SIDS. The HIDS gets the global view of the network and is placed at the surface station (the surface buoy). The AIDS passively monitors (a.k.a. monitoring in promiscuous mode) its neighbouring nodes' network communications. Although our proposed scheme is network-based and performs in silent mode (a.k.a. ghost mode), to avoid a single point of failure, we consider a decentralised monitoring and intrusion detection approach by distributing the AIDS agents in the UW-ASN.
#### V-B2 **Experiment 2 (Incremental Machine Learning Classifiers)**
As discussed in Section IV-A2, our proposed scheme adopts an incremental machine learning classifier to classify observations in evolving data streams. Hence, the proposed IDS can adapt incrementally. In this regard, our proposed scheme uses ARF for this task. The ARF algorithm (an ensemble of Hoeffding trees) introduces diversity through re-sampling and randomly selecting a subset of features in the streaming data environment. Moreover, using a concept drift detector (discussed in Section III-B5), it replaces weak learners (a.k.a. losers) with new learners (Hoeffding trees) constructed on recent concept drifts.
The evaluation of the signature-based IDS using the ARF model shows an accuracy of 0.9774, a precision of 1.0, an AUC of 0.9887, a true negative ratio of 1.0, and a false positive ratio of 0.0. The recall and true positive ratio were both 0.9775, and the false negative ratio was 0.0225. Factors that contributed to this experiment are the incremental machine learning algorithm and the selection of appropriate parameters. In addition, diversity and feature selection also played a crucial role in driving the results.
#### V-B3 **Experiment 3 (Concept Drift Detection)**
As discussed in Section IV-A2, the proposed scheme also adopts unsupervised and supervised concept drift detectors (kdqTree and ADWIN, respectively). In this regard, the proposed scheme implements a data-distribution-based central unsupervised concept drift detection algorithm for its solution. kdqTree has been implemented to understand when, how, and where the drift has occurred. It internally uses KL divergence to detect the drifts. This information is then fed to the ARF algorithm, which uses a supervised error-rate-based concept drift warning detector and drift signal detector, i.e., ADWIN. ADWIN detects when the drift has occurred and uses this information to update the ARF, which has been trained on a new random subset of data and features. We evaluated ARF with different supervised concept drift detection algorithms like ADWIN, DDM, EDDM, HDDM_A, HDDM_W, Kolmogorov-Smirnov Windowing (KSWIN) and Page-Hinkley [23]. ARF and ADWIN (\(\delta=0.001\)) outperformed the other concept drift detection algorithms. For the kdqTree, we used a window size of 500, \(\alpha=0.05\), a bootstrap sample of 500 and a count bound value of 50 [34]. Figure 8 shows the visualisation of the drift detected by kdqTree on the UW-ASN streaming data. The figure plots the streaming data instances along the x-axis with respect to the other variables in the UW-ASN dataset along the y-axis. A drift is detected by kdqTree at instance 841 (marked as a red line on the x-axis), and the blue window thereafter represents the drift induction window.
Figure C.14 and Fig. C.15 of Appendix C in the Supplementary Material show the concept drift detected in the blackhole and flooding attacks, respectively, using ADWIN. Though ADWIN was not successful in detecting the grayhole attack, it was noticed by other concept drift detection algorithms like HDDM_A, KSWIN and PageHinkley. Fig. C.16, Fig. C.17 and Fig. C.18 from Appendix C of the Supplementary
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Model** & **Accuracy** & **AUC** & **Recall** & **Precision** & **F1-Score** & **Kappa** & **MCC** & **TT (See)** \\ \hline \hline
**Light Gradient Boosting Machine (lightgbm)** & **0.9997** & **1.0000** & **0.9997** & **0.9997** & **0.9997** & **0.9997** & **0.9997** & **0.4180** \\ \hline Random Forest Classifier (rf) & 0.9994 & 1.0000 & 0.9994 & 0.9994 & 0.9994 & 0.9991 & 0.9991 & 0.690 \\ \hline Extra Trees Classifier (et) & 0.9993 & 1.0000 & 0.9993 & 0.9993 & 0.9993 & 0.9991 & 0.9991 & 0.2200 \\ \hline Decision Tree Classifier (dt) & 0.9988 & 0.9992 & 0.9988 & 0.9988 & 0.9988 & 0.9984 & 0.9984 & 0.0680 \\ \hline Gradient Boosting Classifier (gbc) & 0.9980 & 1.0000 & 0.9980 & 0.9980 & 0.9980 & 0.9973 & 0.9973 & 5.7930 \\ \hline K Neighbors Classifier (knn) & 0.9912 & 0.9973 & 0.9912 & 0.9914 & 0.9912 & 0.9883 & 0.9884 & 0.8060 \\ \hline Logistic Regression (lr) & 0.9800 & 0.9961 & 0.9800 & 0.9807 & 0.9798 & 0.9733 & 0.9737 & 1.8310 \\ \hline SVM - Linear Kernel (svm) & 0.9725 & 0.0000 & 0.9725 & 0.9737 & 0.9720 & 0.9633 & 0.9639 & 0.9960 \\ \hline Naive Bayes (nb) & 0.9613 & 0.9902 & 0.9613 & 0.9634 & 0.9602 & 0.9484 & 0.9497 & 0.0500 \\ \hline Ridge Classifier (ridge) & 0.8973 & 0.0000 & 0.8973 & 0.9091 & 0.8945 & 0.8630 & 0.8681 & 0.0270 \\ \hline Linear Discriminant Analysis (lda) & 0.8953 & 0.9802 & 0.8953 & 0.9074 & 0.8923 & 0.8604 & 0.8657 & 0.0470 \\ \hline Ada Boost Classifier (ada) & 0.7225 & 0.9344 & 0.7225 & 0.6554 & 0.6571 & 0.6301 & 0.6734 & 0.3900 \\ \hline Quadratic Discriminant Analysis (qda) & 0.2500 & 0.0000 & 0.2500 & 0.0625 & 0.1000 & 0.0000 & 0.0000 & 0.0290 \\ \hline Dummy Classifier (dummy) & 0.2499 & 0.5000 & 0.2499 & 0.0625 & 0.0999 & 0.0000 & 0.0000 & 0.0430 \\ \hline \end{tabular}
\end{table} TABLE IV: Evaluation of different standard ML classification models against different metrics on training dataset
Fig. 8: kdqTree concept drift detection.
Material show the concept drift detected in the grayhole attack using HDDM_A, KSWIN and PageHinkley. Implementing a data-distribution-based concept drift detector was important, as it is relatively expensive to have labels available all the time for a supervised concept drift detection algorithm, especially when recognising zero-day (unseen) intrusions related to UW-ASNs. This eliminates the expectation of having a label available as soon as new data arrives. For unsupervised concept drift, we focus on the probability distribution of the feature variables \(P(X)\) and quantify any change in this distribution. To detect concept drifts on the distribution of the data stream, we adopt kdqTree using a sliding window in stage 1. Next, kdqTree employs Kulldorff's spatial scan statistic to identify the regions with the most changes in the streaming data. In the final stages, kdqTree uses the KL divergence test and implements a bootstrapping method [23]. The Kullback-Leibler divergence score, also known as the KL divergence score, measures the disparity between two probability distributions. Overall, the selection of appropriate concept drift detection algorithms and the utilisation of data-distribution-based detection techniques contributed to the successful identification of concept drifts in this experimental setup.
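The core of this test can be sketched as a histogram-based KL-divergence comparison between a reference window and a detection window, as below; the binning and smoothing constant are illustrative choices, not kdqTree's exact internals.

```python
# Hedged sketch of the KL-divergence comparison underlying kdqTree.
import numpy as np
from scipy.stats import entropy

def kl_divergence(ref_window, new_window, bins=20):
    lo = min(ref_window.min(), new_window.min())
    hi = max(ref_window.max(), new_window.max())
    p, _ = np.histogram(ref_window, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(new_window, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                       # smooth empty bins to keep D_KL finite
    return entropy(p + eps, q + eps)  # D_KL(P || Q)
```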
#### V-B4 **Experiment 4 (Proposed Hybrid Solution)**
In this experiment, we evaluate the overall performance of the proposed scheme. The network-based IDS, which implements the AIDS on the parent sensor nodes, uses the OCSVM anomaly detection algorithm with the RBF kernel, with \(\nu\) and \(\gamma\) assigned to 0.01 and 0.3, respectively (based on the outcome of Experiment 1). The OCSVM model is evaluated against the testing dataset, and the evaluation results are shown in Table V. Adaptive Random Forest (ARF), along with kdqTree, is able to detect 28 drift instances in the UW-ASN dataset. This information was further fed to the SIDS. The SIDS employs the ADWIN concept-drift detector with both the drift warning and drift detection parameters assigned as \(\delta=0.001\). ARF employs 50 estimators (Hoeffding trees), with the maximum number of features considered at each split set to log2 of the feature count and a split confidence of 0.001. The SIDS model is evaluated against the testing dataset, and the evaluation results are also shown in Table V. The final predictions are taken from the hybrid model and evaluated against the underwater testing dataset. Our proposed hybrid model has an accuracy of 0.9774, precision of 0.9717, recall of 1.0, F1-score of 0.9883, AUC of 0.977, TPR of 1.0, FNR of 0.0, TNR of 0.9769 and FPR of 0.023. The results are shown in Table V.
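For reference, the ratios reported here and in Table V follow directly from the binary confusion matrix (attack as the positive class), as the short sketch below shows; `y_true` and `y_pred` are hypothetical label arrays.

```python
# Sketch: deriving the reported ratios from a binary confusion matrix.
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
tpr = tp / (tp + fn)        # recall / true positive ratio
fnr = fn / (tp + fn)
tnr = tn / (tn + fp)
fpr = fp / (tn + fp)
precision = tp / (tp + fp)
f1 = 2 * precision * tpr / (precision + tpr)
```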
Throughout our experiment, we focused on improving the true positive ratio and reducing the false negative ratio. We also focused on reducing the false positive ratio to less than 5%, which helps in reducing false alerts or false alarms to a great extent. Several factors influenced the results of this experiment. Firstly, the performance of the OCSVM anomaly detection algorithm in AIDS played a crucial role in the overall results. Secondly, the effectiveness of the drift detection mechanism implemented by ARF and kdqTree was crucial in adapting to evolving data streams. Lastly, by optimising the performance metrics like true positive ratio and false negative ratio, the proposed hybrid solution demonstrated improved performance in accurately identifying anomalies while reducing false alarms.
#### V-B5 **Experiment 5 (Optimisation of ARF classification algorithm)**
In this experiment, we optimise the performance of the ARF classifier. To receive an optimal set of parameter values, we trained the ARF classification algorithm with different numbers of Hoeffding trees (20, 40, 60, 80 and 100). We used different concept drift detection algorithms (ADWIN (\(\delta=0.001\)), DDM, EDDM, HDDM_A, HDDM_W, KSWIN and PageHinkley) along with it, keeping all the other configurations from our proposed scheme constant. We then evaluated the algorithm's performance on the TPR metric. Figure 9 shows the visualisation from our analysis. The three-dimensional graph shows the ensemble of Hoeffding trees, different concept drift detectors used and the true positive ratio for each. This experiment gives an idea of the optimal number of estimators (number of ensemble Hoeffding trees) to make the ARF classification algorithm perform effectively when coupled with any of the above-mentioned concept drift detectors. For dealing with an evolving streaming data environment, it becomes essential to make the model optimal to reduce the computational complexity without compromising its performance. The graph depicts the optimal parameters (concept drift detector and the number of trees) that we need for our proposed scheme to reduce the computational overhead as well as maintain the TPR. By exploring the relationship between the number of Hoeffding trees, concept drift detectors, and the TPR metric, the experiment provides insights into the optimal configuration of the ARF classifier. These factors contribute to reducing computational overhead while maintaining high performance in intrusion detection tasks.
TPR is used as it is an important metric that calculates how accurately our proposed system can detect an intrusion event. ARF optimisation was also performed with other metrics and has been provided in the Appendix E of the Supplementary Material.
#### V-B6 **Experiment 6 (Scalability with 64 sensor nodes)**
In this experiment, to evaluate the scalability of the proposed scheme, we generate a scenario with 64 acoustic sensor nodes where
Fig. 9: Optimisation analysis for the proposed scheme.
20% are malicious nodes. In this scenario, malicious nodes conduct blackhole, grayhole, and flooding attacks against their neighbouring nodes. In this context, the UW-ASN nodes' distribution, distance, and velocity differ from those in our proposed scheme's development phase (e.g., where we trained our AIDS on normal network communication). Therefore, we now evaluate our proposed scheme against the OOD data stream. The derived OOD dataset has 67614 instances along with the same 16 features and one target variable.
After the final OOD dataset was generated, it was passed through the proposed scheme to evaluate the results. Our proposed hybrid model has an accuracy of 0.99, precision of 0.99, recall of 1.0, F1-score of 0.995, AUC of 0.99, TPR of 1.0, FNR of 0.0, TNR of 0.99 and FPR of 0.01. The OCSVM and SIDS models' evaluation results on the OOD dataset are given in Table VI. ARF and kdqTree detected 65 drift instances in the underwater OOD dataset. The proposed hybrid model gave the final predictions and was evaluated on the underwater OOD dataset. The results from this experiment demonstrate the scalability of our proposed solution. As UW-ASNs can utilise large numbers of underwater wireless sensor nodes working together, it is essential that the proposed IDS solution scales well to achieve specific tasks. This, in turn, makes the proposed solution achieve DP4.
Several factors influenced the results obtained in this experiment. Firstly, the OOD dataset, derived from a different distribution and representing scenarios not encountered during the development phase, was used to assess the system's performance in recognising unseen intrusions and adapting to new situations. Secondly, the hybrid model that combines multiple algorithms (OCSVM, ARF and kdqTree) is another key factor driving the results, allowing the system to effectively handle underwater intrusion detection tasks in dynamic environments. OCSVM, ARF and kdqTree cannot operate as standalone models, as none of them alone satisfies all the desirable properties (for instance, ARF alone fails to recognise unseen intrusions on-the-fly as it is signature-based, and OCSVM is not adaptive as its decision boundary is static). The proposed model combines the strengths of all these algorithms, which work together in tandem to achieve all the desirable properties listed in Section I-A.
#### V-B7 **Experiment 7** (Generalisability)
In this experiment, we generate different types of concept drifts (e.g., abrupt, recurring, gradual and incremental) to measure to what extent our proposed system can identify and adapt to concept drifts [22] and how well the concept drift detectors generalise on the new data stream. ADWIN (\(\delta=0.001\)), DDM, HDDM_A, KSWIN and PageHinkley were used to detect the drifts. Different drift detectors were able to capture drifts in various instances. Figure A.1 of Appendix A of the Supplementary Material shows the drifts and the cases as and when ADWIN detected them. Figure A.2 from Appendix A of the Supplementary Material shows the drifts seen by DDM. The error rate decreases as the number of samples used for analysis increases, as long as the data distribution remains constant. Figure A.3, Fig. A.4 and Fig. A.5 of Appendix A of the Supplementary Material show the drifts detected by HDDM_A, KSWIN and PageHinkley, respectively. The results of this experiment show that the concept drift detectors used as part of our proposed scheme could also be leveraged in other streaming datasets, thereby making them reusable for any application area dealing with streaming data environments. The choice of concept drift detection algorithms and the analysis from the visualisations provided were key factors driving the results, highlighting the system's ability to detect and handle drifts in streaming data environments.
#### V-B8 **Experiment 8** (Benchmarking standard ML classifiers)
In this experiment, we benchmarked the IDS by applying it against standard machine learning classification algorithms and evaluating them against the metrics: accuracy, AUC, recall, precision, F1-score, kappa and MCC. The list of standard ML classifiers and their results for different metrics are provided in Table IV.
Light Gradient Boosting Machine (LGBM) outperformed all the other standard ML classifiers on all the above mentioned metrics. LGBM had an accuracy of 0.99954, AUC of 0.99843, recall of 0.99723, a precision of 0.991758, F1-score of 0.99976, a true positive ratio of 0.99723, a false negative ratio of 0.00276, a true negative ratio of 0.99964 and a false positive ratio of 0.00035 on the testing dataset (refer to Table IV). LGBM was used as the SIDS.
OCSVM is used as the anomaly-based detector and gave an accuracy of 0.9374, AUC of 0.9374, recall of 1.0, a precision of 0.935, F1-score of 0.9662, a true positive ratio of 1.0, a false negative ratio of 0.0, a true negative ratio of 0.9347, and a false positive ratio of 0.0626 on the testing dataset.
For the benchmarked model, final predictions were taken from the OCSVM when LGBM predicted the instance as a regular instance, else from LGBM when LGBM predicted the instance as an attack. The final predictions from the hybrid model were then evaluated against the ground truth. The IDS used for benchmarking had an accuracy of 0.9514, AUC of 0.9746, recall of 1.0, a precision of 0.4599, F1-score of 0.9740, a true positive ratio of 1.0, a false negative ratio of 0.0, a true negative ratio of 0.9493 and a false positive ratio of 0.0506 on the testing dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Model** & **Precision** & **Accuracy** & **Recall** & **F1-score** & **AUC** & **TPR** & **FNR** & **TNR** & **FPR** \\ \hline \hline OCSVM & 0.935 & 0.9374 & 1.0 & 0.9662 & 0.9374 & 1.0 & 0.0 & 0.9347 & 0.0626 \\ \hline ARF and kdqTree & 1.0 & 0.9774 & 0.9775 & 0.9883 & 0.9887 & 0.9775 & 0.0225 & 1.0 & 0.0 \\ \hline
**Proposed Hybrid Model** & **0.9717** & **0.9774** & **1.0** & **0.9883** & **0.977** & **1.0** & **0.0** & **0.9769** & **0.023** \\ \hline \end{tabular}
*TPR: True Positive Ratio. FNR: False Negative Ratio. FNR: True Negative Ratio. FPR: False Positive Ratio.
* OCSVM: One Class Support vector Machine. ARF: Adaptive Random Forest.
\end{table} TABLE V: Evaluation Results on Testing Dataset
In this experiment, we use the same features that were engineered in Section V-A (to have a fair comparison between our proposed scheme and the benchmarked algorithms). We employ an Auto-ML library (Pycaret) to automate the hyperparameter tuning process. It implements 10-fold cross-validation to tune the LightGBM model by finding the best set of parameter values. In this regard, the final optimised LightGBM model has a learning_rate of 0.1, max_depth of -1, num_leaves as 31, and n_estimators as 100. The detailed configurations of the remaining benchmarked classifiers from Table IV are provided in Appendix F of the Supplementary Material.
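A hedged sketch of this Auto-ML step is shown below; the PyCaret function signatures are assumptions that may differ across PyCaret versions, and `train_df` is a hypothetical DataFrame holding the engineered features.

```python
# Hedged sketch of the PyCaret benchmarking step behind Table IV.
from pycaret.classification import setup, compare_models

s = setup(data=train_df, target="Attack_Cat", fold=10, session_id=42)
best_model = compare_models()  # ranks the candidate classifiers by CV score
```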
By comparing the performance of standard machine learning classifiers and selecting LightGBM as the top-performing classifier, the experiment demonstrated the effectiveness of the proposed IDS. The hyperparameter tuning process and the limitations of the benchmarked classifiers were factors that influenced the obtained results. The benchmarked classifiers in Table IV are not adaptive and do not scale well. In addition, they fail to recognise unseen (zero-day) intrusions on-the-fly. Hence, they require continuous monitoring of the model's performance, which is expensive. A graphical representation of Table IV, Table V, and Table VI is presented in Appendix G of the Supplementary Material.
#### V-B9 **Experiment 9 (Model comparison with respect to concept drift)**
In this experiment, we compare signature-based IDS models like LightGBM (used for benchmarking), ARF with ADWIN (used for the proposed scheme), and ARF with DDM. The three IDSs were evaluated over the accuracy metric to analyse their performance against concept drifts. Figure 10 plots the streaming data samples along the x-axis against the accuracy along the y-axis. The continuous red, green and blue lines plot the accuracy of the LightGBM model, ARF with the DDM concept drift detection algorithm, and ARF with ADWIN (used in the proposed scheme), respectively. The graph also marks four drift instances with red dotted lines along the x-axis. This graph shows that the proposed IDS model (ARF with ADWIN) maintains consistent accuracy even during the concept drifts, as opposed to the other two IDS models. Factors driving the results of this experiment are adaptability to concept drift, the choice of drift detection algorithms, and the design of the IDS models.
#### V-B10 **Experiment 10 (Exploratory data analysis)**
In this experiment, we perform exploratory data analysis (EDA) to present the statistical analysis of all the numerical features with respect to the different types of attacks. As part of the EDA process, we investigated the numerical features in more depth: data distributions were compared by calculating the \(p\)-value, and outlier detection was visualised with the help of box plots. The graphs are shown in Appendix B of the Supplementary Material in Fig. B.6, Fig. B.7, Fig. B.8, Fig. B.9, Fig. B.10, Fig. B.11, Fig. B.12 and Fig. B.13 for ET, Sender_MAC, Energy, Sender_RTR, Sent_Packet_Number, RTR_Ratio, MAC_Ratio and ER, respectively. By conducting this EDA, a deeper understanding of the numerical features and their associations with different attack types was achieved.
#### V-B11 **Experiment 11 (Energy Consumption)**
Since the UW-ASN contains resource constrained sensor nodes, our proposed scheme has to be lightweight (_DP6_). The Raspberry Pi 3B+ microprocessor has been considered a resource-constrained sensor node. The pre-trained anomaly detector was deployed using a socket program in the microprocessor to make the sensor have lower computational overhead. The socket program at the client side (the central edge computer) sent each application packet one at a time to the server side (the microprocessor representing the parent sensor node enabled with listening mode). The microprocessor utilised the pre-trained anomaly detector to pass each packet through the model to get the prediction on-the-fly. UM25C was connected to the microprocessor to analyse the power consumption. The power consumption is captured over time (separately when the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Model** & **Precision** & **Accuracy** & **Recall** & **F1-score** & **AUC** & **TPR** & **FNR** & **TNR** & **FPR** \\ \hline \hline OCSVM & 0.9144 & 0.9154 & 1.0 & 0.9553 & 0.9146 & 1.0 & 0.0 & 0.9145 & 0.0854 \\ \hline ARF and kdqTree & 1.0 & 0.9901 & 0.9138 & 0.995 & 0.9569 & 0.9138 & 0.0862 & 1.0 & 0.0 \\ \hline
**Proposed Hybrid Model** & **0.99** & **0.99** & **1.0** & **0.995** & **0.99** & **1.0** & **0.0** & **0.99** & **0.01** \\ \hline \multicolumn{9}{l}{\({}^{*}\)TPR: True Positive Ratio. FNR: False Negative Ratio. TNR: True Negative Ratio. FPR: False Positive Ratio.} \\ \multicolumn{9}{l}{OCSVM: One Class Support vector Machine. ARF: Adaptive Random Forest. OOD: Out-of-Distribution} \\ \end{tabular}
\end{table} TABLE VI: Evaluation Results on OOD Dataset
Fig. 10: Signature-based IDS model comparison against concept drifts.
socket program was not operational and when it was active). A side-channel analysis was performed (shown in Fig. 12) to analyse the energy overhead caused by the anomaly detector.
The average power usage (in Joules per second) of the microprocessor was 2.46 J/s when the algorithm was not operational and 3.18 J/s when it was active. The difference in energy overhead is 0.72 J/s, which provides evidence for the proposed scheme having the desirable property DP6.
The CPU and memory usage for the microprocessor were logged every five seconds when the proposed scheme was not operational and when it is operational. The data is used to analyse and calculate the CPU and memory usage overhead.
Figure 11(a) shows the CPU usage comparison of a sensor node (Raspberry Pi 3B+) when it was operational with and without the proposed scheme. The CPU usage was logged every 5 seconds. The average CPU usage (in Hz) of the microprocessor is 100.02 Hz when the proposed scheme is not operational and 128.88 Hz when it is operational. The difference in CPU usage overhead is 28.86 Hz, which provides further evidence for the proposed scheme having the desirable property DP6.
Figure 11(b) shows the memory usage comparison of a sensor node (Raspberry Pi 3B+) when it was operational with and without the proposed scheme. The memory usage was logged every 5 seconds.
The factors driving the results of this experiment are as follows. Firstly, the proposed scheme was designed to be lightweight, addressing the resource constraints of UW-ASNs. The Raspberry Pi 3B+ microprocessor was chosen as a suitable resource-constrained sensor node. Secondly, the deployment of a pre-trained anomaly detector using a socket program minimized computational overhead, allowing real-time predictions. Thirdly, power consumption analysis using UM25C demonstrated a small energy overhead of 0.72 J/s, supporting the scheme's lightweight nature. Additionally, monitoring of CPU usage revealed a modest increase in usage (28.86 Hz) during operation. Overall, these results indicate that the proposed scheme satisfies DP6 of the desirable properties.
The outcomes of our experiment show that the proposed scheme is capable of satisfying all the desirable properties listed in Section I-A. This makes the proposed scheme efficient in detecting zero-day intrusions and scale well with the increase in the number of underwater sensor nodes, and is lightweight to work in a resource-constrained environment.
## VI Conclusion and Future Works
UW-ASNs find applications in many areas, and their threats are significant. Therefore, in this paper, we proposed an adaptive, hybrid, distributed IDS that is efficient in handling evolving underwater streaming data and detecting the changes in the underlying data patterns, and a cryptography-based IPS as an autonomous self-defence mechanism. We have also integrated several concepts like incremental machine learning and data distribution-based concept drift detection algorithms, which can be leveraged to be used in other application domains dealing with streaming data environments. An attack dataset
Fig. 11: CPU and memory usage of a sensor node (Raspberry Pi 3B+)
Fig. 12: Energy Overhead
for UW-ASN-based IoT networks, which covers three attack types (blackhole, grayhole and flooding attacks) specific to UW-ASN, is introduced. The dataset (UW-ASN dataset) and the proposed IDS scheme can serve as benchmarks for UW-ASN-related works by the research community. The proposed scheme outperforms state-of-the-art benchmarking methods while providing a wider range of desirable features (DP1 to DP7). While our proposed scheme used a fixed set of rules for the anomaly detectors, we recommend enhancing its intelligence by generating dynamic rules and giving them priority. In this context, we propose the following future works related to the security of UW-ASNs:
* Low-Rate DoS attack: Low-Rate DoS attacks are an intelligent adaptation of DoS attacks where the malicious nodes flood the parent sensor nodes with application packets. However, the frequency of messages is kept below the approved threshold level to make it difficult for an IDS to detect such attacks [35].
* Deep Adversarial Reinforcement Learning (ARL) based IDS: Deep ARL for a signature-based IDS helps to generate a dynamic set of rules and prioritise them specifically for a given environment. This technique can enhance the performance of a SIDS.
* Data distribution-based concept drift detection: Enhance the performance of kdqTree to work efficiently with Incremental Machine Learning algorithms.
|
2309.11152 | Evaluating Gilbert Damping in Magnetic Insulators from First Principles | Magnetic damping has a significant impact on the performance of various
magnetic and spintronic devices, making it a long-standing focus of research.
The strength of magnetic damping is usually quantified by the Gilbert damping
constant in the Landau-Lifshitz-Gilbert equation. Here we propose a
first-principles based approach to evaluate the Gilbert damping constant
contributed by spin-lattice coupling in magnetic insulators. The approach
involves effective Hamiltonian models and spin-lattice dynamics simulations. As
a case study, we applied our method to Y$_3$Fe$_5$O$_{12}$, MnFe$_2$O$_4$ and
Cr$_2$O$_3$. Their damping constants were calculated to be $0.8\times10^{-4}$,
$0.2\times10^{-4}$, $2.2\times 10^{-4}$, respectively at a low temperature. The
results for Y$_3$Fe$_5$O$_{12}$ and Cr$_2$O$_3$ are in good agreement with
experimental measurements, while the discrepancy in MnFe$_2$O$_4$ can be
attributed to the inhomogeneity and small band gap in real samples. The
stronger damping observed in Cr$_2$O$_3$, compared to Y$_3$Fe$_5$O$_{12}$,
essentially results from its stronger spin-lattice coupling. In addition, we
confirmed a proportional relationship between damping constants and the
temperature difference of subsystems, which had been reported in previous
studies. These successful applications suggest that our approach serves as a
promising candidate for estimating the Gilbert damping constant in magnetic
insulators. | Liangliang Hong, Changsong Xu, Hongjun Xiang | 2023-09-20T09:02:25Z | http://arxiv.org/abs/2309.11152v1 | # Evaluating Gilbert Damping in Magnetic Insulators from First Principles
###### Abstract
Magnetic damping has a significant impact on the performance of various magnetic and spintronic devices, making it a long-standing focus of research. The strength of magnetic damping is usually quantified by the Gilbert damping constant in the Landau-Lifshitz-Gilbert equation. Here we propose a first-principles based approach to evaluate the Gilbert damping constant contributed by spin-lattice coupling in magnetic insulators. The approach involves effective Hamiltonian models and spin-lattice dynamics simulations. As a case study, we applied our method to Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), MnFe\({}_{2}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\). Their damping constants were calculated to be \(0.8\times 10^{-4}\), \(0.2\times 10^{-4}\), \(2.2\times 10^{-4}\), respectively at a low temperature. The results for Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) and Cr\({}_{2}\)O\({}_{3}\) are in good agreement with experimental measurements, while the discrepancy in MnFe\({}_{2}\)O\({}_{4}\) can be attributed to the inhomogeneity and small band gap in real samples. The stronger damping observed in Cr\({}_{2}\)O\({}_{3}\), compared to Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), essentially results from its stronger spin-lattice coupling. In addition, we confirmed a proportional relationship between damping constants and the temperature difference of subsystems, which had been reported in previous studies. These successful applications suggest that our approach serves as a promising candidate for estimating the Gilbert damping constant in magnetic insulators.
## I Introduction
Recent decades have witnessed rapid developments in magnetics and spintronics [1; 2; 3]. A long-time pursuit in spintronics is to actively control and manipulate the spin degrees of freedom in solid-state systems. Related fundamental studies involve spin transport, spin dynamics and spin relaxation [4]. Within these domains, magnetic damping often plays a crucial role. Generally, stronger damping enables a faster writing rate for magnetic memories, while lower damping leads to a longer propagation distance of spin waves. Therefore, it is always essential to accurately evaluate the magnetic damping in different materials. For instance, yttrium iron garnet (YIG) is a highly promising spintronic material due to its ultra-low magnetic damping [5; 6; 7]. However, the intrinsic mechanism behind its unique property has yet to be fully elucidated, which partly motivates us to carry out this work.
At present, magnetic damping is typically represented by a phenomenological term in the well-known Landau-Lifshitz-Gilbert (LLG) equation, which has been widely employed to simulate magnetization dynamics [8; 9]. A basic form of this equation can be written as,
\[\frac{\partial\vec{m}}{\partial t}=-\gamma\vec{m}\times\vec{B}+\frac{\alpha} {m}\vec{m}\times\frac{\partial\vec{m}}{\partial t} \tag{1}\]
where \(\vec{B}\) represents the total magnetic field acting on the local dipole \(\vec{m}\), \(m\) denotes the norm of \(\vec{m}\), \(\gamma\) is the gyromagnetic ratio, and \(\alpha\) is the Gilbert damping constant. The second term on the right side, as we mentioned, leads directly to the relaxation process, in which the rate of energy dissipation is determined by the damping constant. Given the importance of \(\alpha\) in magnetization dynamics, its origin has been extensively studied in the literature [10; 11; 12; 13]. To the best of our knowledge, both intrinsic and extrinsic mechanisms contribute to the damping. Specifically, the intrinsic factors include spin-lattice and spin-electron couplings, while the extrinsic contributions primarily involve lattice imperfections and eddy currents [14; 15].
Two types of first-principles based methods have been developed to calculate the damping constants in the past. One approach involves the breathing Fermi surface model [16; 17] and the torque correlation model [18; 19], while the other is based on the scattering theory from linear response [20; 21; 22]. These methods have demonstrated remarkable success in studying the magnetic damping in transition metals such as Fe, Co, and Ni. Despite being free from complicated experiments, which are mostly based on ferromagnetic resonance, these theoretical approaches still exhibit several limitations. Firstly, when dealing with complex systems, we often have to spend a significant amount of computing resources on the first-principles calculations. In addition, these methods are more suitable for calculating the electronic contribution to Gilbert damping in metallic magnets, thus rarely taking the effect of spin-lattice coupling into consideration [14; 23].
Recently, spin-lattice dynamics (SLD) simulations [24] have been adopted as an alternative method to evaluate the Gilbert damping parameters. In Ref. [23], the authors constructed an empirically parameterized Hamiltonian model for a cobalt cluster. They coupled a preheated lattice with a fully ordered spin state, then performed SLD simulation. During the relaxation process,
the energies of the lattice and spin subsystems were recorded and fitted to the following logistic functions,
\[U_{\text{lat}} =U_{0}^{\text{lat}}-\frac{\Delta U_{0}}{1+\exp[-\eta\Delta U_{0}t- \Theta]} \tag{2}\] \[U_{\text{mag}} =U_{0}^{\text{mag}}+\frac{\Delta U_{0}}{1+\exp[-\eta\Delta U_{0}t- \Theta]} \tag{3}\]
from which they extracted the relaxation rate \(\Gamma=\eta\Delta U_{0}\) and calculated the damping constant \(\alpha=\eta\mu_{S}/\gamma\). Here, \(\mu_{S}\) denotes the magnitude of magnetic moments. In Ref. [25], the authors also built an empirical potential model for a periodic bcc Fe system. They firstly applied an external magnetic field in the z-direction and thermalized the system to a finite temperature. Then, the magnetization orientation of each atom was rotated artificially by a same angle. Afterwards, the system would relax back to equilibrium, during which the averaged z component of atomic magnetization was recorded and fitted to the following function,
\[m_{z}(t)=\tanh\left[\frac{\alpha}{1+\alpha^{2}}\gamma B_{ext}(t+t_{0})\right] \tag{4}\]
where \(\alpha\) was exactly the Gilbert damping parameter to be estimated. Since these works selected transition metals as their research objects, their results were both orders of magnitude smaller than the experimental values. In addition, the use of empirically parameterized models reduced the accuracy of their simulated results.
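For concreteness, extracting \(\alpha\) from a recorded trajectory by fitting Eq. (4) can be sketched as below; the values of \(\gamma\) and \(B_{ext}\) are placeholders for whatever a given simulation uses, and `t_data`, `mz_data` are the hypothetical recorded time series.

```python
# Hedged sketch: least-squares fit of Eq. (4) to extract the damping constant.
import numpy as np
from scipy.optimize import curve_fit

gamma = 1.76e11   # gyromagnetic ratio in rad s^-1 T^-1 (assumed value)
B_ext = 1.0       # external field along z in tesla (assumed value)

def m_z(t, alpha, t0):
    return np.tanh(alpha / (1.0 + alpha**2) * gamma * B_ext * (t + t0))

(alpha_fit, t0_fit), _ = curve_fit(m_z, t_data, mz_data, p0=(1e-4, 0.0))
```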
In this work, we combine SLD simulations with first-principles based effective Hamiltonian models to evaluate the damping constants in magnetic insulators, where the dominant contribution results from spin-lattice couplings. Compared to the previous studies, our work makes improvements mainly in two aspects. Firstly, the utilization of first-principles based Hamiltonian models in the simulations enhances the reliability of our conclusions. Besides, choosing magnetic insulators as research objects, where spin-lattice coupling dominates, allows the strengths of SLD simulations to be demonstrated. In particular, the microscopic origin of the low damping in YIG will be investigated. The paper is organized as follows. In Sec. II, we introduce our effective Hamiltonian model, parameterization methods, and a scheme for evaluating Gilbert damping parameters. Then, both the validation and application of our method are presented in Sec. III. Finally, we summarize this work and give a brief outlook in Sec. IV.
## II Model and Methods
This section is split into three parts. Firstly (in Sec. II.1), we introduce a generic form of our effective Hamiltonian model. Then, methods for calculating the model parameters are presented in Sec. II.2. In the last part (Sec. II.3), we propose a novel scheme to determine the Gilbert damping constant through dynamics simulations.
### The Hamiltonian Model
Since our purpose is to evaluate the contribution of spin-lattice coupling to magnetic damping, the effective Hamiltonian model must incorporate both spin and lattice degrees of freedom. A concise and generic formula that meets our basic requirements consists of the three terms as follows:
\[H=H_{L}\left(\left\{u_{i,\alpha}\right\}\right)+H_{S}\left(\left\{\vec{s}_{j} \right\}\right)+H_{SLC}\left(\left\{u_{i,\alpha},\vec{s}_{j}\right\}\right) \tag{5}\]
where \(\alpha\) abbreviates three orthogonal axes, \(u_{i,\alpha}\) represents the displacement of atom \(i\), and \(\vec{s}_{j}\) is a unit vector that represents the direction of spin \(j\).
The first term \(H_{L}\) in the Hamiltonian model describes the dynamical behavior of individual phonons. Technically, we take the atomic displacements as independent variables and expand the Hamiltonian to second order in a Taylor series. Then, we have the form,
\[H_{L}=\frac{1}{2}\sum_{ij}\sum_{\alpha\beta}K_{ij,\alpha\beta}u_{i,\alpha}u_{ j,\beta}+\frac{1}{2}\sum_{i,\alpha}M_{i}\dot{u}_{i,\alpha}\dot{u}_{i,\alpha} \tag{6}\]
where \(K_{ij,\alpha\beta}\) denotes the force constant tensor and \(M_{i}\) represents the mass of atom \(i\).
Similarly, the second term \(H_{S}\) describes the dynamical behavior of individual magnons. For simplicity, but without significant loss of accuracy, we considered only the Heisenberg exchange interactions between neighboring magnetic atoms in this work, though more complex interactions could in principle be taken into account. Therefore, this term can be expressed as,
\[H_{S}=\sum_{\langle i,j\rangle}J_{ij}\vec{S}_{i}\cdot\vec{S}_{j} \tag{7}\]
where \(J_{ij}\) denotes the isotropic magnetic interaction coefficient.
The third term \(H_{SLC}\) represents the coupling of spin and lattice subsystems, and is expected to describe the scattering process between phonons and magnons. As an approximation of the lowest order, this term can be written as,
\[H_{SLC}=\sum_{\langle i,j\rangle}\sum_{k\alpha}\left(\frac{\partial J_{ij}}{ \partial u_{k,\alpha}}u_{k,\alpha}\right)\vec{S}_{i}\cdot\vec{S}_{j} \tag{8}\]
According to the theory of quantum mechanics, this coupling term provides a fundamental description of the single-phonon scattering process, which is believed to be dominant among all scatterings in the low-temperature region. This type of relaxation mechanism in ferromagnetic resonance was systematically studied by Kasuya and LeCraw for the first time [26]. It's worth noting that a higher-order Taylor expansion could be conducted to improve the accuracy of the Hamiltonian model directly. For instance, the scattering between individual phonons can be adequately described by anharmonic terms. However, as one always has to make a trade-off between the precision and complexity of models, in this work we choose to neglect the higher-order terms, since the anharmonic effects in the currently investigated systems are not important.
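Before moving on, the structure of Eqs. (5)-(8) can be made concrete with a minimal sketch that evaluates the potential part of the model on a given configuration. The array shapes and the flattened layout of the coupling constants are our own illustrative conventions, not the actual PASP data structures.

```python
import numpy as np

def model_energy(u, s, K, pairs, J, dJ):
    """Potential part of H = H_L + H_S + H_SLC, Eqs. (6)-(8).

    u     : (N, 3) atomic displacements
    s     : (M, 3) unit spin vectors
    K     : (3N, 3N) force constant tensor
    pairs : list of (i, j) magnetic neighbor pairs
    J     : (P,) Heisenberg coefficients, one per pair
    dJ    : (P, 3N) coupling constants dJ_ij/du_{k,alpha}
    """
    x = u.ravel()
    e_lat = 0.5 * x @ K @ x          # harmonic (potential) part of Eq. (6); kinetic term omitted
    dots = np.array([s[i] @ s[j] for i, j in pairs])
    e_spin = J @ dots                # Heisenberg term, Eq. (7)
    e_slc = (dJ @ x) @ dots          # spin-lattice coupling, Eq. (8)
    return e_lat + e_spin + e_slc
```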
In this study, we adopted the symmetry-adapted cluster expansion method implemented in the Property Analysis and Simulation Package for Materials (PASP) [27] to build the Hamiltonian model presented above. This package can identify the nonequivalent interactions and equivalent atom clusters in a crystal system by analyzing its structural properties based on group theory. A significant benefit of working with PASP is that it enables us to describe the target system with the minimal number of parameters. In the next section, we will discuss how to calculate the model parameters for different materials.
### Calculation of Model Parameters
Firstly, the Heisenberg exchange coefficients \(J_{ij}\) and spin-lattice coupling constants \(\partial J_{ij}/\partial u_{k,\alpha}\) can be calculated with the four-state method [28; 29]. The basic flow is to construct four artificially designated spin states of the target system, calculate the corresponding energies and forces based on the density functional theory (DFT), then determine the parameters by proper combination of those results. At the last step, the following formulas will be used,
\[J_{ij} =\frac{E^{\uparrow\uparrow}+E^{\downarrow\downarrow}-E^{\uparrow \downarrow}-E^{\downarrow\uparrow}}{4S^{2}} \tag{9}\] \[\frac{\partial J_{ij}}{\partial u_{k,\alpha}} =\frac{F_{k,\alpha}^{\uparrow\uparrow}+F_{k,\alpha}^{\downarrow \downarrow}-F_{k,\alpha}^{\uparrow\downarrow}-F_{k,\alpha}^{\downarrow \uparrow}}{4S^{2}} \tag{10}\]
where \(S\) is the spin quantum number of magnetic atoms, \(E\) is the total energy of system and \(F_{k,\alpha}\) refers to one component of the force on atom \(k\). The superscripts (\(\uparrow\uparrow\), \(\downarrow\downarrow\), \(\uparrow\downarrow\), \(\downarrow\uparrow\)) specify the constrained spin states of system in the calculation. More technical information about the four-state method can be found in the references [28; 29]. Compared to other approaches, the four-state method offers an obvious advantage in that no additional DFT calculations are needed to determine the coupling constants \(\partial J_{ij}/\partial u_{k,\alpha}\) once the exchange coefficients \(J_{ij}\) have been obtained. This is because the energy and forces are typically provided simultaneously by one DFT calculation.
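The combination step of Eqs. (9)-(10) is plain arithmetic once the four constrained DFT runs are done; a minimal sketch follows, with the four energies (or force components) as inputs and the spin quantum number S as a parameter.

```python
def four_state_J(E_uu, E_dd, E_ud, E_du, S):
    """Eq. (9): Heisenberg coefficient from four constrained total energies."""
    return (E_uu + E_dd - E_ud - E_du) / (4.0 * S**2)

def four_state_dJ(F_uu, F_dd, F_ud, F_du, S):
    """Eq. (10): coupling constant dJ_ij/du_{k,alpha} from the force
    component F_{k,alpha} recorded in the same four calculations."""
    return (F_uu + F_dd - F_ud - F_du) / (4.0 * S**2)
```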
Since atomic masses \(M_{i}\) can be directly obtained from the periodic table, more efforts are needed to deal with the force constant tensor \(K_{ij,\alpha\beta}\). Currently, there are two commonly adopted ways to calculate the force constant tensor: density functional perturbation theory (DFPT) and finite displacement method. Both of these methods are applicable to our task.
However, we cannot directly take the force constant tensor obtained from first-principles calculations as the model parameter. This is because in dynamics simulations we usually expand crystal cells to reduce the undesired influence of thermal fluctuations, which leads to a conflict between the periodic boundary condition and the locality (also known as nearsightedness [30; 31]) of models. To be more specific, when calculating the contribution of one atom or spin to the total energy, we tend to set a well-designed cutoff radius and ignore the interactions beyond it. This step is essential when dealing with a large-scale system, since otherwise the model complexity and the computational cost become prohibitive. Nevertheless, if we set the elements of \(K_{ij,\alpha\beta}\) that represent out-of-range interactions to zero and leave the others unchanged, we may violate the so-called acoustic summation rules:
\[\sum_{i}K_{ij,\alpha\beta}=0\quad\text{for all }j,\alpha,\beta. \tag{11}\]
It should be pointed out that a straightforward enforcement of the acoustic summation rules, achieved by subtracting errors uniformly from the force constants (the technique employed in phonopy [32]), inevitably breaks the inherent crystal symmetry. To address the above issues, we adopted a more appropriate method in this work. Before a detailed introduction, it's necessary to recall that not every element of the force constant tensor serves as an independent variable, due to the crystal symmetries. Taking the cubic cell of Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) (containing 160 atoms) as an example, there are 230400 elements in the tensor. After symmetry analyses, we find that only 597 independent variables \(\{p_{n}\}\) are needed to adequately determine all the tensor elements \(\{K_{ij,\alpha\beta}\left(\{p_{n}\}\right)\}\), where the effect of locality is already considered. Afterwards, our method is to set a correction factor \(x_{n}\) for each variable \(p_{n}\) and minimize the deviation of the parameters under the constraints of Eq. (11). A mathematical reformulation of this method can be written as,
\[\begin{split}&\min_{\{x_{n}\}}\;\sum_{n}(x_{n}-1)^{2},\;\text{ with}\\ &\sum_{i}K_{ij,\alpha\beta}(\{x_{n}p_{n}\})=0\quad\text{for all }j,\alpha,\beta.\end{split} \tag{12}\]
In the case of Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), there are only 18 linearly independent constraints, which allow the extremum problem to be solved rigorously. The modified force constant tensor restores positive semi-definiteness and translational symmetry while maintaining the crystal symmetries. Therefore, the modified tensor meets the requirements for dynamics simulations. In Sec. III.2, the effectiveness of this approximate method will be demonstrated through a specific example.
All the first-principles calculations mentioned in this section are carried out using the Vienna _ab initio_ simulation package (VASP) [33; 34; 35]. The force constants and phonon spectra are obtained with phonopy [32]. The optimizations formulated in (12) are accomplished with the function minimize from the scipy.optimize module of SciPy [36].
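Since each acoustic sum of Eq. (11) is linear in the correction factors, the extremum problem (12) is a small quadratic program (18 constraints on 597 variables in the YIG case quoted above). The following is a minimal sketch of such a solve, assuming the constraint matrix S has already been assembled from the symmetry analysis; it mirrors, but is not, the actual workflow.

```python
import numpy as np
from scipy.optimize import minimize

def correct_force_constants(S):
    """Solve Eq. (12): minimize sum_n (x_n - 1)^2 subject to S @ x = 0.

    S : (C, P) matrix; row c expresses one acoustic sum of Eq. (11)
        as a linear function of the correction factors x_n.
    Returns the optimal correction factors.
    """
    n = S.shape[1]
    res = minimize(
        lambda x: np.sum((x - 1.0) ** 2),
        x0=np.ones(n),
        jac=lambda x: 2.0 * (x - 1.0),
        constraints=[{"type": "eq", "fun": lambda x: S @ x,
                      "jac": lambda x: S}],
        method="SLSQP",
    )
    return res.x
```

Because the objective is quadratic and the constraints are linear, the same problem also admits a closed-form solution via Lagrange multipliers; the iterative form above simply follows the SciPy-based route mentioned in the text.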
### Evaluation of Damping Constants
After the construction and parameterization of Hamiltonian models, we are finally able to perform spin-lattice dynamics simulations. Before the evaluation of Gilbert damping constants, we briefly introduce the framework of SLD to cover some relevant concepts. In practice, the motion of magnetic moments follows the stochastic Landau-Lifshitz-Gilbert (SLLG) equation [14],
\[\frac{d\vec{m}_{i}}{dt}= -\gamma_{L}\vec{m}_{i}\times\left(\vec{B}_{i}+\vec{B}_{i}^{fl}\right)\] \[-\gamma_{L}\alpha\frac{\vec{m}_{i}}{|\vec{m}_{i}|}\times\left[ \vec{m}_{i}\times\left(\vec{B}_{i}+\vec{B}_{i}^{fl}\right)\right] \tag{13}\]
where \(\gamma_{L}\) is the renormalized gyromagnetic ratio, \(\vec{B}_{i}=-\partial H/\partial\vec{m}_{i}\) is the effective local magnetic field and \(\vec{B}_{i}^{fl}\) refers to a stochastic field introduced by the Langevin thermostat. At the same time, the motion of atoms obeys Newton's equation,
\[\frac{d\dot{u}_{i,\alpha}}{dt}=\frac{1}{M_{i}}\left(F_{i,\alpha}+F_{i,\alpha}^{fl}\right)-\nu\dot{u}_{i,\alpha} \tag{14}\]
where \(\nu\) is the lattice damping constant and \(F_{i,\alpha}^{fl}\) refers to a stochastic force caused by thermal fluctuations. In this work, \(B_{i,\beta}^{fl}\) and \(F_{i,\beta}^{fl}\) are modeled as normally distributed noises with temperature-dependent variances,
\[B_{i,\beta}^{fl} \sim N\left(0,\sqrt{2\alpha k_{B}T_{S}/\gamma|\vec{m}_{i}|\delta t}\right) \tag{15}\] \[F_{i,\beta}^{fl} \sim N\left(0,\sqrt{2\nu M_{i}k_{B}T_{L}/\delta t}\right) \tag{16}\]
where \(T_{S}\) and \(T_{L}\) refer to the equilibrium temperature of spin and lattice subsystems respectively. During simulations, we can also measure the transient temperature of each subsystem with the following formulas [37],
\[T_{S}=\frac{\sum_{i}|\vec{m}_{i}\times\vec{B}_{i}|^{2}}{2k_{B}\sum_{i}\vec{m}_ {i}\cdot\vec{B}_{i}},\ T_{L}=\frac{1}{2k_{B}N}\sum_{i,\alpha}M_{i}\dot{u}_{i, \alpha}^{2} \tag{17}\]
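A compact numpy sketch of Eqs. (15)-(17) is given below, drawing the Langevin noises and measuring the transient temperatures. The gyromagnetic ratio and the time step are example values, and interpreting the second argument of \(N(0,\cdot)\) as the standard deviation is our reading of the notation.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K
GAMMA = 1.76e11    # gyromagnetic ratio, rad/(s*T); assumed
DT = 0.5e-15       # 0.5 fs integration step

def langevin_noise(m_norm, masses, alpha, nu, T_S, T_L, rng):
    """Draw the stochastic field/force of Eqs. (15)-(16), one 3-vector per site."""
    sig_B = np.sqrt(2.0 * alpha * KB * T_S / (GAMMA * m_norm * DT))
    sig_F = np.sqrt(2.0 * nu * masses * KB * T_L / DT)
    B_fl = rng.normal(0.0, sig_B[:, None], (m_norm.size, 3))
    F_fl = rng.normal(0.0, sig_F[:, None], (masses.size, 3))
    return B_fl, F_fl

def transient_temperatures(m, B, masses, v):
    """Eq. (17): instantaneous spin and lattice temperatures."""
    T_S = np.sum(np.linalg.norm(np.cross(m, B), axis=1) ** 2) / (
        2.0 * KB * np.sum(np.einsum("ij,ij->i", m, B)))
    T_L = np.sum(masses[:, None] * v ** 2) / (2.0 * KB * len(masses))
    return T_S, T_L
```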
In this work, the LLG equation is numerically solved with the semi-implicit SIB method proposed by Mentink et al. [38]. Newton's equation of motion is integrated using the Verlet-type method of Grønbech-Jensen and Farago [39]. To ensure the stability of these algorithms, a step length of 0.5 or 0.2 fs is adopted [40], where the shorter one is used in energy-conserving simulations.
Based on the combination of atomistic spin dynamics (ASD) and SLD simulations, a new scheme is proposed to evaluate the damping constant in magnetic materials. The basic flow of this method is given below, and more details of a specific application are presented in Sec. III.2.
1. Freeze the spin degree of freedom and thermalize the lattice from 0 to \(T_{L}\) in the simulation.
2. Fix atomic positions and raise the temperature of spin to \(T_{S}>T_{L}\). Compared to \(T_{L}>T_{S}\), this type of nonequilibrium state is more common in actual scenarios.
3. Perform an energy-conserving SLD simulation to relax the system. Normally, the spin temperature will decrease to that of the lattice and stay there until the end.
4. Conduct a series of ASD simulations with different Gilbert damping constants. The initial states are the same as in step 3 and the equilibrium temperatures are set to be \(T_{L}\).
5. Compare the cooling rates \(\partial T_{S}/\partial t\) of the spin system between the SLD and ASD simulations to evaluate the equivalent Gilbert damping constant contributed by the spin-lattice coupling.
The key point behind step 5 is that the cooling rates observed in the ASD simulations are related to the assigned damping constants, while in the SLD simulation the cooling rate is determined by the strength of the spin-lattice coupling. Note that the former relation can be viewed as a natural deduction of the LLG equation,
\[\frac{\partial T_{S}}{\partial t} =\frac{1}{C_{V}}\frac{\partial E_{mag}}{\partial t}\propto-\frac {1}{C_{V}}\left(\frac{\partial\vec{m}}{\partial t}\cdot\vec{B}\right)\] \[\propto-\frac{1}{C_{V}}\left[\left(\frac{\alpha}{m}\vec{m}\times \frac{\partial\vec{m}}{\partial t}\right)\cdot\vec{B}\right]\propto\alpha \tag{18}\]
where we have used Eq. (1) and simplified the formula of magnetic energy as \(E_{mag}\propto-\vec{m}\cdot\vec{B}\).
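Step 5 thus reduces to a one-dimensional regression once the initial cooling rates have been extracted from the temperature curves. The sketch below assumes those rates are already available; the numbers shown are hypothetical and serve only to illustrate the intersection construction.

```python
import numpy as np

def equivalent_alpha(alphas_asd, rates_asd, rate_sld):
    """Equivalent Gilbert damping from initial cooling rates.

    alphas_asd : damping constants assigned in the ASD runs
    rates_asd  : corresponding -dT_S/dt|_{t=0} (K/s)
    rate_sld   : initial cooling rate of the SLD run (K/s)
    Fits lg(rate) = a*lg(alpha) + b, as suggested by Eq. (18),
    and returns the abscissa where the SLD rate crosses the fit.
    """
    a, b = np.polyfit(np.log10(alphas_asd), np.log10(rates_asd), 1)
    return 10.0 ** ((np.log10(rate_sld) - b) / a)

# hypothetical numbers, for illustration only
alphas = np.array([1e-4, 3e-4, 1e-3, 3e-3])
rates = np.array([0.9e12, 2.8e12, 9.1e12, 27.0e12])
print(f"equivalent alpha ~ {equivalent_alpha(alphas, rates, 2.6e12):.2e}")
```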
## III Results
This section is divided into four parts. In Sec. III.1, several test results are presented to validate the accuracy of the SLD simulations, which are implemented in the PASP package. Subsequently, detailed calculations on three magnetic materials, namely Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), MnFe\({}_{2}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\), are discussed in the remaining parts.
### Validations
In order to guarantee the reliability of the conclusions drawn from our dynamics simulations, a series of pretests was carried out. We selected some representative results and present them in Fig. 1, where Cr\({}_{2}\)O\({}_{3}\) is taken as the object of study.
Firstly, we set the ground state of Cr\({}_{2}\)O\({}_{3}\) as the initial state and performed an NVT simulation with \(T_{set}=150K\). As shown in Fig. 1(a), the temperature of the spin and lattice subsystems increased to \(150K\) in less than 5 ps and stayed there till the end. Since we can approximate \(E_{k}=0.5E_{L}\) and \(E_{p}=0.5E_{L}+E_{S}\), Fig. 1(b) also indicates that the contribution of phonons and magnons to the excited state energy is around 87.5% and 12.5% respectively. This result can be verified from another perspective. Note that there is a total of 10 atoms in the unit cell of Cr\({}_{2}\)O\({}_{3}\), which contribute \(30k_{B}\) to the heat capacity. Meanwhile, the 4 magnetic atoms will contribute another \(4k_{B}\) in the low temperature region. Therefore, we can estimate that the contribution of magnons to the total heat capacity is close to \(11.8\%\), which is consistent with the result from the dynamics simulations.
In Figs. 1(c) & 1(d), the initial state was set to be a nonequilibrium state with \(T_{L}=30K\) and \(T_{S}=175K\). As we expected, the total energy was well conserved when the system evolved to equilibrium. In addition, the final temperature fell within the range of \(48K\sim 55K\), which agrees with our previous analysis of the heat capacities.
Lastly, we simulated the relaxation process using another nonequilibrium excited state, with \(T_{L}=180K\) and \(T_{S}=30K\) as the initial state. As shown in Figs. 1(e) & 1(f), the temperature of the spin system increased gradually to equilibrium with the total energy conserved throughout the simulation. Also, the final temperature is around \(160K\), which matches well with our analysis. It should be pointed out that there exist two notable differences between this case and the previous one. Firstly, the subsystems ultimately evolved to the same temperature in a finite time, which alleviated our concerns about the accuracy of SLD simulations. Besides, the relaxation time (\(\tau_{2}\)) was much longer than that (\(\tau_{1}\)) in Fig. 1(c). A qualitative explanation of this phenomenon is presented below.
Based on the theory of second quantization, the Hamiltonian model presented in Sec. II.1 can be expressed in the following form [41, 42],
\[H_{L} =\sum_{qp}\hbar\omega_{qp}(b_{qp}^{\dagger}b_{qp}+1/2) \tag{19}\] \[H_{S} =\sum_{\lambda}\epsilon_{\lambda}a_{\lambda}^{\dagger}a_{\lambda }+Const.\] (20) \[H_{SLC} =\sum_{\lambda,qp}M_{\lambda,qp}a_{\lambda-q}^{\dagger}a_{\lambda }\left(b_{qp}^{\dagger}-b_{-qp}\right) \tag{21}\]
where \(b_{qp}\) denotes the annihilation operator of phonons with wave vector \(q\) in branch \(p\), and \(a_{\lambda}\) represents the annihilation operator of magnons with wave vector \(\lambda\). All the parameters, namely \(\omega_{qp}\), \(\epsilon_{\lambda}\) and \(M_{\lambda,qp}\), can in principle be determined from the effective Hamiltonian model. According to Fermi's golden rule, we have
\[W\left\{n_{\lambda-q},n_{\lambda},N_{qp}\to n_{\lambda-q}+1,n_{ \lambda}-1,N_{qp}+1\right\} =\frac{2\pi}{\hbar}|M_{\lambda,qp}|^{2}(n_{\lambda-q}+1)(n_{ \lambda})(N_{qp}+1)\delta(\epsilon_{\lambda-q}-\epsilon_{\lambda}+\hbar \omega_{qp}) \tag{22}\] \[W\left\{n_{\lambda-q},n_{\lambda},N_{-qp}\to n_{\lambda-q}+1,n_{ \lambda}-1,N_{-qp}-1\right\} =\frac{2\pi}{\hbar}|M_{\lambda,qp}|^{2}(n_{\lambda-q}+1)(n_{ \lambda})(N_{-qp})\delta(\epsilon_{\lambda-q}-\epsilon_{\lambda}-\hbar \omega_{-qp}) \tag{23}\]
Figure 1: NVT and NVE relaxations of a spin-lattice coupled system (Cr\({}_{2}\)O\({}_{3}\)) within the framework of spin-lattice dynamics. The top row plots the time evolution of temperatures and the bottom row shows the variation of potential, kinetic and total energies. (a) & (b): NVT thermalization from \(T_{L}=T_{S}=0K\) to \(T_{L}=T_{S}=150K\). (c) & (d): NVE relaxation with \(T_{L}=30K\), \(T_{S}=175K\) initially. (e) & (f): NVE relaxation with \(T_{L}=180K\), \(T_{S}=30K\) initially.
where \(W\) represents the probability of one-phonon emission or absorption, \(n_{\lambda}\) denotes the occupation number of magnons and \(N_{qp}\) stands for phonons. Both \(n_{\lambda}\) and \(N_{qp}\) can be evaluated approximately using the Bose-Einstein distribution. According to the above formulas, the scattering rate \(W\) grows linearly with \(N\) and quadratically with \(n\). Compared to Fig. 1(c), there are more phonons but fewer magnons in the case of Fig. 1(e), thus leading to a lower transition probability and a longer relaxation time. More technical details about the second quantization of interactions between phonons and magnons can be found in Ref. [41; 42].
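This qualitative argument can be made concrete with Bose-Einstein occupations. The sketch below compares the two initial conditions of Fig. 1 for one representative scattering channel; the magnon and phonon energies (5 meV and 20 meV) are placeholder values, and the prefactor \(|M|^{2}\) is dropped since only the ratio matters.

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def bose(e, T):
    """Bose-Einstein occupation at energy e (eV) and temperature T (K)."""
    return 1.0 / np.expm1(e / (KB * T))

def emission_rate(eps_mag, om_ph, T_S, T_L):
    """One-phonon emission rate of Eq. (22), up to the |M|^2 prefactor,
    with both magnon modes approximated by the same energy."""
    n, N = bose(eps_mag, T_S), bose(om_ph, T_L)
    return (n + 1.0) * n * (N + 1.0)

spins_hot = emission_rate(0.005, 0.020, T_S=175.0, T_L=30.0)
lattice_hot = emission_rate(0.005, 0.020, T_S=30.0, T_L=180.0)
print(spins_hot / lattice_hot)  # >> 1: the spin-hot case relaxes much faster
```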
### Damping constants in \(\mathbf{Y_{3}Fe_{5}O_{12}}\)
In the field of spintronics, \(\mathrm{Y_{3}Fe_{5}O_{12}}\) (yttrium iron garnet, YIG) has gained much attention due to its ultra-low magnetic damping [5; 6; 7]. This unique property motivated us to investigate the intrinsic mechanism behind it. The crystal structure of YIG is presented in Fig. 2(a). There is a total of 80 atoms in the primitive cell, of which 12 Fe ions are located at the centers of oxygen tetrahedra while the other 8 Fe ions sit in oxygen octahedra. The magnetic ground state of YIG is illustrated in Fig. 2(b). The Fe ions situated in different chemical environments contribute spins in opposite directions, which makes YIG a typical ferrimagnetic material.
In order to evaluate the Gilbert damping constants in YIG, our first step is to prepare an effective Hamiltonian model. Considering the balance between precision and efficiency, the cutoff radius of interactions was set to be 11.0 Bohr for atomic pairs and 6.7 Bohr for 3-body clusters. After symmetry analyses, we identified 612 nonequivalent interactions in total, which included 6 Heisenberg exchange terms and 9 spin-lattice coupling terms.
To determine the interaction parameters, we carried out a series of first-principles calculations, where a cubic cell was adopted to reduce the interference between adjacent cells caused by periodic boundary conditions. Following the settings in Ref. [43], we utilized the projector augmented-wave (PAW) method [44] and revised Perdew-Burke-Ernzerhof exchange-correlation functional for solids (PBEsol) [45] in our calculations. Besides, the DFT+U method in its simplified form [46] was employed where the effective Hubbard U parameter was set to be 4 eV for the \(3d\) electrons of Fe ions. In addition, a cutoff energy of 520 eV for plane wave basis and a \(\Gamma\)-centered \(2\times 2\times 2\) mesh of \(k\)-points were used in the DFT calculations.
In Figure 2(c), we present the density of states (DOS) of YIG. With a band gap of 1.863 eV, there is hardly any electric current occurring in the low temperature region. Moreover, the Heisenberg exchange coefficients of YIG are listed in Table 1. To verify the accuracy of these parameters, we conducted both Monte Carlo (MC) and ASD simulations. The temperature dependence of the average magnetization is shown in Fig. 2(d), which reveals the critical temperature of YIG to be 530 K. This result is slightly lower than the measured Curie temperature, \(T_{C}=560K\)[5], but falls within our tolerance. The calculated results of the coupling constants are provided in the supplementary material.
Next, we deal with the force constant tensor. In order to demonstrate the impact of locality and validate the effectiveness of our optimization method, we present some results pertaining to the tensor of YIG in Table 2. Here we use "VASP" to
\begin{table}
\begin{tabular}{c c c} \hline \hline Spin Pair & Distance (Å) & J (meV) \\ \hline
1NN \(\mathrm{Fe}^{T}-\)\(\mathrm{Fe}^{O}\) & 3.445 & 47.414 \\
1NN \(\mathrm{Fe}^{T}-\)\(\mathrm{Fe}^{T}\) & 3.774 & 2.399 \\
1NN \(\mathrm{Fe}^{O}-\)\(\mathrm{Fe}^{O}\) (\(\alpha\)) & 5.337 & 0.538 \\
1NN \(\mathrm{Fe}^{O}-\)\(\mathrm{Fe}^{O}\) (\(\beta\)) & 5.337 & 5.055 \\
2NN \(\mathrm{Fe}^{T}-\)\(\mathrm{Fe}^{O}\) & 5.555 & 0.285 \\
2NN \(\mathrm{Fe}^{T}-\)\(\mathrm{Fe}^{T}\) & 5.765 & 3.437 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The Heisenberg exchange coefficients J of YIG, where an effective spin \(S=1\) is adopted. For the \(\mathrm{Fe}^{O}-\)\(\mathrm{Fe}^{O}\) pairs, the Greek letters (\(\alpha\) & \(\beta\)) refer to different chemical environments. All the results are calculated with the four-state method.
Figure 2: (a) The primitive cell of \(\mathrm{Y_{3}Fe_{5}O_{12}}\). The golden balls represent iron atoms, the cyan balls stand for yttrium atoms, and the red balls refer to oxygen atoms. (b) The magnetic ground state of YIG. The arrows of different colors represent the spin directions of the Fe atoms. (c) The density of states obtained by DFT calculations. (d) The temperature dependence of the average magnetization measured in MC and ASD simulations. For YIG, the phase transition point from ferrimagnetic to paramagnetic lies at approximately 530 K.
label the original tensor obtained from DFT calculations, "PASP" to label the modified tensor in which interactions beyond the cutoff radius are eliminated, and "Modified" to label the tensor after the optimization of independent variables. As shown in Table 2, the "PASP" tensor violated the acoustic sum rule and was not positive semi-definite, whereas these issues were resolved for the "Modified" tensor. Although an obvious difference existed between the "PASP" and "Modified" tensors in terms of their eigenvalues, we still assumed the target system could be reasonably described by the "Modified" tensor; the validity of this assumption is verified by the calculated results for the damping constants. Additional details regarding the selection of tensor elements and the deviation of phonon spectra are provided in Fig. 3. According to Figs. 3(b) and 3(c), the major deviation in the phonon spectra resulted from the elimination of tensor elements, rather than from the subsequent modification.
Having completed the preparation of the Hamiltonian model, we applied the scheme proposed in Sec. II.3 to our first object, \(\mathrm{Y_{3}Fe_{5}O_{12}}\). An instance is presented in Figure 4. We set \(T_{L}=30K\), \(T_{S}=180K\) for the initial nonequilibrium state and adopted an expanded supercell containing 12800 atoms in the simulation. Fig. 4(a) shows the time evolution of the spin temperature in different types of simulations. By comparing the curves, we could roughly estimate that the equivalent damping constant in the SLD simulation fell within the range of \(10^{-3}\sim 10^{-4}\). To make the estimation more precise, we calculated the initial cooling rates \(\partial T_{S}/\partial t|_{t=0}\) through polynomial (or exponential) fittings and plotted them in Fig. 4(b). Afterwards, a linear regression was performed to determine the quantitative relation between \(\lg(-\partial T_{S}/\partial t|_{t=0})\) and \(\lg(\alpha)\). As expected, the cooling rates in the ASD simulations were proportional to the assigned damping constants. Then, we combined the results of the SLD and ASD simulations to evaluate the equivalent damping constant. This step was accomplished by identifying the intersection of the red and blue lines in Figure 4(b). Finally, the damping constant was determined to be \(\alpha_{f}=(2.87\pm 0.13)\times 10^{-4}\) in this case. To verify our method and result, we present a comparison between the SLD and ASD (where we set \(\alpha=\alpha_{f}\)) simulations in Fig. 4(c). The curves agree well with each other in the initial stage but deviate in the second half. This behavior is within our expectation, because in the SLD simulation the lattice heats up as the spin cools down, thereby slowing the energy transfer between the two subsystems.
In addition to the above case, we have measured the equivalent damping constants under different conditions to investigate the temperature dependence of magnetic damping. The final results are summarized in Figure 5. Details about the estimation of uncertainties are given in the supplementary material. For \(\mathrm{Y_{3}Fe_{5}O_{12}}\), the damping constants at different temperatures stay on the order of \(10^{-4}\), which is in good agreement with the experimental results (\(3.2\times 10^{-4}\)[47], \(2.2\times 10^{-4}\)[48], \(1.2\)-\(1.7\times 10^{-4}\)[49]). For example, the damping constant in bulk YIG was reported as \(0.4\times 10^{-4}\) in Ref. [50]. Meanwhile, our calculations yielded \(\alpha=(2.8\pm 0.3)\times 10^{-5}\) at \(\Delta T=15\) K and \(\alpha=(7.0\pm 0.7)\times 10^{-5}\) at \(\Delta T=30\) K, where both \(T_{L}=0\) K. Therefore, the experimental value corresponds roughly to the temperature region of \(\Delta T=15\sim 30\) K in our study. We believe such extent of thermal excitation is quite common in all kinds of spintronics experiments. Moreover, Fig. 5 indicates that \(\alpha\) is approximately proportional to the temperature difference between subsystems. This outcome is also consistent with some computational works in the past [25, 23]. By comparing the subfigures in Figure 5, we found that \(\alpha\) has little dependence on the lattice temperature, although here \(T_{L}\) could be viewed to some extent as the ambient temperature of the spin system.
As a supplement to Sec. III.1, we further validate our simulations by analyzing the measured cooling rates in Fig. 5(a). By subtracting Eq. (23) from Eq. (22), the transfer rate of energy between magnon and phonon systems can be expressed as,
\[\dot{Q}=\sum_{qp}\hbar\omega_{qp}\langle\dot{N}_{qp}\rangle=\sum_{\lambda,qp}T _{\lambda,qp} \tag{24}\]
where \(T_{\lambda,qp}\) denotes different transfer channels,
\[T_{\lambda,qp}\propto(n_{\lambda}-n_{\lambda-q})N_{qp}+n_{\lambda-q}n_{ \lambda}+1 \tag{25}\]
According to the Bose-Einstein distribution, the number of magnons and phonons can be expressed as,
\[n_{\lambda}=\frac{1}{e^{\nicefrac{{\alpha}}{{k_{B}}}T_{S}}-1},\ N_{qp}=\frac {1}{e^{\nicefrac{{\alpha}}{{k_{B}}}T_{L}}-1} \tag{26}\]
When \(T_{S}\) is high enough and \(T_{L}\) is close to zero, we can approximate \(n_{\lambda}=k_{B}T_{S}/\epsilon_{\lambda}\propto T_{S}\) and take \(N_{qp}\) to be close to zero. Under these conditions, we have \(\dot{Q}\propto T_{S}^{2}\). This relation
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{VASP} & \multicolumn{2}{c}{PASP} & \multicolumn{2}{c}{Modified} \\ \cline{2-7} No. & A & B & A & B & A & B \\ \hline
1 & 0.000 & 0.000 & 1.587 & -0.102 & 0.000 & 0.000 \\
2 & 0.000 & 0.000 & 1.587 & -0.102 & 0.000 & 0.000 \\
3 & 0.000 & 0.000 & 1.587 & -0.102 & 0.000 & 0.000 \\
4 & 0.000 & 1.065 & 1.587 & 0.643 & 0.000 & 0.444 \\
5 & 0.000 & 1.065 & 1.587 & 0.643 & 0.000 & 0.444 \\
6 & 0.000 & 1.065 & 1.587 & 0.643 & 0.000 & 0.444 \\ \end{tabular}
\end{table}
Table 2: The force constant tensor of YIG. The columns labeled by A represent the sorted absolute values of \(\sum_{i}K_{ij,\alpha\beta}\) and the columns labeled by B list the sorted eigenvalues of \(K_{ij,\alpha\beta}\). For the cubic cell of YIG, we obtained the original tensor with the VASP package. Then, we eliminated the elements that represent interactions beyond the cutoff radius. This step was done by PASP. Finally, the tensor was modified to meet the requirement of translational symmetry through the optimization formulated in (12).
Figure 4: (a) The time evolution of spin temperature in SLD and ASD simulations. The gray line represents the SLD simulation while the others refer to the ASD simulations with different damping constants. (b) The initial cooling rates \(\partial T_{S}/\partial t|_{t=0}\) with respect to the damping constants \(\alpha\), where the scaling of axis is set to be logarithm. The gray squares refer to the results of ASD simulations and the blue line acts as the linear regression. The red circle is plotted by intersection of the blue line and the horizontal red dash line, which represents the initial cooling rate in the SLD simulation. Then we can obtain the equivalent damping constant from the abscissa of the red circle. (c) The comparison between ASD and SLD simulations. In the ASD simulation, the Gilbert damping constant is set to be \(\alpha=2.87\times 10^{-4}\), which is exactly the result of our evaluation from the SLD simulation.
Figure 5: The temperature dependence of Gilbert damping constants for Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\). The label of abscissa axis \(\Delta T\) refers to \(T_{S}-T_{L}\) of the initial state in dynamical simulations. Measurements on the magnetic damping are performed under different initial conditions of the lattice temperature: (a) \(T_{L}=0\), (b) \(T_{L}=30K\), (c) \(T_{L}=60K\).
Figure 3: (a) The selection of force constant tensor elements for the cubic cell of YIG. A \(160\times 160\) zero-one matrix is used to show the result of the selection, in which '1' denotes the interactions within the cutoff radius and '0' represents the elements that are artificially eliminated. (b) The phonon spectrum calculated from the force constant tensor before and after the elimination of tensor elements. (c) The phonon spectrum calculated from the force constant tensor before and after the optimization of independent variables.
is well verified by linear regressions and the details are provided in the supplementary material.
Furthermore, the accuracy of our simulations can also be proved from another perspective. According to Eqs. (22) and (23), the scattering rate \(W\) grows quadratically with the coupling parameters \(M_{\lambda,qp}\). Based on the theory of second quantization, \(M_{\lambda,qp}\) shall be proportional to the coupling constants \(\partial J_{ij}/\partial u_{k,\alpha}\). Therefore, at a fixed temperature, we have:
\[\alpha\propto\dot{Q}\propto\Delta W\propto M_{\lambda,qp}^{2}\propto(\partial J _{ij}/\partial u_{k,\alpha})^{2} \tag{27}\]
In order to verify this relation, we adjusted the spin-lattice coupling constants of YIG coherently while keeping the other model parameters unchanged. Then, SLD simulations were carried out to evaluate the corresponding damping constants. The result is plotted in Fig. 6, where the x-label "slcc" stands for the spin-lattice coupling constants and the subscript "0" refers to the original situation. From a linear fitting, the slope is determined to be 2.01, which agrees well with our prediction.
### Damping constants in MnFe\({}_{2}\)O\({}_{4}\)
After the calculation on YIG, we applied our method to MnFe\({}_{2}\)O\({}_{4}\) (MFO), which was reported to possess a large Gilbert damping constant in the literature [13, 51]. As shown in Fig. 7(a), MnFe\({}_{2}\)O\({}_{4}\) has a typical spinel structure, where A sites are surrounded by four oxygen atoms and B sites are located in octahedra. Generally, spinels can be classified into normal and inverse structures according to the distribution of divalent and trivalent cations between the A/B sites. In experiments, MFO usually crystallizes into a mixed phase where the normal structure occupies the major part (80% in bulk MFO [52]). We considered only its normal structure in this work. Also, the magnetic ground state of MFO is shown in Fig. 7(b), where the magnetic moments are antiparallel between the A/B sites.
Firstly, we constructed an effective Hamiltonian model for MFO. With the same cutoff settings as for YIG, we found 105 nonequivalent interactions, including 4 Heisenberg exchange terms and 10 spin-lattice coupling terms. Subsequently, DFT calculations were carried out to determine the interaction parameters. In these calculations, we adopted a cubic cell containing 56 atoms and a \(\Gamma\)-centered \(4\times 4\times 4\) grid mesh in the reciprocal space. Besides, \(U_{\text{Mn}}=3.3\) eV and \(U_{\text{Fe}}=3.6\) eV were used as the effective Hubbard parameters [52]. With the exception of the aforementioned settings, all the relevant first-principles calculations were performed under the same conditions as in Sec. III.2.
The DOS of MnFe\({}_{2}\)O\({}_{4}\) is plotted in Fig. 7(c), yielding a calculated band gap of 0.612 eV. This value does not match the result of transport experiments, which reported a much smaller band gap (0.04-0.06 eV) [53]. In addition, MC and ASD simulations were performed using the Heisenberg exchange coefficients listed in Table 3. The temperature dependence of the average magnetization, shown in Fig. 7(d), suggests the critical temperature to be around 730 K. This result is significantly higher than the measured value of 573 K [54]. Both of the above discrepancies may be attributed to the inevitable difference between the ideal normal spinel structure in calculations and the partially disordered samples in reality. Despite this problem, we proceeded to describe the target system with our Hamiltonian model and expected to see how far the calculated damping constants would differ
Figure 7: (a) The cubic cell of MnFe\({}_{2}\)O\({}_{4}\). The purple balls represent manganese atoms, the golden balls refer to iron atoms, and the red balls stand for oxygen atoms. (b) The magnetic ground state of MFO. The arrows of different colors represent the spin directions of the Mn and Fe atoms separately. (c) The density of states obtained by DFT calculations. (d) The temperature dependence of the average magnetization measured in MC and ASD simulations. For MnFe\({}_{2}\)O\({}_{4}\), the phase transition point from ferrimagnetic to paramagnetic lies at approximately 730 K.
Figure 6: The relation between damping constants \(\alpha\) and spin-lattice coupling constants \(\partial J_{ij}/\partial u_{k,\alpha}\) in YIG. Through a linear fitting, the slope is determined to be 2.01, which agrees well with our theoretical predictions.
from experimental values.
After the preparation of the Hamiltonian model, we conducted dynamics simulations to evaluate the equivalent damping parameters in MFO at different temperatures. A supercell containing 13440 atoms was adopted in the simulation, and the results are summarized in Fig. 10. The average of the calculated damping constants is around \(8\times 10^{-5}\), which is much smaller than the measured value, \(1.0\times 10^{-2}\) [51; 13]. Two factors may account for this inconsistency. Firstly, the inhomogeneity in real MnFe\({}_{2}\)O\({}_{4}\) samples greatly enhances the scattering of magnons and phonons, thereby increasing the damping constants. Additionally, due to the narrow band gap observed in experiments, eddy currents can arise at finite temperatures, which leads to a rapid loss of energy in the form of Joule heat. As a result of these factors, we failed to obtain a reasonable estimation of the Gilbert damping constants for MnFe\({}_{2}\)O\({}_{4}\) with our methodology. On the other hand, the contribution of different relaxation mechanisms to the FMR linewidth has been studied comprehensively for MnFe\({}_{2}\)O\({}_{4}\) in Ref. [53], which further confirms our analyses.
### Damping constants in Cr\({}_{2}\)O\({}_{3}\)
Chromia (Cr\({}_{2}\)O\({}_{3}\)) is a well-known collinear magneto-electric antiferromagnet, which holds great prospects in the field of spintronics [55; 56; 57]. As shown in Fig. 8(a), the primitive cell of Cr\({}_{2}\)O\({}_{3}\) contains 10 atoms, with each chromium atom bonded to the six oxygen atoms around it. Additionally, Fig. 8(b) displays the magnetic ground state of Cr\({}_{2}\)O\({}_{3}\), where the spins of two nearest-neighboring Cr atoms are oriented in opposite directions.
As a preliminary step in constructing the Hamiltonian model, we set the cutoff radius of interactions to be 11.0 Bohr for atomic pairs and 7.0 Bohr for 3-body clusters. Through symmetry analyses, we identified 319 nonequivalent interactions, including 5 Heisenberg exchange terms and 21 spin-lattice coupling terms.
Afterwards, a series of first-principles calculations was performed to determine the model parameters. Following the settings in Ref. [58], we adopted a hexagonal cell of Cr\({}_{2}\)O\({}_{3}\), which contained a total of 90 atoms, in the calculations. Additionally, we used the LSDA+U method in its full spherically symmetric form [59]. As for the Hubbard parameters, \(J\) was fixed at its recommended value of 0.6 eV, and \(U\) was adjusted to fit the Néel temperature observed in experiments [60]. We found \(U=2.0\) eV to be the optimal value for the \(3d\) electrons of Cr ions. Except for the settings specified above, all the DFT calculations were conducted under the same conditions as in Sec. III.3.
The DOS of Cr\({}_{2}\)O\({}_{3}\) is plotted in Fig. 8(c), which yields a calculated band gap of 1.935 eV. This value indicates that the energy dissipation of electric currents can be neglected in this system. Additionally, we list the Heisenberg exchange coefficients of chromia in Table 4. Both MC and ASD simulations were performed to investigate the temperature dependence of the sublattice magnetization. According to Fig. 8(d), the critical point was determined to be approximately 310 K, which is quite consistent with experimental observations. Also, the force constants of Cr\({}_{2}\)O\({}_{3}\) went through the modification formulated in Sec. II.2, and the spin-lattice coupling parameters are provided in the supplementary material.
After the construction of the Hamiltonian model, we conducted a series of dynamics simulations to evaluate the
\begin{table}
\begin{tabular}{c c c} Spin Pair & Distance (Å) & J (meV) \\ \hline
1NN Cr-Cr & 2.640 & 44.778 \\
2NN Cr-Cr & 2.873 & 29.269 \\
3NN Cr-Cr & 3.411 & -0.182 \\
4NN Cr-Cr & 3.635 & 0.007 \\
5NN Cr-Cr & 4.137 & -0.500 \\ \end{tabular}
\end{table}
Table 4: The exchange coefficients J of Cr\({}_{2}\)O\({}_{3}\), in which an effective spin \(S=1\) is adopted.
Figure 8: (a) The primitive cell of Cr\({}_{2}\)O\({}_{3}\). The dark blue balls represent chromium atoms, and the red balls stand for oxygen atoms. (b) The magnetic ground state. The arrows of different colors represent the spin directions of the Cr atoms. (c) The density of states obtained by DFT calculations. (d) The temperature dependence of the sublattice magnetization measured in MC and ASD simulations. For Cr\({}_{2}\)O\({}_{3}\), the phase transition point from antiferromagnetic to paramagnetic lies at approximately 310 K.
\begin{table}
\begin{tabular}{c c c} Spin Pair & Distance (Å) & J (meV) \\ \hline
1NN Fe-Fe & 3.003 & 6.835 \\
1NN Mn-Fe & 3.521 & 33.224 \\
1NN Mn-Mn & 3.667 & 3.956 \\
2NN Fe-Fe & 5.201 & 0.929 \\ \end{tabular}
\end{table}
Table 3: The exchange coefficients J of MnFe\({}_{2}\)O\({}_{4}\), where an effective spin \(S=1\) is adopted.
equivalent damping parameters in Cr\({}_{2}\)O\({}_{3}\). An expanded hexagonal cell containing 14400 atoms was adopted for the simulation, and the results are summarized in Fig. 11. As two specific cases, our calculation yielded \(\alpha=(1.31\pm 0.14)\times 10^{-4}\) at \(\Delta T=15\) K and \(\alpha=(2.7\pm 0.3)\times 10^{-4}\) at \(\Delta T=30\) K, both with \(T_{L}=0\) K. Therefore, the calculated damping constants within \(\Delta T=15\sim 30\) K are quite close to \(2\times 10^{-4}\), which is the estimated value reported in Ref. [61].
Furthermore, the damping constants in Cr\({}_{2}\)O\({}_{3}\) exhibit a significantly non-linear relation with the temperature difference between the subsystems. Through logarithmic fittings, we calculated the power exponents for Figures 11(a) to 11(c), and the results were 1.17, 1.62 and 1.38. If we disregard the difference between \(\Delta T\) and \(T\) for the moment, these values are in good agreement with the theoretical prediction of Kasuya and LeCraw [26]. According to their study, the relaxation rate varies as \(T^{n}\) with \(n=1\sim 2\), where \(n=2\) corresponds to the higher-temperature regime.
Compared to YIG, the greater magnetic damping observed in chromia can be attributed to its significantly stronger spin-lattice coupling. As shown in Fig. 9, the magnitude of the principal spin-lattice coupling constant in Cr\({}_{2}\)O\({}_{3}\) is two to three times larger than that in YIG. This can be explained by the fact that the direct exchange interaction between two magnetic atoms decreases rapidly with their distance [62]. Therefore, owing to the shorter distance of the Cr-Cr pair, the direct exchange interaction between neighboring Cr atoms is believed to make a great contribution to the spin-lattice coupling in Cr\({}_{2}\)O\({}_{3}\).
## IV Conclusions
In summary, we propose a scheme to evaluate the contribution of spin-lattice coupling to the Gilbert damping in insulating magnetic materials. Our methodology involves first-principles based Hamiltonian models and spin-lattice dynamics simulations. Following a series of validations, we applied our method to three magnetic materials, namely Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), MnFe\({}_{2}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\). Their damping constants were estimated separately, and the results show that, in general, \(\alpha\) is approximately proportional to the temperature difference between spin and lattice subsystems. Under the condition of \(\Delta T=30\) K, the calculated damping constants are averaged to be \(0.8\times 10^{-4}\) for YIG, \(0.2\times 10^{-4}\) for MFO and \(2.2\times 10^{-4}\) for Cr\({}_{2}\)O\({}_{3}\). The results for YIG and Cr\({}_{2}\)O\({}_{3}\) are in good agreement with experimental measurements, while the discrepancy for MFO can be attributed to the inhomogeneity and small band gap in real samples. Overall, the approach presented in this work holds great promise for accurately predicting the Gilbert damping constants for magnetic insulators.
###### Acknowledgements.
This work is supported by the National Key R&D Program of China (No. 2022YFA1402901), the National Natural Science Foundation of China (Grant Nos. 11825403, 11991061, and 12188101), and the Guangdong Major Project of the Basic and Applied Basic Research (Future functional materials under extreme conditions, 2021B0301030005).
|
2308.16514 | On arrangements of smooth plane quartics and their bitangents | In the present paper we revisit the geometry of smooth plane quartics and
their bitangents from various viewpoints. First of all, we study in detail the
weak combinatorics of arrangements of bitangents associated with highly
symmetric quartic curves. We look at these quartic curves automorphism-wise and
we establish a lower bound on the number of quadruple intersection points for
arrangements of bitangents associated to smooth plane quartics being smooth and
reduced members of Ciani's pencil. Then we construct new examples of $3$-syzygy
reduced plane curves using smooth plane quartics and their bitangents. | Marek Janasz, Piotr Pokora, Marcin Zieliński | 2023-08-31T07:57:47Z | http://arxiv.org/abs/2308.16514v1 | # On arrangements of smooth plane quartics and their bitangents
###### Abstract
In the present paper we revisit the geometry of smooth plane quartics and their bitangents from various viewpoints. First of all, we study in detail the weak combinatorics of arrangements of bitangents associated with highly symmetric quartic curves. We look at these quartic curves automorphism-wise and we establish a lower bound on the number of quadruple intersection points for arrangements of bitangents associated to smooth plane quartics being smooth and reduced members of Ciani's pencil. Then we construct new examples of 3-syzygy reduced plane curves using smooth plane quartics and their bitangents.
**Keywords** automorphism groups, plane quartics, quartic-line arrangements, singular points
**Mathematics Subject Classification (2020)** 14N20, 14C20, 32S22
## 1 Introduction
In recent years there has been growing interest among researchers in the geometry of plane curve arrangements. The classical approach is to study line arrangements in the (complex) projective plane and their important applications in many areas of contemporary mathematics, including topology, commutative algebra, and combinatorics. On the other hand, the geometry of curve arrangements is not as well-established as in the case of line arrangements, mostly due to the many difficulties that can arise. The first issue is the fact that singularities of curve arrangements are usually not quasi-homogeneous, which causes many complications, and it is usually difficult to define the right notion of the combinatorics associated with the arrangements that will have a universal meaning for many purposes. More recently, conic-line arrangements in the complex projective plane have come into play, mostly in the context of freeness and nearly-freeness of curves, or monodromy groups, see for instance [6, 17, 21, 22, 24]. Here we would like to follow a new line of research devoted to understanding the geometry of curve arrangements where the irreducible components are smooth curves of positive genus [5, 7]. Recently in [25], Talar studied arrangements of smooth plane cubic curves and lines, and under some natural assumptions on the singularities of arrangements he provided a degree-wise characterisation of certain free cubic-line arrangements. In this spirit, our aim here is to study the geometry of smooth plane quartics and lines that admit some quasi-homogeneous
singularities. Here we focus on particularly beautiful line arrangements associated with sufficiently general smooth quartic curves, namely arrangements of bitangents.
Our first results, contained in Section 3, are devoted to the description of incidences between the 28 bitangents associated with very symmetric smooth plane quartics (like the Klein quartic). The reason for this is somewhat surprising, namely we have not been able to find a complete description of the incidences of such line arrangements in the literature. It is known that the maximal multiplicity of the intersection points for arrangements of bitangents is four (see for instance [23]), but apart from this fact we do not know much. Our choice of smooth quartics is determined by the fact that these curves have large automorphism groups, suggesting the existence of points of high multiplicity. Using some classical results on Riemann surfaces, we prove that the number of quadruple points of arrangements of bitangents associated to smooth plane quartics is related to their involutions. Our result allows us to establish a lower bound on the number of quadruple points for arrangements of bitangents associated with smooth plane quartics that are smooth reduced members of Ciani's pencil, namely we have at least 9 such quadruple points. We then focus on homological properties of the Milnor algebras associated with our arrangements. In Section 4, we show that for degrees up to 6 there are no free arrangements of quartics and lines with some prescribed singularities. Then we provide new families of examples of the so-called 3-syzygy curves with degree up to 12. Our research in this context is motivated by results due to Dimca and Sticlaru devoted to 3-syzygy curves [8]. Finally, in Section 2, we give a general result which is a Hirzebruch-type inequality for arrangements of smooth quartics and lines in the complex projective plane admitting certain quasi-homogeneous singularities.
Throughout the paper we work over the complex numbers. All symbolic computations were performed using SINGULAR [3].
## 2 Hirzebruch-type inequality for arrangements of smooth plane quartics and lines with quasi-homogeneous singularities
The main aim of this section is to present a Hirzebruch-type inequality for quartic-line arrangements with some prescribed singularities. This result is general enough for many purposes, especially in the context of quartics and their tangential or hyperosculating lines.
**Theorem 2.1**.: _Let \(\mathcal{QL}=\{\ell_{1},...,\ell_{d},Q_{1},...,Q_{k}\}\subset\mathbb{P}_{ \mathbb{C}}^{2}\) be an arrangement of \(d\geqslant 1\) lines and \(k\geqslant 1\) smooth quartics such that \(4k+d\geqslant 6\). Assume that \(\mathcal{QL}\) admits \(n_{2}\) nodes, \(t_{2}\) tacnodes, \(t_{5}\) singularities of type \(A_{5}\), \(d_{6}\) singularities of type \(D_{6}\), \(t_{7}\) singularities of type \(A_{7}\), \(n_{3}\) ordinary triple and \(n_{4}\) ordinary quadruple points. Then one has_
\[56k+n_{2}+\frac{3}{4}n_{3}\geqslant d+\frac{13}{8}d_{6}+\frac{5}{2}t_{2}+5t_{ 5}+\frac{29}{4}t_{7}.\]
Proof.: We present a short outline of the proof following the path explained in [19, 21]. Let \(D=\ell_{1}+...+\ell_{d}+Q_{1}+...+Q_{k}\) be our divisor with \(\deg(\mathcal{QL}):=m=4k+d\), where \(k\geqslant 1\) and \(d\geqslant 1\). We will work with the pair \(\left(\mathbb{P}_{\mathbb{C}}^{2},\frac{1}{2}D\right)\) which is log-canonical and effective since \(4k+d\geqslant 6\). We are going to use Langer's inequality proved in [16], namely
\[\sum_{p\in\mathrm{Sing}(\mathfrak{C})}3\bigg{(}\frac{1}{2}\bigg{(}\mu_{p}-1 \bigg{)}+1-e_{orb}\bigg{(}p,\mathbb{P}_{\mathbb{C}}^{2},\frac{1}{2}D\bigg{)} \bigg{)}\leqslant\frac{5}{4}m^{2}-\frac{3}{2}m, \tag{1}\]
where \(\mu_{p}\) is the Milnor number of a singular point \(p\in\text{Sing}(\mathcal{C})\) and \(e_{\text{orb}}(\cdot)\) denotes the local orbifold Euler number. The left-hand side of the inequality (1) is bounded from below by
\[\frac{9}{4}n_{2}+\frac{45}{8}t_{2}+\frac{117}{16}n_{3}+\frac{35}{4}t_{5}+\frac{ 333}{32}d_{6}+\frac{189}{16}t_{7}+15n_{4},\]
which is an easy computation, see for instance [21, Theorem B] or [26]. Now we look at the right-hand side. First of all, observe that the following combinatorial count holds:
\[16{k\choose 2}+4kd+{d\choose 2}=n_{2}+2t_{2}+3n_{3}+3t_{5}+4d_{6}+4t_{7}+6n_{4}. \tag{2}\]
This gives us
\[5m^{2}-6m=5\cdot(16k+d+2n_{2}+4t_{2}+6n_{3}+6t_{5}+8d_{6}+8t_{7}+12n_{4})-6(4k +d).\]
Combining the data collected above, we obtain
\[56k+n_{2}+\frac{3}{4}n_{3}\geqslant d+\frac{13}{8}d_{6}+\frac{5}{2}t_{2}+5t_{ 5}+\frac{29}{4}t_{7}, \tag{3}\]
which completes the proof.
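The coefficient bookkeeping in this proof is elementary but easy to get wrong. For the reader who wishes to check the arithmetic, the step from (1) and (2) to (3) can be verified with exact rational arithmetic; the sketch below is only a verification aid, with the lower bound on the left-hand side of (1) taken from the computation quoted above.

```python
from sympy import Rational, symbols, simplify

k, d, n2, n3, n4, t2, t5, d6, t7 = symbols("k d n2 n3 n4 t2 t5 d6 t7")

# lower bound on the left-hand side of Langer's inequality (1)
lhs = (Rational(9, 4)*n2 + Rational(45, 8)*t2 + Rational(117, 16)*n3
       + Rational(35, 4)*t5 + Rational(333, 32)*d6
       + Rational(189, 16)*t7 + 15*n4)

# m^2 rewritten via the combinatorial count (2), with m = 4k + d
m_sq = 16*k + d + 2*n2 + 4*t2 + 6*n3 + 6*t5 + 8*d6 + 8*t7 + 12*n4
rhs = Rational(1, 4)*(5*m_sq - 6*(4*k + d))  # (5/4)m^2 - (3/2)m

# 4*(rhs - lhs) >= 0 should be exactly the inequality (3)
target = (56*k + n2 + Rational(3, 4)*n3 - d - Rational(13, 8)*d6
          - Rational(5, 2)*t2 - 5*t5 - Rational(29, 4)*t7)
print(simplify(4*(rhs - lhs) - target))  # prints 0
```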
## 3 Smooth plane quartics and bitangents
Let us consider the following pencil of plane quartics introduced by Ciani in [2]:
\[Q_{\lambda}\,:\quad x^{4}+y^{4}+z^{4}+\lambda(x^{2}y^{2}+x^{2}z^{2}+y^{2}z^{2}),\]
where \(\lambda\in\mathbb{C}\) and \(x,y,z\) are the homogeneous coordinates in the complex projective plane. Certain members of Ciani's pencil are particularly interesting, namely:
* if \(\lambda=0\), then we have the Dyck quartic curve that admits a group of \(96\) collineations. Moreover, the Dyck curve admits exactly \(12\) hyperflexes [13].
* if \(\lambda\) is either root of \(\lambda^{2}+3\lambda+18\), then we have the Klein quartic curve admitting a group of \(168\) collineations; we know that the Klein quartic is the unique Hurwitz curve of genus \(3\), having the largest possible group of collineations for its genus.
* if \(\lambda=3\), then we have the Komiya-Kuribayashi quartic which admits exactly \(12\) hyperflexes [15]. Moreover, this quartic admits a group of \(24\) collineations that is isomorphic to \(S_{4}\).
It is classically known that a general complex plane quartic curve has \(28\) bitangents, and this result is attributed to J. Plücker [18]. In this section we are going to describe explicitly the incidences between bitangents for the three particularly symmetric quartics introduced above, namely for the Klein quartic, the Dyck quartic, and the Komiya-Kuribayashi quartic. Let us point out here directly that for us a bitangent line is a line that is tangent to a given curve at two points. However, **we do not assume** that these two points are **distinct**, i.e., we allow them to collide. In each case the associated line arrangement possesses only double and quadruple intersection points, which is an interesting phenomenon that we were not able to detect directly in the literature. For this reason, and for the completeness of our investigations, we deliver, in each case, the incidence table for bitangents and quadruple intersections.
**Proposition 3.1**.: _The \(28\) bitangents to the Klein quartic curve intersect along \(252\) double and \(21\) quadruple points._
Proof.: The property that \(28\) bitangents intersect along \(21\) quadruple points is well-known, see for example [11, Exercise 6.22], but we still need to detect other intersections. Let \(Q\) be the defining equation of the arrangement of \(28\) bitangents (we denote this arrangement by \(\mathcal{L}\)), and denote by \(J_{Q}=\langle\frac{\partial Q}{\partial x},\frac{\partial Q}{\partial y},\frac {\partial Q}{\partial z}\rangle\) the Jacobian ideal. Recall that
\[\deg(J_{Q})=\tau(\mathcal{L})=\sum_{p\in\operatorname{Sing}(\mathcal{L})}( \operatorname{mult}_{\mathrm{p}}(\mathcal{L})-1)^{2},\]
where \(\tau(\mathcal{L})\) denotes the total Tjurina number of \(\mathcal{L}\), and the right-most equality follows from the fact that all singularities are ordinary and quasi-homogeneous. Using SINGULAR, we can check that \(\deg(J_{Q})=441\). Now we are going to show that there are no triple intersections. Observe that \(441=\tau(\mathcal{L})=n_{2}+4n_{3}+9n_{4}\) and \(\binom{28}{2}=n_{2}+3n_{3}+6n_{4}\), and we obtain
\[63=n_{3}+3n_{4}.\]
Since \(n_{4}=21\), we get \(n_{3}=0\). Finally, using one of the counts presented above, we obtain \(n_{2}=252\), and this completes the proof.
Next, we detect intersection points for the \(28\) bitangents to the Dyck quartic curve.
**Proposition 3.2**.: _The \(28\) bitangents to the Dyck quartic curve intersect along \(288\) double and \(15\) quadruple points._
Proof.: The proof goes verbatim as in Proposition 3.1.
Finally, we present a numerical description of the bitangents associated with the Komiya-Kuribayashi quartic.
**Proposition 3.3**.: _The \(28\) bitangents to the Komiya-Kuribayashi quartic curve intersect along \(324\) double and \(9\) quadruple points._
Proof.: The proof goes verbatim as in Proposition 3.1.
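Since the three proofs share the same linear algebra, the computation can be packaged into a few lines. In the sketch below, the total Tjurina numbers 423 and 405 for the Dyck and Komiya-Kuribayashi arrangements are inferred from the stated solutions rather than quoted from the text.

```python
from math import comb

def singularity_counts(tau, n4, lines=28):
    """Solve n2 + 4*n3 + 9*n4 = tau and n2 + 3*n3 + 6*n4 = C(lines, 2)
    for the numbers of double and triple points."""
    pairs = comb(lines, 2)        # 378 for the 28 bitangents
    n3 = (tau - pairs) - 3 * n4   # difference of the two counts
    n2 = pairs - 3 * n3 - 6 * n4
    return n2, n3

print(singularity_counts(441, 21))  # Klein: (252, 0)
print(singularity_counts(423, 15))  # Dyck: (288, 0)
print(singularity_counts(405, 9))   # Komiya-Kuribayashi: (324, 0)
```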
Based on some experiments performed with smooth plane quartics with large automorphism groups (classically, this means that the automorphism group has order at least \(9\)), we observed that the number of quadruple intersection points is at most \(21\), and the maximal value is obtained for the \(28\) bitangents to the Klein quartic. Observe that using Theorem 2.1 we can show that for the \(28\) bitangents to a smooth plane quartic the number of quadruple points is less than or equal to \(44\). However, using classical results on Riemann surfaces, we can do much better. Let us emphasize here that our results may be known to experts in classical algebraic geometry, but we have not been able to find them in full in the literature, and for this reason we have decided to give a comprehensive outline. It is worth noting here that the idea behind this part was suggested to the second author by Igor Dolgachev in a private conversation. The first part of our outline is based on Dolgachev's notes [9, 10].
**Definition 3.4**.: We say that a smooth complex projective curve \(C\) is bielliptic if it admits a degree \(2\) cover of an elliptic curve.
Let \(C\) be a canonical curve of genus \(3\) over \(\mathbb{C}\) with a bielliptic involution \(\tau:C\to C\), i.e., an involution such that the genus of the quotient curve is \(1\). In its canonical plane model, \(\tau\) is induced by a projective involution \(\widetilde{\tau}\) whose set of fixed points consists of a point \(x_{0}\) and a line \(\ell_{0}\). The intersection \(\ell_{0}\cap C\) consists of the fixed points of \(\tau\) on \(C\).
**Theorem 3.5** (S. Kowalevskaya).: _The point \(x_{0}\) is the intersection point of four distinct bitangents of \(C\). Conversely, if a plane quartic has four bitangents intersecting at a point \(x_{0}\), then there exists a bielliptic involution \(\tau\) of \(C\) such that the projective involution \(\widetilde{\tau}\) has \(x_{0}\) as its isolated fixed point._
The above theorem tells us that the quadruple intersection points of the bitangents are in correspondence with bielliptic involutions. It is classically known that for smooth complex projective curves of genus \(3\) the maximum possible number of bielliptic involutions is \(21\), and this value is achieved for the Klein quartic. This means that we have exactly \(21\) quadruple intersection points for the \(28\) bitangents to the Klein quartic. Based on these observations, we can show the following general result.
**Proposition 3.6**.: _Let \(C\) be a smooth complex quartic curve in \(\mathbb{P}^{2}_{\mathbb{C}}\) and let \(\mathcal{L}\) be the associated arrangement of the \(28\) bitangent lines. Then the number of quadruple intersection points of \(\mathcal{L}\) is equal to the number of involutions of \(C\). Moreover, the number of quadruple intersection points is less than or equal to \(21\), and this upper bound is achieved for the \(28\) bitangents to the Klein quartic curve._
Proof.: Recall that smooth plane quartics are never hyperelliptic. By a corollary to Accola's result [1], if a smooth complex projective curve \(C\) of genus \(3\) is not hyperelliptic, then any involution of \(C\) is a bielliptic involution. This proves the first part of our result. For the second part, using Kowalevskaya's result, we see that the \(28\) bitangents to the Klein quartic indeed deliver \(21\) quadruple intersections, and since \(21\) is the maximum possible number of bielliptic involutions for smooth plane quartic curves, we get the sharp upper bound.
Let us now describe the geometry of our two remaining symmetric quartic curves via their automorphism groups. We start with the Dyck quartic curve, whose automorphism group is \(C_{4}^{2}\rtimes S_{3}\) of order \(96\). This group is well-known in the literature and described in [13, 14]. First of all, Dyck himself noticed that the set of bitangency points splits into two groups: we have \(12\) hyperflexes (where the bitangent lines are hyperosculating) and \(32\) ordinary tacnodes, so altogether there are \(12+32\) tangential points. Observe that the first \(12\) points are \(A_{7}\) singularities viewed as intersections of the hyperosculating tangent lines with the Dyck curve. Moreover, it was observed that these \(12\) points can be divided into three groups of four elements each; this can be done due to the existence of three special involutions. Each involution defines a line with the property that four of the hyperflexes lie on that line. We remind the reader that these three lines are called perspective axes (Perspektivitätsachsen). Furthermore, we can geometrically find another \(12\) involutions, namely these are exactly the elements defining the \(12\) lines in the \(4\)-th CEVA arrangement [20]. This implies that the \(28\) bitangents to the Dyck curve deliver exactly \(15\) quadruple intersection points, as we have already seen in Proposition 3.2.
Let us take a look at the Komiya-Kuribayashi quartic. It is well-known that its automorphism group is \(\Sigma_{4}\), the full symmetric group on \(4\) elements. It is an easy exercise to check that \(\Sigma_{4}\) contains exactly \(9\) involutions, which means that the \(28\) bitangents deliver exactly \(9\) quadruple intersections, as we have seen in Proposition 3.3.
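The involution count in \(\Sigma_{4}\) can also be verified by brute force; the following short Python sketch (an illustrative aside, working with permutations of \(\{0,1,2,3\}\)) does so:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2, 3)
perms = list(permutations(range(4)))
involutions = [p for p in perms if p != identity and compose(p, p) == identity]
print(len(involutions))  # 9: six transpositions and three double transpositions
```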
To finish the discussion in this section, observe that every reduced and smooth member of Ciani's pencil is invariant under the action of the group \(\Sigma_{4}\), which can be viewed as the semi-direct product \((\mathbb{Z}/2\mathbb{Z})^{2}\rtimes\Sigma_{3}\), with \(\Sigma_{3}\) acting by permutations on \(x,y,z\) and \((\mathbb{Z}/2\mathbb{Z})^{2}\) acting by sign changes. This fact allows us to formulate the following main observation of this section.
**Proposition 3.7**.: _Let \(C\) be a smooth complex plane quartic curve that is a smooth and reduced member of Ciani's pencil, and let \(\mathcal{L}\) be the arrangement of the \(28\) bitangent lines to \(C\). Then \(\mathcal{L}\) has at least \(9\) quadruple intersection points._
In the next subsections we focus on our particularly chosen smooth plane quartics and deliver, in each case, a combinatorial description of the associated arrangement of bitangents. This description will be very useful for our further considerations devoted to homological properties of quartic-line arrangements. Our computations were implemented and performed in SINGULAR.
### The Klein quartic curve and its bitangents
In this section \(e\) is the number satisfying \(e^{2}+e+2=0\).
\begin{table}
\begin{tabular}{c l l l l l} \hline \(\ell_{i}\) & equation of line & \(\ell_{i}\) & equation of line & \(\ell_{i}\) & equation of line \\ \hline \(\ell_{1}:\) & \(y+ez\) & \(\ell_{11}:\) & \(x+y+(e-1)z\) & \(\ell_{21}:\) & \(x-ez\) \\ \(\ell_{2}:\) & \(y-ez\) & \(\ell_{12}:\) & \(x+ez\) & \(\ell_{22}:\) & \(x+y+(-e+1)z\) \\ \(\ell_{3}:\) & \(ex-y\) & \(\ell_{13}:\) & \(x+ey\) & \(\ell_{23}:\) & \((e-1)x+y-z\) \\ \(\ell_{4}:\) & \(ey-z\) & \(\ell_{14}:\) & \(x-y+z\) & \(\ell_{24}:\) & \(x+(-e+1)y-z\) \\ \(\ell_{5}:\) & \((-e+1)x+y-z\) & \(\ell_{15}:\) & \(x+(e-1)y+z\) & \(\ell_{25}:\) & \(x+(e-1)y-z\) \\ \(\ell_{6}:\) & \(x-y+(e-1)z\) & \(\ell_{16}:\) & \(ey+z\) & \(\ell_{26}:\) & \((e-1)x-y-z\) \\ \(\ell_{7}:\) & \(x+(1-e)y+z\) & \(\ell_{17}:\) & \((e-1)x+y+z\) & \(\ell_{27}:\) & \(x-y-z\) \\ \(\ell_{8}:\) & \(ex+y\) & \(\ell_{18}:\) & \(ex+z\) & \(\ell_{28}:\) & \(ex-z\) \\ \(\ell_{9}:\) & \(x-ey\) & \(\ell_{19}:\) & \(x+y-z\) & & \\ \(\ell_{10}:\) & \(x-y+(1-e)z\) & \(\ell_{20}:\) & \(x+y+z\) & & \\ \hline \end{tabular}
\end{table}
Table 1: Equations of bitangents to the Klein quartic.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(P_{i}\) & co-ordinates & \(P_{i}\) & co-ordinates \\ \hline \(P_{1}:\) & \((1:0:0)\) & \(P_{12}:\) & \((-1:0:1)\) \\ \(P_{2}:\) & \((e:-e-2:-e)\) & \(P_{13}:\) & \((-2e-2:e-1:e-1)\) \\ \(P_{3}:\) & \((e:e+2:e)\) & \(P_{14}:\) & \((e:-1:-1)\) \\ \(P_{4}:\) & \((-e:e+2:-e)\) & \(P_{15}:\) & \((-1:1:0)\) \\ \(P_{5}:\) & \((e:e+2:-e)\) & \(P_{16}:\) & \((0:1:0)\) \\ \(P_{6}:\) & \((0:0:1)\) & \(P_{17}:\) & \((e+2:e:-e)\) \\ \(P_{7}:\) & \((-e+1:e-1:-2e-2)\) & \(P_{18}:\) & \((-1:-1:e)\) \\ \(P_{8}:\) & \((-e+1:-e+1:2e+2)\) & \(P_{19}:\) & \((-1:1:-e)\) \\ \(P_{9}:\) & \((0:1:1)\) & \(P_{20}:\) & \((0:-1:1)\) \\ \(P_{10}:\) & \((-3e-2:e-2:-e+2)\) & \(P_{21}:\) & \((1:0:1)\) \\ \(P_{11}:\) & \((1:1:0)\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quadruple intersection points.
Table 3: Incidences between bitangents and quadruple intersections for the Klein quartic.
### The Dyck (Fermat) quartic curve and its bitangents
In this section \(w\) is the number satisfying \(w^{4}+1=0\).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\ell_{i}\) & equation of line & \(\ell_{i}\) & equation of line & \(\ell_{i}\) & equation of line \\ \hline \(\ell_{1}:\) & \(-wy+z\) & \(\ell_{11}:\) & \(x-w^{2}y+z\) & \(\ell_{21}:\) & \(w^{2}x-w^{2}y+z\) \\ \(\ell_{2}:\) & \(wy+z\) & \(\ell_{12}:\) & \(x+w^{2}y+z\) & \(\ell_{22}:\) & \(w^{2}x+w^{2}y+z\) \\ \(\ell_{3}:\) & \(-w^{3}y+z\) & \(\ell_{13}:\) & \(-wx+z\) & \(\ell_{23}:\) & \(-w^{3}x+z\) \\ \(\ell_{4}:\) & \(w^{3}y+z\) & \(\ell_{14}:\) & \(wx+z\) & \(\ell_{24}:\) & \(w^{3}x+z\) \\ \(\ell_{5}:\) & \(-x-y+z\) & \(\ell_{15}:\) & \(-w^{2}x-y+z\) & \(\ell_{25}:\) & \(-wx+y\) \\ \(\ell_{6}:\) & \(-x+y+z\) & \(\ell_{16}:\) & \(-w^{2}x+y+z\) & \(\ell_{26}:\) & \(wx+y\) \\ \(\ell_{7}:\) & \(-x-w^{2}y+z\) & \(\ell_{17}:\) & \(-w^{2}x-w^{2}y+z\) & \(\ell_{27}:\) & \(-w^{3}x+y\) \\ \(\ell_{8}:\) & \(-x+w^{2}y+z\) & \(\ell_{18}:\) & \(-w^{2}x+w^{2}y+z\) & \(\ell_{28}:\) & \(w^{3}x+y\) \\ \(\ell_{9}:\) & \(x-y+z\) & \(\ell_{19}:\) & \(w^{2}x-y+z\) & & \\ \(\ell_{10}:\) & \(x+y+z\) & \(\ell_{20}:\) & \(w^{2}x+y+z\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Equations of bitangents to the Dyck quartic.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(P_{i}\) & co-ordinates & \(P_{i}\) & co-ordinates \\ \hline \(P_{1}:\) & \((1:0:0)\) & \(P_{9}:\) & \((w^{2}:1:0)\) \\ \(P_{2}:\) & \((1:0:1)\) & \(P_{10}:\) & \((0:1:-w^{2})\) \\ \(P_{3}:\) & \((0:1:1)\) & \(P_{11}:\) & \((-1:0:1)\) \\ \(P_{4}:\) & \((-1:1:0)\) & \(P_{12}:\) & \((0:1:0)\) \\ \(P_{5}:\) & \((1:1:0)\) & \(P_{13}:\) & \((-1:0:-w^{2})\) \\ \(P_{6}:\) & \((0:1:-1)\) & \(P_{14}:\) & \((-1:0:w^{2})\) \\ \(P_{7}:\) & \((0:1:w^{2})\) & \(P_{15}:\) & \((0:0:1)\) \\ \(P_{8}:\) & \((-w^{2}:1:0)\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Quadruple intersection points.
### The Komiya-Kuribayashi quartic curve and its bitangents
In this section, we have \(r=\sqrt{5}\) and \(i\) is the imaginary unit.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|} \hline & \(P_{1}\) & \(P_{2}\) & \(P_{3}\) & \(P_{4}\) & \(P_{5}\) & \(P_{6}\) & \(P_{7}\) & \(P_{8}\) & \(P_{9}\) \\ \hline \hline \(\ell_{1}\) & & + & & & & & & & \\ \hline \(\ell_{2}\) & & + & & & & & & & \\ \hline \(\ell_{3}\) & & + & & & & & & & \\ \hline \(\ell_{4}\) & & + & & & & & & & \\ \hline \(\ell_{5}\) & & + & & & + & + & & & \\ \hline \(\ell_{6}\) & & + & + & & & & + & \\ \hline \(\ell_{7}\) & & & + & & & & & & \\ \hline \(\ell_{8}\) & & + & & & & & & & \\ \hline \(\ell_{9}\) & + & & & & & + & & + & \\ \hline \(\ell_{10}\) & + & & & + & & & + & & \\ \hline \(\ell_{11}\) & + & & & & & & & & \\ \hline \(\ell_{12}\) & + & & & & & & & & \\ \hline \(\ell_{13}\) & & & & & + & & & & \\ \hline \(\ell_{14}\) & & & + & & & & & & \\ \hline \(\ell_{15}\) & & & & & & & + & & \\ \hline \(\ell_{16}\) & & & & & & & & + & \\ \hline \(\ell_{17}\) & & & & & & & & + & \\ \hline \(\ell_{18}\) & & & & & & & + & & \\ \hline \(\ell_{19}\) & & & & & + & & & & \\ \hline \(\ell_{20}\) & & & + & & & & & & \\ \hline \(\ell_{21}\) & & & & + & & & & \\ \hline \(\ell_{22}\) & & & & + & & & & \\ \hline \(\ell_{23}\) & & & & + & & & & \\ \hline \(\ell_{24}\) & & & & + & & & & \\ \hline \(\ell_{25}\) & & & & & & & & + \\ \hline \(\ell_{26}\) & & & & & & & & + \\ \hline \(\ell_{27}\) & & & & & & & & + \\ \hline \(\ell_{28}\) & & & & & & & & + \\ \hline \end{tabular}
\end{table}
Table 9: Incidences between bitangents and quadruple intersections.
**Remark 3.8**.: As was pointed out to the authors by X. Roulleau, the duals of the \(9\) quadruple intersection points of the bitangents to the Komiya-Kuribayashi quartic form an arrangement of \(9\) lines which is simplicial. We find this fact rather surprising.
## 4 \(3\)-syzygy curves constructed with plane quartics and lines
This section can be considered as a prequel to [5], and we will see in a moment the reason behind this claim. Before that, we need some algebraic preparation. Let us denote by \(S:=\mathbb{C}[x,y,z]\) the coordinate ring of \(\mathbb{P}^{2}_{\mathbb{C}}\), and for a homogeneous polynomial \(f\in S\) let us denote by \(J_{f}\) the Jacobian ideal associated with \(f\).
Let \(C:f=0\) be a reduced curve in \(\mathbb{P}^{2}_{\mathbb{C}}\) of degree \(d\) defined by \(f\in S\). Denote by \(M(f):=S/J_{f}\) the associated Milnor algebra.
**Definition 4.1**.: We say that a reduced plane curve \(C\) is an \(m\)-syzygy curve when \(M(f)\) has the following minimal graded free resolution:
\[0\rightarrow\bigoplus_{i=1}^{m-2}S(-e_{i})\rightarrow\bigoplus_{i=1}^{m}S(1- d-d_{i})\to S^{3}(1-d)\to S\to M(f)\to 0\]
with \(e_{1}\leqslant e_{2}\leqslant...\leqslant e_{m-2}\) and \(1\leqslant d_{1}\leqslant...\leqslant d_{m}\).
In the setting of the above definition, the minimal degree of the Jacobian relations among the partial derivatives of \(f\) is defined to be \(\mathrm{mdr}(f):=d_{1}\).
Among the many examples of \(m\)-syzygy plane curves, we can distinguish the following important classes via the above homological description. Here \(\tau(C)\) denotes the total Tjurina number of \(C\).
**Definition 4.2**.: We say that
* \(C\) is **free** if and only if \(m=2\) and \(d_{1}+d_{2}=d-1\). Moreover, [12] tells us that a reduced plane curve \(C\) with \(\mathrm{mdr}(f)\leqslant(d-1)/2\) is free if and only if \[(d-1)^{2}-d_{1}(d-d_{1}-1)=\tau(C).\] (4)
* \(C\) is **nearly-free** if and only if \(m=3\), \(d_{1}+d_{2}=d\), \(d_{2}=d_{3}\), and \(e_{1}=d+d_{2}\). Moreover, by a result due to Dimca [4], we know that \(C\) is nearly free if and only if \[(d-1)^{2}-d_{1}(d-d_{1}-1)=\tau(C)+1.\] (5)
* \(C\) is **plus-one generated** of level \(d_{3}\) if \(C\) is \(3\)-syzygy such that \(d_{1}+d_{2}=d\) and \(d_{3}>d_{2}\).
We will study homological properties of quartic-line arrangements in the plane that admit only **quasi-homogeneous singularities**. More precisely, we allow only ADE singularities and the \(X_{9}\) singularity, i.e.,
\[\begin{array}{ll}A_{k}\text{ with }k\geqslant 1&:\,x^{2}+y^{k+1}=0,\\ D_{k}\text{ with }k\geqslant 4&:\,y^{2}x+x^{k-1}=0,\\ E_{6}&:\,x^{3}+y^{4}=0,\\ E_{7}&:\,x^{3}+xy^{3}=0,\\ E_{8}&:\,x^{3}+y^{5}=0,\\ X_{9}&:\,x^{4}+y^{4}=0.\end{array}\]
We start with the following observation.
**Proposition 4.3**.: _There is no free arrangement consisting of one line \(\ell\) and one smooth plane quartic curve \(Q\)._
Proof.: Observe that we have the following possibilities for the intersection of one line and one smooth plane quartic:
* \(Q.\ell=P_{1}+P_{2}+P_{3}+P_{4}\), i.e., we have \(4\) nodes.
* \(Q.\ell=2P_{1}+P_{2}+P_{3}\), i.e., we have one tacnode and two nodes.
* \(Q.\ell=2P_{1}+2P_{2}\), i.e., we have two tacnodes.
* \(Q.\ell=3P_{1}+P_{2}\), i.e., we have one \(A_{5}\) singularity and one node.
* \(Q.\ell=4P\), so we have one singularity of type \(A_{7}\).
Based on the above analysis, the maximal possible total Tjurina number is equal to \(7\). On the other hand, since the degree is \(5\) and hence \(d_{1}\in\{1,2\}\), any such arrangement would be free only if the total Tjurina number were equal to either \(13\) or \(12\), which completes the proof.
Here we can say even more: there is no nearly-free arrangement consisting of a smooth plane quartic and a line, since in such a situation the total Tjurina number would have to be equal to either \(11\) or \(12\).
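The numerical tests (4) and (5) reduce such (nearly-)freeness checks to plain arithmetic once \(d\), \(d_{1}\), and \(\tau\) are known. The following Python helper is our own sketch of this bookkeeping (the verifications in this paper were carried out in SINGULAR); the sample values correspond to arrangements discussed in this section.

```python
def free_defect(d, d1, tau):
    """Return (d-1)^2 - d1*(d - d1 - 1) - tau.

    By (4) and (5), a defect of 0 means the curve is free and a defect of 1
    means it is nearly-free (assuming mdr(f) = d1 and, for (4), d1 <= (d-1)/2).
    """
    return (d - 1) ** 2 - d1 * (d - d1 - 1) - tau

# Smooth quartic plus one line: tau is at most 7, so the defect is never 0 or 1.
print([free_defect(5, d1, 7) for d1 in (1, 2)])  # [6, 5]

# Degree-9 arrangements with d1 = 4 and tau = 48 (cf. Proposition 4.8 below).
print(free_defect(9, 4, 48))                     # 0 -> free

# Degree-10 arrangements with d1 = 5 and tau = 60 (cf. Proposition 4.9 below).
print(free_defect(10, 5, 60))                    # 1 -> nearly-free
```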
Now let us focus on the following situation when we have two lines and one smooth plane quartic curve.
**Proposition 4.4**.: _There is no free arrangement \(C\) consisting of one smooth plane quartic and two lines admitting \(n_{2}\) nodes, \(t_{2}\) tacnodes, \(n_{3}\) ordinary triple intersections, \(d_{6}\) singularities of type \(D_{6}\), \(t_{5}\) singularities of type \(A_{5}\), and \(t_{7}\) singularities of type \(A_{7}\)._
Proof.: The proof is based on a simple integer feasibility argument. First, we recall that for a reduced plane curve of even degree \(d=2m\) with only ADE singularities, the minimal degree of the Jacobian relations satisfies \(d_{1}\geqslant m-1\); this fact follows from [7]. Assume now that \(C\) is an arrangement satisfying the above list of assumptions that is free. Then \(d_{1}=2\) and
\[\tau(C)=d_{1}^{2}-d_{1}(d-1)+(d-1)^{2}=19.\]
We have the following system of Diophantine equations:
\[\begin{cases}n_{2}+2t_{2}+3n_{3}+3t_{5}+4d_{6}+4t_{7}=9\\ n_{2}+3t_{2}+4n_{3}+5t_{5}+6d_{6}+7t_{7}=19.\end{cases}\]
Using any solver, one can check that the above system has no non-negative integer solution, which completes the proof.
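For instance, the following exhaustive Python search (one possible "solver"; the bound \(9\) on each variable follows from the first equation) confirms that no solution exists:

```python
from itertools import product

solutions = [
    (n2, t2, n3, t5, d6, t7)
    for n2, t2, n3, t5, d6, t7 in product(range(10), repeat=6)
    if n2 + 2*t2 + 3*n3 + 3*t5 + 4*d6 + 4*t7 == 9
    and n2 + 3*t2 + 4*n3 + 5*t5 + 6*d6 + 7*t7 == 19
]
print(solutions)  # [] -- the system has no non-negative integer solution
```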
In the next step we aim to construct new examples of plus-one generated arrangements consisting of a few lines and one smooth plane quartic. We start with the Komiya-Kuribayashi quartic curve and its bitangent lines.
**Proposition 4.5**.: _The arrangement consisting of the Komiya-Kuribayashi plane quartic together with \(4\) of its bitangent lines (two ordinary bitangents and two hyperosculating lines) intersecting at a quadruple point is plus-one generated._
Proof.: We only present the proof in one case; the other cases are verbatim. Denote by \(K(x,y,z)\) the defining equation of the Komiya-Kuribayashi plane quartic. Consider the arrangement \(\mathcal{KL}\) defined by
\[Q(x,y,z)=(-x-y+z)(-x+y+z)(-x-2iy+z)(-x+2iy+z)\cdot K(x,y,z).\]
The arrangement \(\mathcal{KL}\) has exactly one ordinary quadruple point, four tacnodes and two singularities of type \(A_{7}\), so the total Tjurina number of the arrangement is equal to \(\tau(\mathcal{KL})=14+12+9=35\). Using SINGULAR, we can compute the minimal free resolution of the Milnor algebra which has the following form:
\[0\to S(-13)\to S(-12)\oplus S^{2}(-11)\to S^{3}(-7)\to S.\]
Since \((d_{1},d_{2},d_{3})=(4,4,5)\), \(d_{1}+d_{2}=8\) and \(d_{3}>d_{2}\), the arrangement is plus-one generated.
Next, we pass to the Dyck plane quartic and three lines.
**Proposition 4.6**.: _In the setting of the Dyck quartic curve, the arrangement consisting of the Dyck quartic and \(3\) of its hyperosculating lines intersecting at one triple point is plus-one generated._
Proof.: As above, we present the justification for only one of the \(12\) possible cases; the others are verbatim. Consider the arrangement \(\mathcal{DL}\) defined by
\[Q(x,y,z)=(ey+z)(-ey+z)(e^{3}y+z)\cdot(x^{4}+y^{4}+z^{4}),\]
where \(e^{4}+1=0\). The arrangement \(\mathcal{DL}\) has exactly one ordinary triple point and three singularities of type \(A_{7}\), so the total Tjurina number of the arrangement is equal to \(\tau(\mathcal{DL})=21+4=25\). Using SINGULAR we can compute the minimal free resolution of the Milnor algebra which has the following form:
\[0\to S(-12)\to S(-11)\oplus S(-10)\oplus S(-9)\to S^{3}(-6)\to S.\]
Since \((d_{1},d_{2},d_{3})=(3,4,5)\), \(d_{1}+d_{2}=7\), and \(d_{3}>d_{2}\), the arrangement \(\mathcal{DL}\) is plus-one generated.
This section is motivated by the so-called addition technique: starting with a reduced plane curve, we add suitably chosen lines in such a way that the resulting curve is free. This method was studied recently in the setting of inflectional tangent lines (aka hyperosculating lines) and smooth plane curves [5]. As a starting point, let us recall [5, Corollary 1.6] in the setting of our paper.
**Proposition 4.7**.: _In the situation of the Dyck quartic curve and its bitangents, let \(\mathcal{C}\) be an arrangement consisting of \(4\) hyperosculating lines intersecting at one ordinary quadruple point and the Dyck quartic itself. Then \(\mathcal{C}\) is one of three arrangements given by the following defining polynomials:_
* \(Q_{1}(x,y,z)=(x^{4}+y^{4})\cdot(x^{4}+y^{4}+z^{4})\)_,_
* \(Q_{2}(x,y,z)=(y^{4}+z^{4})\cdot(x^{4}+y^{4}+z^{4})\)_,_
* \(Q_{3}(x,y,z)=(x^{4}+z^{4})\cdot(x^{4}+y^{4}+z^{4})\)_._
_Furthermore, \(\mathcal{C}\) is free._
Proof.: The first part of our claim follows from the following observation. If we consider the arrangement of \(12\) hyperosculating lines \(\mathcal{H}\) to the Dyck quartic, given by \(Q(x,y,z)=(x^{4}+y^{4})\cdot(y^{4}+z^{4})\cdot(x^{4}+z^{4})\), then \(\mathcal{H}\) has \(48\) double intersections and exactly \(3\) quadruple intersections. Having already observed this, it is easy to find all possible equations. The second claim follows from [5, Theorem 1.5].
We should point out here that the situation in which four hyperosculating lines to a smooth plane quartic curve meet in one point is rather rare. For example, the Klein quartic has no hyperosculating lines at all, and in the case of the Komiya-Kuribayashi quartic the hyperosculating lines intersect only at double points.
Continuing our discussion of the Dyck curve and its bitangents, we present a new family of free curves.
**Proposition 4.8**.: _In the setting of the Dyck quartic curve, let \(\mathcal{C}\) be an arrangement consisting of the Dyck quartic and five of its hyperosculating lines such that four of them intersect at one ordinary quadruple point. Then the arrangement \(\mathcal{C}\) is free._
Proof.: First we observe that there are \(24\) such arrangements. Let us describe the first \(8\) of them using their defining polynomials, namely
\[H^{1}_{k}(x,y,z)=Q_{1}(x,y,z)\cdot\ell_{k}(x,y,z),\]
where \(\ell_{k}(x,y,z)\) is one of the \(8\) linear factors of \((y^{4}+z^{4})\cdot(x^{4}+z^{4})\). In the same way we obtain the arrangements given by the polynomials \(H^{2}_{k}\) and \(H^{3}_{k}\) with \(k\in\{1,...,8\}\). In each case we have the same total Tjurina number, equal to \(48\): there are five singular points of type \(A_{7}\), one ordinary quadruple intersection, and four double points. Using SINGULAR, we can check that for each arrangement the minimal degree of the Jacobian relations \(d_{1}\) is equal to \(4\). Then
\[48=d_{1}^{2}-d_{1}(d-1)+(d-1)^{2}=\tau(\mathcal{C})=48,\]
so \(\mathcal{C}\) is free.
Next, we present examples of nearly-free arrangements of degree \(10\). We continue in the setting of the Dyck curve and consider arrangements given by the following defining polynomials:
\[G^{i,j}_{1}(x,y,z)=Q_{1}(x,y,z)\cdot\ell_{i}(x,y,z)\cdot\ell_{j}(x,y,z),\]
where \(\ell_{i}(x,y,z)\), \(\ell_{j}(x,y,z)\) are mutually distinct linear factors of \((x^{4}+z^{4})\cdot(y^{4}+z^{4})\) and \(Q_{1}\) is defined as in Proposition 4.7. In an analogous way, we define the arrangements given by \(G^{i,j}_{2}\) and \(G^{i,j}_{3}\).
**Proposition 4.9**.: _The arrangements \(\mathcal{C}^{i,j}_{k}\) given by polynomials \(G^{i,j}_{k}\) with \(k\in\{1,2,3\}\) are nearly-free._
Proof.: Observe that in each case the total Tjurina number is equal to \(60\): there are \(6\) singular points of type \(A_{7}\), one ordinary quadruple point, and \(9\) double intersection points. Next, in each case we compute the minimal degree of the Jacobian relations, obtaining \(d_{1}=5\). Then
\[61=d_{1}^{2}-d_{1}(d-1)+(d-1)^{2}=\tau(\mathcal{C}^{i,j}_{k})+1=60+1=61,\]
so \(\mathcal{C}^{i,j}_{k}\) are nearly-free.
Finally, let us present three examples of nearly-free arrangements of degree \(12\).
**Proposition 4.10**.: _Let us consider the following plane curves:_
* \(C_{1}=V((x^{4}+y^{4}+z^{4})\cdot(x^{4}+y^{4})\cdot(y^{4}+z^{4})),\)__
* \(C_{2}=V((x^{4}+y^{4}+z^{4})\cdot(x^{4}+y^{4})\cdot(x^{4}+z^{4})),\)__
* \(C_{3}=V((x^{4}+y^{4}+z^{4})\cdot(y^{4}+z^{4})\cdot(x^{4}+z^{4})).\)__
_Then the curves \(C_{1},C_{2},C_{3}\) are nearly-free with \((d_{1},d_{2})=(5,7)\)._
Proof.: We leave the proof to the reader.
To conclude this section, we focus on the case of the Klein quartic. We can construct a family of plus-one generated curves which turn out to have a very important meaning.
**Proposition 4.11**.: _In the setting of the Klein quartic curve, the arrangement \(\mathcal{QK}\) consisting of the Klein quartic and four of its bitangent lines intersecting at one quadruple point is plus-one generated with exponents \((4,4,7)\)._
Proof.: There are \(21\) such arrangements. They all have the same singularities, namely eight tacnodes and one ordinary quadruple point. We compute the minimal free resolution of the Milnor algebra, which has the following form:
\[0\to S(-15)\to S(-14)\oplus S^{2}(-11)\to S^{3}(-7)\to S.\]
Since \((d_{1},d_{2},d_{3})=(4,4,7)\), \(d_{1}+d_{2}=8\) and \(d_{3}>d_{2}\), the arrangement is plus-one generated.
Now we need to explain why these last arrangements are interesting. Recently, Dimca and Sticlaru studied homological properties of \(m\)-syzygy curves. Among other things, they proved the following results; see [8, Theorem 2.4] and [8, Corollary 5.3].
**Theorem 4.12**.: _Let \(C\,:\,f=0\) be an \(m\)-syzygy curve of degree \(d\) with \(1\leqslant d_{1}\leqslant...\leqslant d_{m}\) and \(m\geqslant 3\). Then \(d_{1}+d_{2}\geqslant d\) and \(d_{1}\leqslant d_{2}\leqslant d_{3}\leqslant d-1\)._
**Proposition 4.13**.: _Let \(C\,:\,f=0\) be a \(3\)-syzygy curve of degree \(d\geqslant 3\). If all irreducible components \(C_{i}\) of \(C\) are rational and \(C\) is not a plus-one generated curve, then \(d_{3}\leqslant d-2\)._
In light of the above results, our Klein arrangements are extreme cases, both in terms of achieving the upper bound on \(d_{3}\) by non-rational arrangements, and in terms of showing that all the assumptions in the above proposition are optimal.
**Acknowledgments**
Piotr Pokora would like to thank Xavier Roulleau for his help with numerical computations using MAGMA and for useful remarks, Igor Dolgachev for helpful conversations that allowed us to understand quadruple intersections of bitangents, and Ivan Cheltsov for discussions on the subject of this paper in one of Krakow's Starbucks.
Piotr Pokora is partially supported by The Excellent Small Working Groups Programme **DNWZ.711/IDUB/ESWG/2023/01/00002** at the Pedagogical University of Cracow.
2309.11634 | Conway and Doyle Can Divide by Three, But I Can't | Patrick Lutz | 2023-09-20T20:51:11Z | http://arxiv.org/abs/2309.11634v1

# Conway and Doyle Can Divide by Three, But I Can't
###### Abstract
Conway and Doyle have claimed to be able to divide by three. We attempt to replicate their achievement and fail. In the process, we get tangled up in some shoes and socks and forget how to multiply.
## 1 Introduction
In the paper "Division by Three" [1], Conway and Doyle show that it is possible to divide by 3 in cardinal arithmetic, even without the axiom of choice. Actually, they show that it is possible to divide by \(n\) for all natural numbers \(n>0\); they called their paper "Division by Three" rather than "Division by \(n\)" because the case \(n=3\) seems to capture all the difficulty of the full result. More precisely, they give a proof of the following theorem.
**Theorem 1**.: _It is provable in \(\mathsf{ZF}\) (Zermelo-Fraenkel set theory without the axiom of choice) that for any natural number \(n>0\) and any sets \(A\) and \(B\), if \(|A\times n|=|B\times n|\) then \(|A|=|B|\)._
Here we are using the notation \(A\times n\) to denote the set \(A\times\{1,2,\ldots,n\}\) and the notation \(|A|=|B|\) to mean that there is a bijection between \(A\) and \(B\).
The purpose of this article is to question whether the statement of Theorem 1 is really the correct definition of "dividing by \(n\) without choice." We will propose an alternative statement, show that it is not provable without the axiom of choice, and explain what all this has to do with Bertrand Russell's socks.
Of course, none of this should be taken too seriously. I'm not really here to argue about what "division by \(n\) without choice" means. Instead, the goal is to have fun with some interesting mathematics, and the question of what "division by \(n\) without choice" should really mean is merely an inviting jumping-off point.
### Mathematics Without Choice
What does it mean to do math without the axiom of choice? In brief, it means that if we are proving something and want to describe a construction that requires infinitely many choices then we must describe explicitly how these choices are to be made, rather than just assuming that they can be made any-which-way when the time comes.
There is a well-known example, due to Bertrand Russell, that illustrates this issue. Suppose there is a millionaire who loves to buy shoes and socks. Every day, he buys a pair of shoes and a pair of socks, and after infinitely many days have passed, he has amassed infinitely many pairs of each. He then asks his butler to pick out one shoe from each pair for him to display in his foyer. The butler wants to make sure he is following the millionaire's instructions precisely, so he asks how to decide which shoe to pick from each pair. The millionaire replies that he can pick the left shoe each time. The next day, the millionaire decides he would also like to display one sock from each pair and so he asks the butler to do so. When the butler again asks how he should decide which sock to pick from each pair, the millionaire is stymied--there is no obvious way to distinguish one sock in a pair from the other.1
Footnote 1: When Russell introduced this example, he was careful to point out that in real life there actually are ways to distinguish between socks--for instance, one of them probably weighs slightly more than the other--but he asked for "a little goodwill" on the part of the reader in interpreting the example.
The point of this example is that if we have a sequence \(\{A_{i}\}_{i\in\mathbb{N}}\) of sets of size 2, then there is no way to prove without the axiom of choice that \(\Pi_{i\in\mathbb{N}}A_{i}\) is nonempty. Doing so would require explicitly constructing an element of \(\Pi_{i\in\mathbb{N}}A_{i}\), which is analogous to giving a way to choose one sock from each pair in the millionaire's collection. On the other hand, if we have a fixed ordering on each set \(A_{i}\) in the sequence, then we _can_ show without choice that \(\Pi_{i\in\mathbb{N}}A_{i}\) is nonempty, just as it was possible to choose one shoe from each pair.
Russell's story about the shoes and socks may seem like just a charming and straightforward illustration of the axiom of choice, but we will return to it a few times throughout this article and see that there is more to it than is initially apparent.
## 2 Failing to Divide by Three
### You Can Divide by Three
As we mentioned earlier, in the paper "Division by Three," Conway and Doyle prove without the axiom of choice that for any natural number \(n>0\) and any sets \(A\) and \(B\), if \(|A\times n|=|B\times n|\) then \(|A|=|B|\). What this requires is giving an explicit procedure to go from a bijection between \(A\times n\) and \(B\times n\) to a bijection between \(A\) and \(B\).
This result has a long history. It was (probably) first proved by Lindenbaum and Tarski in 1926 [11], but the proof was not published and seems to have been forgotten. The first published proof was by Tarski in 1949 and is regarded as somewhat complicated [10]. Conway and Doyle gave a simpler (and more entertainingly exposited) proof, which they claimed may be the original proof by Lindenbaum and Tarski. Later, the proof was simplified even more by Doyle and Qiu in the paper "Division by Four" [1]. There is also a charming exposition of Doyle and Qiu's proof in the article "Pangalactic Division" by Schwartz [14].
### Can You Divide by Three?
Does the statement of Theorem 1 really capture what it means to divide by \(n\) without choice? To explain what we mean, we first need to say a little about how division by \(n\) is proved. Recall that we are given a bijection between \(A\times n\) and \(B\times n\), and we need to construct a bijection between \(A\) and \(B\). We can think of both \(A\times n\) and \(B\times n\) as unions of collections of disjoint sets of size \(n\). Namely,
\[A\times n =\bigcup_{a\in A}\{(a,1),(a,2),\ldots,(a,n)\}\] \[B\times n =\bigcup_{b\in B}\{(b,1),(b,2),\ldots,(b,n)\}.\]
A key point, which every known proof uses, is that we can simultaneously order every set in the two collections using the ordering induced by the usual ordering on \(\{1,2,\ldots,n\}\).
But if we are already working without the axiom of choice, this seems like an unfair restriction. Why not also allow collections of _unordered_ sets of size \(n\)? This gives us an alternative version of "division by \(n\) without choice" in which we replace the collections \(A\times n\) and \(B\times n\) with collections of unordered sets of size \(n\) (we will give a precise statement of this version below). Since collections of ordered sets of size \(n\) behave like the pairs of shoes from Russell's example while collections of unordered sets of size \(n\) behave like the pairs of socks, we will refer to the standard version as "shoe division" and the alternative version as "sock division."
**Definition 2**.: _Suppose \(n>0\) is a natural number. **Shoe division by \(n\)** is the principle that for any sets \(A\) and \(B\), if \(|A\times n|=|B\times n|\) then \(|A|=|B|\)._
**Definition 3**.: _Suppose \(n>0\) is a natural number. **Sock division by \(n\)** is the principle that for any sets \(A\) and \(B\) and any collections \(\{X_{a}\}_{a\in A}\) and \(\{Y_{b}\}_{b\in B}\) of disjoint sets of size \(n\), if \(|\bigcup_{a\in A}X_{a}|=|\bigcup_{b\in B}Y_{b}|\) then \(|A|=|B|\)._
Since we know that shoe division by \(n\) is provable without the axiom of choice, it is natural to wonder whether the same is true of sock division.
By the way, this is not the first time that someone has asked about the necessity of having collections of ordered rather than unordered sets when dividing by \(n\) in cardinal arithmetic. In the paper "Equivariant Division" [1], Bajpai and Doyle consider when it is possible to go from a bijection \(A\times n\to B\times n\) to a bijection \(A\to B\) when the bijections are required to respect certain group actions on \(A\), \(B\), and \(n\). Since the axiom of choice can be considered a way to "break symmetries," the question of whether sock division is provable without choice is conceptually very similar to the questions addressed by Bajpai and Doyle.
### You Can't Divide by Three
In this section we will show that sock division by \(3\) is not provable without the axiom of choice. In fact, neither is sock division by \(2\) or, for that matter, sock division by \(n\) for any \(n>1\).
**Theorem 4**.: _For any natural number \(n>1\), the principle of sock division by \(n\) is not provable in \(\mathsf{ZF}\)._
Proof.: We will show that if sock division by \(2\) is possible then it is also possible to choose socks for Bertrand Russell's millionaire. The full theorem follows by noting that the proof works not just for human socks but also for socks for octopi with \(n>1\) tentacles.
More precisely, suppose sock division by \(2\) does hold and let \(\{A_{i}\}_{i\in\mathbb{N}}\) be a sequence of disjoint sets of size \(2\). We will show that \(\Pi_{i\in\mathbb{N}}A_{i}\) is nonempty by constructing a choice function for \(\{A_{i}\}_{i\in\mathbb{N}}\). We can picture \(\{A_{i}\}_{i\in\mathbb{N}}\) as a sequence of pairs of socks.
Now consider taking a single pair of socks--say \(A_{0}=\{x_{0},y_{0}\}\)--and forming the Cartesian product of this pair with the set \(\{0,1\}\). This gives us a 4 element set, which we can depict as a grid:

\[\begin{array}{cc}(x_{0},0)&(x_{0},1)\\ (y_{0},0)&(y_{0},1)\end{array}\]
We will divide this 4 element set into a pair of 2 element sets in two different ways. First, we can take the rows of the grid to get the sets \(\{(x_{0},0),(x_{0},1)\}\) and \(\{(y_{0},0),(y_{0},1)\}\).
Second, we can take the columns of the grid to get the sets \(\{(x_{0},0),(y_{0},0)\}\) and \(\{(x_{0},1),(y_{0},1)\}\).
If we repeat this procedure for every pair of socks, we end up with two collections of disjoint sets of size 2--one consisting of the rows of the grids formed from each pair and the other consisting of the columns.
Now we will observe a few things about these two collections.
* First, each set in the collection of rows has the form \(\{(x,0),(x,1)\}\) for some \(x\in\bigcup_{i\in\mathbb{N}}A_{i}\), so we can think of the collection of rows as being indexed by \(\bigcup_{i\in\mathbb{N}}A_{i}\) (i.e. indexed by the individual socks).
* Second, each set in the collection of columns either has the form \(A_{i}\times\{0\}\) for some \(i\in\mathbb{N}\) or the form \(A_{i}\times\{1\}\) for some \(i\in\mathbb{N}\), so we can think of the collection of columns as being indexed by \(\mathbb{N}\times\{0,1\}\).
* Lastly, the union of the collection of rows and the union of the collection of columns are identical--they are both just equal to \(\bigcup_{i\in\mathbb{N}}(A_{i}\times\{0,1\})\).
The principle of sock division by 2 says that if the unions of two collections of disjoint sets of size 2 are in bijection then the sets indexing those collections are also in bijection. Thus we can conclude that there is a bijection \(f\colon(\bigcup_{i\in\mathbb{N}}A_{i})\to\mathbb{N}\times\{0,1\}\).
We can now describe how to choose one sock from each pair. Consider a pair of socks, \(A_{i}=\{x,y\}\). We know that \(x\) is mapped by \(f\) to some pair \((n,b)\in\mathbb{N}\times\{0,1\}\) and \(y\) is mapped to some other pair, \((m,c)\). We can choose between \(x\) and \(y\) by picking whichever one is mapped to the smaller pair in the lexicographic ordering on \(\mathbb{N}\times\{0,1\}\).
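Although the theorem concerns infinite collections, the mechanics of this last step can be illustrated on finite data. The following Python sketch (with made-up socks and a made-up bijection \(f\) standing in for the one produced above) implements the choice rule from the proof:

```python
# A finite illustration of the proof: given a bijection f from the individual
# socks to N x {0, 1}, choose from each pair the sock whose image is
# lexicographically smaller.
pairs = [("a0", "b0"), ("a1", "b1"), ("a2", "b2")]  # the pairs A_i

# Any bijection from the socks onto N x {0, 1} will do; here is an arbitrary one.
f = {"a0": (1, 1), "b0": (0, 0), "a1": (2, 0), "b1": (0, 1),
     "a2": (1, 0), "b2": (2, 1)}

chosen = [min(pair, key=lambda sock: f[sock]) for pair in pairs]
print(chosen)  # ['b0', 'b1', 'a2']
```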
## 3 Cardinal Arithmetic and the Power of Sock Division
In this section we will discover another view of sock division by considering how to define multiplication of cardinals without choice.
### Shoes and Socks, Revisited
Suppose we have two sets, \(A\) and \(B\). How should we define the product of their cardinalities? The standard answer is that it is the cardinality of their Cartesian product--i.e. \(|A\times B|\). But there is another possible definition. Suppose \(\{X_{a}\}_{a\in A}\) is a collection of disjoint sets such that each \(X_{a}\) has the same cardinality as \(B\). Since taking a disjoint union of sets corresponds to taking the sum of their cardinalities, we can think of \(|\bigcup_{a\in A}X_{a}|=\sum_{a\in A}|X_{a}|\) as an alternative
definition of "the cardinality of \(A\) times the cardinality of \(B\)."
One way to think of these two definitions is that the first interprets multiplication as the area of a rectangle, while the second interprets it as repeated addition.
\begin{tabular}{l l} \hline \hline
**Multiplication is...** & \\ \hline The area of a rectangle: & \(|A|\times|B|=|A\times B|\) \\ Repeated addition: & \(|A|\times|B|=|\bigcup_{a\in A}X_{a}|\) \\ \hline \hline \end{tabular}
One problem with thinking of multiplication as repeated addition, however, is that without the axiom of choice, it may not be well-defined. In particular, it is possible to have two collections \(\{X_{a}\}_{a\in A}\) and \(\{Y_{a}\}_{a\in A}\) of disjoint sets of size \(|B|\) such that \(|\bigcup_{a\in A}X_{a}|\neq|\bigcup_{a\in A}Y_{a}|\). In fact, this is actually the original context for Russell's example about shoes and socks. The following passage is from his 1919 book _Introduction to Mathematical Philosophy_[10] (note that he refers to the axiom of choice as the "multiplicative axiom," since it guarantees that every nonzero product of nonzero cardinalities is nonzero).
Another illustration may help to make the point clearer. We know that \(2\times\aleph_{0}=\aleph_{0}\). Hence we might suppose that the sum of \(\aleph_{0}\) pairs must have \(\aleph_{0}\) terms. But this, though we can prove that it is sometimes the case, cannot be proved to happen always unless we assume the multiplicative axiom. This is illustrated by the millionaire who bought a pair of socks whenever he bought a pair of boots, and never at any other time, and who had such a passion for buying both that at last he had \(\aleph_{0}\) pairs of boots and \(\aleph_{0}\) pairs of socks. The problem is: How many boots had he, and how many socks? One would naturally suppose that he had twice as many boots and twice as many socks as he had pairs of each, and that therefore he had \(\aleph_{0}\) of each, since that number is not increased by doubling. But this is an instance of the difficulty, already noted, of connecting the sum of \(\nu\) classes each having \(\mu\) terms with \(\mu\times\nu\). Sometimes this can be done, sometimes it cannot. In our case it can be done with the boots, but not with the socks, except by some very artificial device.
### Multiplication vs. Division
Let's revisit the difference between shoe division and sock division in light of what we have just discussed. When discussing "division by \(n\) without choice," we have implicitly defined division in terms of multiplication. Being able to divide by \(n\) means that whenever we have \(|A|\times n=|B|\times n\), we can cancel the \(n\)'s to get \(|A|=|B|\). The only difference between shoe division and sock division is what definition of multiplication is used (i.e. what is meant by \(|A|\times n\) and \(|B|\times n\)). In shoe division, multiplication is interpreted in the usual way, i.e. as "the area of a rectangle." In sock division, it is interpreted as "repeated addition."
When we view shoe division and sock division in this way, it is clear that if "multiplication by \(n\) as repeated addition of \(n\)" is well-defined then sock division by \(n\) holds (because in this case it is equivalent to shoe division by \(n\)). Thus it is natural to ask what the exact relationship is between these two principles.
A priori, they are fairly different statements. Sock division by \(n\) says that if we have two collections \(\{X_{a}\}_{a\in A}\) and \(\{Y_{b}\}_{b\in B}\) of disjoint sets of size \(n\) then we can go from a bijection between \(\bigcup_{a\in A}X_{a}\) and \(\bigcup_{b\in B}Y_{b}\) to a bijection between \(A\) and \(B\) while "multiplication by \(n\) as repeated addition of \(n\) is well-defined" says that we can go from a bijection between \(A\) and \(B\) to a bijection between \(\bigcup_{a\in A}X_{a}\) and \(\bigcup_{b\in B}Y_{b}\). However, it turns out that the two principles are actually equivalent and the proof of this is implicit in our proof of Theorem 4. Let's make all of this more precise.
**Definition 5**.: _Suppose \(n>0\) is a natural number. **Multiplication by \(n\) is equal to repeated addition of \(n\)** is the principle that for any set \(A\) and any collection \(\{X_{a}\}_{a\in A}\) of disjoint sets of size \(n\), \(|\bigcup_{a\in A}X_{a}|=|A\times n|\)._
What we can show is that in \(\mathsf{ZF}\), the principle of sock division by \(n\) is equivalent to the principle that multiplication by \(n\) is equal to repeated addition of \(n\).
**Theorem 6**.: _It is provable in \(\mathsf{ZF}\) that for any natural number \(n>0\), the principle of sock division by \(n\) holds if and only if the principle that multiplication by \(n\) is equal to repeated addition of \(n\) holds._
Proof.: (\(\Leftarrow\)) First suppose "multiplication by \(n\) is equal to repeated addition of \(n\)" holds. Let \(A\) and \(B\) be any sets and let \(\{X_{a}\}_{a\in A}\) and \(\{Y_{b}\}_{b\in B}\) be two collections of disjoint sets of size \(n\) such that \(|\bigcup_{a\in A}X_{a}|=|\bigcup_{b\in B}Y_{b}|\). Applying "multiplication is repeated addition," we have
\[|A\times n|=\Big{|}\bigcup_{a\in A}X_{a}\Big{|}=\Big{|}\bigcup_{b\in B}Y_{b} \Big{|}=|B\times n|.\]
And by applying shoe division by \(n\), we get \(|A|=|B|\).
(\(\Rightarrow\)) Now suppose sock division by \(n\) holds and let \(A\) be any set and \(\{X_{a}\}_{a\in A}\) be a collection of disjoint sets of size
\(n\). Consider the set \(\bigcup_{a\in A}(X_{a}\times n)\). We can view this set as a union of a collection of disjoint sets of size \(n\) in two different ways:
\[\bigcup\nolimits_{a\in A}(X_{a}\times n) =\bigcup\nolimits_{a\in A,\ i\leq n}\{(x,i)\ |\ x\in X_{a}\}\] \[\bigcup\nolimits_{a\in A}(X_{a}\times n) =\bigcup\nolimits_{a\in A,\ x\in X_{a}}\{(x,i)\ |\ i\leq n\}.\]
The first of these two collections is indexed by \(A\times n\) and the second is indexed by \(\bigcup_{a\in A}X_{a}\). And since the unions of the two collections are identical, sock division implies that \(|A\times n|=|\bigcup_{a\in A}X_{a}|\).
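On finite data the two decompositions used in the proof are easy to exhibit; here is a small Python sketch with \(n=2\) and hypothetical sets:

```python
n = 2
A = ["p", "q"]                                   # index set
X = {"p": {"s1", "s2"}, "q": {"s3", "s4"}}       # disjoint sets of size n

rows = {(a, i): {(x, i) for x in X[a]} for a in A for i in range(n)}
cols = {(a, x): {(x, i) for i in range(n)} for a in A for x in X[a]}

union_rows = set().union(*rows.values())
union_cols = set().union(*cols.values())
assert union_rows == union_cols                  # identical unions
print(len(rows), len(cols))                      # 4 = |A x n| and 4 = |union of X_a|
```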
### Sock Geometry
Just how powerful is sock division? We have just seen that if sock division by \(n\) holds then multiplication by \(n\) is equal to repeated addition of \(n\). In other words, for any set \(A\) and any collection \(\{X_{a}\}_{a\in A}\) of disjoint sets of size \(n\), there is a bijection between \(\bigcup_{a\in A}X_{a}\) and \(A\times n\). However, this bijection does not necessarily respect the structure of \(\bigcup_{a\in A}X_{a}\) and \(A\times n\) as collections of size \(n\) sets indexed by \(A\): we are not guaranteed that the image of each \(X_{a}\) is equal to \(\{a\}\times n\). It seems reasonable, then, to ask whether sock division by \(n\) implies the existence of a bijection that does respect this structure.
It is natural to phrase this question using terms from geometry, and in particular in the language of fiber bundles. It is possible to understand everything in this section even if you do not know what a bundle is, but our choice of terminology may seem a bit odd.
We can think of a collection \(\{X_{a}\}_{a\in A}\) of disjoint sets of size \(n\) as a kind of bundle over \(A\). We will refer to it as an \(n\)**-sock bundle**, or just a **sock bundle** for short. We can think of the index set \(A\) as the **base space** of the bundle and the union \(\bigcup_{a\in A}X_{a}\) as the **total space**. If \(\{X_{a}\}_{a\in A}\) and \(\{Y_{a}\}_{a\in A}\) are two \(n\)-sock bundles then a **sock bundle isomorphism** between them is a bijection \(f\colon\ \bigcup_{a\in A}X_{a}\rightarrow\bigcup_{a\in A}Y_{a}\) such that for each \(a\), the image of \(X_{a}\) is \(Y_{a}\)--in other words, such that \(\pi^{\prime}\circ f=\pi\), where \(\pi\) and \(\pi^{\prime}\) denote the natural projections \(\bigcup_{a\in A}X_{a}\to A\) and \(\bigcup_{a\in A}Y_{a}\to A\).
We will refer to \(A\times n\) as the **trivial \(n\)-sock bundle**2 and call a sock bundle **trivializable** if it is isomorphic to \(A\times n\). We summarize some of these terms in the table below.

\begin{tabular}{l l} \hline \hline **Term** & **Meaning** \\ \hline \(n\)-sock bundle & a collection \(\{X_{a}\}_{a\in A}\) of disjoint sets of size \(n\) \\ base space & the index set \(A\) \\ total space & the union \(\bigcup_{a\in A}X_{a}\) \\ trivial \(n\)-sock bundle & \(A\times n\) \\ trivializable & isomorphic to the trivial \(n\)-sock bundle \\ \hline \hline \end{tabular}
Footnote 2: It would also be reasonable to call it the \(n\)-shoe bundle.
Restated in these terms, here's the question we asked above.
**Question 7**.: _Let \(n>0\) be a natural number. Does \(\mathsf{ZF}\) prove that sock division by \(n\) implies that all \(n\)-sock bundles are trivializable?_
There is at least one special case in which this question has a positive answer: when the base space \(A\) can be linearly ordered. To see why, suppose sock division by \(n\) holds, let \(A\) be any set and let \(\preceq\) be a linear order on \(A\). If \(\{X_{a}\}_{a\in A}\) is a collection of disjoint sets of size \(n\) then we know by Theorem 6 that sock division by \(n\) implies that there is a bijection \(f\colon\ \bigcup_{a\in A}X_{a}\to A\times n\). Since \(\preceq\) is a linear order on \(A\), we can linearly order \(A\times n\) using \(\preceq\) and the standard ordering on \(\{1,2,\ldots,n\}\). Thus we can use \(f\) to simultaneously linearly order all the \(X_{a}\)'s and thereby trivialize the bundle \(\{X_{a}\}_{a\in A}\).
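A small Python sketch of this argument, with an arbitrary finite base and an arbitrary bijection \(f\) standing in for the one provided by sock division:

```python
# All data below are hypothetical: a linearly ordered base A and a bijection
# f from the total space onto A x {0, 1}.
A = [0, 1, 2]                              # base space with its usual order
X = {0: {"u", "v"}, 1: {"w", "x"}, 2: {"y", "z"}}
f = {"u": (2, 0), "v": (0, 1), "w": (1, 1), "x": (0, 0),
     "y": (2, 1), "z": (1, 0)}

# Inside each fibre, sort the socks by the lexicographic order of their
# f-images; the rank of a sock in its fibre is its trivialized label.
triv = {sock: (a, rank)
        for a in A
        for rank, sock in enumerate(sorted(X[a], key=lambda s: f[s]))}
print(triv)  # {'v': (0, 0), 'u': (0, 1), 'x': (1, 0), 'w': (1, 1), ...}
```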
### Sock Division and Divisibility
We will end with one more question about the power of sock division. Consider the question of what it means for the cardinality of a set \(A\) to be divisible by a natural number \(n\). It seems natural to define divisibility in terms of multiplication: \(|A|\) is divisible by \(n\) if there is some set \(B\) such that \(|A|=|B|\times n\). However, we saw above that there are two possible ways to interpret \(|B|\times n\), and without the axiom of choice these two are not necessarily equivalent. Thus we have two possible notions of divisibility by \(n\) without choice, one based on interpreting multiplication as the area of a rectangle and the other based on interpreting multiplication as repeated addition.
These two notions were studied by Blair, Blass and Howard [1], who called them **strong divisibility by \(n\)** and **divisibility by \(n\)**, respectively. To help distinguish the two notions, we will refer to divisibility by \(n\) as **weak divisibility by \(n\)**.
**Definition 8**.: _Suppose \(n>0\) is a natural number. A set \(A\) is **strongly divisible by \(n\)** if there is a set \(B\) such that \(|A|=|B\times n|\) and **weakly divisible by \(n\)** if there is a set \(B\) and a collection \(\{X_{b}\}_{b\in B}\) of disjoint sets of size \(n\) such that \(|A|=|\bigcup_{b\in B}X_{b}|\)._
In the language of sock geometry, one might say that \(A\) is strongly divisible by \(n\) if it is in bijection with the total space of some trivial \(n\)-sock bundle and weakly divisible by \(n\) if it is in bijection with the total space of any \(n\)-sock bundle.
It is clear that strong divisibility by \(n\) implies weak divisibility by \(n\), but without the axiom of choice, weak divisibility does not imply strong divisibility (for example, this is implied by the results of Herrlich and Tachtsis in [10]). Since sock division by \(n\) implies that multiplication by \(n\) is equal to repeated addition of \(n\), sock division by \(n\) implies that strong and weak divisibility by \(n\) are equivalent. However, it is not clear whether the converse holds and this seems like an interesting question.
**Question 9**.: _Let \(n>0\) be a natural number. Does \(\mathsf{ZF}\) prove that if strong and weak divisibility by \(n\) are equivalent then sock division by \(n\) holds?_
I believe that answering questions 7 and 9 above would give insight into the world of set theory without choice more generally.
## 4 Acknowledgements
Thanks to Peter Doyle for a lively email conversation on the topic of this paper and for coming up with the name "sock division," an anonymous reviewer for suggesting question 9, Rahul Dalal for reading and commenting on a draft, Brandon Walker for inadvertently providing me with the motivation to work on this topic and, of course, Conway, Doyle, Qiu and all the rest for inspiration.
2309.06348 | Band-gap regression with architecture-optimized message-passing neural networks | Tim Bechtel, Daniel T. Speckhard, Jonathan Godwin, Claudia Draxl | 2023-09-12T16:13:10Z | http://arxiv.org/abs/2309.06348v1

# Band-gap regression with architecture-optimized message-passing neural networks
###### Abstract
Graph-based neural networks and, specifically, message-passing neural networks (MPNNs) have shown great potential in predicting physical properties of solids. In this work, we train an MPNN to first classify materials through density functional theory data from the AFLOW database as being metallic or semiconducting/insulating. We then perform a neural-architecture search to explore the model architecture and hyper-parameter space of MPNNs to predict the band gaps of the materials identified as non-metals. The parameters in the search include the number of message-passing steps, latent size, and activation-function, among others. The top-performing models from the search are pooled into an ensemble that significantly outperforms existing models from the literature. Uncertainty quantification is evaluated with Monte-Carlo Dropout and ensembling, with the ensemble method proving superior. The domain of applicability of the ensemble model is analyzed with respect to the crystal systems, the inclusion of a Hubbard parameter in the density functional calculations, and the atomic species building up the materials.
**Keywords: graph neural networks, band gap prediction, neural architecture search, uncertainty quantification, density functional theory**
## 1 Introduction
The success of density functional theory (DFT) has allowed researchers to predict material properties outside of the laboratory. There are several materials databases, such as NOMAD (Novel Materials Discovery) [1], the Materials Project [2], AFLOW (Automatic Flow of Materials Discovery) [3], OQMD (The Open Quantum Materials Database) [4], and others, collecting DFT data. NOMAD, for instance, contains over 140 million ground-state calculations. These databases have not only allowed researchers to avoid performing the same calculations again and again, thus saving computational resources, but have also enabled the re-purposing of data. For instance, one can train statistical-learning models, e.g., neural networks (NNs) [5], to predict DFT results with great accuracy [6].
In order to infer properties of solid materials, details of the crystal structure are essential. In graph neural networks (GNNs), the geometrical information, i.e., unit cell and atomic basis, can effectively be fed into ML models by representing the atoms as nodes and the atomic distances as the edges between the nodes. GNNs have seen success in predicting bulk properties of materials using Materials Project and OQMD data [7]. In this work, we start with a message-passing neural network (MPNN) as described in Ref. [8]. This MPNN learns a representation for each atomic element in the first layer of the network, the embedding. The embedding is then iteratively updated for each atom using information from neighboring nodes. While the original implementation [8] was in Tensorflow, we make use of the Jraph library [9] developed at Deepmind, verifying our results with the QM9 [10] and Materials Project datasets. Our main efforts are, however, focused on the AFLOW database [3], where both formation energies and band gaps are available for \(62\,102\) structures. We train an MPNN to classify these materials as being metals or non-metals, and predict the DFT band gaps of the latter. The MPNN is also used to predict the formation energies of these materials. We perform a random-search-based neural-architecture search on our network over various hyperparameters, such as the number of message-passing steps, the latent size, and the learning rate, and demonstrate the complicated combined effect of these architecture parameters on the network performance. The ten best-performing models on the validation split are pooled into an NN ensemble [11] that averages their predictions on individual structures; the averaged predictions are better than those of the best single model and of existing models in the literature. Moreover, the domain of applicability of the model is analyzed with respect to the atomic composition, the crystal structure, and the inclusion of a Hubbard parameter in the calculation. Finally, we evaluate the possibility of providing the user with uncertainty estimates on single predictions via the standard deviations of the model predictions, comparing the ensemble approach with Monte-Carlo dropout (MCD).
## 2 Methods
### 2.1 Graph representation of solids
A crystalline material is described by a periodically repeated unit cell. It is fully characterized by the lattice vectors, the involved atoms, and their positions in the unit cell. Graphs allow one to construct representations of physical systems that have translational and rotational invariance and are thus well suited for our purpose. In a graph representation, the local neighborhood of each atom can be defined by the distance to every other atom within a specified cutoff radius. When we apply this to a crystalline material, the cutoff radius may extend beyond its unit cell. An example of this can be seen in Fig. 1 for the graph construction of two-dimensional NaCl. As all atoms within the unit cell, i.e., one Na and one Cl atom, represent _nodes_, we end up with two nodes in our example (bottom panel). All atoms within the respective circle are then connected to the respective central atom via _edges_. Here, the Na node is connected with edges to four Cl atoms and four Na atoms in neighboring unit cells. The latter links are called _self-edges_; they encapsulate the periodic boundary conditions of the crystal in the resulting graph, even though the graph representation itself is not explicitly periodic. Note that this graph construction uses directional edges, where each edge originates at one node and ends at another node. Directional edges are used throughout this paper. Their use enables asymmetric graph representations, like a k-nearest-neighbor graph, which is often favorable.
Once the nodes and edges of the graph are determined, we can define the adjacency matrix as:
\[A_{ij}=\begin{cases}1&\text{if node $i$ is connected to node $j$}\\ 0&\text{otherwise}\end{cases} \tag{1}\]
The neighborhood of node \(i\) is then formally defined as
\[N(i)=\{j\mid A_{ij}=1\}. \tag{2}\]
Strictly speaking, such an adjacency matrix cannot be constructed for periodic systems, as there can be multiple edges between a pair of atoms. Such a representation is not a simple graph (which has at most one edge per pair of nodes); rather, it is a multi-graph (see Fig. 1). However, we can still define the neighborhood of a node \(i\) by the set of other nodes to which it is connected. Apart from using a constant cutoff to define the atomic neighborhood, the edges can also be constructed by considering a fixed number (\(k\)) of nearest neighbors (termed KNN algorithm). When the cutoff radius is the same for all atoms in the unit cell, the resulting graph is symmetrical. In the KNN algorithm, however, the constructed graph is not necessarily symmetrical, but each node in the graph has the same number of neighbors. The KNN approach has several benefits: a fixed cutoff radius can, in principle, produce isolated nodes, i.e., nodes without neighbors, which the KNN construction avoids. As a reference for our studies, we use an architecture [8] that was optimized to fit formation energies of structures present in the Materials Project database [2] and OQMD [4], with the calculations for both being performed with the same DFT code and functional [12]. In that work, it has been shown that a KNN cutoff with \(k=24\) neighbors produces the lowest mean absolute error (MAE) for an MPNN with edge updates when predicting formation energies on an OQMD materials dataset, but the improvement over \(k=12\) neighbors is only marginal. In our experiments, training is sped up by about 20% using the lower number of neighbors, while not affecting the model performance significantly. Thus, in our search for an optimal message-passing architecture, we adopt \(k=12\), since, as we will show further below, this helps to reduce our already very large search space.
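As an illustration of this construction, a minimal sketch of a KNN graph builder using ASE's periodic neighbor-list utility is shown below; the function name and the choice of search cutoff are ours, not the repository's actual implementation, and the cutoff must be generous enough that every atom finds at least \(k\) neighbors.

```python
import numpy as np
from ase.neighborlist import neighbor_list

def knn_graph(atoms, k=12, cutoff=10.0):
    """Build directional k-nearest-neighbor edges for a periodic structure.

    Distances are computed with periodic boundary conditions, so edges may
    cross the unit cell (self-edges to periodic images are included).
    """
    # 'i', 'j', 'd': sender index, receiver index, distance (PBC-aware).
    i, j, d = neighbor_list('ijd', atoms, cutoff)
    senders, receivers, distances = [], [], []
    for node in range(len(atoms)):
        mask = i == node
        # Keep only the k shortest edges originating from this node.
        order = np.argsort(d[mask])[:k]
        senders.append(i[mask][order])
        receivers.append(j[mask][order])
        distances.append(d[mask][order])
    return (np.concatenate(senders),
            np.concatenate(receivers),
            np.concatenate(distances))
```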
Figure 1: Construction of a graph representation (bottom) from the two-dimensional periodic unit cell of NaCl (top) using a fixed cutoff radius. The unit cell in the center is indicated by the bold square. Neighboring unit cells are also shown (dotted squares). The cutoff radii for Na and Cl, are shown with a red and blue circle, respectively, centered on their atomic positions.
### Message-passing algorithm
MPNNs and, more generally, GNNs work by iteratively updating hidden graph states. They contain information concerning the atoms in the material and their interactions. In this work, we use hidden node and edge states which we refer to as _node and edge vectors_, respectively. Each hidden vector has its own update equation, as described in Section 2.2.2. They are the same as used in Ref. [8]. After a fixed number of updates (\(L\)), the node vectors are fed into a readout function that predicts the target property.
#### 2.2.1 Node and edge embeddings
The raw node and edge features are first transformed into representations that allow the graph network to learn from the input data. The atomic number, \(Z_{i}\), of each node \(i\) is one-hot encoded (OHE) into a vector. Its dimension is the number of different atomic species in the dataset. For instance, in a dataset containing 74 different elements of the Periodic Table of Elements (PTE), each atom is assigned a 74-dimensional binary vector, where only a single bit is non-zero (e.g., hydrogen is represented with \([1,0,0,\ldots,0]\) and helium with \([0,1,0,\ldots,0]\)). The element type, represented as an OHE vector, is then transformed into a vector with latent size \(C\) by multiplication with a trainable weight matrix \(W_{0}\):
\[h_{i}^{0}=W_{0}\cdot\mathrm{OHE}(Z_{i}). \tag{3}\]
With \(W_{i}\), we denote the weight matrices, whose elements are optimized during the training process.
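Since multiplying a one-hot vector by \(W_{0}\) simply selects a single column of the matrix, Eq. (3) can be implemented as an embedding-table lookup; a minimal Haiku sketch, to be called inside an `hk.transform`-ed function (the function name and default sizes are illustrative):

```python
import haiku as hk

def embed_nodes(species_idx, num_species=74, latent_size=256):
    # Eq. (3): h_i^0 = W_0 . OHE(Z_i). A one-hot multiplication is
    # equivalent to looking up one column of W_0, i.e., an embedding.
    return hk.Embed(vocab_size=num_species, embed_dim=latent_size)(species_idx)
```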
Each edge in our graph is represented by a hidden edge-vector state. The initial state (embedding) is computed by feeding the pairwise atomic distance, \(d_{ij}\), between two nodes, \(i\) and \(j\), into a basis-function expansion. As \(d_{ij}\) is always translationally and rotationally invariant, the graph reflects this desired property. In this work, we use Gaussian basis functions:
\[(e_{ij})_{\kappa}=\exp\left\{-\frac{\left[d_{ij}-(\mu_{min}+\kappa\Delta)\right]^{2}}{\Delta}\right\},\quad\kappa=0\ldots\kappa_{max}-1. \tag{4}\]
The parameters \(\mu_{min}\) (offset of the basis functions), \(\Delta\) (width of the basis functions), and \(\kappa\) are chosen to span the range of input features. In Ref. [8], \(\mu_{min}\) is set to 0 Å, \(\Delta\) to 0.1 Å, and \(\kappa_{max}\) to 150. We use these values in this work as well. This dimensional expansion of the scalar distance into a vector of size \(\kappa_{max}\) might seem strange, but it is analogous to OHE in that the model is then able to decorrelate input and output more easily with the transformed, now higher-dimensional input [8].
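A direct transcription of Eq. (4) with the parameter values quoted above might look as follows (a sketch; the broadcasting layout is our choice):

```python
import jax.numpy as jnp

def gaussian_expansion(d_ij, mu_min=0.0, delta=0.1, kappa_max=150):
    # Eq. (4): expand each scalar distance into kappa_max Gaussian basis
    # functions centered at mu_min + kappa*delta, kappa = 0 ... kappa_max-1.
    mu = mu_min + delta * jnp.arange(kappa_max)
    return jnp.exp(-((d_ij[..., None] - mu) ** 2) / delta)
```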
#### 2.2.2 Node/edge update functions
The nodes and edges are updated at each message-passing (MP) step \(l\). First, edge-wise messages are computed and aggregated to update the nodes; the edge update is then applied using the updated node features. For each edge, \(M_{ij}\) is defined as an edge-wise message that carries information from node \(j\) to node \(i\) (node \(j\) is sending, and node \(i\) is receiving the _message_):
\[M_{l}(h_{i}^{l},h_{j}^{l},e_{ij}^{l})=M_{l}(h_{j}^{l},e_{ij}^{l})=(W_{1}^{l}h_{j }^{l})\odot\sigma(W_{3}^{l}\sigma(W_{2}^{l}e_{ij}^{l})). \tag{5}\]
Here, the symbol \(\odot\) denotes element-wise multiplication, and \(\sigma\) an arbitrary, non-linear activation function. The element-wise multiplication can be seen as a continuous filter, where the edge feature attenuates the node feature, after both have been transformed by feed-forward layers.
Edge-wise messages are aggregated into node-wise messages by either taking the sum of neighboring features as in
\[m_{i}^{l+1}=\sum_{j\in N(i)}M_{l}(h_{i}^{l},h_{j}^{l},e_{ij}^{l}) \tag{6}\]
or any permutation-invariant aggregation function (e.g., mean, minimum, maximum, etc.) [13]. The edge-update function consists of concatenating the sending and receiving nodes with the edge feature \(e_{ij}\). This concatenation is then passed into a two-layer NN with two shifted soft-plus activation functions,
\[e_{ij}^{l+1}=\sigma(W_{l+1}^{E2}\sigma(W_{l+1}^{E1}(h_{i}^{l+1};\;h_{j}^{l+1}; \;e_{ij}^{l}))). \tag{7}\]
Nodes are then updated according to
\[h_{i}^{l+1}=S_{t}(h_{i}^{l},m_{i}^{l+1})=h_{i}^{l}+W_{5}^{l}\sigma(W_{4}^{l}m_ {i}^{l+1}) \tag{8}\]
by using the aggregated messages \(m_{i}^{l+1}\) and the original node features \(h_{i}\). The node-wise message is transformed in a two-layer NN with an activation function and is added to the previous node feature \(h_{i}^{l}\), to arrive at the updated node feature \(h_{i}^{l+1}\). This addition has similarities to the residual connections used in ResNet architectures, which enable training of deeper NNs [14].
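To make the update equations concrete, here is a minimal Haiku sketch of Eqs. (5), (7), and (8); the shifted soft-plus follows Ref. [8], but the function names and layout are illustrative, not the actual repository code:

```python
import jax.numpy as jnp
import haiku as hk

def shifted_softplus(x):
    # ssp(x) = ln(1 + e^x) - ln 2, the activation used in Ref. [8].
    return jnp.logaddexp(x, 0.0) - jnp.log(2.0)

def message_fn(h_j, e_ij, latent_size):
    # Eq. (5): the edge feature acts as a continuous filter (gate) on the
    # linearly transformed sending-node feature.
    gate = shifted_softplus(hk.Linear(latent_size)(
        shifted_softplus(hk.Linear(latent_size)(e_ij))))
    return hk.Linear(latent_size)(h_j) * gate

def edge_update_fn(h_i, h_j, e_ij, latent_size):
    # Eq. (7): two-layer NN on the concatenated receiver/sender/edge features.
    x = jnp.concatenate([h_i, h_j, e_ij], axis=-1)
    return shifted_softplus(hk.Linear(latent_size)(
        shifted_softplus(hk.Linear(latent_size)(x))))

def node_update_fn(h_i, m_i, latent_size):
    # Eq. (8): residual update of the node feature from the aggregated message.
    return h_i + hk.Linear(latent_size)(
        shifted_softplus(hk.Linear(latent_size)(m_i)))
```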
#### 2.2.3 Global readout function
After \(L\) MP steps, the procedure is stopped, and the node features are aggregated into a single scalar: each node feature is first transformed by means of an NN with two layers and a hidden size of \(C/2\), and subsequently one sums over all nodes in the graph or takes the mean. This aggregation must be invariant with respect to the permutation of nodes, as their ordering should not matter. Whether the sum or the average is taken depends on the dataset and the target property. Here, we show the equation for the summation:
\[R(\{h_{i}^{L}\in G\})=\sum_{h_{i}^{L}\in G}W_{7}\,\sigma(W_{6}h_{i}^{L}) \tag{9}\]
As the graphs can have variable sizes, the aggregation should also be able to handle varying numbers of nodes in the graph. Note that for the QM9 dataset, where the target is the total internal energy \(U_{0}\), we use a sum in the readout function. For the datasets where the formation energy per atom is targeted, we take the mean. For further discussion of readout aggregation methods, see Ref. [15].
Combining the pieces of the node/edge embedding, the node/edge update functions, and the readout function, we arrive at the complete algorithm for a message-passing edge-update neural network, abbreviated as MPEU:
```
function GNN(V, E)                              ▷ First, embed edges and nodes
  for all d_ij ∈ E do
    ε_ij ← RBF(d_ij)                            ▷ Expand distances in radial basis functions
  end for
  for all Z_i ∈ V do                            ▷ Loop over atomic numbers of the nodes in the graph
    h_i^0 ← W_0 · OHE(Z_i)                      ▷ One-hot encode elements and transform
  end for
  e_ij^0 ← E_0(h_i^0, h_j^0, ε_ij)              ▷ Apply zeroth edge update layer
  l ← 0                                         ▷ Initialize layer counter
  while l ≤ L do
    M_ij ← M_l(h_i^l, h_j^l, e_ij^l)            ▷ Calculate edge-wise messages
    m_i^{l+1} ← Σ_{j ∈ N(i)} M_ij               ▷ Aggregate edge-wise messages to nodes
    h_i^{l+1} ← S_t(h_i^l, m_i^{l+1})           ▷ Update nodes with incoming messages
    e_ij^{l+1} ← E_l(h_i^{l+1}, h_j^{l+1}, e_ij^l)   ▷ Update edges
    l ← l + 1
  end while
  ŷ ← (1/N_N) Σ_{h_i^L ∈ G} NN(h_i^L)           ▷ Apply NN readout function
  return ŷ
end function
```
**Algorithm 1** Message-passing algorithm with edge updates.
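Using Jraph, one MP step of Algorithm 1 can be wired up roughly as follows, reusing the update functions sketched in Section 2.2.2; the aggregation of Eq. (6) is a segment sum over the receiving nodes (a sketch, not the repository implementation):

```python
import jraph

def message_passing_step(graph, latent_size):
    h, e = graph.nodes, graph.edges
    # Edge-wise messages, Eq. (5): node j (sender) writes to node i (receiver).
    msgs = message_fn(h[graph.senders], e, latent_size)
    # Aggregate messages onto the receiving nodes, Eq. (6).
    m = jraph.segment_sum(msgs, graph.receivers, num_segments=h.shape[0])
    # Node update, Eq. (8), followed by the edge update, Eq. (7),
    # which already uses the updated node features.
    h_new = node_update_fn(h, m, latent_size)
    e_new = edge_update_fn(h_new[graph.receivers], h_new[graph.senders],
                           e, latent_size)
    return graph._replace(nodes=h_new, edges=e_new)
```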
### Architecture search
MPNNs as described above contain many architecture parameters. Little work has been devoted to exploring how they affect the model performance. In this work, we perform a neural architecture search (NAS) using a random search algorithm. We build our search space based on the MPNN model described in detail above, where the embedding and latent size (e.g., the node/edge vector dimension), the number of MP steps, the activation function, the number of layers in the MLP in the node/edge update functions, and the number of layers in the readout NN are varied. Other MPNN parameters, such as the initial learning rate, the learning-rate decay, the batch size, the dropout, and the layer norm, are also varied concurrently, as the optimal values of these variables are expected to depend on the number of parameters in the model. We note that the number of trainable weights, i.e., parameters optimized during training, scales linearly with the number of MP steps and quadratically with the latent size. The number of trainable parameters (weights) ranges from 500,000 up to 20,000,000, with the best models usually having around 1,000,000 weights. Neural architecture searches are often performed with a mix of explorative and exploitative algorithms, such as Bayesian optimization or genetic algorithms [16]. In this work, we opt for a random search algorithm, since we want to sample the large multidimensional space exploratively to gain a better understanding of it. While Bayesian optimization and genetic algorithms sample the parameter space with a bias towards regions with well-performing models, random search samples the parameter space without bias, i.e., purely exploratively.
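Such a random search reduces to independent draws from lists of candidate values; a sketch, with example values spanning the ranges discussed above (the exact search space used in our runs differs):

```python
import random

# Example search space; the values shown span the ranges discussed in the text.
SEARCH_SPACE = {
    'latent_size':   [64, 128, 256, 512],
    'mp_steps':      [2, 3, 4, 5],
    'batch_size':    [32, 64],
    'learning_rate': [1e-5, 2e-5, 1e-4, 1e-3],
    'dropout_rate':  [0.0, 0.05, 0.1],
}

def sample_config():
    # Unbiased, purely explorative sampling: each draw is independent.
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

configs = [sample_config() for _ in range(500)]  # 500 random models, as in our NAS
```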
### Neural-network ensembles
Given that a large number of models is trained in the process of the NAS, it is a natural step to look not only at the best performing model, but also at the predictions of the other models. If a number of well-performing and diverse models make a prediction on a single input, it can be expected that the average prediction of the models in the ensemble outperforms the individual models [17, 18]. There are three main reasons for this [11]: (i) Different models can achieve the same performance on the regression or classification task. An ensemble reduces the risk of choosing a poorly performing model when it is applied to the held-out test data. (ii) Due to the non-convex nature of optimizing a neural network, it is expected that training results in a local minimum with respect to the trainable parameters rather than the global minimum. This leads to the possibility of different locally optimal parameters given the same training data, if the initialization of the models is different. Again, the model ensemble reduces the risk of choosing a poorly performing model that is stuck in a local minimum far from the global minimum. (iii) By averaging across different models, the space of possible solutions is expanded, leading to an increased learning capacity of the model.
The main prerequisite for an ensemble to improve prediction quality is that the models are diverse, i.e., are trained with different data and/or have different parameter values and/or architectures, and that the individual models perform well on their own. For our ensemble, we select the ten best candidate architectures with respect to the validation dataset from our NAS. We expect that the variety of architectures and hyperparameter values for the different candidate models should make the ensemble less prone to over-fitting, so that it performs better than the top NAS models individually.
### Uncertainty estimates
Reliable uncertainty estimates are important when deploying a machine-learning model in a real application [19]. They provide information on the model's domain of applicability, so that the user can judge whether to trust an inference [20]. Ensembling the top ten models from our NAS gives us a method to obtain an uncertainty estimate, by looking at the predictions from all ten models in the ensemble and calculating their standard deviation. We compare this method with another popular method, Monte-Carlo dropout (MCD) [21]. Dropout means that nodes in the network are turned off/on probabilistically. In the case of MCD, dropout is also used for model inferences (i.e., nodes are turned off randomly for each prediction the model makes) and not just for training the model. This enables stochastic predictions from a _virtual ensemble_. Ideally, aggregating these predictions gives an uncertainty estimate in the same way as a Gaussian process would. We employ dropout on all NN layers in the model (readout function, edge-update function, etc.). Dropout also acts as a regularizer, helping to prevent overfitting during training [22]. The dropout is kept on for inferences; ten predictions are made for each input, of which the mean and standard deviation are reported.
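In practice, MCD inference amounts to repeated stochastic forward passes; a minimal sketch, assuming the Haiku-transformed model applies dropout whenever an rng key is passed to `apply`:

```python
import jax
import jax.numpy as jnp

def mc_dropout_predict(apply_fn, params, graph, rng, n_samples=10):
    # Keep dropout active at inference: each rng key yields a different
    # "virtual ensemble" member. Report the mean and standard deviation.
    rngs = jax.random.split(rng, n_samples)
    preds = jnp.stack([apply_fn(params, r, graph) for r in rngs])
    return preds.mean(axis=0), preds.std(axis=0)
```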
### Band-gap classification and regression
The Kohn-Sham band gaps obtained in density functional theory typically severely underestimate the corresponding quasi-particle gaps of the respective structures. This systematic error can be partly remedied by the use of hybrid functionals [23], but an extensive database has yet to be created using this method. It is expected, however, that a model fitted on biased data carries the same bias during inference on unseen data. This should be kept in mind when discussing the use of GNNs trained on DFT data in high-throughput searches, e.g., for large-band-gap materials. To ensure that we use as consistent a dataset as possible, the data used in this work were filtered to only contain DFT calculations performed with the PBE [24] functional. For more details on the data, see below.
Following the literature on predicting band gaps on AFLOW data [25], we train two separate models. The first one classifies materials as non-metals and metals (the latter having a zero DFT band gap), using a binary cross-entropy loss. The second model is fitted to predict the band gaps of the materials classified as non-metals. This workflow is illustrated in Fig. 2. Both models, the classification MPNN and the regression MPNN, have a similar architecture. Since we find very high accuracy on the classification task without tuning the hyperparameters, we only perform a NAS on the band-gap regression task.
### Dataset
For band-gap prediction and classification, we use all materials in AFLOW that have a band gap. The AFLOW data are obtained with the DFT code VASP [26]. As mentioned above, we only use those calculated with the PBE
functional, and duplicates have been removed from the dataset. To simplify our analysis, we use the same dataset for the prediction of formation energies. That means that there are some materials for which a formation energy has been calculated but no band structure, and they are therefore excluded from the formation-energy regression.
Outliers with a formation energy of less than -10 eV/atom (i.e., two materials, S and SiO\({}_{2}\), both space group 70) or higher than 70 eV/atom (a single material, BrIPb, space group 59) were removed. These outliers have formation energies more than five \(\sigma\) away from the mean in the dataset. In this dataset, we have 46,090 metals and 16,012 non-metals. The dataset is therefore biased towards metals.
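The outlier filter described above is a simple threshold mask; a sketch with the thresholds as stated (roughly five standard deviations from the dataset mean; the function name is ours):

```python
import numpy as np

def formation_energy_mask(energies_ev_per_atom):
    # Keep structures with -10 eV/atom < E_f < 70 eV/atom; this removed
    # three materials from the raw AFLOW dataset in our case.
    e = np.asarray(energies_ev_per_atom)
    return (e > -10.0) & (e < 70.0)
```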
### PLMF and ElemNet Models
To evaluate and compare the performance of the models from our NAS, we include several models from the literature. In the PLMF model [25], the lattice structure was decomposed into fragments, and a ML model was trained on these fragments. To the best of our knowledge, the PLMF model is the only model in the literature that has been used to classify band structures and regress band gaps for the AFLOW dataset. One caveat to the comparison with this work is that neither the training/test splits nor the code for the method employed were shared in the original paper. We compare our results to the metrics reported in Ref. [25]; however, we note that their dataset is from an earlier snapshot of AFLOW. Since the publication of that article, the AFLOW database has grown, and some data points have been recomputed (e.g., with a newer version of the VASP code). More specifically, we used the online API [27] to obtain results from their model, trained on the earlier AFLOW dataset snapshot, evaluated on the current AFLOW test dataset. Therefore, not having access to the training/test splits used in Ref. [25], some of the test data might have already been seen by the trained PLMF.

Figure 2: Workflow of band-gap classification in terms of metals/non-metals using an MPNN, followed by the prediction of band gaps using another MPNN.
We also compare our results with the deep neural network ElemNet [5], a model that does not use any structural information, but is given only the chemical formula (stoichiometry). This model was trained and evaluated using OQMD [28] data and demonstrated good performance for formation energies. We retrain the model on AFLOW data but keep the model architecture from the original publication.
## 3 Software implementation
For our computational framework, the JAX ecosystem is used because of its frequent use in recent state-of-the-art research [29, 30]. Features such as automatic differentiation and just-in-time compilation, as well as the active development that fosters new scientific discoveries, are especially appealing. In conjunction with JAX, we use the Haiku library for trainable NN layers and Optax for optimization routines, both of which are themselves built upon JAX [31, 32]. Finally, to encode our MPNN architecture and update equations, we use Jraph, which provides a functional API to apply transformations to arbitrary graphs [9]. These four libraries form a cohesive framework for the whole process of training a graph-based machine-learning model, apart from the database interface that we implemented.
As a local database for atomic structures, we employ the Atomic-Simulation-Environment database (ASE-DB) [33]. The conversion of atomic structures to graphs is done only once for each database, using a maximum neighborhood of k-nearest neighbors; therefore, the time-consuming graph generation does not have to be repeated for each hyperparameter experiment.
The data are divided into training, validation, and test data in an 80:10:10 split. Training and validation data are used for cross-validation and early stopping, and the model is finally evaluated on the unseen test data to assess its performance on samples it has not yet encountered. Early stopping is implemented as described in [8], by checking if the validation loss has decreased compared to the loss 1 million steps before. The model is trained with dynamic batches of a maximum of \(N_{batch}\) graphs (including a padding graph) [34] using the Adam optimizer provided by Optax [35]. Batches are sampled without replacement from the training dataset and reshuffled in every epoch. Dynamic batches are created by calculating the average number of nodes and edges for \(N_{batch}-1\) graphs and rounding this result up, in this case to the next multiple of 64. This value (a power of 2) is motivated by the processor architecture that is used, in that GPUs use banked memory and specific optimized kernels that work best with data sizes of \(2^{N}\). Then, during the training loop, graphs are sampled without replacement from the training dataset until the maximum number of nodes or edges is reached, or \(N_{batch}-1\) graphs are retrieved. The rest of the budget is then used for padding, and the result is a static number
of nodes, edges, and graphs. This only needs to be just-in-time compiled once and therefore greatly increases the speed of each graph network evaluation.
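Jraph provides a dynamic-batching utility that implements exactly this kind of budgeted batching; a sketch of how the budgets described above could be passed to it (the rounding helper and argument choices are ours):

```python
import jraph

def make_batches(graphs, n_batch, avg_nodes_per_graph, avg_edges_per_graph):
    # Round budgets up to the next multiple of 64 so the padded shapes are
    # static and the network is just-in-time compiled only once.
    ceil64 = lambda x: 64 * int(-(-float(x) // 64))
    return jraph.dynamically_batch(
        iter(graphs),
        n_node=ceil64(avg_nodes_per_graph * (n_batch - 1)),
        n_edge=ceil64(avg_edges_per_graph * (n_batch - 1)),
        n_graph=n_batch,  # includes the implicit padding graph
    )
```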
The implementation is validated by training a model on formation energies from the Materials Project, specifically the MP-crystals-2018.6.1 snapshot provided in [36]. With this training data, we obtain similar error metrics to Ref. [8], despite using a different training/test split (no information on the split used was provided in Ref. [8]).
## 4 Results
We first train a model to classify materials as metals or non-metals, minimizing the binary cross-entropy. We evaluate the receiver operating characteristic (ROC) alongside the total accuracy of the model, since these two metrics are easier to interpret than the binary cross-entropy. The area under the ROC curve (AUROC) provides a balanced scalar metric of the classifier's performance. Recall that an AUROC of one indicates a perfect classifier, whereas an AUROC of 0.5 indicates random predictions. The reference MPEU model performs quite well on the classification task, with an accuracy of 0.98 and an AUROC over 0.99, as shown in Table 1. This is remarkable since the reference MPEU architecture had, to the best of our knowledge, not previously been applied to band-gap classification. Both the AUROC and the accuracy are higher than the corresponding values for the PLMF model from the literature, included for comparison. The high AUROC value indicates a very balanced classification performance across metals and non-metals, despite the dataset being biased towards metals. Given the satisfactory performance of the MPEU reference model, we decided not to perform further optimization of the classification model with a NAS.
In Fig. 3, the performance of the classifier in terms of accuracy is analyzed depending on how often each type of material appears in the training split. As expected, we see a general trend that the more often a category appears in the dataset, the higher the classification accuracy of the model is. For instance, transition metals appear frequently in the training split and are generally classified correctly more than 98% of the time. In contrast, the less common alkali metals are classified with accuracies ranging from 94% to 99%. Oxides, however, are outliers in this trend. Despite being best represented in the training dataset, with over ten thousand materials, they are classified with an accuracy of 97%, lower than the mean accuracy over the entire dataset (98%). One reason for this could be the fact that for many of the transition-metal oxides, a Hubbard-U correction has been applied in the production of the DFT data. We will come back to this point further below.
After classification, we predict the band gaps of those materials that have been classified as non-metals. We visualize in Fig. 4 the results of the neural-architecture search for band-gap regression on the validation split. Additional hyperparameters are shown in the appendix (Fig. A1). The results indicate
that, in general, the larger batch size of 64 is only slightly better than 32. Three message-passing steps give the lowest mean RMSE. A latent size of 256 is favored, and a learning rate of 1E-4 is significantly preferred. Although, for instance, a latent size of 256 and a learning rate of 1E-5 are preferred on average, the best NAS model has a latent size of 128 and a learning rate of 2E-5, which highlights the interplay between these variables. We see that increasing the latent size and using a smaller learning rate, which should increase the model's learning capacity, does not result in a lower RMSE. This may hint at a rather small amount of training data for the problem. In total, 500 random models were trained; for 459, the loss converged and training was stopped early; for 41, the optimization was aborted due to an unstable loss value (as we often observed when a higher learning rate is paired with no layer normalization). The NAS model that performed best on the validation dataset in terms of RMSE was selected as the model used for testing.
The regression metrics, RMSE, MAE, and median absolute error (MdAE), are collected in Table 1. The MdAE is an error metric that is unaffected by outliers, as it gives a midpoint with the same number of absolute errors above and below. The low MdAE values across models show that both the RMSE and MAE are affected by outliers with a high absolute error. We observe that the best NAS architecture is overall similar to the reference model; however, its optimal learning rate of 2E-5 is lower, and the NAS model uses a dropout rate of 0.05 while the reference does not use any. The best NAS model actually performs worse than the reference model on which the NAS
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Property** & **Model** & **RMSE** & **MAE** & **MdAE** \\ \hline \(E_{g}\) [meV] & Ensemble & **379** & **168** & **26.2** \\ & Best in NAS & 469 & 205 & 35.0 \\ & Reference [8] & 399 & 180 & 32.9 \\ & SchNet [37] & 489 & 235 & 68.4 \\ & PLMF (new data) \({}^{1}\) & 1327 & 618 & 151 \\ & PLMF (reported) [25] & 510 & 350 & - \\ & ElemNet [5] & 816 & 515 & 303 \\ \hline \(E_{f}\) [meV/atom] & Ensemble & **56.3** & **15.0** & **6.29** \\ & Best in NAS & 65.4 & 21.0 & 10.7 \\ & Reference [8] & 57.5 & 17.9 & 8.32 \\ & SchNet [37] & 68.0 & 29.3 & 17.2 \\ & ElemNet [5] & 214 & 135 & 68.6 \\ \hline & & **Accuracy** & **AUROC** \\ \hline \multirow{2}{*}{Classification} & Reference & **0.98** & \multirow{2}{*}{\(>\)**0.99**} \\ & PLMF (new data) \({}^{1}\) & 0.97 & - \\ \cline{1-1} & PLMF (reported) [25] & 0.93 & 0.98 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of cross-validated results on formation energies (\(E_{f}\)) and band gaps (\(E_{g}\)) for our MPEU models and for models from the literature. \({}^{1}\) marks the PLMF model trained on an earlier AFLOW dataset snapshot and evaluated on the current test data. Results of the classification into metals/non-metals are shown at the bottom.
search space was built. A table comparing the validation results and the test results is shown in Table A1 in the appendix.
The validation and test RMSE are very similar for the best NAS model (468 meV and 469 meV, respectively). We conclude that the best NAS model is not overfitting the validation dataset, despite our NAS choosing the best model based on the validation results.
Figure 4: Neural architecture search results for the band gap, using the non-metals as predicted by the classifier (trained on materials from the AFLOW database). The RMSE on the validation split is shown for the model-architecture parameters and settings in our search space. The effect of several additional parameters is shown in the appendix (Fig. A1).
Figure 3: Accuracy of the classification between metals/non-metals for different material classes evaluated on the test split.
In contrast, for the reference model, the validation RMSE is much higher than the test RMSE (505 meV and 399 meV, respectively). This suggests that the reference model's superior test performance may depend on the particular split. Our ensemble model, which combines the top ten NAS models, significantly outperforms the reference model in terms of MAE, RMSE, and MdAE. Note that, in order to evaluate all band-gap regression models on an equal footing, we use the same MPEU band-gap classifier, so that each model uses the same training, validation, and test datasets.
Analyzing the performance of the ensemble model further, we see that the largest RMSE occurs on data that our classification model incorrectly predicted as non-metals. This is seen in Fig. 5. All MPEU models significantly outperform the PLMF and ElemNet models, indicating the superiority of graph-based models for this task and dataset. The crystal structure also plays an important role in how well the MPEU model for predicting band gaps performs, as seen in Fig. 6. In general, we see a better band-gap prediction for lattice types with more materials in the training set. For instance, cubic systems are quite common and have the lowest median error, while triclinic structures are the rarest and have a poor median error. That said, there is a similar number of hexagonal training structures compared to triclinic structures (792 vs 603), but we observe a much lower median band-gap error for the former. The dependence of the model performance on the lattice type indicates that the model is learning from the input crystal structure, which is desired.
We also train our models to predict formation energies, performing a NAS on this task. The results are shown in Table 1; the effect of several NAS parameters is shown in the appendix. They exhibit similarities to those of the band-gap regression. The ensemble of NAS models performs the best, but
Figure 5: Regression results using the ensemble MPEU model on AFLOW data for formation energies (left) and band gaps of predicted non-metals (right). Marginal histograms showing the distribution of predicted and calculated values are shown above the horizontal axes and to the right of the vertical axes.
the reference model outperforms the individual top-ranked NAS model. As the reference model [8] was trained on formation energies of the Materials Project, it is no surprise that its architecture transfers very well to another dataset based on the same code. The two databases show, however, significant differences in computational details, such as convergence criteria for geometry optimization or the use of DFT+U, as analyzed in detail in Ref. [12].
Uncertainty quantification is also provided by the MPEU models. For the individual NAS models, we perform MCD, while for the ensemble model we use the variance of the predictions of the models in the ensemble. For band-gap regression, we find that the uncertainty of the ensemble has a correlation of 0.63 with the absolute error. In general, the uncertainty underestimates the true error, which is a known problem with uncertainty quantification in neural networks [38]. The MCD method applied to the best NAS model performs much worse, with a correlation of 0.38. To better understand the problem of uncertainty quantification, we look in Fig. 7 at the distribution of absolute errors of the ensemble model. In the test dataset, 60% of the absolute errors are below 50 meV. At this level, we start to approach the numerical precision of DFT-PBE band gaps, and much of the uncertainty we are trying to predict may be irreducible, i.e., aleatoric, or just noise. Similarly, for the formation energy, most of the absolute errors are below 10 meV and thus also in the range of the numerical precision of the DFT data. It may therefore not be surprising that the formation-energy models have an even worse correlation between uncertainty and error, of 0.53 and 0.24 for the ensemble and the best NAS model, respectively.
Figure 6: Distribution of absolute errors in band gaps for different crystal systems using the NAS ensemble model. The number of training materials for each crystal system is displayed at the top. The dashed red line shows the MdAE. Points above the 95% quantile are shown to better understand the distribution.
Despite the lower correlation of the MCD uncertainty estimates, they provide the user with some insight into the model's behavior for different inputs. For formation energies, the MCD uncertainty is well correlated with whether the simulation was performed with a Hubbard \(U\) correction or not. This correction is applied to strongly correlated materials, where the PBE functional is known to perform poorly. In the AFLOW database, PBE+\(U\) is used for systems with \(d\) and \(f\) bands where electron localization occurs via the splitting of the energy levels of these orbitals [39]. The impact of this method on the learning is seen in Fig. 8, where the violin plots show the distribution of the model's uncertainty estimate with respect to the \(U\) correction. The width of each curve corresponds to the empirical probability (i.e., relative frequency) of the magnitude of the inference uncertainty. As the vast majority of the test data has been obtained without the Hubbard correction, our MPEU model appears to be more uncertain about data including it. As the \(U\) parameter is an ad-hoc correction, the corresponding results may not be as systematic as the others (which still have an intrinsic error). In contrast, when the MCD method is employed for the prediction of band gaps, the median standard deviation (center of the boxplot) is lower for materials with the Hubbard-\(U\) correction (see appendix for the plot). This higher confidence of the model for band gaps computed with the Hubbard \(U\) correction, compared to the corresponding results for formation energies, indicates that the latter data are less consistent, as documented for the AFLOW database [40, 41]. These observations
Figure 7: Distribution of absolute errors for different material classes, when predicting band gaps using the ensemble NAS model. Materials are grouped into direct semiconductors (direct SC), indirect semiconductors (indirect SC), metals, and half-metals. These classes refer to the true classes of the data, not the classifier’s prediction.
are supported by the fact that the absolute errors also depend on the regression task: formation energies (band gaps) are predicted worse (better) by the ensemble model for materials where the correction is applied.
Finally, we want to understand why the MPEU models work so well for both the band-gap and formation-energy regressions. To do so, we remove the edge updates from our algorithm, making it equivalent to the SchNet model [37]. For our implementation of SchNet, we use a reduced latent size of 64, as was done in Ref. [37]. This reduction in model size is also needed in our case in order to converge the validation loss during training. We observe that SchNet outperforms ElemNet [5] but falls short of the MPEU models, supporting the hypothesis that edge updates significantly increase the learning capacity of message-passing models. The results are included in Table 1.
## 5 Summary and conclusions
In conclusion, we find that our NAS yields ensemble models that significantly outperform models from the literature in terms of band-gap and formation-energy regression. We find that the reference model [8] applied in our context performs well for band-gap classification, being superior to the PLMF model [25]. The best individual NAS model does not improve over the reference model on the test split. Our analysis shows that the reference model performs significantly better on the test split as compared to the validation split, while our best NAS model yields similar results for both splits.
Figure 8: Violin plots of the standard deviations obtained by Monte-Carlo Dropout when predicting formation energies of AFLOW data obtained by either PBE or PBE+\(U\). The red horizontal line shows the median standard deviations of the predictions over the whole test split, dashed lines show quartiles. The numbers of training examples are shown at the top.
We demonstrate the superiority of graph-based models over existing models in the literature. To improve the NAS, one could opt to use more complex search algorithms that are more exploitative (e.g., genetic or Bayesian optimization). This could, however, degrade the performance of the ensemble NAS model since more exploitative search algorithms will likely return less diverse top-ranked architectures.
The uncertainty of the models has also been analyzed. The absolute errors of our ensemble models, being mostly below 50 meV and 10 meV for band-gap and formation-energy regression, respectively, approach the numerical precision of the DFT results. For band-gap regression, we find a significant correlation (of 0.63) between the ensemble uncertainty and the absolute error. The uncertainty estimates also distinguish data points that include a Hubbard-U correction: for band gaps, the corresponding model is more certain and also less error-prone, while for formation energies, the trend is the opposite. We find that the ensemble model performs well for cubic structures but less well for triclinic materials, where there are fewer training samples. We find oxide predictions to be anomalous in band-gap classification despite their relative abundance in the dataset. More work is required to explain this trend. Our findings may help to better understand when to apply such models and to motivate researchers to create balanced datasets with respect to structures and compositions.
Possible future applications of our work include material discovery by exploring much larger data spaces. The NAS and ensemble methods applied to MPEU models may also be used to explore more intricate material properties such as elastic, thermal, or transport properties. Additionally, the uncertainty that the model provides may be used in an active-learning framework. Overall, our findings may motivate other researchers to employ this methodology and our code in very different applications beyond materials science.
Code and Data Availability.The MPEU models, code to perform the NAS, and data splits can be found online in this GitHub repository: [https://github.com/tisabe/jraph_mpeu](https://github.com/tisabe/jraph_mpeu).
Acknowledgments.Work carried out in part by the Max Planck Graduate Center for Quantum Materials. D.S. acknowledges support by the IMPRS for Elementary Processes in Physical Chemistry. Partial funding from the European Union's Horizon 2020 research and innovation program under grant agreement N\({}^{\text{o}}\) 951786 (NOMAD CoE) and from the German Science Foundation (DFG) through the CRC FONDA, project 414984028, is gratefully acknowledged. We are grateful to Salman Hussein for his input on the training of the ElemNet model. We thank Nakib Protik, Martin Kuban, Marcel Langer, Matthias Rupp, and Luca Ghiringhelli for fruitful discussions.
## Declarations
We declare no conflicts of interest.
## Appendix A
The plots shown here serve to better understand the model performance. We can see in Fig. A1 how different architecture parameters, not shown in Fig. 4, affect the band-gap model metrics.
The NAS results for the formation-energy task with respect to architecture and numerical parameters are depicted in Fig. A2.
Fig. A3 shows that the MAE and RMSE are well correlated with each other. During our NAS training, we made the assumption that we can train our models to minimize the RMSE and evaluate them on the MAE. This plot confirms our assumption, i.e., that the two variables are positively correlated. Minimizing the RMSE during training is easier than minimizing the MAE, since the absolute value has a discontinuous derivative.
Fig. A4 shows the distribution of the absolute errors in the formation energy in the AFLOW dataset.
The distribution of the absolute errors in the formation-energy models as a function of the crystal structure can be seen in Fig. A5.
The distribution of the MC dropout uncertainty estimates is shown for the best NAS band-gap MPEU regressor in Fig. A6.
The standard deviations from the best NAS model trained on formation energies are analyzed using Monte-Carlo Dropout. The result can be seen in Fig. A7.
The ensemble NAS model's performance on different materials as a function of the material class is shown in Fig. A8 for the formation-energy task and in Fig. A9 for band-gap regression. In both figures, we see that despite oxides being the majority class of materials in our dataset, they are not the best performing class.
2301.13601 | Qubit Lattice Algorithm Simulations of Maxwell's Equations for
Scattering from Anisotropic Dielectric Objects | A Dyson map explicitly determines the appropriate basis of electromagnetic
fields which yields a unitary representation of the Maxwell equations in an
inhomogeneous medium. A qubit lattice algorithm (QLA) is then developed
perturbatively to solve this representation of Maxwell equations. QLA consists
of an interleaved unitary sequence of collision operators (that entangle on
lattice-site qubits) and streaming operators (that move this entanglement
throughout the lattice).
External potential operators are introduced to handle gradients in the
refractive indices, and these operators are typically non-unitary, but sparse
matrices. By also interleaving the external potential operators with the
unitary collide-stream operators one achieves a QLA which conserves energy to
high accuracy.
Some two-dimensional simulation results are presented for the scattering of
a one-dimensional (1D) pulse off a
localized anisotropic dielectric object. | George Vahala, Min Soe, Linda Vahala, Abhay K. Ram, Efstratios Koukoutsis, Kyriakos Hizanidis | 2023-01-31T12:58:19Z | http://arxiv.org/abs/2301.13601v1 | Qubit Lattice Algorithm Simulations of Maxwell's Equations for Scattering from Anisotropic Dielectric Objects
###### Abstract
A Dyson map explicitly determines the appropriate basis of electromagnetic fields which yields a unitary representation of the Maxwell equations in an inhomogeneous medium. A qubit lattice algorithm (QLA) is then developed perturbatively to solve this representation of Maxwell equations. QLA consists of an interleaved unitary sequence of collision operators (that entangle the qubits at each lattice site) and streaming operators (that move this entanglement throughout the lattice). External potential operators are introduced to handle gradients in the refractive indices, and these operators are typically non-unitary but sparse matrices. By also interleaving the external potential operators with the unitary collide-stream operators one achieves a QLA which conserves energy to high accuracy. Some two-dimensional simulation results are presented for the scattering of a one-dimensional (1D) pulse off a localized anisotropic dielectric object.
## 1 Introduction
There is much interest in developing algorithms to solve specific classical problems that can be encoded onto a quantum computer. One class of such algorithms is the qubit lattice algorithm (QLA) [1-21]. After identifying an appropriate set of qubits, QLA proceeds to define a unitary set of interleaved non-commuting collision-streaming operators which act on this basis set of qubits so as to perturbatively recover the classical physics of interest.
The entanglement of qubits is at the essence of an efficient quantum algorithm. A maximally entangled 2-qubit state is known as a Bell state [22]. Now the Hilbert space of a 2-qubit basis consists of the states \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\). Consider the collision operator
\[C=\left[\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right] \tag{1}\]
acting on the subspace \(\{|00\rangle,|11\rangle\}\). The most general tensor product state that can be generated from the qubit states \(\{a_{0}|0\rangle+a_{1}|1\rangle\}\), and \(\{b_{0}|0\rangle+b_{1}|1\rangle\}\) is
\[a_{0}b_{0}|00\rangle+a_{0}b_{1}|01\rangle+a_{1}b_{0}|10\rangle+a_{1}b_{1}|11\rangle \tag{2}\]
Now consider the so-called Bell state
\[B_{+}=\frac{|00\rangle+|11\rangle}{\sqrt{2}}. \tag{3}\]
This state cannot be recovered from the tensor-product state of the 2 qubits, Eq. (2). Indeed, to eliminate the \(|01\rangle\) state from Eq. (2) one requires either \(a_{0}=0\) or \(b_{1}=0\), and this would eliminate either the state \(|00\rangle\) or the state \(|11\rangle\). States that cannot be recovered from tensor product states are called entangled states. The entangled Bell state, Eq. (3), is obtained using the collision operator \(C\), Eq. (1), with angle \(\theta=\pi/4\).
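A few lines of NumPy make this explicit: applying \(C\) with \(\theta=\pi/4\) to the basis state \(|11\rangle\) of the \(\{|00\rangle,|11\rangle\}\) subspace produces \(B_{+}\) exactly (a sketch; the amplitude ordering is our convention):

```python
import numpy as np

theta = np.pi / 4
# Collision operator C of Eq. (1), acting on the amplitudes (a_00, a_11).
C = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

state_11 = np.array([0.0, 1.0])   # the basis state |11>
print(C @ state_11)               # [0.7071, 0.7071] = (|00> + |11>)/sqrt(2)
```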
It is simplest to develop a QLA for the two curl Maxwell equations, treating the divergence equations as initial constraints on the electromagnetic fields \({\bf E},{\bf H}\). We shall do this in a Hermitian tensor dielectric medium, and comment on the discreteness effects on the time evolution of \(\nabla\cdot{\bf B}\). In Sec. 2 we shall see that in an inhomogeneous medium, the electromagnetic basis set \(({\bf E},{\bf B})\) cannot lead to a unitary evolution of the two curl Maxwell equations. However, a Dyson map is introduced that maps the basis \(({\bf E},{\bf B})\) into the basis \((n_{x}E_{x},n_{y}E_{y},n_{z}E_{z},{\bf B})\), resulting in a fully unitary evolution for this basis set [23]. Here we have transformed to principal axes, making the dielectric tensor diagonal with \(\epsilon_{i}=n_{i}^{2},i=x,y,z\). The more familiar complex Riemann-Silberstein-Weber basis \(F_{i}^{\pm}=(n_{i}E_{i}\pm iB_{i})\) is immediately generated from the real basis \((n_{i}E_{i},B_{i})\) by a unitary transformation, so that this will also lead to a unitary time evolution representation.
In Sec. 2 we will develop a QLA for the solution of the 2D Maxwell equations in a tensor Hermitian dielectric medium. All our previous Maxwell QLAs [16-18, 21] were restricted to scalar dielectrics. We will present a simplified discussion of the Dyson map [23] that permits us to transform from a non-unitary to a unitary basis for the representation of the two curl equations of Maxwell. For these continuum qubit partial differential equations we will generate in Sec. 3 a discrete QLA for tensor dielectric media that recovers the desired equations to second order in perturbation theory. While the collide-stream operator sequence of QLA is fully unitary, the external potential operators required to recover the derivatives of the refractive indices in the Maxwell equations are not. However, these non-unitary matrices are very sparse and should be amenable to an approximate unitary representation. The role of the perturbation parameter \(\delta\) introduced in the QLA for the Maxwell equations is quite subtle. One important test of the QLA is the conservation of electromagnetic energy density. This will be seen to be very well satisfied as \(\delta\to 0\). In Sec. 4 we present some 2D QLA simulations for a 1D Gaussian electromagnetic pulse scattering from an anisotropic dielectric localized object, showing results for both polarizations. Finally, in Sec. 5 we summarize the results of this paper.
## 2 A Unitary Representation of the two curl Maxwell Equations
### Scalar dielectric medium
First, consider a simple dielectric non-magnetic medium with the constitutive equations
\[{\bf D}=\epsilon{\bf E},\quad{\bf B}=\mu_{0}{\bf H}. \tag{4}\]
It is convenient to define \({\bf u}=({\bf E},{\bf H})^{\bf T}\) as the fundamental fields, and \({\bf d}=({\bf D},{\bf B})^{\bf T}\) the derived fields. Eq. (4), in matrix form, is
\[{\bf d}={\bf W}{\bf u}. \tag{5}\]
\({\bf W}\) is a Hermitian \(6\times 6\) matrix
\[{\bf W}=\left[\begin{array}{cc}\epsilon{\bf I}_{3\times 3}&0_{3\times 3}\\ 0_{3\times 3}&\mu_{0}{\bf I}_{3\times 3}\end{array}\right]. \tag{6}\]
\({\bf I}_{3\times 3}\) is the \(3\times 3\) identity matrix, and the superscript \({\bf T}\) is the transpose operator. The two curl Maxwell equations \(\nabla\times{\bf E}=-\partial{\bf B}/\partial t\) and \(\nabla\times{\bf H}=\partial{\bf D}/\partial t\) can then be written
\[i\frac{\partial{\bf d}}{\partial t}={\bf M}{\bf u} \tag{7}\]
where, under standard boundary conditions, the curl-matrix operator \({\bf M}\) is Hermitian :
\[{\bf M}=\left[\begin{array}{cc}0_{3\times 3}&i\nabla\times\\ -i\nabla\times&0_{3\times 3}\end{array}\right]. \tag{8}\]
Now \({\bf W}\) is invertible, so that Eq. (7) can finally be written in terms of the basic electromagnetic fields \({\bf u}=({\bf E},{\bf H})\)
\[i\frac{\partial{\bf u}}{\partial t}={\bf W}^{-1}{\bf M}{\bf u} \tag{9}\]
#### 2.1.1 inhomogeneous scalar dielectric media
We immediately note that for inhomogeneous dielectric media, \({\bf W}^{-1}\) will not commute with \({\bf M}\). Thus Eq. (9) will not yield unitary evolution for the fields \({\bf u}=({\bf E},{\bf H})^{\bf T}\). However, Koukoutsis et al. [23] have shown how to determine a Dyson map from the fields \({\bf u}\) to a new field representation \({\bf U}\) such that the resulting representation in terms of the new field \({\bf U}\) yields a unitary evolution. In particular, the Dyson map [23]
\[{\rm U}={\rm W}^{1/2}{\rm u} \tag{10}\]
yields a unitary evolution equation for \({\bf U}\) :
\[i\frac{\partial{\bf U}}{\partial t}={\bf W}^{-1/2}{\bf M}{\bf W}^{-1/2}{\bf U} \tag{11}\]
since now the matrix operator \({\bf W}^{-1/2}{\bf M}{\bf W}^{-1/2}\) is indeed Hermitian.
Explicitly, the \({\bf U}\) vector for non-magnetic materials is just
\[{\bf U}=\left(\epsilon^{1/2}{\bf E},\mu_{0}^{1/2}{\bf H}\right)^{T} \tag{12}\]
This can be rotated into the RSW unitary representation by the unitary matrix
\[{\bf L}=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I_{3\times 3}&iI_{3\times 3} \\ I_{3\times 3}&-iI_{3\times 3}\end{array}\right] \tag{13}\]
yielding \({\bf U}_{\bf RSW}={\bf L}{\bf U}\) with
\[{\bf U}_{\bf RSW}=\frac{1}{\sqrt{2}}\left[\begin{array}{c}\epsilon^{1/2}{ \bf E}+i\mu_{0}^{1/2}{\bf H}\\ \epsilon^{1/2}{\bf E}-i\mu_{0}^{1/2}{\bf H}\end{array}\right]. \tag{14}\]
### Inhomogeneous tensor dielectric media
The theory can be immediately extended to diagonal tensor dielectric media, with (assuming non-magnetic materials) the 6-qubit representation \({\bf Q}\) of the field
\[{\bf U}=\left(n_{x}E_{x},n_{y}E_{y},n_{z}E_{z},\mu_{0}^{1/2}{\bf H}\right)^{T} \equiv{\bf Q}. \tag{15}\]
(\(n_{x},n_{y},n_{z}\)) is the vector (diagonal) refractive index, with \(\epsilon_{i}=n_{i}^{2}\) for \(i=x,y,z\).
The explicit unitary representation of the Maxwell equations for 2D x-y spatially dependent fields, written in terms of the six components of \({\bf Q}\), is
\[\begin{array}{c}\frac{\partial q_{0}}{\partial t}=\frac{1}{n_{x}}\frac{ \partial q_{5}}{\partial y},\qquad\frac{\partial q_{1}}{\partial t}=-\frac{1} {n_{y}}\frac{\partial q_{5}}{\partial x},\qquad\frac{\partial q_{2}}{\partial t }=\frac{1}{n_{z}}\left[\frac{\partial q_{4}}{\partial x}-\frac{\partial q_{3 }}{\partial y}\right]\\ \frac{\partial q_{3}}{\partial t}=-\frac{\partial(q_{2}/n_{z})}{\partial y}, \qquad\frac{\partial q_{4}}{\partial t}=\frac{\partial(q_{2}/n_{z})}{\partial x },\qquad\frac{\partial q_{5}}{\partial t}=-\frac{\partial(q_{1}/n_{y})}{ \partial x}+\frac{\partial(q_{0}/n_{x})}{\partial y}\end{array} \tag{16}\]
## 3 A Qubit Lattice Representation for 2D Tensor Dielectric Media
We develop a QLA for the unitary system Eq. (16) by determining unitary collision and streaming operators that recover the derivatives \(\partial q_{i}/\partial t\), \(\partial q_{j}/\partial x\) and \(\partial q_{j}/\partial y\) (\(i,j=0\ldots 5\)). Our finite difference scheme recovers Eq. (16) to second order in a perturbation parameter \(\delta\), where the spatial lattice spacing is defined to be \(O(\delta)\). To recover the partial derivatives on the 6-qubit \({\bf Q}\) in the \(x-\)direction, we consider the unitary collision entangling operator
\[C_{X}=\left[\begin{array}{cccccc}1&0&0&0&0&0\\ 0&cos\,\theta_{1}&0&0&0&-sin\,\theta_{1}\\ 0&0&cos\,\theta_{2}&0&-sin\,\theta_{2}&0\\ 0&0&0&1&0&0\\ 0&0&sin\,\theta_{2}&0&cos\,\theta_{2}&0\\ 0&sin\,\theta_{1}&0&0&0&cos\,\theta_{1}\end{array}\right] \tag{17}\]
where we shall need two collision angles \(\theta_{1}\) and \(\theta_{2}\). The unitary streaming operators will be of the form \(S_{14}^{+x}\) which shifts qubits \(q_{1}\) and \(q_{4}\) one lattice unit \(\delta\) in the \(+x-\)direction, while leaving the other 4 qubit components invariant. The final unitary collide-stream sequence in the x-direction is
\[{\bf U_{X}}=S_{25}^{+x}.C_{X}^{\dagger}.S_{25}^{-x}.C_{X}.S_{14}^{-x}.C_{X}^{ \dagger}.S_{14}^{+x}.C_{X}.S_{25}^{-x}.C_{X}.S_{25}^{+x}.C_{X}^{\dagger}.S_{14} ^{+x}.C_{X}.S_{14}^{-x}.C_{X}^{\dagger} \tag{18}\]
Similarly for the y-direction, the corresponding unitary collision entangling operator is
\[C_{Y}=\left[\begin{array}{cccccc}cos\,\theta_{0}&0&0&0&0&sin\,\theta_{0} \\ 0&1&0&0&0&0\\ 0&0&cos\,\theta_{2}&0&sin\,\theta_{2}&0\\ 0&0&-sin\,\theta_{2}&cos\,\theta_{2}&0&0\\ 0&0&0&0&1&0\\ -sin\,\theta_{0}&0&0&0&0&cos\,\theta_{0}\end{array}\right], \tag{19}\]
and the corresponding unitary collide-stream sequence in the y-direction
\[{\bf U_{Y}}=S_{25}^{+y}.C_{Y}^{\dagger}.S_{25}^{-y}.C_{Y}.S_{03}^{-y}.C_{Y}^{ \dagger}.S_{03}^{+y}.C_{Y}.S_{25}^{-y}.C_{Y}.S_{25}^{+y}.C_{Y}^{\dagger}.S_{03} ^{+y}.C_{Y}.S_{03}^{-y}.C_{Y}^{\dagger} \tag{20}\]
We will discuss the specific collision angles \(\theta_{0},\theta_{1}\) and \(\theta_{2}\) after introducing the external potential operators.
The terms that remain to be recovered by the QLA are the spatial derivatives on the refractive index components \(\partial n_{i}/\partial x\) and \(\partial n_{i}/\partial y\). These terms will be recovered by the following (non-unitary) sparse external potential operators:
\[V_{X}=\left[\begin{array}{cccccc}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&-sin\,\beta_{2}&0&cos\,\beta_{2}&0\\ 0&sin\,\beta_{0}&0&0&0&cos\,\beta_{0}\end{array}\right] \tag{21}\]
and
\[V_{Y}=\left[\begin{array}{cccccc}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&sin\,\beta_{3}&cos\,\beta_{3}&0&0\\ 0&0&0&0&1&0\\ -sin\,\beta_{1}&0&0&0&0&cos\,\beta_{1}\end{array}\right] \tag{22}\]
for particular angles \(\beta_{0}\,..\,\beta_{3}\).
Thus one possible QLA algorithm that advances the 6-qubit \({\bf Q}\) from time \(t\) to time \(t+\Delta t\) is
\[{\bf Q}(t+\Delta t)=V_{Y}.V_{X}.{\bf U_{Y}}.{\bf U_{X}}.{\bf Q}(t) \tag{23}\]
Indeed, using Mathematica, one can show that with the collision angles
\[\theta_{0}=\frac{\delta}{4n_{x}}\quad,\qquad\theta_{1}=\frac{\delta}{4n_{y}} \quad,\qquad\theta_{2}=\frac{\delta}{4n_{z}}, \tag{24}\]
and
\[\beta_{0}=\delta^{2}\frac{\partial n_{y}/\partial x}{n_{y}^{2}}\quad,\quad \beta_{1}=\delta^{2}\frac{\partial n_{x}/\partial y}{n_{x}^{2}}\quad,\quad \beta_{2}=\delta^{2}\frac{\partial n_{z}/\partial x}{n_{z}^{2}}\quad,\quad \beta_{3}=\delta^{2}\frac{\partial n_{z}/\partial y}{n_{z}^{2}} \tag{25}\]
we will have a second order QLA representation of the 2D Maxwell continuum equations
\[\begin{array}{c}\frac{\partial q_{0}}{\partial t}=\frac{\delta^{2}}{\Delta t }\frac{1}{n_{x}}\frac{\partial q_{5}}{\partial y}+O(\frac{\delta^{4}}{\Delta t })\\ \frac{\partial q_{1}}{\partial t}=-\frac{\delta^{2}}{\Delta t} \frac{1}{n_{y}}\frac{\partial q_{5}}{\partial x}+O(\frac{\delta^{4}}{\Delta t })\\ \frac{\partial q_{2}}{\partial t}=\frac{\delta^{2}}{\Delta t} \frac{1}{n_{z}}\left[\frac{\partial q_{4}}{\partial x}-\frac{\partial q_{3}} {\partial y}\right]+O(\frac{\delta^{4}}{\Delta t})\\ \frac{\partial q_{3}}{\partial t}=-\frac{\delta^{2}}{\Delta t} \left[\frac{1}{n_{z}}\frac{\partial q_{2}}{\partial y}-\frac{\partial n_{z}/ \partial y}{n_{z}^{2}}q_{2}\right]+O(\frac{\delta^{4}}{\Delta t})\\ \frac{\partial q_{4}}{\partial t}=\frac{\delta^{2}}{\Delta t} \left[\frac{1}{n_{z}}\frac{\partial q_{2}}{\partial x}-\frac{\partial n_{z}/ \partial x}{n_{z}^{2}}q_{2}\right]+O(\frac{\delta^{4}}{\Delta t})\\ \frac{\partial q_{5}}{\partial t}=\frac{\delta^{2}}{\Delta t} \left[-\frac{1}{n_{y}}\frac{\partial q_{1}}{\partial x}+\frac{\partial n_{y}/ \partial x}{n_{y}^{2}}q_{1}+\frac{1}{n_{x}}\frac{\partial q_{0}}{\partial y} -\frac{\partial n_{x}/\partial y}{n_{x}^{2}}q_{0}\right]+O(\frac{\delta^{4}}{ \Delta t})\end{array} \tag{26}\]
under diffusion ordering, \(\Delta t\approx\delta^{2}\).
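As an illustration, a minimal NumPy transcription of the x-sweep, Eqs. (17), (18) and (24), could look as follows. The array layout `Q[c, x, y]` and the roll sign convention are our own choices, `n_y` and `n_z` are 2D arrays of the local refractive indices, and the y-sweep and external potential operators follow the same pattern:

```python
import numpy as np

def stream_x(Q, comps, shift):
    # S^{+-x}: shift the listed qubit components one lattice site along x.
    Qs = Q.copy()
    for c in comps:
        Qs[c] = np.roll(Q[c], shift, axis=0)   # Q has layout Q[c, x, y]
    return Qs

def collide_x(Q, n_y, n_z, delta, dagger=False):
    # C_X of Eq. (17), applied pointwise with the local angles of Eq. (24):
    # theta_1 = delta/(4 n_y) couples (q1, q5); theta_2 = delta/(4 n_z)
    # couples (q2, q4); q0 and q3 are untouched.
    t1, t2 = delta / (4.0 * n_y), delta / (4.0 * n_z)
    if dagger:
        t1, t2 = -t1, -t2
    Qc = Q.copy()
    Qc[1] = np.cos(t1) * Q[1] - np.sin(t1) * Q[5]
    Qc[5] = np.sin(t1) * Q[1] + np.cos(t1) * Q[5]
    Qc[2] = np.cos(t2) * Q[2] - np.sin(t2) * Q[4]
    Qc[4] = np.sin(t2) * Q[2] + np.cos(t2) * Q[4]
    return Qc

def U_X(Q, n_y, n_z, delta):
    # Eq. (18), applied right to left; shift = +1 moves a component in +x.
    C  = lambda q: collide_x(q, n_y, n_z, delta)
    Cd = lambda q: collide_x(q, n_y, n_z, delta, dagger=True)
    q = Cd(Q)
    q = stream_x(q, (1, 4), -1); q = C(q)
    q = stream_x(q, (1, 4), +1); q = Cd(q)
    q = stream_x(q, (2, 5), +1); q = C(q)
    q = stream_x(q, (2, 5), -1); q = C(q)
    q = stream_x(q, (1, 4), +1); q = Cd(q)
    q = stream_x(q, (1, 4), -1); q = C(q)
    q = stream_x(q, (2, 5), -1); q = Cd(q)
    q = stream_x(q, (2, 5), +1)
    return q
```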
### Conservation of Instantaneous Total Electromagnetic Energy in QLA Simulations
It is important to monitor the conservation of energy in the QLA, particularly since our current QLA is not fully unitary. The normalized total electromagnetic energy for a square lattice domain of length \(L\) is \(\mathcal{E}(t)\)
\[\mathcal{E}(t)=\frac{1}{L^{2}}\int_{0}^{L}\int_{0}^{L}dxdy\left[n_{x}^{2}E_{x}^{2}+n_{y}^{2}E_{y}^{2}+n_{z}^{2}E_{z}^{2}+\mathbf{B}^{2}\right]=\frac{1}{L^{2}}\int_{0}^{L}\int_{0}^{L}dxdy\,\mathbf{Q}\cdot\mathbf{Q}\,. \tag{27}\]
In our QLA simulations, we will consider the scattering of a 1D Gaussian pulse, propagating in the \(x\)-direction, off a localized 2D tensor dielectric object in the \(x-y\) plane. We choose \(L\) to be significantly larger than the extent of the dielectric object, so that for \(y\approx 0\) and for \(y\approx L\) the electromagnetic fields are those of the 1D Gaussian pulse, yielding a Poynting vector \(\mathbf{E}\times\mathbf{B}\) in the \(\hat{\mathbf{x}}\) direction. Thus the contribution to the Poynting flux \(\oint_{C}\mathbf{E}\times\mathbf{B}\cdot d\ell\) on \(y=0\) and on \(y=L\) is zero. We also run the time evolution only to \(t<t_{max}\), so that no fields reach the sides \(x=0\) and \(x=L\). With these parameters, the total electromagnetic energy of Eq. (27) satisfies \(\mathcal{E}(t)=const.\) for \(t<t_{max}\).
\(\mathcal{E}(t)\) is nothing but the norm of the qubit field \(\mathbf{Q}\), and it will be exactly conserved in a fully unitary QLA. One must also be careful with the ordering of the external potential angles, Eq. (25): they must be \(O(\delta^{2})\) in order to recover the Maxwell equations.
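Since \(\mathcal{E}(t)\) is just the lattice norm of \(\mathbf{Q}\), monitoring it costs a single reduction per diagnostic step. A minimal sketch, with the site sum standing in for the double integral of Eq. (27):

```python
import numpy as np

def total_energy(Q, L):
    """E(t) of Eq. (27) on the lattice; Q has shape (6, L, L)."""
    return float(np.sum(Q * Q)) / L**2

# Illustrative use inside the time loop:
# energy_history.append(total_energy(Q, L))
```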
While we will discuss our numerical QLA simulation of a 1D electromagnetic pulse scattering from a localized dielectric object in detail in Sect. 4, it is appropriate to present here some QLA results for the total energy. Since the QLA of Eq. (23) is perturbative, it recovers the 2D Maxwell equations as \(\delta\to 0\). For \(\delta=0.3\), we find the time variation in the total energy \(\mathcal{E}(t)\) shown in Fig. 1(a), with \(t_{max}=20{,}000\) lattice time steps.
On lowering the perturbation parameter to \(\delta=0.1\), there is a marked reduction in the time variation of \(\mathcal{E}(t)\), Fig. 1(b). To reach the same physics requires \(t_{max}=60k\) time steps.
Figure 1: The instantaneous total electromagnetic energy \(\mathcal{E}(t)\), Eq. (27), for various values of the perturbation parameter \(\delta\) : (a) \(\delta=0.3\), (b) \(\delta=0.1\). A more accurate QLA results from interleaving the external potentials with the unitary collide-stream operators. For \(\delta=0.01\), \(\mathcal{E}(t)\) shows no variation on this scale, with variations in the 9th significant figure. Lattice grid L = 8192.
However, if we interleave the external potential operators among the unitary collide-stream sequence (and similarly for the y-direction) in the form
\[\mathbf{V}_{\mathbf{X}}^{\prime}\mathbf{U}_{\mathbf{X}}=V_{X}^{\prime}S_{25}^{+ x}.C_{X}^{\dagger}.S_{25}^{-x}.C_{X}.S_{14}^{-x}.C_{X}^{\dagger}.S_{14}^{+x}.C_{X}.V_{X}^{ \prime}.S_{25}^{-x}.C_{X}.S_{25}^{+x}.C_{X}^{\dagger}.S_{14}^{+x}.C_{X}.S_{14}^ {-x}.C_{X}^{\dagger} \tag{28}\]
(with the corresponding potential angles reduced by a factor of 2), we find \(\mathcal{E}(t)\approx const.\) for all times; see Fig. 1(b). There is a further strong improvement toward \(\mathcal{E}(t)=const.\) for \(\delta=0.01\).
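Schematically, the interleaved update of Eq. (28) can be written as follows, a sketch built on the `stream`/`collide`/`qla_step` helpers above; `first_half_x` and `second_half_x` stand for the two four-step halves of the unitary x-sequence (again listed right to left), and `VX_half` is \(V_{X}^{\prime}\) built with the halved angles.

```python
def step_x_interleaved(Q, first_half_x, second_half_x, VX_half):
    # Eq. (28) read right to left: second half-block, V_X',
    # first half-block, V_X'.
    Q = qla_step(Q, second_half_x)
    Q = collide(Q, VX_half)
    Q = qla_step(Q, first_half_x)
    Q = collide(Q, VX_half)
    return Q
```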
## 4 Scattering of a Polarized Pulse from an Anisotropic Dielectric Object
We first consider a 1D Gaussian pulse propagating in a vacuum in the \(x\)-direction towards a localized anisotropic dielectric object, with diagonal tensor components that are conical in \(n_{z}(x,y)\) and cylindrical in the \(x\) and \(y\) directions, with \(n_{x}(x,y)=n_{y}(x,y)\); see Fig. 2.
### Scattering of 1D pulse with \(E_{z}\) polarization
When the 1D pulse with non-zero \(E_{z}(x,t),B_{y}(x,t)\) fields starts to interact with the 2D tensor dielectric \(\mathbf{n}(x,y)\), the scattered fields become 2D (see Fig. 3), with \(B_{y}(x,y,t)\) dependence. The QLA will then spontaneously generate a \(B_{x}(x,y,t)\) field so that \(\partial B_{x}/\partial x+\partial B_{y}/\partial y\approx 0\).
Because of the relatively weak dielectric tensor gradients for a cone, there is very little reflection back into the vacuum of the incident \(E_{z}\) field (Fig. 4). There is a localized transmitted \(E_{z}\) within the dielectric.
At \(t=36k\) we plot both \(E_{z}\) and \(B_{y}\), Figs. 5-6. Of considerable interest is the spontaneously generated \(B_{x}(x,y,t)\) field, required so that \(\nabla\cdot\mathbf{B}=0\). From Fig. 7 we see that \(B_{x}\) has dipole structure, since the plot of the field \(B_{x}<0\) in Fig. 7(b) is generated by rotating the plane through \(\pi\) about the axis \(x=L/2\).

Figure 2: Anisotropic tensor dielectric: (a) conical in \(n_{z}\), and (b) cylindrical in \(n_{x}=n_{y}\). Initially, a 1D Gaussian pulse propagates in the \(x\)-direction, with either a polarization \(E_{z}(x,t)<0\) or a polarization \(E_{y}(x,t)\), and scatters off this tensor dielectric object. In the region away from the tensor dielectric object, we have a vacuum with \(n_{i}=1.0\). In the dielectric, \(n_{i,max}=3.0\). Lattice domain \(8192^{2}\).
We find in our QLA simulations that \(\max_{x,y}\left[\nabla\cdot\mathbf{B}/|\mathbf{B}|\right]<10^{-3}\).
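This diagnostic is easily evaluated with central differences on the periodic lattice; a minimal sketch, assuming unit grid spacing and masking field-free sites to avoid division by zero:

```python
import numpy as np

def divB_diagnostic(Bx, By):
    """max_{x,y} |div B| / |B| via central differences on a periodic grid."""
    divB = (np.roll(Bx, -1, axis=0) - np.roll(Bx, 1, axis=0)) / 2.0 \
         + (np.roll(By, -1, axis=1) - np.roll(By, 1, axis=1)) / 2.0
    Bmag = np.sqrt(Bx**2 + By**2)
    mask = Bmag > 1e-12
    return float(np.max(np.abs(divB[mask]) / Bmag[mask]))
```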
### Scattering of 1D pulse with \(E_{y}\) polarization
We now turn to the 1D pulse with \(E_{y}\) polarization, propagating in the \(x\)-direction toward the 2D tensor dielectric object, Fig. 2. The other non-zero vacuum electromagnetic field is \(B_{z}(x,t)\). On interacting with the tensor dielectric \(\mathbf{n}(x,y)\), the scattered fields will develop a spatial dependence on \((x,y)\). Thus \(\nabla\cdot\mathbf{B}=0\) holds exactly, and no new magnetic field components need be generated. This is recognized by the QLA, and so the only non-zero magnetic field throughout the run is \(B_{z}(x,y,t)\).
In Fig. 8 we plot the \(E_{y}\)-field at time \(t=18k\), the same time snapshot as for the case of \(E_{z}\) polarization, Fig. 3. The significant differences in the scattered field arise from the differences between the cylinder dielectric dependence of \(n_{y}(x,y)\) and the cone \(n_{z}(x,y)\).
Also visible in Fig. 8 is an outward-propagating, nearly circular wavefront, reminiscent of the reflected pulse in 1D scattering. In particular, one sees elements of a \(\pi\) phase change in this reflected wavefront.
The corresponding \(E_{y}\) wavefronts at \(t=36k\) are shown in Fig. 9.
The accompanying \(B_{z}\) field of the initial 1D electromagnetic pulse is shown after its scattering from the tensor dielectric at \(t=18k\), Fig. 10, and at \(t=36k\), Fig. 11.
Finally we consider the last of the Maxwell equations to be enforced: \(\nabla\cdot\mathbf{D}=0\). The QLA establishes a qubit basis for the curl-curl subset of the Maxwell equations. For the initial polarization \(E_{z}(x,t)\) and refractive indices \(\mathbf{n}=\mathbf{n}(x,y)\), \(\nabla\cdot\mathbf{D}=0\) is automatically satisfied, while \(\nabla\cdot\mathbf{B}=0\) is satisfied spontaneously, through the self-consistent generation of the \(B_{x}\) field discussed above.
Figure 3: \(E_{z}\) after interacting with the localized tensor dielectric. Since the phase speed in the tensor dielectric is less than in the vacuum, the 2D structure in \(E_{z}\) lags the rest of the 1D pulse that has not interacted with the localized dielectric object (Fig. 2). The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 4: For early times, from the perspective of the tensor dielectric object the electromagnetic pulse within the dielectric is the \(transmitted\) field and has a localized \(E_{z}\) which becomes greater than the original \(E_{z}\) in the vacuum region. There is little \(reflected\) field since \(E_{z}\) will be predominantly interacting with the \(n_{z}\) component of the tensor dielectric. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 5: The \(E_{z}\) field at a late stage of development. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 6: The corresponding \(B_{y}\) field at time \(t=36k\) to the \(E_{z}\) field in Fig. 5. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 7: The spontaneously \(B_{x}\) field at time \(t=36k\) that is generated by the QLA so that \(\nabla\cdot\mathbf{B}=0\). This time corresponds to the \(E_{z}\) field in Fig. 5, and \(B_{y}\) field in FIg. 6. The dipole structure of \(B_{x}\) is clear on comparing (a) and (b). The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 8: \(E_{y}\) after interacting with the localized tensor dielectric. Since the cylindrical \(n_{y}\) dielectric has a sharper boundary layer than the conic \(n_{z}\) dielectric, there is now a marked “reflected” wavefront propagating into the vacuum region together with the “transmitted” part of the pulse into the dielectric region itself. This “reflected” wavefront is absent when the major scattering is off the conic dielectric component, Fig. 3. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 10: The \(B_{z}\) wavefronts corresponding to the \(E_{y}\) - field in Fig. 8. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Figure 9: The \(E_{y}\) wavefronts at a late stage of development, as the “reflected” pulse is about to reach the lattice boundaries. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Now, if the initial polarization is \(E_{y}(x,t)\), then \(\nabla\cdot{\bf B}=0\) is automatically satisfied, while an \(E_{x}\) field is spontaneously generated by the QLA so that \(\nabla\cdot{\bf D}=0\) is satisfied. In Fig. 12 we show the wavefronts of the \(E_{x}\) field at time \(t=18k\).
## 5 Summary
By determining a Dyson map, we have been able to construct a basis in which the evolution equations for the Maxwell equations are unitary. In particular, we have shown that for inhomogeneous non-magnetic dielectric media, the field basis \(({\bf E},{\bf B})\) does not lead to a unitary representation. However, a particular Dyson map shows that \(({\bf n}.{\bf E},{\bf B})\), where \({\bf n}\) is a diagonal tensor dielectric, is a basis for a unitary representation. Other unitary representations can be determined immediately from this basis by unitary transformation, in particular the Riemann-Silberstein-Weber basis.
Here we have concentrated on the basis \(({\bf n}.{\bf E},{\bf B})\), primarily because the fields are real and so lead to quicker computations. Our QLA directly encodes these fields into a qubit representation. A unitary set of interleaved collision-streaming operators is then applied to these qubits: the unitary collision operators entangle the qubits, while the streaming operators move this entanglement throughout the lattice. With our current set of unitary collision-streaming operators, we do not generate the effects of derivatives on the inhomogeneous medium. These effects are included by the introduction of external potential operators - but at the expense of losing the unitarity of the complete algorithm.
In this paper we have performed QLA simulations of 2D scattering of a 1D electromagnetic pulse from a localized Hermitian tensor dielectric object. Both polarizations are considered, with different field evolutions because of the anisotropy of the tensor dielectric. The QLA considered here is based on the two curl equations of Maxwell.
Figure 11: The \(B_{z}\) wavefronts at a late stage of development, as the “reflected” pulse is about to reach the lattice boundaries. The corresponding \(E_{y}\) field is shown in Fig. 9. The perspective (b) is obtained from (a) by rotating by \(\pi\) about the line \(y=L/2\).
Moreover, the QLA is a perturbative representation, with small parameter \(\delta\) representative of the spatial lattice width, reducing to the curl-curl Maxwell equations as \(\delta\to 0\). It is not at all obvious that the QLA has the right structure to recover the Maxwell equations; only through symbolic manipulations (Mathematica) do we establish this Maxwell limit. Hence it is of some interest to see how well the QLA satisfies the two divergence equations of Maxwell that are not directly encoded in the QLA process. We find spontaneous generation of fields in the QLA so that \(\nabla\cdot\mathbf{B}=0\) and \(\nabla\cdot\mathbf{D}=0\).
Finally we comment on the conservation of energy \(\mathcal{E}\):
\[\mathcal{E}(t)=\frac{1}{L^{2}}\int_{0}^{L}\int_{0}^{L}dxdy\left[n_{x}^{2}E_{x }^{2}+n_{y}^{2}E_{y}^{2}+n_{z}^{2}E_{z}^{2}+\mathbf{B}^{2}\right]\]
In QLA, \(\mathcal{E}=\mathcal{E}(t,\delta)\). Under appropriate scaling of the QLA operator angles, one recovers perturbatively the curl-curl Maxwell equations as \(\delta\to 0\). Moreover, we find that \(\mathcal{E}_{QLA}\to const.\) as \(\delta\to 0\). The QLA simulations presented here were run on a lattice grid of \(8192^{2}\), with \(\delta=0.1\).
The next step is to determine a fully unitary QLA for the Maxwell equations in anisotropic media. Conservation of energy would then be automatically satisfied as the norm of the qubit basis. Such a unitary algorithm would permit the QLA to be immediately encodable on a quantum computer. In the meantime, while we await error-corrected qubits and quantum computers with long decoherence times, our current QLAs are ideally parallelized on classical supercomputers without core saturation effects.
Figure 12: The \(E_{x}\) wavefronts at time \(18k\), spontaneously generated by the QLA so as to (implicitly) satisfy the Maxwell equation \(\nabla\cdot\mathbf{D}=0\). As the \(E_{x}<0\) plot perspective is generated from the \(E_{x}>0\) plot by rotating about the \(y=L/2\) axis through an angle \(\pi\), it is immediately seen that the \(E_{x}\) field strongly exhibits dipole structure.
## Acknowledgments
This research was partially supported by Department of Energy grants DE-SC0021647, DE-FG0291ER-54109, DE-SC0021651, DE-SC0021857, and DE-SC0021653. This work has been carried out partially within the framework of the EUROfusion Consortium. E.K. has received funding from the Euratom research and training program WPEDU under grant agreement no. 101052200, as well as from the National Program for Controlled Thermonuclear Fusion, Hellenic Republic. K.H. is supported by the National Program for Controlled Thermonuclear Fusion, Hellenic Republic. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
|
2309.13023 | Local infrared safety in time-ordered perturbation theory | We develop a general expression for weighted cross sections in leptonic
annihilation to hadrons based on time-ordered perturbation theory (TOPT). The
analytic behavior of the resulting integrals over spatial momenta can be
analyzed in the language of Landau equations and infrared (IR) power counting.
For any infrared-safe weight, the cancellation of infrared divergences is
implemented locally at the integrand level, and in principle can be evaluated
numerically in four dimensions. We go on to show that it is possible to
eliminate unphysical singularities that appear in time-ordered perturbation
theory for arbitrary amplitudes. This is done by reorganizing TOPT into an
equivalent form that combines classes of time orderings into a ``partially
time-ordered perturbation theory". Applying the formalism to leptonic
annihilation, we show how to derive diagrammatic expressions with only physical
unitarity cuts. | George Sterman, Aniruddha Venkata | 2023-09-22T17:34:37Z | http://arxiv.org/abs/2309.13023v2 | ###### Abstract
We develop a general expression for weighted cross sections in leptonic annihilation to hadrons based on time-ordered perturbation theory (TOPT). The analytic behavior of the resulting integrals over spatial momenta can be analyzed in the language of Landau equations and infrared (IR) power counting. For any infrared-safe weight, the cancellation of infrared divergences is implemented locally at the integrand level, and in principle can be evaluated numerically in four dimensions. We go on to show that it is possible to eliminate unphysical singularities that appear in time-ordered perturbation theory for arbitrary amplitudes. This is done by reorganizing TOPT into an equivalent form that combines classes of time orderings into a "partially time-ordered perturbation theory". Applying the formalism to leptonic annihilation, we show how to derive diagrammatic expressions with only physical unitarity cuts.
YITP-SB-2023-28
**Local infrared safety in**
**time-ordered perturbation theory**
George Sterman and Aniruddha Venkata
C.N. Yang Institute for Theoretical Physics and Department of Physics and Astronomy
Stony Brook University, Stony Brook, NY 11794-3840 USA
November 6, 2021
###### Contents
* 1 Introduction
* 2 Time-ordered perturbation theory and leptonic annihilation
* 2.1 Cross sections in TOPT
* 2.2 Unphysical cuts in TOPT
* 3 Local Infrared safety
* 3.1 Reorganized cross sections
* 3.2 Analysis of pinch surfaces
* 3.3 Logarithmic singularities and cancellation
* 3.4 All-order example: energy in a "cone"
* 3.5 Low-order example
* 4 Posets in TOPT
* 4.1 Definitions
* 4.2 Examples in vacuum polarization diagrams
* 5 PTOPT: TOPT without unphysical singularities
* 5.1 Time integrals of extremal vertices; the covering set
* 5.2 Integrals of induced extrema
* 6 Vacuum polarization graphs, leptonic annihilation, and weighted cross sections
* 7 Summary
* A Embedded Extrema
* B An explicit algorithm to identify \(\gamma^{k+1}\)
## 1 Introduction
In the light of prospects for increasingly high-statistics data from the Large Hadron Collider and proposed facilities, the need for precision in perturbative calculations of collider cross sections is widely recognized [1, 2, 3]. A recurring theme in these discussions has been the possibility of carrying out calculations directly in four dimensions [4, 5, 6]. For physical observables like cross sections, this involves canceling infrared (IR) singularities at the level of integrands, resulting in expressions amenable to numerical calculation [7, 8, 9]. The potential for such an approach is implicit in formal proofs of the infrared safety of jet and related weighted cross sections [10, 11], based on light-cone ordered perturbation theory, and in proofs of factorization based on time-ordered perturbation theory (TOPT) [12, 13, 14, 15]. In each case, cancellations of infrared divergences locally in momentum space become manifest after integration over loop energies or light-cone variables. The same fundamental result was found in Refs. [7, 8, 9], which carry out energy integrals using the technique of Loop Tree Duality [4, 16]. In this paper, inspired by the results of [7, 8, 9], and by the general analyses of perturbation theory in Refs. [17, 18], we return to time-ordered perturbation theory, to provide a complementary perspective on these important results.
We will reconsider infrared (IR) safe cross sections for hadronic states produced in lepton pair annihilation from this point of view. In this case, unitarity implies that cross sections can be computed as sums over the final states that correspond to cuts of vacuum polarization diagrams. The contribution of any fixed cut, of course, includes infrared divergences, which cancel in the sum over cuts. The result of our analysis is a manifestly power-counting finite expression for such a cross section, given as a sum
over time-ordered, cut vacuum polarization diagrams. All terms contributed to this sum by a given diagram are evaluated with one three-dimensional integration per loop of the original uncut diagram, without change of variables as we sum over the cuts (states) of the diagram.
In the course of our discussion in TOPT, we will encounter apparent, unphysical contributions to the cross section, associated with the vanishing of energy denominators corresponding to cuts that divide the diagram into more than two parts. We'll see how these unphysical contributions cancel in the sum over time orders. This observation opens the door to a reformulation of TOPT, in which unphysical energy denominators are altogether absent. Such a form, in fact, has already been derived in Ref. [18], from a distinct but related point of view. We will present an alternative derivation, based on what we will call partially time-ordered perturbation theory (PTOPT). These results, like those in Ref. [18], are quite general, providing a reformulation for arbitrary amplitudes, and apply to light-cone- as well as time-ordered perturbation theory.
In Sec. 2 we will review some properties of time-ordered perturbation theory, and we will observe that within a single time-ordered diagram, many contributions to the imaginary part are unphysical, involving the disconnected production of finite-energy particles out of the vacuum. We will show, however, that upon summing over time orders, these contributions cancel among themselves at the amplitude level. In Sec. 3 we construct a locally IR finite formula for weighted cross sections in TOPT for leptonic annihilation to hadrons. Each such expression is closely related to the imaginary part of the vacuum polarization amplitude for the current that couples to quantum chromodynamics in the Standard Model. We will derive the analogs of Landau equations for this class of cross sections, along with power counting at the "pinch surfaces" where these equations are satisfied.
In subsequent sections, we show how to reorganize TOPT to eliminate these unphysical singularities manifestly, by developing partially time-ordered perturbation theory, with results similar to those of Ref. [18]. These partial sums over time orders can be carried out algorithmically, and in effect reduce the number of terms necessary to compute amplitudes and cross sections. To this end, in Sec. 4, we will recall a mathematical structure that underlies each time order: the partially ordered set (poset), and show by examples how partial ordering can be used to eliminate unphysical cuts.
Section 5 is dedicated to combining the full TOPT contributions associated with each poset, implementing the constraints associated with the poset. In doing so, we will sum over all time orders consistent with a poset. Finally, in Sec. 6, we will return to weighted cross sections in leptonic annihilation, and adapt our formulas for the weighted cross section case. We conclude with a short summary and comments on future directions.
## 2 Time-ordered perturbation theory and leptonic annihilation
Time-ordered perturbation theory provides a systematic method for integrating all loop energies in an arbitrary diagram [14, 15]. For the treatment of electroweak annihilation, the diagrams of interest are two-point correlators of electroweak currents. More generally, we can consider any amplitude \(A\left(q^{\prime}_{j},q_{i}\right)\), where \(q_{i}\) represents a set of incoming momenta, and \(q^{\prime}_{j}\) a set of outgoing momenta, any or all of which may be off-shell. In time-ordered perturbation theory, the expression for such an amplitude is of the form
\[A\left(q^{\prime}_{j},q_{i}\right)\ =\sum_{G\in G_{A}}\ \int d{\cal L}_{G}\ \sum_{\tau_{G}}\ \mathbb{N}_{\tau_{G}}\ \prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}}\prod_{s=1}^{V_{G}-1}\frac{i}{E_{s}- \sum_{j\in s}\omega_{j}+i\epsilon}, \tag{2.1}\]
where \(G_{A}\) represents the set of graphs in Lorentz-invariant perturbation theory that contribute to the amplitude and \(\tau_{G}\) represents the time orders of the \(V_{G}\) vertices of each graph \(G\), with \(N_{G}\) lines. Each ordering, \(\tau_{G}\) of the vertices specifies a set of states, labeled \(s=1\ldots V_{G}-1\), whose total energy is the sum of the on-shell energies of all the particles in that state, defined to flow from earlier to later times. The denominators in Eq. (2.1) are "energy deficits", for each state \(s\), the difference between the sum of the on-shell energies \(\omega_{i}=\sqrt{\vec{p}_{i}^{2}+m_{i}^{2}}\) for lines of mass \(m_{i}\) in state \(s\) and the external energy, \(E_{s}\), that has flowed into the diagram before state \(s\). The external net energy of each state depends on the specific ordering \(\tau_{G}\).
In Eq. (2.1), \(\int d{\cal L}_{G}\) represents the spatial loop momentum space of the graph \(G\), which is the same for every time order. Each line momentum \(\vec{p}_{j}\) is a linear function of loop and external momenta, which in this way determine the energies \(\omega_{j}\). The factor \(\mathbb{N}_{\tau_{G}}\) represents the perturbative numerator, consisting of overall constants and spin-dependent momentum factors, with the energies of every line evaluated on-shell, that is with energies \(\omega_{i}\) defined as above. The signs of these energies are always positive for flow from the earlier to later vertex connected by the line in question. In this way, the numerator factor depends on the time order.
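As a check on these conventions, the product of energy deficits in Eq. (2.1) for a single time order can be evaluated directly. The following sketch (with purely illustrative inputs) encodes one ordering as a list of intermediate states, each carrying its external energy \(E_{s}\) and the on-shell energies \(\omega_{j}\) of its lines:

```python
import numpy as np

def topt_denominators(states, eps=1e-8):
    """Product over states s of i / (E_s - sum_j omega_j + i*eps),
    as in Eq. (2.1), for one time order."""
    val = 1.0 + 0.0j
    for E_s, omegas in states:
        val *= 1j / (E_s - sum(omegas) + 1j * eps)
    return val

# A toy ordering with three intermediate states; the external energy
# Q = 5 has flowed in before the second and third states:
states = [(0.0, [1.2, 0.7]),
          (5.0, [1.2, 0.7, 2.0]),
          (5.0, [3.1])]
print(topt_denominators(states))
```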
### Cross sections in TOPT
We consider annihilation cross sections for leptons to hadrons, at lowest order in electroweak couplings. In this case, we can represent our cross sections as
\[\sigma(Q)\quad\equiv\quad\sum_{N}\left(2\pi\right)^{4}\delta^{4}\left(q-P_{N} \right)\,|\langle 0|\,{\cal J}(0)\,|N\rangle|^{2}\,, \tag{2.2}\]
where \({\cal J}\) represents an electroweak current, implicitly contracted with tree-level leptonic tensors and vector boson propagators, which we suppress, and where \(q^{2}=Q^{2}\).
The latter will play no role in these arguments, and we will consistently absorb them into the currents. As a total cross section, \(\sigma(Q)\) is related by the optical theorem to a forward-scattering amplitude, here a two-point correlation function of the currents \({\cal J}\),
\[\sigma(Q) = \ \ {\rm Im}\,\Pi(Q)\,,\] \[\Pi(Q) = i\,\int d^{4}x\,e^{-iq\cdot x}\langle 0|T[{\cal J}(0){\cal J}(x )]|0\rangle\,. \tag{2.3}\]
In this paper, we will use time-ordered perturbation theory to derive expressions for sets of weighted cross sections that generalize Eq. (2.2),
\[\Sigma[f,Q] \equiv \ \sum_{N}\,(2\pi)^{4}\delta^{4}\,(q-P_{N})\ f(N)\,\langle 0|\,{ \cal J}(0)\,|N\rangle|^{2}\,, \tag{2.4}\]
with \(f(N)\) a function of the kinematic variables that describe state \(N\), and with currents normalized as for the total cross section, Eq. (2.2). Our goal will be to derive a set of expressions that exhibit manifestly the cancellation of infrared divergences in such quantities [10, 11, 19]. First, however, we will discuss a bit more how the general relations (2.2) and (2.3) are realized in TOPT.
In the notation of Eq. (2.1), for the two-point correlation function of Eq. (2.3), a single momentum \(q\) flows into and out of the diagram, and we may choose to work in its rest frame, \(q^{\mu}=(Q,\vec{0})\). In this case, our vacuum polarization diagram \(\Pi(Q)\) is given by
\[\Pi(Q) = \ \sum_{G\in G_{\Pi}}\ \int d{\cal L}_{G}\ \prod_{i=1}^{N_{G}} \frac{1}{2\omega_{i}}\ \sum_{\tau_{G}}\ \mathbb{N}_{\tau_{G}}\ \pi_{\tau_{G}}(Q,{\cal L}_{G})\,,\] \[\pi_{\tau_{G}}(Q,{\cal L}_{G}) = \ \prod_{s=1}^{V_{G}-1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{ j}+i\epsilon}\,, \tag{2.5}\]
where the sum is now over the set of diagrams \(G_{\Pi}\) that contribute to the two-point correlation function that mediates the leptonic annihilation process. We have suppressed vector indices of the currents associated with the decay of the mediating vector bosons. In this expression, the state-dependent factor \(\lambda_{s}\) enforces the condition that the net external energy in state \(s\) is positive only when state \(s\) occurs after momentum \(q\) flows into the diagram, here and below at vertex \(i\), and before the same energy flows out, here and below at vertex \(o\),
\[\lambda_{s} = \ 1\,,\quad o>s\geq i\,,\] \[\lambda_{s} = -1\,,\quad i>s\geq o \tag{2.6}\] \[= \ 0\,,\quad{\rm otherwise}\,.\]
In Eq. (2.5), the set of states \(s\) depends implicitly on the time order, \(\tau_{G}\). Finally, we notice that energy denominators are negative semi-definite for massless particles and negative definite for massive particles except when \(o>s\geq i\).
Each covariant diagram with \(V_{G}\) vertices provides \(V_{G}!\) terms, the sum over \(\tau_{G}\) in Eq. (2.5), each with \(V_{G}-1\) denominators. The vanishing of a subset of these denominators results in a branch cut in the function \(\Pi(Q)\). Each such branch cut corresponds to an on-shell intermediate state that is at or above threshold. These are the physical, or unitarity cuts of the underlying diagram. In time-ordered perturbation theory, such states separate a vacuum polarization diagram \(G\) into a connected amplitude and complex-conjugate amplitude, where external momentum flows into the amplitude and out of the complex conjugate. Clearly, such cuts are possible only in orderings for which the vertex \(i\) is earlier than vertex \(o\). By unitarity in the form of the optical theorem, the sum of these singularities gives the total cross section for leptonic annihilation or equivalently the decay width of the relevant off-shell electroweak boson. We shall see below, however, that even when \(o>i\) in the uncut diagram, many cuts of \(\pi_{\tau_{G}}\) in Eq. (2.5) are actually unphysical, corresponding to the production of particles out of the vacuum. One of the aims of our discussion is to reorganize the sum of time orders to show how these unphysical singularities always cancel. In Secs. 4 and 5, we will show how to reorganize the time-ordered series, reducing the number of terms and eliminating unphysical singularities without having to rely on this cancellation.
Setting aside for a moment the cancellation of unphysical cuts, let us write the imaginary part of the amplitude \(\Pi(Q)\), Eq. (2.5), as a sum over its intermediate states, and generalize the resulting sum over final states to more general weighted cross sections. To do so, we will need first to write the TOPT expression for \(\Pi(Q)\) as a sum over fixed states, given by cuts \(C\) of an arbitrary diagram. Here each \(C\) is a particular state \(s\) in Eq. (2.5) that is set on shell. Summing over \(C\), we know from the optical theorem that we reconstruct the imaginary part of the forward scattering graph in Eq. (2.5),
\[{\rm Im}\,\Pi(Q) = \ \sum_{G}\,\sum_{C}\,\sum_{\tau_{L[G/C]}}\,\sum_{\tau_{R[G/C]}}\pi^{(C)}_{\tau_{L[G/C]}\,\cup\,\tau_{R[G/C]}}(Q)\] \[\pi^{(C)}_{\tau_{L[G/C]}\,\cup\,\tau_{R[G/C]}}(Q) = \ \int d{\cal L}_{G}\,\,{\mathbb{N}}_{\tau_{G}}\,\prod_{i=1}^{N}\frac{1}{2\omega_{i}}\,\prod_{s=C+1}^{V_{G}-1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{j}-i\epsilon}\] \[\ \ \ \ \ \times\ (2\pi)\,\delta\left(Q-\sum_{j\in C}\omega_{j}\right)\prod_{s=1}^{C-1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{j}+i\epsilon}\,, \tag{2.7}\]
where \(\tau_{L[G/C]}\) represents the time orders of the graph \(G\) on the left of the cut, \(C\), and \(\tau_{R[G/C]}\) represents the time orders of the graph \(G\) on the right of cut \(C\). The union of these orders, \(\tau_{L[G/C]}\cup\tau_{R[G/C]}\), uniquely specifies an ordering, \(\tau_{G}\), of the full diagram, with all vertices in the conjugate after all vertices in the amplitude. For a
given underlying diagram, \(G\), orderings \(\tau_{L[G/C]}\) and \(\tau_{R[G/C]}\) specify diagrammatic contributions to the amplitude and complex conjugate amplitude, respectively, to produce final state \(C\) by the action of current \({\cal J}\). As above, the function \(\mathbb{N}_{\tau_{G}}\) absorbs overall factors associated with vertices, and the overall factor of \(i\) in Eq. (2.3), all of which we do not exhibit. We note, however, that each vertex is associated with a factor of \(i\), and that when all denominators are real and nonzero, the diagram is real. Note that to satisfy the delta function we must have \(\lambda_{C}=+1\), so that the external momentum must flow into the diagram in the amplitude (\(s<C\)) and out from the complex conjugate (\(s>C\)). Then \(\lambda_{s}=0\) or \(1\), and not \(-1\), in both the amplitude and complex conjugate.
A complete discussion of the proof of Eq. (2.7), including symmetry factors, is given in Ref. [14], but here we observe that it involves the relationship between time orders of the full diagram and of amplitudes separated by its cuts,
\[\sum_{\tau_{G}}\sum_{C}\ =\ \sum_{C}\sum_{\tau_{L[G/C]}}\sum_{\tau_{R[G/C]}}, \tag{2.8}\]
which the summations satisfy by definition. A subtle feature of the sums is that the diagrams \(L[G/C]\) and \(R[G/C]\) specified by these orderings need not be connected on the left and right of the cut. As a result, Eq. (2.7) includes unphysical terms that are not found on the right-hand side of Eq. (2.2), which defines the cross section in terms of connected amplitudes. In the next sub-section, we show how such contributions occur, and that they always cancel in the sum over time orders, as they must by the optical theorem. Notice further that many of the final states \(C\) on both sides of this relation do not contribute to the imaginary part, because they correspond to states for which \(\lambda_{s}\) equals \(0\) or \(-1\) in Eq. (2.6). The full cross section, of course, involves a leptonic tensor, and depends on the specific electroweak currents at the vertices we have labeled \(i\) and \(o\). As noted above, for simplicity, we have suppressed these familiar factors, which play no direct role in our discussion, and refer to expressions like Eq. (2.7) as cross sections.
We now consider a weighted lepton annihilation cross section where the sum over final states \(C\) is weighted by a set of functions \(f_{C}(p_{i})\), which depend only on particle momenta appearing in state \(C\). The relevant TOPT energy factors, denominators, and energy delta function for this weighted cross section can be organized as
\[\Sigma[f,Q] = \ \sum_{G}\sum_{C}\sum_{\tau_{L[G/C]}}\sum_{\tau_{R[G/C]}}\,\int d {\cal L}_{G}\ \sum_{\tau_{G}}\,\mathbb{N}_{\tau_{G}}\ \prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}}\,\sigma^{(C)}_{\tau_{L[G/C]}\cup\tau_ {R[G/C]}}[f,{\cal L}_{G}]\,,\] \[\sigma^{(C)}_{\tau_{G}}[f,{\cal L}_{G},Q] = \prod_{s=C+1}^{V_{G}-1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega _{j}-i\epsilon}\ f_{C}(\vec{q}_{1}\ldots\vec{q}_{k_{C}})\]
\[\times\,(2\pi)\,\delta\left(Q-\sum_{j\in C}\omega_{j}\right)\prod_{s=1}^{C -1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{j}+i\epsilon}\,, \tag{2.9}\]
where we understand that \(\tau_{G}=\tau_{L[G/C]}\cup\tau_{R[G/C]}\). Here \(f_{C}(\{\vec{q}_{i}\})\) is a weight function, depending in general on the spatial momenta, \(\{\vec{q}_{i}\}=\{\vec{q}_{1}\ldots\vec{q}_{k_{C}}\}\) and masses of particles in the state \(C\). This expression differs from the corresponding factor in the imaginary part of the forward scattering graph only by the weight function \(f_{C}\), and setting \(f_{C}=1\), gives back Eq. (2.7). At this point, we observe that the sum over time orders in Eq. (2.9) can be thought of as generated from connected diagrams only, so that the sum does not include the disconnected diagrams encountered in Eq. (2.7). Because, as we observed above and will show in the next section, these contributions cancel among themselves, the sums are nevertheless effectively identical.
In what follows, we will study infrared safe weight functions, for which
\[f_{C}(\vec{q}_{1},\ldots,\vec{q}_{i},\ldots,\vec{q}_{j-1},\xi\vec{q}_{i},\vec{q}_{j+1},\ldots,\vec{q}_{k_{C}})\quad=\quad f_{C/j}(\vec{q}_{1},\ldots,(1+\xi)\vec{q}_{i},\ldots,\vec{q}_{j-1},\vec{q}_{j+1},\ldots,\vec{q}_{k_{C}})\,, \tag{2.10}\]
for any real \(\xi>-1\), where \(C/j\) denotes a state with \(k_{C}-1\) particles. Such weights are unchanged by the emission of zero-momentum particles (\(\xi=0\)) or by the emission or recombination of massless collinear particles. This is a familiar condition, of course, which ensures the cancellation of soft and collinear (collectively, infrared) singularities. Requirements on the manner in which the weight functions approach the equalities of Eq. (2.10) are discussed in Refs. [11, 19].
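As a numerical illustration of Eq. (2.10), consider the energy flowing into a fixed cone about an axis \(\hat{n}\), in the spirit of the "energy in a cone" weight of Sec. 3.4 (the specific function below is our illustrative choice, not the paper's): for massless momenta, soft emission (\(\xi=0\)) and collinear splitting (\(\xi\geq 0\)) leave it unchanged.

```python
import numpy as np

def cone_energy(qs, axis, delta_c=0.3):
    """Sum of |q| over massless momenta within angle delta_c of `axis`
    (an illustrative IR-safe weight)."""
    axis = axis / np.linalg.norm(axis)
    w = 0.0
    for q in qs:
        E = np.linalg.norm(q)
        if E > 0 and np.dot(q, axis) / E > np.cos(delta_c):
            w += E
    return w

rng = np.random.default_rng(0)
n_hat = np.array([0.0, 0.0, 1.0])
qs = [rng.normal(size=3) for _ in range(4)]
xi = 0.7
split = qs + [xi * qs[0]]              # state C: emit q_j = xi * q_0
merged = [(1 + xi) * qs[0]] + qs[1:]   # recombined state C/j of Eq. (2.10)
print(cone_energy(split, n_hat), cone_energy(merged, n_hat))  # equal
```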
Our goal below is to exhibit an expression for such a weighted cross section in which the cancellations implied by Eq. (2.10) can be made explicit, so that all cancellations take place in four dimensions, without infrared regularization. First, however, we resolve the apparent difference between the sums over time-ordered diagrams in the unitarity condition, Eq. (2.7), and in the corresponding expression for weighted cross sections, Eq. (2.9).
### Unphysical cuts in TOPT
Let us now turn our attention to unphysical cuts in time-ordered perturbation theory. By unphysical cuts, we refer to states that contribute to the forward-scattering diagram but which, when they go on-shell (that is, when the corresponding denominator vanishes), do not separate the diagram into connected diagrams to the right and left. Such cuts of the diagram do not appear in the sum over final states in the optical theorem, Eq. (2.3). What we will show is that the sum over all time-ordered diagrams that share any specific unphysical cut vanishes. This result appears to be the analog in TOPT of the cancellation of so-called spurious singularities in loop-tree duality [9].
To establish some notation, we say the vertices are in the set \(V=\{i,b_{1},b_{2},b_{3},\ldots b_{n},o\}\), where \(i\), \(o\) are the vertices at which the external momentum \(q\) flows in and out, respectively. We take \((b_{P_{1}},b_{P_{2}},\ldots,b_{P_{n+2}})\) to represent an arbitrary time order labeled by the permutation \(P\). The way in which unphysical cuts appear in TOPT is illustrated by the low-order example in Fig. 1. In TOPT, intermediate states may include particles created from the vacuum, with no available external energy. If such a state precedes the vertex at which momentum flows into the diagram, as in state \(C_{1}\) of the figure, the corresponding energy denominator is negative semi-definite, vanishing only if all particles are massless, and then only on the set of zero measure where all particles carry zero spatial momentum. All denominators in states such as this are of the form \(\frac{i}{-\sum_{j}\omega_{j}+i\epsilon}\). We shall refer to such states as "vacuum states".
State \(C_{2}\) in Fig. 1 describes the amplitude for the same production of particles from the vacuum, which, however, appears mixed with particles that share the energy flowing into the diagram. Denominators like those of state \(C_{2}\) can be of the form \(\frac{i}{Q-\sum_{j}\omega_{j}+i\epsilon}\). The resulting mixed denominators can vanish in this configuration and can contribute to the imaginary part of the specific time-ordered diagram. We refer to these states as "pseudo-physical".
We now prove that the cross section evaluated on a pseudo-physical cut is identically zero. To do so, we start with the expression for the denominators, \(s\), of the amplitude \(a_{L[G/C]}\left(Q,{\cal L}_{L[G/C]}\right)\), \(C\geq s\) for any final state \(C\) of diagram \(G\), Eq. (2.7) summed over its time orders,
\[2\pi\delta\left(Q-\sum_{j\in C}\omega_{j}\right)\,a_{L[G/C]} \left(Q,{\cal L}_{L[G/C]}\right) = \,2\pi\delta\left(Q-\sum_{j\in C}\omega_{j}\right) \tag{2.11}\] \[\times\,\sum_{\tau_{L[G/C]}}\prod_{s=1}^{C}\frac{i}{Q\lambda_{s} -\sum\limits_{j\in s}\omega_{j}+i\epsilon}\,,\]
Figure 1: An example of a graph with unphysical cuts of two types. Here, no external momentum is ordered earlier than state \(C_{1}\), while momentum \(Q\) enters prior to state \(C_{2}\). The state \(C_{1}\) is a vacuum cut and the state \(C_{2}\) is a pseudo-physical cut.
where we exhibit as well the energy-conserving delta function of final state \(C\). (Note that \(C\) denotes both the final state and the integer label on the vertex that immediately precedes the final state.) At this point, diagram \(a_{L}\) may or may not be connected. What we will show is that if it is not connected, the sum over its time orderings vanishes when state \(C\) is on-shell.
A useful technique in our analysis, here and below, will be a representation of the denominators in terms of integrals over times,
\[2\pi\delta\left(Q-\sum_{j\in C}\omega_{j}\right)\,a_{L[G/C]}\,=\,\sum_{P(L[G/C])}\prod_{\alpha_{P}=1}^{C}\int_{-\infty}^{t_{\alpha_{P}+1}}dt_{\alpha_{P}}\,\,e^{-i\left(\delta_{\alpha_{P},i}Q+\eta_{j}^{(\alpha_{P})}(\omega_{j}-i\epsilon)\right)t_{\alpha_{P}}}\,. \tag{2.12}\]
Here, we have written the sum over time orders as the sum over all ordered permutations, \(P(L[G/C])\), of the set of \(C\) time variables \(\{t_{\alpha}\}\) associated with the vertices \(\alpha\) of diagram \(L[G/C]\). For each order, \(P\), we define \(\eta_{j}^{(\alpha_{P})}\) as the incidence matrix of the vertex \(\alpha_{P}\), defined in a given time order by
\[\eta_{j}^{(\alpha_{P})} = 1\,,\quad\mbox{line $j$ enters vertex $\alpha_{P}$}\,,\] \[\eta_{j}^{(\alpha_{P})} = -1\,,\,\,\mbox{line $j$ leaves vertex $\alpha_{P}$}\,,\] \[\eta_{j}^{(\alpha_{P})} = 0\,,\quad\mbox{otherwise}\,. \tag{2.13}\]
All lines are emitted and then absorbed, except for lines that appear in (final) state \(C\). As a result, we have in Eq. (2.12), for any state \(s\leq C\),
\[\sum_{\alpha_{P}\leq s}\eta_{j}^{(\alpha_{P})} = \quad 0\,,\,\,j\notin s\,,\] \[\sum_{\alpha_{P}\leq s}\eta_{j}^{(\alpha_{P})} = \quad-1\,,\,\,j\in s\,. \tag{2.14}\]
The term \(\delta_{\alpha_{P},i}Q\) in Eq. (2.12) contributes the energy flowing into the diagram at vertex \(i\), which must always be in the diagram specified by the time order \(\tau_{L[G/C]}\), for the final state \(C\) to be on-shell. Using this result and Eq. (2.14), we readily see that each \(\alpha_{P}\) time integral produces the TOPT denominator in Eq. (2.11) for the state immediately following it, times a phase whose exponent is proportional to the following time, \(t_{\alpha_{P}+1}\). We define \(t_{C+1}=\infty\), for the final integral in each order, which produces the delta function that enforces the condition that the energy flowing in, \(Q\), equals the energy flowing out \((\sum_{j\in C}\omega_{j})\).
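This correspondence can be checked numerically for a single vertex. For the earliest vertex all lines leave (\(\eta_{j}=-1\)), so integrating its time up to \(t=0\) must give the denominator of the first (vacuum) state; a minimal sketch, with illustrative values and a finite \(\epsilon\) chosen large enough that the damped integrand is negligible at the lower cutoff:

```python
import numpy as np
from scipy.integrate import quad

# Check: int_{-inf}^{0} dt exp[(i*W + eps)*t] = 1/(eps + i*W)
#                                             = i/(-W + i*eps),
# the TOPT denominator of a vacuum state of total energy W.
W, eps = 2.3, 0.2
re, _ = quad(lambda t: np.exp(eps * t) * np.cos(W * t), -100.0, 0.0, limit=400)
im, _ = quad(lambda t: np.exp(eps * t) * np.sin(W * t), -100.0, 0.0, limit=400)
numeric = re + 1j * im
exact = 1j / (-W + 1j * eps)
print(numeric, exact)   # agree to quadrature accuracy
```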
For every ordering \(P(L)\) in Eq. (2.12), the energy-conservation delta function is generated by the final time integral, which is unbounded on both sides: \(\int_{-\infty}^{\infty}dt_{C}\). The vertex associated with this integral depends, of course, on the specific time order. At this point, it will be useful to rewrite (2.12) by introducing an auxiliary maximum time, \(t_{max}\) that is the same for all orders. We do this by using the following identity
for each order \(P(L)\),
\[\int_{-\infty}^{\infty}dt_{C}\,e^{-i\left(\delta_{C,i}Q\,+\,\eta_{j}^{(C)}(\omega_{j}-i\epsilon)\right)t_{C}}\prod_{\alpha=1}^{C-1}\int_{-\infty}^{t_{\alpha+1}}dt_{\alpha}\,e^{-i\left(\delta_{\alpha,i}Q\,+\,\eta_{j}^{(\alpha)}(\omega_{j}-i\epsilon)\right)t_{\alpha}}\] \[= \left[-i(Q-\sum_{j\in C}\tilde{\omega}_{j})\right]\,\int_{-\infty}^{\infty}dt_{max}\,\int_{-\infty}^{t_{max}}dt_{C}\,e^{-i\left(\delta_{C,i}Q\,+\,\eta_{j}^{(C)}(\omega_{j}-i\epsilon)\right)t_{C}} \tag{2.15}\] \[\times \prod_{\alpha=1}^{C-1}\int_{-\infty}^{t_{\alpha+1}}dt_{\alpha}\,e^{-i\left(\delta_{\alpha,i}Q\,+\,\eta_{j}^{(\alpha)}(\omega_{j}-i\epsilon)\right)t_{\alpha}}\,.\]
On the right-hand side of this equation, the integral over \(t_{C}\) produces a denominator that cancels the overall factor of \(-i(Q-\sum\limits_{j\in C}\tilde{\omega}_{j})\) in Eq. (2.15), while the new, \(t_{max}\), integral reproduces the energy conservation delta function, of the same argument. We can implement this identity for all the time orders, \(P\), in Eq. (2.12).
In the case when the amplitude breaks up into two (or more) disconnected pieces, \(G_{1}\) and \(G_{2}\), we may write it as the product of two factors, with \(n_{1}\) and \(n_{2}\) vertices each, \(n_{1}+n_{2}=C\), with a sum over time orders, or permutations \(P^{(1)}\) and \(P^{(2)}\), of the independent disconnected vertices, whose times can be integrated independently to \(t_{max}\) in Eq. (2.15). For definiteness, we assume that vertex \(i\), at which external energy flows in, attaches to the subdiagram \(G_{1}\), with \(n_{1}\) vertices. For such a diagram, we then have in place of Eq. (2.12), the equivalent form
\[a_{L[G/C]}\left(Q,{\cal L}_{L[G/C]}\right)2\pi\delta\left(Q- \sum_{j\in C}\omega_{j}\right) = \left[-i(Q-\sum_{j\in C}\omega_{j})\right]\int_{-\infty}^{ \infty}dt_{max} \tag{2.16}\] \[\times \left(\sum_{P^{(1)}}\prod_{\alpha=1}^{n_{1}}\int_{-\infty}^{t_{ \alpha+1}}dt_{\alpha}e^{-i(\delta_{P^{(1)}_{\alpha,i}}Q+\eta_{j}^{(P^{(1)}_{ \alpha})}(\omega_{j}-i\epsilon))t_{\alpha}}\right)\] \[\times \left(\sum_{P^{(2)}}\prod_{\alpha=1}^{n_{2}}\int_{-\infty}^{ \tilde{t}_{\alpha+1}}d\tilde{t}_{\alpha}e^{-i(\eta_{j}^{(P^{(2)}_{\alpha})}( \omega_{j}-i\epsilon))\tilde{t}_{\alpha}}\right),\]
where now we have defined \(t_{n_{1}+1}=\tilde{t}_{n_{2}+1}=t_{max}\). Note that because of this independence, the number of terms in the sums of permutations is \(n_{1}!\,n_{2}!\), down from \(C!=(n_{1}+n_{2})!\).
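The counting is easily verified by brute force: grouping the \((n_{1}+n_{2})!\) orders of the full diagram by the internal orders of the two disconnected pieces yields \(n_{1}!\,n_{2}!\) groups, each containing \(\binom{n_{1}+n_{2}}{n_{1}}\) relative interleavings. A minimal sketch:

```python
from itertools import permutations

n1, n2 = 2, 3
verts = [('a', i) for i in range(n1)] + [('b', i) for i in range(n2)]

groups = {}
for p in permutations(verts):
    key = (tuple(v for v in p if v[0] == 'a'),
           tuple(v for v in p if v[0] == 'b'))
    groups[key] = groups.get(key, 0) + 1

print(len(groups))           # n1! * n2! = 12 internal-order groups
print(set(groups.values()))  # each with C(5, 2) = 10 interleavings
```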
The connected subdiagrams each possess the properties of the sums over vertices in Eq. (2.14). As a result, the final time integral \(t_{max}\) inherits the same energy-conservation phase, and the integrals in (2.16) can be done explicitly, giving for each choice of \(P^{(1)}\) and \(P^{(2)}\), the factor
\[-i\left(Q-\sum_{j\in C}\omega_{j}\right)2\pi\delta\left(Q-\sum_{j\in C}\omega_{j}\right)\times\left(\prod_{s_{2}=1}^{n_{2}}\frac{i}{-\sum\limits_{j\in s_{2}}\omega_{j}+i\epsilon}\right)\left(\prod_{s_{1}=1}^{n_{1}}\frac{i}{Q\lambda_{s,1}-\sum\limits_{j\in s_{1}}\omega_{j}+i\epsilon}\right)\,, \tag{2.17}\]
where \(s_{1}\) and \(s_{2}\) label states within the two disconnected factors, and where \(\lambda_{s,1}\) is defined by analogy to Eq. (2.6), this time for the diagram with \(n_{1}\) vertices, through which the external energy \(Q\) flows. Recall that in the case of an amplitude, \(\lambda_{s}=1\) or \(0\) only. Reorganized in this fashion, the time integrals in Eq. (2.16) and the denominators that appear in Eq. (2.17) lack a denominator that cancels the overall factor \(Q-\sum_{j\in C}\omega_{j}\). For fixed time orders of the full diagram, such a denominator is always present, but it cancels in the sum over time orders, and the integrand vanishes almost everywhere in phase space because energy is conserved.
In summary, we have found that in every term, Eq. (2.17), that results from the sum over the relative time orders of the two disconnected diagrams while keeping their internal time orders fixed, the energy-conservation delta function is multiplied by its own argument. It is easy to check that when the argument of the delta function vanishes, the cut denominators are generically finite (except in a region of vanishing measure), while the quantity \(\left(Q-\sum_{j\in C}\tilde{\omega}_{j}\right)\) multiplies the delta function, forcing the integral over the phase space to zero.
## 3 Local Infrared safety
Our interest here is in IR safe weighted cross sections, inclusive cross sections in which the IR singularities of their exclusive channels cancel among themselves. In this section, we construct a general expression that implements this cancellation locally in momentum space. In principle, this can eliminate the need for infrared regularization.
### Reorganized cross sections
When integrated over loop momenta as it stands, the arbitrary weighted cross section Eq. (2.9) is a sum of infrared divergent terms in general, which, however, cancel in the sum over final states \(C\) for each time ordering \(\tau_{G}\). To make this cancellation manifest, we use the distribution identity,
\[2\pi\delta(x)=\frac{i}{x+i\epsilon}-\frac{i}{x-i\epsilon} \tag{3.1}\]
to rewrite the TOPT expression for a general weighted cross section, \(\Sigma[f,Q]\), Eq. (2.9). Recalling the identity for sums over states in Eq. (2.8), we express \(\Sigma[f,Q]\) as
\[\Sigma[f,Q] = \sum_{G}\ \sum_{\tau_{G}}\,\int d\mathcal{L}_{G}\ \mathbb{N}_{\tau_{G}}\,\prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}}\ \sum_{C}\,\sigma^{(C)}_{\tau_{L[G/C]}\cup\tau_{R[G/C]}}[f,\mathcal{L}_{G}]\,. \tag{3.2}\]
In this expression, we sum over all cuts of the vacuum diagram \(G\) at fixed time-order \(\tau_{G}\) and have used the independence of \(\mathbb{N}_{\tau_{G}}\) from the choice of \(C\). As noted above, the
full set of cuts of any diagram will include in general unphysical cuts, that is, cuts that include disconnected subdiagrams to the left and/or right of the cut. In the previous section we have shown, however, that all such terms cancel once the sum over time orders is carried out. In fact, in Sec. 6 we will show that we can re-express the cross section in a diagrammatic form in which all such cuts are absent. For now, however, we continue with this expression, in the knowledge that further cancellations will occur in a result that we will show is already infrared finite.
We now apply the identity, Eq. (3.1) to the integrand of (3.2), using (2.9) to get,
\[\sum_{C=1}^{n+1}\ \sigma^{(C)}_{\tau_{L[G/C]}\cup\tau_{R[G/C]}}[f,\mathcal{L}_{G},Q] = \sum_{C}\ \prod_{s=C+1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{j}-i\epsilon}\ f_{C}(\vec{q}_{1}\ldots\vec{q}_{k_{C}}) \tag{3.3}\] \[\times\ \left(\frac{i}{Q\lambda_{C}-\sum_{j\in C}\omega_{j}+i\epsilon}\,-\,\frac{i}{Q\lambda_{C}-\sum_{j\in C}\omega_{j}-i\epsilon}\right)\] \[\times\ \prod_{s=1}^{C-1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{j}+i\epsilon}\,.\]
Then, simply collecting terms with a fixed denominator structure yields,
\[\sum_{C}\ \sigma^{(C)}_{\tau_{L[G/C]}\cup\tau_{R[G/C]}}[f, \mathcal{L}_{G},Q] = \left(\prod_{s=1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j\in s} \omega_{j}+i\epsilon}f_{n+1}\right. \tag{3.4}\] \[+\ \sum_{C=1}^{n}\prod_{s=C+1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j \in s}\omega_{j}-i\epsilon}(f_{C}-f_{C+1})\prod_{s=1}^{C}\frac{i}{Q\lambda_{s} -\sum_{j\in s}\omega_{j}+i\epsilon}\] \[\qquad\qquad-\ \prod_{s=1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j\in s} \omega_{j}-i\epsilon}f_{1}\Bigg{)}\.\]
In this equality, we have suppressed the arguments of the functions \(f_{C}\), which are the momenta of individual particle lines, and hence linear combinations of the spatial loop momenta of diagram \(G\). We notice that if all weight functions \(f_{C}\) were indeed equal to unity, we would get back the imaginary part of the forward scattering graph. The first and last terms in this expression have the analytic structure of the vacuum polarization diagram and its complex conjugate and hence are individually IR finite [10]. Although Eq. (3.4) is a simple reorganization of a standard expression, we will show that it provides a locally finite expression for the set of leptonic annihilation cross sections under consideration so long as the weight function satisfies the usual criteria for infrared safety in Eq. (2.10). We are not aware of such an expression in the previous literature.
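Since the passage from Eq. (3.3) to Eq. (3.4) is a purely algebraic regrouping, it holds exactly for any finite \(\epsilon\) and arbitrary real energy deficits. The following sketch verifies the identity numerically, with random values standing in for \(Q\lambda_{s}-\sum_{j\in s}\omega_{j}\) and for the weights \(f_{C}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                          # states s = 1..n+1 (array indices 0..n)
D = rng.normal(size=n + 1)     # energy deficits Q*lambda_s - sum_j omega_j
f = rng.normal(size=n + 2)     # weights f[1..n+1]; f[0] unused
eps = 0.1
a = 1j / (D + 1j * eps)        # "+ i eps": generalized amplitude
b = 1j / (D - 1j * eps)        # "- i eps": generalized conjugate amplitude

# Eq. (3.3): sum over cuts C = 1..n+1, with the delta function written
# as a difference of denominators via Eq. (3.1)
lhs = sum(np.prod(b[C:]) * f[C] * (a[C - 1] - b[C - 1]) * np.prod(a[:C - 1])
          for C in range(1, n + 2))

# Eq. (3.4): the two endpoint terms plus the telescoped middle sum
rhs = (np.prod(a) * f[n + 1]
       + sum(np.prod(b[C:]) * (f[C] - f[C + 1]) * np.prod(a[:C])
             for C in range(1, n + 1))
       - np.prod(b) * f[1])

print(np.isclose(lhs, rhs))    # True
```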
In Eq. (3.4), the terms labeled \(C=1\ldots n\) are somewhat unusual, having no energy-conserving delta function. Rather, they are given entirely by products of denominators with opposite \(i\epsilon\) prescriptions. We will refer to the product of denominators with \(+i\epsilon\) as the "generalized amplitude", and the product with \(-i\epsilon\) as the "generalized
conjugate amplitude". In the following subsection, we will study how infrared singularities can arise in TOPT generally, and in the product of generalized amplitudes and complex conjugates, and go on to verify that the sum in Eq. (3.4) is infrared safe locally in momentum space without the need for infrared regularization.
### Analysis of pinch surfaces
The individual terms in Eq. (3.4) have the same infrared singularities that are found in cross sections for fixed final states, which have explicit energy-conservation delta functions. In that case, singularities can be identified from solutions to the Landau equations for the amplitude and complex conjugate for each point in final-state phase space. These solutions are satisfied on subspaces of momentum space sometimes referred to as pinch surfaces [15].
The analysis leading to pinch surfaces in amplitudes can be applied to the spatial integrals in Eq. (3.2) for each choice of order, \(\tau_{G}\), and cut, \(C\) in the form of Eq. (3.4), to derive the Landau equations. We begin by combining state denominators via a Feynman parametrization, chosen so that the imaginary parts all add with the same sign at every point in parameter space. This requires that we factor out a \((-1)\) for each denominator in the generalized conjugate amplitude, with a \(-i\epsilon\),
\[\int d{\cal L}_{G}\ \frac{\mathbb{N}_{\tau_{G}}}{\prod_{i=1}^{N}2 \omega_{i}}\left(f_{C}-f_{C+1}\right)\ \prod_{s=C+1}^{n+1}\frac{i}{Q-\sum_{j\in s}\omega_{j}-i\epsilon}\ \prod_{s=1}^{C}\frac{i}{Q-\sum_{j\in s}\omega_{j}+i\epsilon}\] \[=\ \int\frac{d{\cal L}_{G}}{\prod_{i=1}^{N}2\omega_{i}}\ \frac{ \mathbb{N}_{\tau_{G}}\ (f_{C}-f_{C+1})\ (-1)^{n-C}\,[d\alpha_{s}]_{n+1}}{ \left(\sum_{s=1}^{C}\alpha_{s}\left(Q-\sum_{j\in s}\omega_{j}\right)-\sum_{s=C +1}^{n+1}\alpha_{s}\left(Q-\sum_{j\in s}\omega_{j}\right)+i\epsilon\right)^{n+ 1}}\,, \tag{3.5}\]
where, as usual,
\[[d\alpha_{s}]_{n+1}\ =\ n!\,\int_{0}^{1}d\alpha_{n+1}\ldots d\alpha_{1}\, \delta\left(1-\sum_{s=1}^{n+1}\alpha_{s}\right)\,. \tag{3.6}\]
We would like to show that for any infrared safe weight function \(f\), Eq. (3.5) is finite without infrared regularization. To do so, we must identify the origin and strength of IR singularities in these TOPT expressions.
As in covariant perturbation theory, infrared singularities in Eq. (3.5) can arise whenever the loop integrals are pinched between coalescing singularities. In this case, of course, we have three integrals per loop remaining. Singularities arise from two sources, the product of particle energies \(\omega_{i}\), and from the full parameterized denominator. We note that each energy factor, \(\omega_{i}\) depends quadratically on the spatial loop momenta of particle \(i\), and always produces a pinch at the point \(\omega_{i}=0\), for line \(i\) massless.
The identification of pinches from sets of on-shell denominators with lines of nonzero energy in Eq. (3.5) requires a time-ordered perturbation theory version of Landau
equations, which follow the same pattern as for integrals in covariant perturbation theory. In the parameterized form, an off-shell denominator \(D_{s}\neq 0\) must have \(\alpha_{s}=0\). Derivatives with respect to each loop momentum component of denominators with \(D_{s}=0\) must vanish,
\[\frac{\partial}{\partial l^{\mu}}\,\left[\sum_{s=1}^{C}\alpha_{s} \left(\sum_{i\in s}\omega_{i}\right)-\sum_{s=C+1}^{n+1}\alpha_{s}\left(\sum_{j \in s}\omega_{j}\right)\right]\ =\ 0\,. \tag{3.7}\]
Because the derivative of the energy \(\omega_{i}\) of a line with respect to its momentum gives its velocity, \(\vec{\beta}_{i}=\partial\omega_{i}/\partial\vec{p}_{i}\), the Landau equations are given as linear sums in velocities. For an arbitrary loop momentum \(l\), we can thus write
\[\sum_{s=1}^{C}\alpha_{s}\left(\sum_{i\in s}\eta_{l,i}\,\vec{\beta }_{i}\right)-\sum_{s=C+1}^{n+1}\alpha_{s}\left(\sum_{j\in s}\eta_{l,j}\,\vec{ \beta}_{j}\right)\ =\ 0\,, \tag{3.8}\]
with the \(\eta_{l,i}=\pm 1,0\) incidence matrices,
\[\eta_{l,i}\quad=\quad\frac{\partial p_{i}^{\mu}}{\partial l^{\mu}}\quad\mbox{any $\mu$}\,. \tag{3.9}\]
To be specific, we define the momenta \(p_{i}^{\mu}\) to be in the direction of energy flow, so that \(p_{i}^{0}=\omega(\vec{p}_{i})\geq 0\). Note that for the amplitude this is the direction toward the final state, \(C\).
The equations (3.8) can be satisfied for any loop that appears in a subset of denominators \(\{t_{i}\}\), \(i=1\ldots k\), with \(t_{1}\leq t_{C}\) and \(t_{k}\geq C+1\), because such terms include denominators with both "\(i\epsilon\)" prescriptions. A loop whose on-shell states appear only with \(+i\epsilon\) or only with \(-i\epsilon\) cannot give a solution to Eq. (3.8), _unless_ all lines that carry this loop have collinear momenta. We can think of such a loop as internal to a jet of collinear moving particles. These solutions can be given a physical interpretation in the sense of Coleman and Norton [15, 20], by identifying the Feynman parameters \(\alpha_{s}\) as times so that their products with velocities are translations. The Landau equation for any loop then describes a sum of translations relative to the origin, in the direction of the velocities of lines carried by that loop.
For loops that are internal to a given jet, their contributions to Eq. (3.8) cancel within each state, because \(\eta_{l,i}=-\eta_{l,i^{\prime}}\) if the loop flows forward along line \(i\) within the jet and back along line \(i^{\prime}\). For loops that extend from states with \(s<C\) to states with \(s>C\), Eq. (3.8) requires that the contributions of states in these two categories cancel for the two jets individually. Thus, for the jet along which the loop flows forward, we have
\[\sum_{s=1}^{C}\alpha_{s}-\sum_{s=C+1}^{n+1}\alpha_{s}\ =\ 0\,, \tag{3.10}\]
which has the interpretation that the jet flows forward away from the origin through a sequence of states up to state \(C\), and then backward to the origin for the same
amount of time in the same direction. Thus, for such terms, the _only_ momentum configurations that produce pinch singularities are those in which on-shell states differ only by the rearrangement of collinear momenta and the emission or absorption of lines with zero momentum. These states are characterized by "jets" of fixed total momentum, accompanied by arbitrary numbers of soft lines. The succession of on-shell states with different numbers of jets always results in at least one internal loop of the amplitude or complex conjugate that flows between two jets. For such a loop, the two jet velocities in Eq. (3.8) appear multiplied by parameters with only positive or only negative signs, which is inconsistent with a pinch. The only exception is for loops that carry zero momentum between different jets. Such "soft" loops are unconstrained by the Landau equations, in both TOPT and covariant perturbation theory, although as noted above, there is a pinch whenever any line carries exactly zero momentum [21].
We now turn to the role of off-shell states. Let us refer to the full set of states \(\{s_{i}\}\) for time order \(\tau_{G}\) of vacuum polarization diagram \(G\) as \({\cal S}=\{s_{1}\ldots s_{n+1}\}\). At a given pinch surface, states in \({\cal S}\) are either on-shell, with \(Q-\sum_{j\in s_{i}}\omega_{j}=0\), or off-shell, \(Q-\sum_{j\in s_{i}}\omega_{j}\neq 0\). These simple considerations limit how off-shell states can appear at a pinch surface. First of all, states adjacent to the external vertices (\(i\) and \(o\) above), at which momenta flow into and out of the forward-scattering diagram, can always be off-shell. These states correspond to the "hard part" of the scattering cross section. For an arbitrary pinch surface \(\zeta\), there is thus an "earliest" state, \(s^{[\zeta]}_{\rm min}\geq s_{1}\), at which the relevant set of jets first appears, and a "latest" state, \(s^{[\zeta]}_{\rm max}\leq s_{n+1}\), in the complex conjugate amplitude, where they last appear.
We next examine when a subset of states can be off-shell at pinch surface \(\zeta\). Let us denote such an ordered subset as \(\Gamma^{[\zeta]}=\{\sigma^{[\zeta]}_{i}\}\subset{\cal S}\), with \(\sigma^{[\zeta]}_{\rm min}\leq\sigma^{[\zeta]}_{i}\leq\sigma^{[\zeta]}_{\rm max}\), and assume that all of these states are consecutive and off-shell, that is, \(Q-\sum_{j\in\sigma^{[\zeta]}_{i}}\omega_{j}\neq 0\). There may be a number of these sets; first, let us consider the case with only a single such set, \(\Gamma^{[\zeta]}\). For any such set of off-shell states, \(\Gamma^{[\zeta]}\), we must have two sets of on-shell states: on-shell states \(\{s^{[\zeta]}\}_{\Gamma<}\) with \(s^{[\zeta]}_{\rm min}\leq s^{[\zeta]}<\sigma^{[\zeta]}_{\rm min}\) that are before the off-shell states, and similarly another set of on-shell states, \(\{s^{[\zeta]}\}_{\Gamma>}\), all of whose elements are between \(\sigma^{[\zeta]}_{\rm max}\) and \(s^{[\zeta]}_{\rm max}\).
Because states are ordered, the off-shell states of \(\Gamma^{[\zeta]}\) may be either entirely in the generalized amplitude or in the generalized conjugate amplitude, or may extend between them both. In all cases, however, one of the two sets of states, \(\{s^{[\zeta]}\}_{\Gamma<}\), and \(\{s^{[\zeta]}\}_{\Gamma>}\) will have only \(+i\epsilon\) or only \(-i\epsilon\) in its denominators. We use these considerations to show that at the pinch surface, \(\zeta\), no sets of lines that appear in the states of \(\Gamma^{[\zeta]}\) can carry finite momentum between any pair of distinct jets of pinch surface \(\zeta\).
To see why, consider the case that \(\Gamma^{[\zeta]}\) is in the amplitude (all \(+i\epsilon\) in its denominators). We suppose the two jets in question have velocities \(\vec{\beta}_{a}\) and \(\vec{\beta}_{b}\), in different directions. Let us suppose that some set of lines carries finite momenta and connects these two jets. We will then be able to find at least one loop consisting of lines within
the two jets in all the on-shell states of \(\{s^{[\zeta]}\}_{\Gamma<}\), closing the loop with the lines of finite momentum that appear in the states of \(\Gamma^{[\zeta]}\) and lines that appear only in the off-shell states, \(s\leq s^{[\zeta]}_{\rm min}\). The Landau equation for this loop would have no contributions from the states in \(\Gamma^{[\zeta]}\) or from states with \(s\leq s^{[\zeta]}_{\rm min}\), because the Feynman parameters of off-shell states are all zero. They do of course get contributions from the on-shell states, \(\{s^{[\zeta]}\}_{\Gamma<}\). We can always define the loop in question to flow through one line of jet \(a\) in the direction of the \(a\)-jet energy flow for each on-shell state. The velocity of all such lines is the same as the jet velocity, \(\vec{\beta}_{a}\). The loop then flows back, against the direction of the \(b\)-jet, for which the corresponding particle velocities are all \(\vec{\beta}_{b}\). The resulting Landau equations, Eq. (3.8), are then
\[\sum_{s\in\{s^{[\zeta]}\}_{\Gamma<}}\alpha_{s}\,\left(\vec{\beta}_{a}-\vec{ \beta}_{b}\right)\ =\ 0\,. \tag{3.11}\]
Because \(\vec{\beta}_{a}\) and \(\vec{\beta}_{b}\) are in different directions, there can be no solution to Eq. (3.11) for nonzero parameters \(\alpha_{s}\). As a result, \(\zeta\) is not a pinch surface after all. The generalization of this result to sets \(\Gamma^{[\zeta]}\) in the complex conjugate amplitude or between the amplitude and conjugate is immediate. The case of several sequences of off-shell states follows the same pattern.
The only remaining possibility for an off-shell \(\Gamma^{[\zeta]}\) is one for which some set of off-shell states is associated with one or more _internal_ loop momenta of a jet, which are not in the direction of the jet. Such a loop will also take a set of consecutive states \(\sigma^{[\zeta]}\) off-shell. We will therefore need to consider this possibility in our discussion of local finiteness.
### Logarithmic singularities and cancellation
To identify which pinch surfaces result in actual infrared singularities in individual terms, we must recall the power counting analysis of covariant perturbation theory [22, 23]. Rather than repeating the details of this analysis, we can rely on the basic result. For a pinch surface of a cut vacuum polarization diagram, we identify the space of "normal" variables, which parameterize the space perpendicular to the pinch surface, and assign a dimensionless scaling variable (conventionally denoted by \(\lambda\)) so that each normal variable vanishes linearly as the scaling variable vanishes. The fundamental starting point of this analysis, when applied to leptonic annihilation, which shows that singularities are at worst logarithmic, can be summarized as
\[2L_{J}+4L_{S}+p_{\rm num}-N_{J}-2N_{S}\ \geq\ 0\,, \tag{3.12}\]
where \(L_{J}\) and \(N_{J}\) are the total number of loops and lines in jet subdiagrams, respectively, while \(L_{S}\) and \(N_{S}\) are the soft loops and lines, and \(p_{\rm num}\) is the scaling dimension of numerator factors at the pinch surface in question. We will see an example below.
We can apply Eq. (3.12) to TOPT expressions like Eq. (3.5) in a straightforward fashion. First, using the graphical Euler identity in the form
\[L_{J}+L_{S}\ =\ N_{J}+N_{S}-V+1\,, \tag{3.13}\]
we see immediately that Eq. (3.12) may be written equivalently as
\[L_{J}+3L_{S}+p_{\rm num}-N_{S}-(V-1)\ \geq\ 0\,, \tag{3.14}\]
where here \(V\) is the total number of vertices in the on-shell TOPT diagram found by contracting all states that are off-shell at this pinch surface. This is also the number of states that are pinched on-shell. In this expression, of course, \(L_{J}\) remains the total number of internal loops of all jets, and \(L_{S}\) is the number of loops that carry zero momentum in all three remaining components. For leptonic annihilation [22], the natural normal variables are all three remaining components of the \(L_{S}\) soft loops, \(\vec{l}_{\rm soft}\sim\lambda\), and for each jet loop momentum \(\vec{l}_{\rm jet}\), \(l_{\perp,\,{\rm jet}}^{2}\sim\lambda\). The dependence of all denominators is then linear in \(\lambda\). Because the number of denominators is \(V-1\), the inequality of Eq. (3.14) ensures that divergences in TOPT are at worst logarithmic, just as in covariant perturbation theory.
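The equivalence of Eqs. (3.12) and (3.14) under the Euler identity is pure bookkeeping; a short sympy check (ours, not part of the paper) confirms the substitution:

```python
import sympy as sp

LJ, LS, NJ, NS, V, p = sp.symbols('L_J L_S N_J N_S V p_num')

lhs_312 = 2*LJ + 4*LS + p - NJ - 2*NS       # left side of Eq. (3.12)
lhs_314 = LJ + 3*LS + p - NS - (V - 1)      # left side of Eq. (3.14)

# Graphical Euler identity, Eq. (3.13), solved for N_J.
euler_NJ = LJ + LS - NS + V - 1

print(sp.simplify(lhs_312.subs(NJ, euler_NJ) - lhs_314))  # -> 0
```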
Notice that it is necessary to saturate the inequality (3.14) for a specific time order and pinch surface to contribute to an infrared divergence. This eliminates infrared divergences for time orders where a vertex that connects on-shell jet lines and/or soft lines appears between off-shell denominators, simply because choosing such an order sacrifices at least one on-shell state for an off-shell state. For this reason, in identifying pinch surfaces, we may capture their leading infrared behavior by their reduced diagrams, found by shrinking any loop momenta internal to the jets but not in the jet direction to points. The resulting time-ordered reduced diagrams automatically have the maximum number of on-shell denominators. The logarithmic nature of infrared divergences applies to each term in the original expression of the weighted cross section, Eq. (2.9), as well as to its linear combinations in Eqs. (3.4) and (3.5) for an arbitrary weight function, \(f_{C}\).
In summary, a general weighted cross section encounters at worst logarithmic singularities in individual terms in the sum found by combining Eqs. (3.2) and (3.4),
\[\Sigma[f,Q] = \sum_{G}\ \sum_{\tau_{G}}\int d{\cal L}_{G}\ \mathbb{N}_{\tau_{G}}\ \prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}} \tag{3.15}\] \[\times\left(\prod_{s=1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j\in s} \omega_{j}+i\epsilon}f_{n+1}\right.\] \[+\ \sum_{C=1}^{n}\prod_{s=C+1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j \in s}\omega_{j}-i\epsilon}(f_{C}-f_{C+1})\prod_{s=1}^{C}\frac{i}{Q\lambda_{s }-\sum_{j\in s}\omega_{j}+i\epsilon}\] \[-\ \prod_{s=1}^{n+1}\frac{i}{Q\lambda_{s}-\sum_{j\in s}\omega_{j}-i \epsilon}f_{1}\Bigg{)}\.\]
In this form, we can use the properties of infrared safe weight functions \(f_{C}\), Eq. (2.10) to show that this expression is locally finite in loop momentum space if the sum over \(C\) is carried out before integration.
We first observe that the first and third terms in parentheses are infrared finite because their denominators can produce no pinches, just as for the total cross section. For the remaining terms in the sum, which involve denominators with both \(i\epsilon\) signs, let us start by considering a "leading" pinch surface, at which every state in Eq. (3.15) is pinched on-shell. Each pinch surface (leading or not) is at worst logarithmically divergent in the integral over normal variables. Differences \(f_{C}-f_{C+1}\) then need only vanish as any power of the normal variables for the \({\cal L}_{G}\) integrals to be finite. This is precisely the condition for IR safety found in Refs. [11, 19]. For any infrared safe weight function, by Eq. (2.10), the value of any \(f_{C}\) is the same for every state that is pinched on shell. This ensures that the difference \(f_{C}-f_{C+1}\) in Eq. (3.15) vanishes as a power of the normal variables. The integral in Eq. (3.15) is then finite for all leading pinch surfaces. To extend this result to pinch surfaces with intermediate off-shell states, we simply note that for an off-shell state \(\sigma^{[\zeta]}\), the corresponding denominator \(D_{\sigma^{[\zeta]}}\) is real, and the terms proportional to \(f_{\sigma^{[\zeta]}}\) cancel between conjugate terms in (3.15). The sum of all terms in Eq. (3.15) is thus finite at an arbitrary pinch surface. This is what we set out to demonstrate.
It is worth noting that a special class of event weights are those that count jet final states. These weights take the value unity when the state satisfies some criterion, and zero elsewhere. For IR safe jet algorithms, the weights will be identical for all states associated with a given pinch surface [22]. For such states, the differences \(f_{C}-f_{C+1}\) are exactly zero for the full range of momentum space where the weight function is unity for both states, a range that includes all pinch surfaces.
### All-order example: energy in a "cone"
It is useful to give an explicit example that illustrates the local finiteness of Eq. (3.4) for a specific infrared safe cross section at all orders. To this end, we begin by introducing an abbreviated notation for energy denominators,
\[D_{s}\ =\ E_{s}-\sum_{j\in s}\omega_{j}\,, \tag{3.16}\]
where in our case, \(E_{s}=Q\lambda_{s}\), with \(\lambda_{s}=\pm 1,0\). In these terms, a contribution to the integrand for a general weighted cross section, Eq. (3.4), becomes
\[\sum_{C=1}^{n+1}\,\sigma^{(C)}_{\tau_{G}}[f,{\cal L}_{G},Q] = \left(\prod_{s=1}^{n+1}\frac{i}{D_{s}+i\epsilon}f_{n+1}+\ \sum_{C=1}^{n}\prod_{s=C+1}^{n+1}\frac{i}{D_{s}-i\epsilon}(f_{C}-f_{C+1})\prod _{s=1}^{C}\frac{i}{D_{s}+i\epsilon}\right. \tag{3.17}\] \[-\ \prod_{s=1}^{n+1}\frac{i}{D_{s}-i\epsilon}f_{1}\Bigg{)}\.\]
To illustrate how Eq. (3.17) provides a locally finite sum, we consider the example of a "cone" weight, defined as the energy flowing into a cone fixed in space,
\[f_{s}^{(\Omega)}\ =\ \sum_{i\in_{s}\Omega}\omega_{i}\,. \tag{3.18}\]
Here, \(i\in_{s}\Omega\) means that particle \(i\) flows into angular region \(\Omega\) in state \(s\). For simplicity, we consider angular region \(\Omega\) to be fixed, so that this quantity is not a jet cross section in the usual sense. Clearly, this weight for state \(s\) is related to the denominator \(D_{s}\) by
\[f_{s}^{(\Omega)}\ =\ Q\,-\,\sum_{i\not\in_{s}\Omega}\omega_{i}\,-\,D_{s}\,, \tag{3.19}\]
where \(\sum_{i\not\in_{s}\Omega}\) sums the on-shell energies of all the particles of state \(s\) that are outside cone \(\Omega\). This leads to a relation between weights in consecutive final states,
\[f_{C}^{(\Omega)}-f_{C+1}^{(\Omega)} = \,\,\,D_{C+1}\,-\,D_{C}\,+\,\sum_{i\not\in_{C+1}\Omega}\omega_{i} \,-\,\sum_{i\not\in_{C}\Omega}\omega_{i} \tag{3.20}\] \[\equiv D_{C+1}\,-\,D_{C}\,+\,\delta^{(\Omega)}\omega_{C}\,,\]
where in the first relation we use (3.19), and in the second we define the quantity \(\delta^{(\Omega)}\omega_{C}\). As we see from its definition, \(\delta^{(\Omega)}\omega_{C}\) counts the energy of particles that are radiated outside the cone in the transition from state \(C\) to state \(C+1\), or that was outside the cone in state \(C\), but whose energy is absorbed into the cone in the transition. We then have, from the general relation, Eq. (3.17),
\[\sum_{C=1}^{n+1}\,\sigma_{\tau_{G}}^{(C)}[f^{(\Omega)},{\cal L}_{G },Q] = \left(\prod_{s=1}^{n+1}\frac{i}{D_{s}+i\epsilon}\left(Q\,-\,\sum_ {i\not\in_{n+1}\Omega}\omega_{i}\,-\,D_{n+1}\right)\right. \tag{3.21}\] \[+\,\left.\sum_{C=1}^{n}\prod_{s=C+1}^{n+1}\frac{i}{D_{s}-i \epsilon}\left(D_{C+1}\,-\,D_{C}\,+\,\delta^{(\Omega)}\omega_{C}\right)\prod_ {s=1}^{C}\frac{i}{D_{s}+i\epsilon}\right.\] \[-\,\left.\prod_{s=1}^{n+1}\frac{i}{D_{s}-i\epsilon}\left(Q\,-\, \sum_{i\not\in_{1}\Omega}\omega_{i}\,-\,D_{1}\right)\right)\,.\]
On the right, all terms \(D_{i}\) cancel, using the distribution identity,
\[D_{i}\,\left(\frac{i}{D_{i}-i\epsilon}\,-\,\frac{i}{D_{i}+i\epsilon}\right)\ =\ D_{i}\,2\pi\delta\left(D_{i}\right)\ =\ 0\,. \tag{3.22}\]
We thus have for the cone cross section an expression that depends on energies flowing into the cone in the first and last state, and on the energy transfer variables, \(\delta\omega_{C}\),
\[\sum_{C=1}^{n+1}\,\sigma_{\tau_{G}}^{(C)}[f^{(\Omega)},{\cal L}_{G },Q] = \prod_{s=1}^{n+1}\frac{i}{D_{s}+i\epsilon}\left(Q-\sum_{i\not\in_ {n+1}\Omega}\omega_{i}\right)\ -\ \prod_{s=1}^{n+1}\frac{i}{D_{s}-i\epsilon}\left(Q-\sum_{j\not\in_{1} \Omega}\omega_{j}\right)\]
\[+\ \sum_{C=1}^{n}\prod_{s=C+1}^{n+1}\frac{i}{D_{s}-i\epsilon}\,\delta^{( \Omega)}\omega_{C}\,\prod_{s=1}^{C}\frac{i}{D_{s}+i\epsilon}\,. \tag{3.23}\]
For this expression, we recall that at an arbitrary pinch surface, all finite energy is carried in jets of exactly collinear particles. For any such configuration, if state \(C\) and state \(C+1\) are both on-shell, \(\delta^{(\Omega)}\omega_{C}=0\) because consecutive states have exactly the same energy flow. That is, the quantities \(\delta^{(\Omega)}\omega_{C}\) vanish for every such final state \(C\). The contributions of these terms are integrable.
For nonzero values of any set of \(\delta^{(\Omega)}\omega_{C}\) in Eq. (3.23), following the general argument of the previous subsection, we consider a set of states \(\Gamma^{[\zeta]}\), that are off-shell at an arbitrary pinch surface \(\zeta\), due to loop momenta circulating in the subdiagram of a jet or between soft lines. As above, on-shell states appear both before and after \(\Gamma^{[\zeta]}\). Since all jets must appear with the same total cone energy in each on-shell state at the pinch surface, all nonzero contributions to \(\delta^{(\Omega)}\omega_{C}\) must cancel as we sum over off-shell states \(\Gamma^{[\zeta]}\). Thus, the product of off-shell denominators associated with states \(\Gamma^{[\zeta]}\) is multiplied by a sum of energy transfers \(\delta^{(\Omega)}\omega_{C}\) that cancel at the pinch surface.
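The bookkeeping behind Eq. (3.20) can be spot-checked numerically. The sketch below is ours: it builds two arbitrary random "states" (the particle content and cone assignments are invented for the test, not taken from any diagram) and verifies that \(f_{C}-f_{C+1}=D_{C+1}-D_{C}+\delta^{(\Omega)}\omega_{C}\) identically:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 10.0   # arbitrary total energy; the identity holds for any value

def random_state(n):
    # Invented test data: each particle is (energy, inside-cone flag).
    return [(rng.uniform(0.1, 3.0), bool(rng.random() < 0.5)) for _ in range(n)]

state_C, state_C1 = random_state(4), random_state(6)

def f_cone(state):      # Eq. (3.18): energy flowing into the cone Omega
    return sum(w for w, inside in state if inside)

def D(state):           # Eq. (3.16) with E_s = Q
    return Q - sum(w for w, _ in state)

def out_of_cone(state):
    return sum(w for w, inside in state if not inside)

delta_omega = out_of_cone(state_C1) - out_of_cone(state_C)   # Eq. (3.20)
lhs = f_cone(state_C) - f_cone(state_C1)
rhs = D(state_C1) - D(state_C) + delta_omega
assert np.isclose(lhs, rhs)   # f_C - f_{C+1} = D_{C+1} - D_C + delta omega_C
```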
It is worth noting that in the special case where \(\Omega\) grows to the entire two-sphere, there is no radiation outside the cone and all the \(\delta^{(\Omega)}\omega_{C}\) are zero, and we find
\[\sum_{C=1}^{n+1}\,\sigma_{\tau_{G}}^{(C)}[f^{(S_{2})}] = Q\,\left[\prod_{s=1}^{n+1}\frac{i}{D_{s}+i\epsilon}\ -\ \prod_{s=1}^{n+1}\frac{i}{D_{s}-i\epsilon}\right]\,. \tag{3.24}\]
This, as expected, is the energy times the imaginary part of the uncut TOPT diagram, in our notation, the total cross section.
### Low-order example
To illustrate the formalism further with an explicit example, we consider the two next-to-lowest-order vacuum polarization diagrams in Fig. 2.
Figure 2: Two loop vacuum polarization diagrams: (1) vector exchange, (2) fermion self-energy.
For simplicity, we take the trace on the external vector current indices, and we do not consider color in our examples. From the discussion above, the contributions of these diagrams to a general weighted cross section are given by
\[\Sigma_{i}[f,Q] = ig^{2}\,\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\,\frac{d^{3}\vec{k}^{\prime}}{(2\pi)^{3}}\frac{\mathbb{N}_{i}}{2^{4}\omega_{k}\,\omega_{q-k}\,\omega_{k^{\prime}}\,\omega_{k-k^{\prime}}}\,\left(\delta_{i1}\,\frac{1}{2\omega_{q-k^{\prime}}}+\delta_{i2}\,\frac{1}{2\omega_{k}}\right)\] \[\times\left[\frac{f_{iA}}{D_{iC}^{+}\,D_{iB}^{+}\,D_{iA}^{+}}\ -\ \frac{f_{iC}}{D_{iC}^{-}\,D_{iB}^{-}\,D_{iA}^{-}}\ +\ \frac{f_{iB}-f_{iA}}{D_{iC}^{-}\,D_{iB}^{-}\,D_{iA}^{+}}\ +\ \frac{f_{iC}-f_{iB}}{D_{iC}^{-}\,D_{iB}^{+}\,D_{iA}^{+}}\right]\,, \tag{3.25}\]
where \(i=1\) refers to the gluon exchange diagram and \(i=2\) to the fermion self-energy diagram.\(^{2}\)
Footnote 2: The case where the two-particle weights \(f_{A}\) and \(f_{C}\) and the three-particle state weight \(f_{B}\) are defined to give unity for a two-jet final state and zero otherwise, is of interest. An example is the original cone jet cross section of Ref. [24] at this order. Using the symmetry between the two-particle states, \(A\) and \(C\), the expression, Eq. (3.25) is readily seen to reduce to the total cross section minus the three-jet cross section, as noted early on in Ref. [25], due to the exact cancellation of three- and two-particle momentum configurations in the two-jet region. At this order, the resulting expression needs no infrared regularization.
In terms of the momentum assignments shown in the figure, energy denominators in Eq. (3.25) are given by
\[D_{1A}^{\pm} = D_{2A}^{\pm}\ =\ Q-2\omega_{k}\pm i\epsilon\,,\] \[D_{1B}^{\pm} = D_{2B}^{\pm}\ =\ Q-\omega_{k}-\omega_{k^{\prime}}-\omega_{k-k^{\prime}}\pm i\epsilon\,,\] \[D_{1C}^{\pm} = Q-2\omega_{k^{\prime}}\pm i\epsilon\,,\] \[D_{2C}^{\pm} = Q-2\omega_{k}\pm i\epsilon\,. \tag{3.26}\]
The numerators simplify significantly in TOPT, where we must evaluate line momenta on-shell, \(k^{2}=k^{\prime 2}=0\), and the \(\mathbb{N}_{i}\) in Eq. (3.25) become
\[\mathbb{N}_{1} = -32\left(k^{\prime}\cdot(k-q)\right)\left(k\cdot(k^{\prime}-q) \right)\,,\] \[\mathbb{N}_{2} = -32\,k\cdot k^{\prime}\,q\cdot k\,. \tag{3.27}\]
To exhibit the pinch surfaces of these diagrams, we can choose spherical coordinates, with \(\vec{k}\) in the \(\hat{z}\) (\(\theta_{k}=0\)) direction, and the polar angle \(\theta^{\prime}\) of \(\vec{k}^{\prime}\) measured relative to \(\hat{z}\). It is now natural to use the notation \(k=|\vec{k}|=\omega_{k}\) and \(k^{\prime}=|\vec{k}^{\prime}|=\omega_{k^{\prime}}\), but to leave \(\omega_{k-k^{\prime}}\) to represent \(|\vec{k}-\vec{k}^{\prime}|\), which is given by
\[\omega_{k-k^{\prime}} = \sqrt{k^{2}+k^{\prime 2}-2kk^{\prime}\cos\theta^{\prime}} \tag{3.28}\] \[= \sqrt{(k-k^{\prime})^{2}+2kk^{\prime}(1-\cos\theta^{\prime})}\,.\]
We assume that our weight functions can be expressed in terms of these variables only.
In Eq. (3.25), the first and second terms in square brackets are free of collinear pinches altogether, simply because their energy denominators enter with the same sign of \(i\epsilon\). We denote the remaining, potentially singular, terms as \(\hat{\Sigma}_{1}\) and \(\hat{\Sigma}_{2}\). For \(\hat{\Sigma}_{1}\), corresponding to gluon exchange, Fig. 2(1), we have
\[\hat{\Sigma}_{1}[f,Q] = i\,\frac{g^{2}}{8\pi^{4}}\,\int_{0}^{\infty}\,dk\,dk^{\prime}\,\int_{-1}^{1}d\cos\theta^{\prime}\,\frac{k^{\prime}\cdot(q-k)\,k\cdot(q-k^{\prime})}{\omega_{k-k^{\prime}}}\] \[\times\left[\frac{f_{1B}(k,k^{\prime})-f_{1A}(k)}{(Q-2k^{\prime}-i\epsilon)\,(Q-k-k^{\prime}-\omega_{k-k^{\prime}}-i\epsilon)\,(Q-2k+i\epsilon)}\right.\] \[+\left.\frac{f_{1C}(k^{\prime})-f_{1B}(k,k^{\prime})}{(Q-2k^{\prime}-i\epsilon)\,(Q-k-k^{\prime}-\omega_{k-k^{\prime}}+i\epsilon)\,(Q-2k+i\epsilon)}\right]\,, \tag{3.29}\]
where we have indicated the momentum dependence of the weight functions. For non-zero numerators, this integral has both collinear and soft pinches. There is a collinear pinch at \(k=Q/2\), \(\theta^{\prime}=0\), with \(|\vec{k}|>|\vec{k}^{\prime}|\), between the second and third denominators of the first term in square brackets. Similarly, a pinch appears at \(k^{\prime}=Q/2\), \(\theta^{\prime}=0\) with \(|\vec{k}^{\prime}|>|\vec{k}|\) between the first and second denominators of the second term in square brackets. These limits correspond to the gluon parallel to the quark (when \(k=Q/2\)) or the antiquark (when \(k^{\prime}=Q/2\)). The soft pinch, at which all three denominators vanish, appears in both terms when \(k\) and \(k^{\prime}\) both approach \(Q/2\) with \(\theta^{\prime}=0\).
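These pinch locations are easy to trace numerically. In the minimal numpy sketch below (the parameter values are arbitrary, chosen only for illustration), the denominators of Eq. (3.26) are evaluated along the approach \(k\to Q/2\), \(\theta^{\prime}\to 0\) with \(k>k^{\prime}\); the first two vanish linearly in the scaling variable, while \(Q-2k^{\prime}\) stays finite:

```python
import numpy as np

Q = 2.0

def omega(k, kp, cos_tp):                    # Eq. (3.28): |k - k'| on-shell
    return np.sqrt(k**2 + kp**2 - 2*k*kp*cos_tp)

def denominators(k, kp, cos_tp):
    DA = Q - 2*k                             # two-particle state A
    DB = Q - k - kp - omega(k, kp, cos_tp)   # three-particle state B
    DC = Q - 2*kp                            # two-particle state C
    return DA, DB, DC

# Approach the collinear pinch k -> Q/2, theta' -> 0, with k > k':
for lam in [1e-1, 1e-2, 1e-3]:
    DA, DB, DC = denominators(Q/2 - lam, 0.3, 1.0 - lam)
    print(f"lam={lam:.0e}:  D_A={DA:+.3e}  D_B={DB:+.3e}  D_C={DC:+.3e}")
# D_A and D_B vanish linearly with lam (the pinch); D_C stays finite.
```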
All of these pinches lead to logarithmically divergent power counting, corresponding to two denominators vanishing while two variables (\(k\) or \(k^{\prime}\) and \(\theta^{\prime}\)) approach the endpoints of their integration regions. In all three cases, of course, the differences of the weight functions in the numerators vanish, so long as they depend only on overall energy flow. A simple example would be a weight function like the thrust, for which \(f_{A}=1\), while \(f_{B}\sim 1-2k\cdot(k-k^{\prime})/Q^{2}\sim 1-2kk^{\prime}(1-\cos\theta^{ \prime})/Q^{2}\) near \(\theta^{\prime}=0\).
For \(\hat{\Sigma}_{2}\), since the states \(A\) and \(C\) are identical for diagram 2, we can write
\[\hat{\Sigma}_{2}[f,Q] = i\,\frac{g^{2}}{8\pi^{4}}\,\int_{0}^{\infty}\,dk\,dk^{\prime}\, \int_{-1}^{1}d\cos\theta^{\prime}\,\frac{kk^{\prime 2}Q(1-\cos\theta^{ \prime})}{\omega_{k-k^{\prime}}} \tag{3.30}\] \[\times\left[\frac{f_{2B}(k,k^{\prime})-f_{2A}(k)}{(Q-2k-i \epsilon)\,(Q-k-k^{\prime}-\omega_{k-k^{\prime}}-i\epsilon)\,(Q-2k+i \epsilon)}\right.\] \[+\left.\frac{f_{2A}(k)-f_{2B}(k,k^{\prime})}{(Q-2k-i\epsilon)\, (Q-k-k^{\prime}-\omega_{k-k^{\prime}}+i\epsilon)\,(Q-2k+i\epsilon)}\right]\,.\] \[= i\,\frac{g^{2}}{8\pi^{4}}\,\int_{0}^{\infty}\,dk\,dk^{\prime}\, \int_{-1}^{1}d\cos\theta^{\prime}\,\frac{kk^{\prime 2}Q(1-\cos\theta^{ \prime})}{\omega_{k-k^{\prime}}}\] \[\times\left[\frac{f_{2B}(k,k^{\prime})-f_{2A}(k)}{(\omega_{k-k^{ \prime}}-(k-k^{\prime}))^{2}}(2\pi i)\delta\,(Q-k-k^{\prime}-\omega_{k-k^{ \prime}})\right]\,.\]
In the second relation, we have used the delta function to evaluate the squared denominator, \(Q-2k\). This denominator is negative semi-definite and vanishes only at the
integration endpoint, \(\cos\theta^{\prime}\to 1\), where \(k^{\prime}\) is collinear to \(k\). Although it has an apparent double pole at \(2k=Q\), the explicit numerator factor of \(1-\cos\theta^{\prime}\) reduces this to a single pole, and logarithmic power counting. This relative suppression is a contribution of "\(+1\)" to the term \(p_{\rm num}\) in the infrared power counting of Eq. (3.12), applied to the pinch surface where the self-energy consists of two collinear lines. Here again, in Eq. (3.30), the difference of weight functions in the numerator vanishes for any infrared safe weight and renders the integral finite. This confirms that in the original form of the integration, the two- and three-particle singularities cancel. For this example, the suppression appears locally in loop momentum space, with the standard form of the self-energy subdiagram. At higher orders, however, the suppression requires in general integration over the internal loop momenta of the uncut self-energies (\(k^{\prime}\) here), to realize their contributions to the suppression factor \(p_{\rm num}\) in Eq. (3.12). For diagrams with more than a single self-energy on cut lines, we believe it will be natural to use alternative integrands for self-energies and their counterterms, which eliminate higher-order poles for the single-particle final state, as described for amplitudes in Refs. [5, 6].
## 4 Posets in TOPT
We now return to the question of pseudo-physical cuts in TOPT. We recall that a pseudo-physical cut of a time-ordered graph disconnects the diagram into more than two connected parts. In Sec. 2.2, we saw that the cross section evaluated on any pseudo-physical cut vanishes upon summing over time orders for the fixed cut. In this section, we develop a poset formalism in order to show how pseudo-physical cuts can be avoided entirely. We note that posets have also been useful in discussions of eikonal exponentiation [26] and coordinate-space amplitudes [27].
The treatment that follows has much in common with the recent discussion of "flow-oriented" [17] and "cross-free" representations of perturbation theory [18]. Here, we work directly from TOPT to find a number of related results, which, we believe, will provide intuition for applications, one of which we discuss in Sec. 6.
In this section, we will review some standard poset terminology, introduce our method, and show how it works in representative examples involving vacuum polarization diagrams. A general discussion will be provided in Sec. 5.
### Definitions
The method we will use to reorganize TOPT is based on the construction of partially ordered sets (posets) on the vertices of the diagrams. To do so, we impose a binary relation among the vertices, which partitions the set of time orders into distinct posets. A poset is a set together with a binary relation. For any TOPT diagram, our binary relation can be defined from the incidence matrix introduced in the integral
for an arbitrary amplitude in Eq. (2.12) and defined in Eq. (2.13). We denote the incidence matrix by \(\eta_{j}^{(b)}\), where the superscript \((b)\) represents a vertex and the subscript \(j\) represents a line. Entry \(\eta_{j}^{(b)}\) is \(+1\) if line \(j\) enters vertex \(b\), \(-1\) if line \(j\) exits vertex \(b\), and zero otherwise. Consider a TOPT diagram, given in the notation of Eq. (2.12). If vertices \(b_{i}\) and \(b_{j}\) are joined by one or more lines and \(t_{b_{i}}\geq t_{b_{j}}\), then for each line \(k\) connecting \(b_{i}\) and \(b_{j}\) we have \(\eta_{k}^{(b_{i})}=1=-\eta_{k}^{(b_{j})}\). On the other hand, if \(t_{b_{j}}\geq t_{b_{i}}\), then for each line \(k\) between \(b_{i}\) and \(b_{j}\), \(\eta_{k}^{(b_{j})}=1=-\eta_{k}^{(b_{i})}\).
We begin by defining an ordering \(\geq\) between the vertices \(V=\{b_{1},b_{2},b_{3},\ldots,b_{n}\}\) of a Feynman graph \(G\). It satisfies:
* If \(b_{i}\) and \(b_{j}\) are connected directly by one or more lines, then either \(b_{i}\geq b_{j}\) or \(b_{j}\geq b_{i}\), i.e., \(b_{i}\) and \(b_{j}\) are ordered by \(\geq\). The ordering \(\geq\), abstracted from time ordering, is transitive: \(a\geq b,\ b\geq c\ \to a\geq c\). It follows from the transitivity of the binary relation that if two vertices \(b_{i}\) and \(b_{j}\) are related by a sequence of increasing times, they are ordered by \(\geq\).
* If \(b_{i}\) and \(b_{j}\) are not connected by a line or by a sequence of vertices with increasing times, then \(b_{i}\not\geq b_{j}\) and \(b_{j}\not\geq b_{i}\), i.e., \(b_{i}\) and \(b_{j}\) are not ordered by \(\geq\). It will be useful to introduce the notation \(b_{i}\sim b_{j}\) to mean \(b_{i}\not\geq b_{j}\) and \(b_{j}\not\geq b_{i}\).\(^{3}\) We will say that \(b_{i}>b_{j}\) if \(b_{i}\geq b_{j}\) and \(b_{i}\neq b_{j}\), i.e., we require that \(b_{i},b_{j}\) are distinct in addition to being related.
Footnote 3: We shall assume that our TOPT diagrams do not include lowest-order tadpole subdiagrams, in which a single line emerges and is absorbed at a single vertex. Such lines, which are removed by normal ordering, cannot be assigned a poset (or time) order.
Having introduced the binary relation, we are ready to define the posets. A poset \(D\) is the ordered pair \(D=(V,\geq)\), where the set \(V\) is the set of vertices of a Feynman graph, taken together with the binary relationship \(\geq\).
It is clear that multiple time orders are compatible with a given poset, and in each time order, we can uniquely identify an underlying poset. Therefore, posets partition the set of all time orders.
Our posets carry the same information as the incidence matrix in the TOPT amplitudes, Eqs. (2.12), (2.14). It is clear that it is possible to read off a poset, given an incidence matrix. Also notice that since the integrand in Eq. (2.12) depends only on the incidence matrix, it too is fixed by the poset. Therefore, posets are in one-to-one correspondence with incidence matrices.
Let us make a few more definitions that relate to posets in order to develop this language more fully. An idea that we will find use for in what follows is that of minimal/maximal elements. An element \(b_{i}\) is said to be a minimal element of a poset \(D\) if \(\forall b_{j}\in D,b_{j}\geq b_{i}\) or \(b_{j}\sim b_{i}\). A sequence of reducing vertices, starting from any vertex, always ends at a minimal element.
An element \(b_{i}\) is said to be a maximal element of a poset \(D\) if \(\forall b_{j}\in D,b_{i}\geq b_{j}\) or
\(b_{i}\sim b_{j}\). A sequence of increasing vertices, starting from any vertex, always ends at a maximal element.
We will also find it useful to distinguish how two elements are related to each other. In particular if \(a<b\), then either there exists a single line between the vertices \(a\) and \(b\) or there exists a sequence of increasing, connected vertices that starts at \(a\) and ends at \(b\). It will be useful to reserve the symbol \(\prec\) for the first scenario when \(a\) and \(b\) are directly connected through a line. It is therefore natural to define a "covering relation". For elements \(x,y\in V\), we say that \(y\) covers \(x\) if \(y>x\) and there is no \(z\in V\) such that \(y>z>x\). We will denote it by \(x\prec y\). In a given graph, if \(x\prec y\), then \(x,y\) are vertices that are directly connected by one or more lines, while \(y\geq x\) includes the possibility that \(x,y\) are vertices that are connected by a sequence of lines.
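To make these definitions concrete, the following self-contained Python sketch (using a hypothetical diamond-shaped graph, not one of the diagrams discussed here) builds the order \(\geq\) as the reflexive-transitive closure of the covering relations read off from the directed lines, and extracts the minima, maxima, and unordered pairs:

```python
from itertools import product

# A hypothetical diamond graph: each directed line (u, v) carries energy
# from u to v, so every contributing time order has t_v >= t_u.
lines = [('i', 'a'), ('i', 'b'), ('a', 'c'), ('b', 'c'), ('c', 'o')]
vertices = sorted({v for e in lines for v in e})

# The order >= is the reflexive-transitive closure of the covering relation.
geq = {(v, v) for v in vertices} | {(v, u) for u, v in lines}
changed = True
while changed:
    changed = False
    for (a, b), (c, d) in product(list(geq), repeat=2):
        if b == c and (a, d) not in geq:
            geq.add((a, d))
            changed = True

def unordered(x, y):    # x ~ y: neither x >= y nor y >= x
    return (x, y) not in geq and (y, x) not in geq

minima = [v for v in vertices
          if all((u, v) in geq or unordered(u, v) for u in vertices)]
maxima = [v for v in vertices
          if all((v, u) in geq or unordered(v, u) for u in vertices)]
print(minima, maxima)        # ['i'] ['o']: a unique minimum and maximum
print(unordered('a', 'b'))   # True: a and b are incomparable
```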
Having defined our posets, we will now go on to show how to use them to eliminate pseudo-physical cuts in some explicit examples of vacuum polarization diagrams. A generalization to all processes at arbitrary loops will be given subsequently in Sec. 5.
### Examples in vacuum polarization diagrams
Let us now look at some low order examples that will show how the time integrals over minimal and maximal vertices can be carried out explicitly within a given poset. For definiteness, our examples will be drawn from the vacuum polarization diagrams of the lepton annihilation processes. They illustrate features of quite general application and open the way to an analogous development for general amplitudes and Green functions in the next section.
For vacuum polarization diagrams, we label vertices by permutations of the list, \(V=(i,b_{1},b_{2},\ldots,b_{n},o)\), where the external momentum flows into the diagram at vertex \(i\), and out at vertex \(o\). We can then divide all posets into four classes.
* Posets with \(o>i\), for which the external energy arrives at vertex \(i\) before it leaves at vertex \(o\).
* Posets with \(i>o\), for which the external energy leaves at vertex \(o\) before it arrives at vertex \(i\).
* Posets with \(o\sim i\), which split into two sub-classes by the imposition of an additional relationship in a time order: \(t_{o}>t_{i}\) and \(t_{i}>t_{o}\). At the level of the poset, there is no definite order among the vertices \(i(o)\), where the external energy enters(leaves). However, imposing the additional relationship \(t_{o}>t_{i}\) restricts us to time orders within the poset where the external energy arrives at vertex \(i\) before it leaves at vertex \(o\).
As we will see, there are unphysical cuts that one would like to eliminate in all four classes of posets. In what follows we will focus on time orders that satisfy \(t_{o}>t_{i}\). These are all time orders in posets with \(o>i\), and in posets with \(o\sim i\), with time orders \(t_{o}>t_{i}\). These time orders carry at least one physical cut but contain both physical
and unphysical cuts in general. There are no denominator singularities in time orders with \(t_{i}>t_{o}\), and the integrand is completely real (and negative semi-definite).
Using the notation introduced in Eq. (2.5), we consider here diagrammatic integrands, including numerator factors, written as sums over time orders \(\tau_{G}\) of vacuum polarization integrands for an arbitrary diagram, \(G\),
\[\pi_{G}[Q,{\cal L}_{G}] = \sum_{\tau_{G}}\;\mathbb{N}_{\tau_{G}}\;\pi_{\tau_{G}}(Q,{\cal L} _{G})\,. \tag{4.1}\]
We observe that although the notation \(\mathbb{N}_{\tau_{G}}\) suggests that the numerator factor depends on the time order, it is the same for all time orders within a poset \(D\). The numerator factor of a graph in covariant perturbation theory is derived from the Feynman rules, and the numerator factor in TOPT is obtained by replacing the energy of a line \(j\) by the on-shell value \(\pm\omega_{j}\), where the sign is determined by the direction of the line alone. Therefore, the numerator factor is the same for all time orders within a given poset. We will henceforth use the notation \(\mathbb{N}_{D}\) to emphasize that the numerator factor depends only on the poset.
Given that posets partition time orders into non-overlapping subsets, we may write \(\pi_{G}\) in Eq. (4.1) as
\[\pi_{G}(Q,{\cal L}_{G}) = \sum_{\tau_{G}}\;\mathbb{N}_{D}\;\pi_{\tau_{G}}(Q,{\cal L}_{G}) \tag{4.2}\] \[= \sum_{D}\;\mathbb{N}_{D}\;\sum_{\tau_{D}}\pi_{\tau_{D}}(Q,{\cal L }_{G})\,,\]
where the second equality follows from the fact that the sum of all time orders \(\tau_{G}\) of graph \(G\) is the sum over posets, \(D\), and within each poset the sum over time orders contained in \(D\),
\[\sum_{\tau_{G}}=\sum_{D}\sum_{\tau_{D}}\;. \tag{4.3}\]
Our interest is in obtaining an expression for \(\pi_{D}(Q,{\cal L}_{G})\) that is free of pseudo-physical cuts. We first notice that if a poset \(D\) has a single minimum \(i\) (where the external momentum flows in), and a single maximum \(o\) (where the external momentum flows out), every time order we induce on the poset exclusively carries physical cuts. To see this, we argue that in an arbitrary time order, and on any cut \(C\) of that time order, we may start at vertex \(v\) that lies to the left of \(C\), and follow a sequence of vertices that are less than \(v\) (in the poset ordering) to reach the unique minimum \(i\). Therefore every vertex on the left of the cut \(C\) is ordered with respect to \(i\). A similar argument shows that every vertex to the right of such a cut is also ordered with respect to the unique maximum \(o\). There may or may not be more than one time order for such a poset, depending on the incidence matrix, but every cut of such a time-ordered diagram will be physical.
If, on the other hand, there is more than one minimum or more than one maximum in a poset, the poset is guaranteed to include time orders that differ by exchanging the relative positions of the minima or maxima among themselves. Our strategy in handling posets without a unique minimum or a unique maximum element is to reduce the problem to one where there exists a unique extremum. This will involve combining time orders that exchange extrema and other unordered pairs of vertices.
As a warm-up consider an example with a unique maximum but two minima, the three loop graph in Fig. 3. Although the figure shows a particular time order of its six vertices, \(i\), \(o\), and \(b_{1}\ldots b_{4}\), it represents only the poset structure. We recall that in TOPT every line carries energy forward in time (to the right in a TOPT diagram). This information provides us with an incidence matrix, and hence a poset. In this case, the relevant binary relations are
\[i<b_{3}<b_{4}<o\,,\] \[b_{1}<b_{2}<b_{4}\,,\] \[b_{1}<b_{3}\,,\] \[b_{1}\sim i\,,\] \[b_{2}\sim i,b_{3}\,. \tag{4.4}\]
Within this poset, there are two minima, \(b_{1}\) and \(i\). Vertex \(b_{1}\) is covered by both \(b_{2}\) and \(b_{3}\). There are five time orders. Two time orders have \(t_{b_{2}}>t_{b_{3}}\), which we label by permutation as
\[t_{b_{1}}<t_{i}\quad(1i324o)\,,\] \[t_{i}<t_{b_{1}}\quad(i1324o)\,. \tag{4.5}\]
The other three time orders, which have \(t_{b_{3}}>t_{b_{2}}\), are given by
\[t_{i}<t_{b_{1}}<t_{b_{2}}<t_{b_{3}}\quad(i1234o)\,,\] \[t_{b_{1}}<t_{i}<t_{b_{2}}<t_{b_{3}}\quad(1i234o)\,,\] \[t_{b_{1}}<t_{b_{2}}<t_{i}<t_{b_{3}}\quad(12i34o)\,. \tag{4.6}\]
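The five orders in Eqs. (4.5) and (4.6) are precisely the linear extensions of the poset defined by Eq. (4.4), which a brute-force enumeration reproduces; a short Python check:

```python
from itertools import permutations

# Covering relations x < y of the poset in Fig. 3, read off from Eq. (4.4).
relations = [('i', '3'), ('3', '4'), ('4', 'o'),
             ('1', '2'), ('2', '4'), ('1', '3')]
vertices = ['i', '1', '2', '3', '4', 'o']

def consistent(order):
    pos = {v: n for n, v in enumerate(order)}
    return all(pos[x] < pos[y] for x, y in relations)

extensions = [''.join(p) for p in permutations(vertices) if consistent(p)]
print(len(extensions))  # 5
print(extensions)       # i1234o, i1324o, 1i234o, 1i324o, 12i34o
```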
In TOPT, after time integration, each of these orderings gives one or more pseudo-physical cuts. For example, the contribution of \((i1324o)\) can be written as
\[\pi_{(i1324o)} = \ e^{iQt_{o}}\ \int_{-\infty}^{t_{o}}dt_{b_{4}}e^{-i(\omega_{ch}- \omega_{d}+i\epsilon)t_{b_{4}}}\int_{-\infty}^{t_{b_{4}}}dt_{b_{2}}e^{-i( \omega_{ef}-\omega_{h}+i\epsilon)t_{b_{2}}}\int_{-\infty}^{t_{b_{2}}}dt_{b_{3} }e^{-i(\omega_{ag}-\omega_{f}+i\epsilon)t_{b_{3}}} \tag{4.7}\] \[\times\ \int_{-\infty}^{t_{b_{3}}}dt_{b_{1}}e^{-i(-\omega_{efg}+i \epsilon)t_{b_{1}}}\int_{-\infty}^{t_{b_{1}}}dt_{i}e^{-i(Q-\omega_{ab}+i \epsilon)t_{i}}\]
\[= \frac{i}{Q-\omega_{bd}+i\epsilon}\,\frac{i}{Q-\omega_{bch}+i \epsilon}\,\frac{i}{Q-\omega_{bcef}+i\epsilon}\,\frac{i}{Q-\omega_{abefg}+i \epsilon}\,\frac{i}{Q-\omega_{ab}+i\epsilon}\,,\]
where for compactness we use the notation
\[\omega_{ab\dots c}\ =\ \omega_{a}+\omega_{b}+\dots+\omega_{c}\,. \tag{4.8}\]
We have also set the outgoing energy \(Q^{\prime}\) to equal \(Q\), to suppress the phase that provides the energy conserving delta function after the integral over \(t_{o}\) in this case. In this representative TOPT term, the denominator \(Q-\omega_{abefg}\) corresponds to a pseudo-physical final state because the amplitude with this final state consists of two disconnected parts, one that includes vertex \(i\) and provides particles \(a\) and \(b\) in the final state, and one in which particles \(e\), \(f\), and \(g\) emerge from the vacuum into the final state. We shall see, however, that such singularities are absent in an evaluation of integrals based on the poset. In other words, pseudo-physical denominators all cancel in the sum over the time orderings that make up the poset. This will be the case separately for the combinations of time orders in Eq. (4.5) and (4.6).
The case of the two orders in Eq. (4.5) is particularly simple. Adding the two orders together, we find that times \(t_{b_{1}}\) and \(t_{i}\) integrate independently to \(t_{b_{3}}\), and the pseudo-physical cut disappears in TOPT, even though each of these orderings gives one or more pseudo-physical cuts,
\[\pi_{(i1324o)}+\pi_{(1i324o)} = e^{iQt_{o}}\ \int_{-\infty}^{t_{o}}dt_{b_{4}}e^{-i(\omega_{ch}-\omega_{d}+i\epsilon)t_{b_{4}}}\int_{-\infty}^{t_{b_{4}}}dt_{b_{2}}e^{-i(\omega_{ef}-\omega_{h}+i\epsilon)t_{b_{2}}}\] \[\times\ \int_{-\infty}^{t_{b_{2}}}dt_{b_{3}}e^{-i(\omega_{ag}-\omega_{f}+i\epsilon)t_{b_{3}}}\] \[\times\ \int_{-\infty}^{t_{b_{3}}}dt_{b_{1}}e^{-i(-\omega_{efg}+i\epsilon)t_{b_{1}}}\int_{-\infty}^{t_{b_{3}}}dt_{i}e^{-i(Q-\omega_{ab}+i\epsilon)t_{i}}\] \[=\ \left(\frac{i}{Q-\omega_{bd}+i\epsilon}\,\frac{i}{Q-\omega_{bch}+i\epsilon}\,\frac{i}{Q-\omega_{bcef}+i\epsilon}\,\frac{i}{Q-\omega_{ab}+i\epsilon}\right)\,\frac{i}{-\omega_{efg}}\,, \tag{4.9}\]
where the rightmost fraction, which is negative semi-definite for massless lines, is the result of the integral \(\int_{-\infty}^{t_{b_{3}}}dt_{b_{1}}\,e^{-i(-\omega_{efg}+i\epsilon)t_{b_{1}}}\). The remainder of the integral gives the factor in parentheses, consisting of four denominators, each of which provides a unitarity cut, corresponding to a physical final state.
A representation of the diagram after the \(t_{b_{1}}\) integral is given in Fig. 4a, which includes a modified vertex, which absorbs the fraction \(\frac{i}{-\omega_{efg}}\). This vertex, the denominator combined with coupling constants, is real and is effectively local in time (in this case at \(t_{b_{3}}\)). At this stage, all remaining vertices are ordered between \(i\) and \(o\) in the poset. We have thus achieved what we set out to do, reduce the diagram in Fig. 3 to
one with a unique maximum and unique minimum. The cuts of the modified diagram in Fig. 4a do not disconnect the graph into more than two connected subdiagrams, and are therefore unitarity cuts, that is, physical cuts of the forward scattering graph.
We can now turn to the other possibility, \(t_{b_{3}}>t_{b_{2}}\), the component of this poset given by the orders in Eq. (4.6). Rather than reproducing the four-dimensional time integral as in the previous case, we shall simply show the integrals over minimum vertices. Examining the ranges of integrations possible in Eq. (4.6), we find that we can carry out the \(t_{1}\) integral from \(-\infty\) to \(t_{2}\) independently of the value of \(t_{i}\). This gives the \(t_{2}\)-dependent factor
\[\int\limits_{-\infty}^{t_{2}}dt_{1}\;e^{-i(-\omega_{efg}+i\epsilon)t_{1}}= \frac{i}{-\omega_{efg}}\;e^{-i(-\omega_{efg}+i\epsilon)t_{2}}. \tag{4.10}\]
The composite graph obtained after integrating out vertex 1 is shown in Fig. 4b. In this case, the vertex 2 is still a minimum of the poset, and we would like to carry out the \(t_{2}\) integral as well. We can do this because in the combination of time orders in Eq. (4.6), the upper limit of the \(t_{2}\) integral is \(t_{3}\), independent of \(t_{i}\). We thus find,
\[\int\limits_{-\infty}^{t_{3}}dt_{2}\;e^{-i(-\omega_{efg}+i\epsilon)t_{2}}e^{- i(\omega_{ef}-\omega_{h}+i\epsilon)t_{2}}=\frac{i}{-\omega_{gh}}\;e^{-i(- \omega_{gh}+i\epsilon)t_{3}}. \tag{4.11}\]
A graphical representation of the diagram that results after this step is shown in Fig. 4c.
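Elementary time integrals of the type in Eqs. (4.10) and (4.11) can be validated numerically; the sketch below (with arbitrary values standing in for \(\omega_{efg}\), \(\epsilon\), and \(t_{2}\)) compares a discretized version of the damped integral against the closed form:

```python
import numpy as np

# Arbitrary stand-ins for omega_{efg}, epsilon, and the upper limit t_2.
w, eps, t2 = 1.3, 0.05, 0.7

# Discretize integral_{-inf}^{t2} dt exp(-i(-w + i*eps) t); the integrand
# equals exp(i w t) exp(eps t), exponentially damped toward t -> -infinity.
t = np.linspace(-400.0, t2, 2_000_001)
f = np.exp(-1j * (-w + 1j * eps) * t)
dt = t[1] - t[0]
numeric = np.sum(0.5 * (f[1:] + f[:-1])) * dt        # trapezoid rule

# Exact result of the regulated integral; Eq. (4.10) drops the O(eps) shift.
closed = 1j / (-w + 1j * eps) * np.exp(-1j * (-w + 1j * eps) * t2)
print(abs(numeric - closed))   # small (~1e-7): the closed form holds
```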
Figure 3: A three loop example of a graph with two minima. Vertices 1 and \(i\) are both minima of this graph, and our procedure eliminates the minimum 1, through the process of successive integration. This graph is to be understood as a poset graph rather than a time-ordered graph.
The complete result for the time orders in Eq. (4.6), analogous to Eq. (4.9) for the time orders of Eq. (4.5), is now easily found to be
\[\pi_{(i1234o)}+\pi_{(1i234o)}+\pi_{(12i34o)} = \left(\frac{i}{Q-\omega_{bd}+i\epsilon}\,\frac{i}{Q-\omega_{cbh}+i\epsilon}\,\frac{i}{Q-\omega_{ab}+i\epsilon}\,\right)\frac{i}{-\omega_{gh}}\,\frac{i}{-\omega_{efg}}\,. \tag{4.12}\]
Together, the two negative semi-definite denominators combine with couplings to form the composite vertex 3 in Fig. 4c. Again, the remaining denominators are either physical (in parentheses) or negative semi-definite. The full expression for this poset (\(D\)), \(\pi_{D}\), is given by the sum of this result with Eq. (4.9).
Figure 4: (a) The poset ordered graph in Fig. 3 after integrating out the minimum vertex 1, with the choice \(2\geq 3\). Here the black dot represents the composite, modified vertex 3. (b) Integrating the time of vertex 1, making the choice \(3\geq 2\). The composite vertex 2 is represented with a black dot. Vertex 2 is still a minimum of the new poset. (c) The time of the composite vertex 2, which was a new minimum in the choice \(3\geq 2\), has been integrated. The new composite vertex 3 is represented with a black dot. Here, all vertices lie between \(i\) and \(o\).
For the next example, consider the two loop graph in Fig. 5. In this example, at the level of the poset, there exists no unique minimum or unique maximum. Our process of eliminating the additional extrema starts with the vertex we have labeled 1. The minimum vertex 1 is connected to the mutually unordered vertices 2 and \(o\). To integrate out the minimum 1, we will order these vertices as before. The two choices in ordering are \(t_{o}>t_{2}\) and \(t_{2}>t_{o}\), and as before, we must sum over both choices. Making the choice \(t_{o}>t_{2}\) and integrating \(t_{1}\), we obtain a unique minimum \(i\) and a unique maximum \(o\), giving
\[\int\limits_{-\infty}^{t_{u}}dt_{1}\;e^{-i(-\omega_{dec}+i\epsilon)t_{1}}= \frac{i}{-\omega_{dec}}\;e^{-i(-\omega_{dec}+i\epsilon)t_{u}}, \tag{4.13}\]
where the upper limit \(t_{u}\) is \(t_{2}\) if \(t_{o}>t_{2}\) and \(t_{o}\) if \(t_{2}>t_{o}\). Having made the choice \(t_{o}>t_{2}\), the rest of the graph is fully ordered and we may use TOPT for the remaining denominators,
\[\pi_{D}^{(1)}=\left(\frac{i}{Q-\omega_{ac}+i\epsilon}\;\frac{i}{Q-\omega_{ab}+ i\epsilon}\right)\,\frac{i}{-\omega_{ced}}. \tag{4.14}\]
The ordered graph that yields these denominators is shown in Fig. 6a.
The choice of ordering \(t_{2}>t_{o}\), leaves us with a single maximum vertex 2, which is not \(o\). We would therefore like to integrate the maximum time, \(t_{2}\), from \(t_{o}\) to \(\infty\),
\[\int\limits_{t_{o}}^{\infty}dt_{2}\;e^{-i(\omega_{bed}-i\epsilon)t_{2}}=\frac {i}{-\omega_{bed}}\;e^{-i(\omega_{bed}-i\epsilon)t_{o}}. \tag{4.15}\]
This procedure is represented diagrammatically in Figs. 6b, 6c. What remains is the integral over \(t_{i}\) in the range \(-\infty\) to \(t_{o}\), which is easily carried out. The contribution of the order \(t_{2}>t_{o}\) to the poset denominator is
\[\pi_{D}^{(2)}=\left(\frac{i}{Q-\omega_{ab}+i\epsilon}\right)\,\frac{i}{- \omega_{bed}}\,\frac{i}{-\omega_{ced}}\,. \tag{4.16}\]
As before, the full expression for the integrand from this poset is the sum of the two choices in orderings included in the poset, \(\pi_{D}=\pi_{D}^{(1)}+\pi_{D}^{(2)}\).
Figure 5: A two loop example of a graph with two minima and two maxima. Vertices 1 and \(i\) are both minima of this graph, and vertices 2 and \(o\) are both maxima.
As a final example, consider the poset graph in Fig. 7, which has both a minimum (vertex 1) and a maximum (vertex 2). This example captures our procedure at four loops. In this poset, time \(t_{1}\) is bounded from above by \(t_{2}\) if \(t_{2}<t_{3}\), and by \(t_{3}\) if \(t_{3}<t_{2}\). In either case,
\[\int\limits_{-\infty}^{t_{u}}dt_{1}\ e^{-i(-\omega_{ijk}+i\epsilon)t_{1}}= \frac{i}{-\omega_{ijk}}\ e^{-i(-\omega_{ijk}+i\epsilon)t_{u}}, \tag{4.17}\]
where the upper limit \(t_{u}\) is \(t_{2}\) if \(t_{3}>t_{2}\) and \(t_{3}\) if \(t_{2}>t_{3}\). Making the first choice, \(t_{3}>t_{2}\), confines every remaining vertex to lie between \(i\) and \(o\). A graphical representation of this reduced graph is in Fig. 8a. It is now fully ordered and, as above, we can reconstruct
Figure 6: (a) The poset ordered graph in Fig. 5 after integrating the time of the minimum vertex 1, with the choice \(o\geq 2\). Here the black dot represents the composite, modified vertex 2. (b) Integrating \(t_{1}\), making the choice \(2\geq o\). The composite vertex \(o\) is represented with a black dot. Vertex 2 is the new maximum of the poset. However, since the maximum is not \(o\), we continue by integrating the time for vertex 2. (c) The composite vertex \(o\), after \(t_{2}\) has been integrated. The new composite vertex \(o\) is represented with a black dot.
Figure 7: A four loop example of a graph with two minima and two maxima. Vertices 1 and \(i\) are both minima of this graph, and vertices 2 and \(o\) are both maxima of this graph.
the remaining integrals from the rules of TOPT, to find
\[\pi_{D}^{(1)}=\left(\frac{i}{Q-\omega_{af}}\:\frac{i}{Q-\omega_{kae}}\:\frac{i}{Q -\omega_{adjk}}\:\frac{i}{Q-\omega_{ghad}}\:\frac{i}{Q-\omega_{gcd}}\:\frac{i}{Q -\omega_{ab}}\right)\frac{i}{-\omega_{ijk}}\,, \tag{4.18}\]
where we have suppressed the \(+i\epsilon\) terms present in each physical denominator.
Making the other choice, \(t_{2}>t_{3}\), and integrating the minimum time, \(t_{1}\) from \(-\infty\) to \(t_{3}\) leaves 2 as a maximum element. The reduced graph we obtain at this stage is shown in Fig. 8b. We can now integrate over \(t_{2}\),
\[\int\limits_{t_{3}}^{\infty}dt_{2}\:e^{-i(\omega_{igh}-i\epsilon)t_{2}}=\frac{ i}{-\omega_{igh}}\:e^{-i(\omega_{igh}-i\epsilon)t_{3}}. \tag{4.19}\]
The reduced graph we obtain is shown in Fig. 8c. This is a fully ordered diagram, for which every vertex lies between \(i\) and \(o\). We can proceed using TOPT rules for the new diagram, giving the final result
\[\pi_{D}^{(2)}=\left(\frac{i}{Q-\omega_{af}}\:\frac{i}{Q-\omega_{kae}}\:\frac{ i}{Q-\omega_{ghad}}\:\frac{i}{Q-\omega_{gcd}}\:\frac{i}{Q-\omega_{ab}}\right)\: \frac{i}{-\omega_{igh}}\:\frac{i}{-\omega_{ijk}}\,, \tag{4.20}\]
where again we suppress \(+i\epsilon\) terms in all the denominators. The full poset integrand is once again \(\pi_{D}=\pi_{D}^{(1)}+\pi_{D}^{(2)}\).
Let us summarize the process of integrating out extrema that we have seen so far in these examples. We first identify a minimum vertex \(v\) (\(v\neq i\)), and order the vertices that cover \(v\). We then integrate out the minimum, at the cost of adding the energy carried by \(v\) into the next vertex, say \(v+1\). If there is more than one minimum, we sequentially repeat these steps for all minima. We continue this process until each vertex that remains is greater than \(i\). We use an identical procedure with maxima, integrating them to generate negative semi-definite denominators. After the elimination of both minima and maxima, the remaining vertices are all ordered relative to \(i\) and \(o\), and therefore carry only physical cuts of the graph.
We have seen in these examples that the resulting ordered diagrams include composite vertices with multiple lines emanating from or flowing into them. At each stage, the graph with composite vertices yields a new time-ordered graph, with fewer pseudo-physical cuts but the same physical cuts as in the original graph. The new graph thus obtained has fewer vertices and a modified incidence matrix. The composite vertex denotes a factorized expression for long time processes that were initiated in the vacuum and do not affect the cuts of the short distance process. The composite vertices may be interpreted intuitively as effective vertices in the short distance function, where all the long distance behavior has been absorbed into factorized, negative semi-definite, vacuum denominators. We next turn to a general implementation of these methods.
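The extremum-elimination bookkeeping just summarized can be phrased as a purely combinatorial recursion on the covering digraph. The sketch below is our schematic reading of the procedure, not the authors' code: it tracks only which vertex is merged into which covering vertex (the accompanying vacuum denominators and numerators are deliberately omitted), branching over mutually unordered covers exactly as in the sum over orderings above. Applied to the poset of Fig. 3, it reproduces the two groupings of Eqs. (4.5) and (4.6):

```python
def reaches(edges, a, b):
    """True if there is a directed path a -> ... -> b (edges point later in time)."""
    frontier, seen = [a], set()
    while frontier:
        x = frontier.pop()
        if x == b:
            return True
        if x in seen:
            continue
        seen.add(x)
        frontier += [w for u, w in edges if u == x]
    return False

def reduce_poset(edges, verts, history=()):
    """Integrate out extremal vertices other than i and o, branching over the
    choice of 'earliest' cover of a minimum (or 'latest' covered vertex of a
    maximum).  Only the merge history of each resulting term is recorded."""
    mins = [v for v in verts if v != 'i' and not any(b == v for _, b in edges)]
    maxs = [v for v in verts if v != 'o' and not any(a == v for a, _ in edges)]
    if not mins and not maxs:
        yield history                  # unique min i and max o: physical cuts only
        return
    if mins:
        v = mins[0]
        covers = sorted({b for a, b in edges if a == v})
        cands = [c for c in covers     # only mutually unordered covers can be earliest
                 if not any(reaches(edges, c2, c) for c2 in covers if c2 != c)]
        for c in cands:
            merged = {(c if a == v else a, b) for a, b in edges if (a, b) != (v, c)}
            yield from reduce_poset(merged, [u for u in verts if u != v],
                                    history + (f'{v}->{c}',))
    else:
        v = maxs[0]
        covered = sorted({a for a, b in edges if b == v})
        cands = [c for c in covered    # only mutually unordered vertices can be latest
                 if not any(reaches(edges, c, c2) for c2 in covered if c2 != c)]
        for c in cands:
            merged = {(a, c if b == v else b) for a, b in edges if (a, b) != (c, v)}
            yield from reduce_poset(merged, [u for u in verts if u != v],
                                    history + (f'{v}->{c}',))

# Covering relations of the poset of Fig. 3, Eq. (4.4):
edges = {('i', '3'), ('3', '4'), ('4', 'o'), ('1', '2'), ('2', '4'), ('1', '3')}
for hist in reduce_poset(edges, ['i', '1', '2', '3', '4', 'o']):
    print(hist)   # ('1->2', '2->3') and ('1->3',): Eqs. (4.6) and (4.5)
```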
## 5 PTOPT: TOPT without unphysical singularities
In this section, we describe a generalization to arbitrary order of the examples of the previous section, based on graphs' poset structures. We will term this approach "partially time-ordered perturbation theory" (PTOPT). We will construct an algorithm that enables us to re-express the sum over time-ordered diagrams of any graph as a sum with fewer terms, all of whose denominators are either negative semi-definite or physical. In this construction, all pseudo-physical cuts will be eliminated. We will present our discussion for an arbitrary amplitude with any numbers of incoming and outgoing momenta. Our result, given in Eq. (5.37) below, is essentially equivalent to that given in Ref. [18]. In the following section, we show how PTOPT can be applied to cross sections written as sums over cuts, as in the weighted cross sections of Sec. 2, to provide expressions that involve only physical singularities, and which separate universal and process-dependent dynamics.
Figure 8: (a) The poset ordered graph in Fig. 7 after integrating the time of the minimum vertex 1, with the choice \(3\geq 2\). Here the black dot represents the composite, modified vertex 2. (b) Integrating vertex 1, making the choice \(2\geq 3\). The composite vertex 3 is represented with a black dot. Vertex 2 is a new maximum of the poset. However, since the maximum is not \(o\), we continue integrating out vertex 2. (c) The vertex 2, which was the new maximum in the choice \(2\geq 3\), after \(t_{2}\) has been integrated. The new composite vertex 3 is represented with a black dot.

In this construction, it will be useful to turn our attention to the integral representation, Eq. (2.12), for the time-ordered denominators of functions like \(\pi_{\tau_{G}}\) in Eq. (4.1), written as sums over posets. Compared to Eq. (2.12), however, we associate with _every_ vertex \(\alpha\) an external energy \(E_{\alpha}\), defined to flow into that vertex. In a multiloop diagram with a fixed number of external lines, most \(E_{\alpha}\) are zero. For a physical momentum flowing out of vertex \(\alpha\), \(E_{\alpha}\) is negative. The resulting functions are then not limited to vacuum polarizations and may represent any scattering configuration, with arbitrary numbers of incoming and outgoing lines. To emphasize the generality of these considerations, we denote our functions as \(F_{G}(\{E_{\alpha}\},{\cal L}_{G})\) for graph \(G\), and \(F_{D}(\{E_{\alpha}\},{\cal L}_{G})\) for poset \(D\). When we return to the analysis of vacuum polarizations, we revert to the notation \(\pi_{G}\) or \(\pi_{D}\).
In these terms, our expression for the integral associated with an arbitrary graph \(G\) with \(n\) vertices at fixed spatial loop momenta is
\[2\pi\delta\left(\sum_{\alpha}E_{\alpha}\right)\,F_{G}\left(\{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\tau}{\mathbb{N}}_{\tau}\,\prod_{\alpha=1}^{n}\int_{-\infty}^{t_{\alpha+1}}dt_{\alpha}e^{-i(E_{\alpha}+\eta_{j}^{(\tau_{\alpha})}(\omega_{j}-i\epsilon))t_{\alpha}}\] \[= 2\pi\delta\left(\sum_{\alpha}E_{\alpha}\right)\,\sum_{\tau}\,{\mathbb{N}}_{\tau}\,F_{\tau}\left(\{E_{\alpha}\},{\cal L}_{G}\right)\] \[= 2\pi\delta\left(\sum_{\alpha}E_{\alpha}\right)\,\sum_{D}\sum_{\tau_{D}}{\mathbb{N}}_{D}\,F_{\tau_{D}}\left(\{E_{\alpha}\},{\cal L}_{G}\right)\] \[= 2\pi\delta\left(\sum_{\alpha}E_{\alpha}\right)\,\sum_{D}{\mathbb{N}}_{D}\,F_{D}\left(\{E_{\alpha}\},{\cal L}_{G}\right)\,, \tag{5.1}\]
where we define, \(t_{n+1}=\infty\) in the first equality. The final, \(t_{n}\), integral gives the delta function that enforces the energy conservation in \(G\). Here, \(\eta_{j}^{(\tau_{\alpha})}\) is the incidence matrix of the vertex \(b_{\tau_{\alpha}}\) in the time order labeled by \(\tau\). The second equality defines \(F_{\tau}\) as the coefficient of the energy-conserving delta function from the full integral associated with time order \(\tau\). In the third, we reorganize the sum over time orders into a sum over posets, \(D\), as in Eq. (4.2), recalling that every time order is associated with a single poset. Finally, the fourth equality defines \(F_{D}\) as the complete contribution to \(F_{G}\) from all the time orders within poset \(D\). The functions \(F_{D}\) will be the subject of the following discussion.
### Time integrals of extremal vertices; the covering set
We consider the time integrals allowed within a given partial order for an arbitrary graph. Again, in this section, we will not restrict to forward scattering graphs that one encounters in leptonic annihilation. Rather, we treat graphs with a generic flow of external momenta. We will label this general partially-ordered diagram, or poset diagram, as \(D\), defined on a set of vertices \(V\) with ordering \(\geq\). In the following, we will use \(D\) to refer both to the poset and to the ordered diagram. Note that we will only encounter vertices that are local in time, whose time integrals we will carry out. After our first time integral, however, the composite vertices we encounter generally will not be local in space.
The object of interest is the function \(F_{D}\) for a fixed poset \(D=(V,\geq)\), which as we have noted before, is fixed by an incidence matrix \(\eta_{j}^{(\alpha)}\). (For compactness of notation, we do not label \(\eta_{j}^{(\alpha)}\) by \(D\).) We represent schematically the multidimensional region of
vertex times restricted to the orders associated with a specific poset \(D\) as
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left(\{E_{\alpha}\}, \mathcal{L}_{G}\right)=\prod_{\alpha=1}^{|V|}\int_{X_{D}}dt_{\alpha}e^{-i(E_{ \alpha}+\eta_{j}^{(\alpha)}(\omega_{j}-i\epsilon))t_{\alpha}}\,, \tag{5.2}\]
where we have _formally_ carried out the sum over time orders consistent with poset \(D\), corresponding to the integral of times over a region labeled by \(X_{D}\). We let \(|V|\) represent the number of vertices. The region \(X_{D}\) is defined by
\[X_{D}=\{(t_{1},t_{2}\ldots t_{|V|})|b_{k}\geq b_{j}\implies t_{k}\geq t_{j}\}\,. \tag{5.3}\]
The union of all possible regions \(X_{D}\) is the full set of time integrals in Eq. (5.1).
For a given poset \(D\), we can identify a unique set \(M^{1}_{D}\) of _embedded extrema_ (minima and maxima) of \(D\). We define an embedded vertex \(v\in M^{1}_{D}\) as one whose removal leaves behind a connected diagram consisting of vertices \(V\setminus v\). Every connected diagram has a non-empty set of embedded extremal vertices (see Appendix A). The time integrals of minima in \(X_{D}\) extend independently to negative infinity, and those of maxima to positive infinity. We can always begin the integration process for \(X_{D}\) by picking a subset of embedded extremal vertices that form an anti-chain (mutually incomparable elements). We will do so for a subset of the extremal embedded vertices, chosen to have the largest number of such vertices that form an anti-chain. Such a subset may include both maxima and minima, because the minima may not be ordered with respect to all maxima, and vice-versa.
Let us call this set the first _extremal antichain_, \(A^{1}\), defined to satisfy,
\[A^{1} = \{x\in M^{1}|x,y\in A^{1}\implies x\sim y\} \tag{5.4}\] \[= \{b_{1,1},\ldots,b_{1,r_{1}}\,;\,d_{1,1},\ldots,d_{1,s_{1}}\}\; \equiv\;\{e_{1,i}\}\;,\]
where we denote by \(r_{1}\) the number of minimal elements, \(b_{1,i}\), in \(A^{1}\) and by \(s_{1}\) the number of maximal elements, \(d_{1,i}\), and where \(e_{1,j}\) labels these vertices collectively, with \(j=1\ldots r_{1}+s_{1}\). \(A^{1}\) represents a set of vertices whose times can be integrated simultaneously and independently of each other.4 Clearly, each of these integrals must extend from a fixed upper (lower) limit to infinity (negative infinity). We now turn to the determination of these limits.
Footnote 4: This choice for \(A^{1}\) is not unique, and we need not adhere to \(A^{1}\) being the largest possible antichain in any one step.
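The selection of embedded extrema is purely graph-theoretic, and it may help to see it spelled out. The following is a minimal sketch in Python, not code from this work: it represents a diagram as directed lines between integer-labeled vertices (all names and the four-vertex example are illustrative), finds the extrema (vertices whose lines all point one way), and keeps those whose removal leaves the diagram connected; the antichain \(A^{1}\) is then chosen among the vertices it returns.

```python
def embedded_extrema(vertices, edges):
    """Embedded extremal vertices of a connected diagram.

    `edges` is a list of directed pairs (u, v), read as "u precedes v".
    A vertex is extremal if all of its lines point the same way (a
    minimum only emits, a maximum only absorbs); it is *embedded* if
    deleting it leaves the remaining vertices connected.  Appendix A
    shows the returned list is never empty for a connected diagram.
    """
    out_deg = {v: 0 for v in vertices}
    in_deg = {v: 0 for v in vertices}
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1

    def connected(vs):
        vs = set(vs)
        if not vs:
            return True
        adj = {v: set() for v in vs}
        for u, w in edges:
            if u in vs and w in vs:
                adj[u].add(w)
                adj[w].add(u)
        seen, stack = set(), [next(iter(vs))]
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(adj[x] - seen)
        return seen == vs

    extrema = [v for v in vertices if in_deg[v] == 0 or out_deg[v] == 0]
    return [v for v in extrema
            if connected([w for w in vertices if w != v])]

# A hypothetical diamond-shaped poset: 1 < {2, 3} < 4.
print(embedded_extrema([1, 2, 3, 4], [(1, 2), (1, 3), (2, 4), (3, 4)]))
# -> [1, 4]: both the minimum and the maximum are embedded here.
```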
Having identified the set \(A^{1}\), we identify a companion set, \(C^{1}\), of vertices that cover the minimal vertices in \(A^{1}\) or are covered by the maximal vertices in \(A^{1}\), that is, vertices that are connected to one or more elements of \(A^{1}\) by single line(s) with no intervening vertices,
\[C^{1} = \{x\in V|\exists v\in A^{1}:(v\prec x)\,\mbox{or}\,(x\prec v)\} = \bigcup_{i=1}^{r_{1}}C_{b_{1,i}}\bigcup_{j=1}^{s_{1}}C_{d_{1,j}}\,, \tag{5.5}\]
where we have defined the covering set for each element in \(A^{1}\) by
\[C_{b_{1,i}} = \{x\in V|b_{1,i}\prec x\},\] \[C_{d_{1,i}} = \{x\in V|x\prec d_{1,i}\}. \tag{5.6}\]
By construction, vertices in \(C^{1}\) are connected to extrema by lines that either emerge directly from minima or flow directly into maxima, as denoted by the covering relation \(\prec\), defined in Sec. 4.1. If there are several such vertices for any given extremal vertex, they form an anti-chain, that is, they are not mutually ordered within the poset \(D\). We note as well that a single element of \(C^{1}\) can cover more than a single extremum. We will refer to the elements of \(C^{1}\) as the first _covering set_ for the first extremal antichain, \(A^{1}\). We will use the covering set \(C^{1}\) to organize the time integrals for the extremal vertices in \(A^{1}\). This will enable us to begin a recursive analysis over induced poset diagrams and to do all subsequent time integrals in the same way as the first.
In the time integral over \(X_{D}\), corresponding to the partially ordered diagram \(D\), Eq. (5.2), we must include all choices within poset \(D\) of the earliest covering vertices for all minimum elements of \(A^{1}\) and of the latest covered vertex for all maximum elements. Region \(X_{D}\), and thus our integral, is given by a sum over these choices, which we will denote as \(\gamma^{1}\subset C^{1}\). It is worth pointing out that it is possible to construct the full list of compatible choices \(\gamma^{1}\) of next-to-extremal covering vertices systematically, by summing over available choices of covering (covered) vertices for each minimum (maximum) in turn. Because next-to-extremal vertices may cover more than one extremum, the number of resulting sets is not a simple product of the number of covering vertices for the extrema. For a concrete algorithm to generate all consistent \(\gamma^{1}\), see Appendix B.
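As a rough illustration of this count, the naive enumeration is just a product over the covering sets, recorded per extremal vertex; this Python sketch (our own, with hypothetical names, and without the consistency conditions of Appendix B) shows how assignments that pick the same covering vertex for two extrema collapse to a smaller set \(\gamma^{1}\).

```python
from itertools import product

def gamma_choices(cover):
    """Enumerate the assignments behind Eq. (5.7): one covering (or
    covered) vertex per extremal vertex.  `cover` maps each extremal
    vertex to its covering set C_e.  The consistency conditions of
    Appendix B are *not* imposed in this sketch."""
    extrema = sorted(cover)
    for picks in product(*(sorted(cover[e]) for e in extrema)):
        yield dict(zip(extrema, picks))

# Hypothetical example: two minima, one shared covering vertex "x".
for g in gamma_choices({"b1": {"x", "y"}, "b2": {"x"}}):
    print(g, "-> set:", set(g.values()))
# {'b1': 'x', 'b2': 'x'} -> set: {'x'}
# {'b1': 'y', 'b2': 'x'} -> set: {'x', 'y'}
# When both minima pick "x", the set gamma^1 has a single element,
# which is why the number of distinct sets is not a simple product.
```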
Given a choice of next-to-extremal vertices, the integration range of each minimum extends from \(-\infty\) to the time of its earliest covering vertex, and that of each maximum from the time of its latest covered vertex to \(\infty\). We shall describe these choices collectively as sets of next-to-extremal covering vertices, and label them among the elements of \(C^{1}\) as \(x_{a}^{(e_{1,i})}\), where index \(e_{1,i}\) identifies the extremal vertex connected to the vertex \(x_{a}^{(e_{1,i})}\), while index \(a\) identifies the choice of covering (covered) vertex in set \(C_{e_{1,i}}\) that covers (is covered by) the minimal (maximal) vertex \(e_{1,i}\).
In terms of the next-to-extremal covering elements, \(\gamma^{1}\) is given by
\[\gamma^{1}\ =\ \left\{\bigcup_{i=1}^{r_{1}+s_{1}}\ x_{a}^{(e_{1,i})}\right\}\,. \tag{5.7}\]
Whenever two extrema, say \(e_{1,i}\) and \(e_{1,i^{\prime}}\), have the same next-to-extremal covering vertex, we have \(x_{a}^{(e_{1,i})}=x_{a^{\prime}}^{(e_{1,i^{\prime}})}\). The number of elements in \(\gamma^{1}\) is therefore less than or equal to the number of elements in \(C^{1}\). In this notation, once we choose \(\gamma^{1}\), the
integrals over the times of all extremal vertices in set \(A^{1}\), minima and maxima, can be done explicitly and independently.
To relate the integration region for each \(\gamma^{1}\) to the original integration region \(X_{D}\) for poset \(D\) in Eq. (5.3), we proceed as follows. As noted above, the next-to-extremal vertices for any \(e_{1,i}\) form an antichain (mutually incomparable vertices) and therefore need additional ordering at the poset level to specify the ranges of the \(e_{1,i}\) time integrals. The additional ordering we impose is fixed by our choice of \(x_{a}^{(e_{1,i})}\). For each choice of \(\gamma^{1}\), we construct a partial order on the antichains in \(C^{1}\), by defining a new binary relation, denoted \(\geq_{\gamma^{1}}\), over the set \(V\) by
\[\mbox{If }a\geq b,\mbox{ then }a\geq_{\gamma^{1}}b\,,\] \[\forall y\in C_{b_{1,i}}:y\geq_{\gamma^{1}}x_{a}^{(b_{1,i})}>_{ \gamma^{1}}b_{1,i}\,,\] \[\forall y\in C_{d_{1,i}}:d_{1,i}>_{\gamma^{1}}x_{a}^{(d_{1,i})} \geq_{\gamma^{1}}y\,. \tag{5.8}\]
Here, the subscript on \(\geq_{\gamma^{1}}\) reflects the ordering that remains after we identify each extremal vertex \(e_{1,i}\) with its corresponding vertex \(x_{a}^{(e_{1,i})}\in C_{e_{1,i}}\) in the covering set \(\gamma^{1}\).
The relations, Eq. (5.8), that define the ordering \(\geq_{\gamma^{1}}\) identify new posets, whose mutually-disjoint integration regions allow us to integrate the times \(t_{e_{1,i}}\) explicitly. Again, although we formally have constructed \(\geq_{\gamma^{1}}\) as a "stronger" binary relation than the original ordering \(\geq\) of the poset, it is the natural ordering inherited from \(\geq\) when every extremum \(e_{1,i}\) is identified with the specific covering element \(x_{a}^{(e_{1,i})}\), that is, contracting the vertex \(e_{1,i}\) onto the corresponding \(x_{a}^{(e_{1,i})}\) in \(\gamma^{1}\). We therefore think of a new, lower-order graph with all \(e_{1,i}\) removed from the set of vertices. Every line that previously emanated from a minimum \(b_{1,i}\), and was not absorbed at vertex \(x_{a}^{(b_{1,i})}\), now emanates from \(x_{a}^{(b_{1,i})}\), and therefore \(x_{a}^{(b_{1,i})}\) is "less than" every other vertex of \(C^{1}\) that was connected to \(b_{1,i}\), since after the contraction of \(b_{1,i}\) onto \(x_{a}^{(b_{1,i})}\) there is at least one line starting from \(x_{a}^{(b_{1,i})}\) and ending on any such vertex. Maxima are contracted onto their nearest covered vertices in an exactly analogous fashion.
The combination of the new, contracted vertex and the induced order defines a new poset,
\[D\left[\gamma^{1}\right]\ =\ \left\{V\left[\gamma^{1}\right]\,,\,\geq_{ \gamma^{1}}\right\}\,,\qquad V\left[\gamma^{1}\right]\ =\ V\setminus\cup_{i}e_{1,i}\,, \tag{5.9}\]
with \(\geq_{\gamma^{1}}\) defined by Eq. (5.8). The integration region for poset \(D[\gamma^{1}]\), analogous to the original region, Eq. (5.3), is given by
\[X_{D[\gamma^{1}]}=\{(t_{1},t_{2}\ldots\hat{t}_{e_{1,1}}\ldots\hat{t}_{e_{1,r_ {1}+s_{1}}}\ldots t_{|V|})|b_{k}\geq_{\gamma^{1}}b_{j}\implies t_{k}\geq t_{ j}\}\,, \tag{5.10}\]
where \(\hat{t}_{e_{1,i}}\) means that \(t_{e_{1,i}}\) is excluded from the list. Again, the notation \(X_{D[\gamma^{1}]}\) identifies the subregion of \(X_{D}\) where the vertices in \(\gamma^{1}\) are the full set of next-to-extremal covering vertices.
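The contraction that produces \(D[\gamma^{1}]\) is mechanical enough to state as code. The sketch below is ours (the representation and names are assumptions, not the paper's): lines joining an extremum to its chosen covering vertex cancel in the combined incidence matrix and are dropped, all other lines of the extremum are rerouted to the composite vertex, and the extremum's external energy is added there, as in Eq. (5.18).

```python
def contract(edges, energies, gamma):
    """Contract each extremal vertex e onto its chosen next-to-extremal
    vertex gamma[e], cf. Eqs. (5.8), (5.9) and (5.18).

    edges    : list of (tail, head, line-label) triples
    energies : dict vertex -> list of external-energy symbols
    gamma    : dict extremal vertex -> chosen covering vertex
    """
    new_edges = []
    for t, h, w in edges:
        t2, h2 = gamma.get(t, t), gamma.get(h, h)
        if t2 != h2:          # a line between e and gamma[e] cancels
            new_edges.append((t2, h2, w))
    new_E = {v: list(s) for v, s in energies.items() if v not in gamma}
    for e, x in gamma.items():
        new_E[x] = new_E[x] + energies[e]
    return new_edges, new_E

# Contract the minimum "b" of a toy diagram onto its cover "x":
edges = [("b", "x", "w1"), ("b", "y", "w2"), ("x", "y", "w3")]
print(contract(edges, {"b": ["E_b"], "x": [], "y": []}, {"b": "x"}))
# -> ([('x', 'y', 'w2'), ('x', 'y', 'w3')], {'x': ['E_b'], 'y': []})
```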
It is clear that any time order in region \(X_{D}\) has a unique set \(\gamma^{1}\), and that distinct choices of \(\gamma^{1}\) correspond to different time orders. Therefore, when we sum over all choices of \(\gamma^{1}\), we exhaust all time orders within \(X_{D}\). Explicitly, we may represent the full poset integration region \(X_{D}\) as
\[X_{D} = \bigcup_{\gamma^{1}}X_{D[\gamma^{1}]}\times\prod_{i=1}^{r_{1}}\Big\{t_{b_{1,i}}\in\big(-\infty,t_{x_{a}^{(b_{1,i})}}\big)\Big\}\times\prod_{i=1}^{s_{1}}\Big\{t_{d_{1,i}}\in\big(t_{x_{a}^{(d_{1,i})}},\infty\big)\Big\}\,, \tag{5.11}\]
where again, \(\gamma^{1}\) is specified, as in Eq. (5.7), as the choice of next-to-extremal elements \(x_{a}^{(e_{1,i})}\).
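The claim that the regions \(X_{D[\gamma^{1}]}\) partition the time orders of \(X_{D}\) is easy to test by brute force on a small example. In this sketch (ours; the diamond poset is hypothetical, and we take \(A^{1}\) to consist of the single minimum), we enumerate all time orders compatible with the partial order and group them by the earliest covering vertex of the minimum, which is exactly the datum \(\gamma^{1}\).

```python
from itertools import permutations

def time_orders(vertices, less):
    """All total (time) orders compatible with the partial order `less`,
    given as a set of (a, b) pairs meaning a < b."""
    for p in permutations(vertices):
        pos = {v: i for i, v in enumerate(p)}
        if all(pos[a] < pos[b] for a, b in less):
            yield p

# Diamond poset: 1 < 2, 1 < 3, 2 < 4, 3 < 4; covering set of minimum 1 is {2, 3}.
by_gamma = {}
for order in time_orders([1, 2, 3, 4], {(1, 2), (1, 3), (2, 4), (3, 4)}):
    gamma1 = next(v for v in order if v in (2, 3))   # earliest cover of vertex 1
    by_gamma.setdefault(gamma1, []).append(order)
print(by_gamma)
# -> {2: [(1, 2, 3, 4)], 3: [(1, 3, 2, 4)]}: each time order determines a
#    unique gamma^1, and summing over gamma^1 exhausts the orders in X_D.
```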
In summary, for a given choice of \(\gamma^{1}\), we can carry out the \(t_{e_{1,i}}\) integrals up to (or down to) the times \(t_{x_{a}^{(e_{1,i})}}\in\gamma^{1}\). We can thus rewrite the full integral for \(F_{D}\), Eq. (5.2) using Eq. (5.11) as
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left(\{E_{\alpha}\},\mathcal{L}_{G}\right) = \sum_{\gamma^{1}}\,\prod_{\alpha=1}^{|V[\gamma^{1}]|}\int_{X_{D}[\gamma^{1}]}dt_{\alpha}e^{-i(E_{\alpha}+\eta_{j}^{(\alpha)}(\omega_{j}-i\epsilon))t_{\alpha}}\] \[\times \prod_{i=1}^{r_{1}}\int_{-\infty}^{t[x_{a}^{(b_{1,i})}]}dt_{1,i}\,e^{-i\Big{(}E_{b_{1,i}}+\sum_{j}\eta_{j}^{(b_{1,i})}(\omega_{j}-i\epsilon)\Big{)}t_{1,i}}\] \[\times \prod_{i=1}^{s_{1}}\int_{t[x_{a}^{(d_{1,i})}]}^{\infty}dt_{1,i}\,e^{-i\Big{(}E_{d_{1,i}}+\sum_{j}\eta_{j}^{(d_{1,i})}(\omega_{j}-i\epsilon)\Big{)}t_{1,i}}\,. \tag{5.12}\]
Here we have labeled the times of the extremal vertices, over which we are integrating, with the subscripts of the corresponding vertices \(e_{1,i}\) themselves, and the times of the next-to-extremal covering vertices \(x_{a}^{(e_{1,i})}\) in a hopefully obvious notation. We note that each of the limits in the integrals over extremal vertices is a time that appears in the remaining integration measure of \(X_{D[\gamma^{1}]}\).
In Eq. (5.12), the time integrals for minima all take the form
\[\int_{-\infty}^{t[x_{a}^{(b_{1,i})}]}dt_{1,i}\,e^{-i\Big{(}E_{b_{1,i}}+\sum_{j}\eta_{j}^{(b_{1,i})}(\omega_{j}-i\epsilon)\Big{)}t_{1,i}} = \frac{i}{\Delta_{b_{1,i}}[\gamma^{1}]}\,e^{-i\Big{(}E_{b_{1,i}}+\sum_{j}\eta_{j}^{(b_{1,i})}(\omega_{j}-i\epsilon)\Big{)}t[x_{a}^{(b_{1,i})}]}\,, \tag{5.13}\]
where we define
\[\Delta_{b_{1,i}}[\gamma^{1}] = E_{b_{1,i}}-\sum_{j}\left|\eta_{j}^{(b_{1,i})}\right|\omega_{j}+i\epsilon\,. \tag{5.14}\]
We note that every nonzero \(\eta_{j}^{(b_{1,i})}=-1\), since they all correspond to minimum vertices, which only emit particles. Such a denominator has the standard form of an energy deficit in the channel of vertex \(b_{1,i}\), where external energy \(E_{b_{1,i}}\) flows in. We will refer to the set of lines \(j\) for which \(\eta_{j}^{(b_{1,i})}=-1\), all of whose on-shell energies flow out of vertex \(b_{1,i}\), as the "cut" of diagram \(D\) that separates it into two diagrams, one consisting of vertex \(b_{1,i}\) and one with the remaining vertices, \(V\setminus b_{1,i}\). Since all \(b_{1,i}\) are embedded vertices, all the diagrams \(V\setminus b_{1,i}\) are connected.
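As a one-line cross-check of Eqs. (5.13) and (5.14), write \(z=E_{b_{1,i}}-\sum_{j}|\eta_{j}^{(b_{1,i})}|\omega_{j}+i\epsilon\), absorbing the finite number of \(\epsilon\)'s into a single one, so that \({\rm Im}\,z>0\) and the integrand vanishes as \(t\to-\infty\). Then

\[\int_{-\infty}^{T}dt\,e^{-izt}\ =\ \left[\frac{e^{-izt}}{-iz}\right]_{-\infty}^{T}\ =\ \frac{i}{z}\,e^{-izT}\,,\]

which reproduces Eq. (5.13) with \(T=t[x_{a}^{(b_{1,i})}]\) and \(z=\Delta_{b_{1,i}}[\gamma^{1}]\).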
Similarly, for all the time integrals for maxima in Eq. (5.12) we have
\[\int_{t[x_{a}^{(d_{1,i})}]}^{\infty}dt_{1,i}\,e^{-i\left(E_{d_{1,i}}+\sum_{j}\eta_{j}^{(d_{1,i})}(\omega_{j}-i\epsilon)\right)t_{1,i}} = \frac{i}{\Delta_{d_{1,i}}[\gamma^{1}]}\,e^{-i\left(E_{d_{1,i}}+\sum_{j}\eta_{j}^{(d_{1,i})}(\omega_{j}-i\epsilon)\right)t[x_{a}^{(d_{1,i})}]}\,. \tag{5.15}\]
In this case we define
\[\Delta_{d_{1,i}}[\gamma^{1}] = -\left(E_{d_{1,i}}+\sum_{j}\eta_{j}^{(d_{1,i})}\omega_{j}-i\epsilon\right) \tag{5.16}\] \[= (-E_{d_{1,i}})-\sum_{j}\eta_{j}^{(d_{1,i})}\omega_{j}+i\epsilon\,,\]
where in the second equality we use that \(\eta^{(d_{1,i})}\) is either zero or \(1\) for maximum vertices. When \(-E_{d_{1,i}}\) is a positive energy flowing out of vertex \(d_{1,i}\), we again have a standard energy-deficit form in the channel of vertex \(d_{1,i}\). As for minimal vertices, we will refer to the set of lines \(j\) for which \(\eta_{j}^{(d_{1,i})}=1\) as the "cut" of diagram \(D\) that separates vertex \(d_{1,i}\) from a connected diagram with vertices \(V\setminus d_{1,i}\).
Substituting the integrals in Eqs. (5.13) to (5.16) into the expression for \(F_{D}\), Eq. (5.12), we now have
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left(\{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\gamma^{1}}\;\prod_{i=1}^{r_{1}}\frac{i}{\Delta_{b_{1,i}}[\gamma^{1}]}\;\prod_{i=1}^{s_{1}}\frac{i}{\Delta_{d_{1,i}}[\gamma^{1}]}\] \[\times\;\prod_{\alpha=1}^{|V[\gamma^{1}]|}\int_{X_{D}[\gamma^{1}]}dt_{\alpha}e^{-i(E_{\alpha}+\eta_{j}^{(\alpha)}(\omega_{j}-i\epsilon))t_{\alpha}}\] \[\times\;\prod_{i=1}^{r_{1}}e^{-i\left(E_{b_{1,i}}+\sum_{j}\eta_{j}^{(b_{1,i})}(\omega_{j}-i\epsilon)\right)t[x_{a}^{(b_{1,i})}]}\prod_{i=1}^{s_{1}}e^{-i\left(E_{d_{1,i}}+\sum_{j}\eta_{j}^{(d_{1,i})}(\omega_{j}-i\epsilon)\right)t[x_{a}^{(d_{1,i})}]}\,. \tag{5.17}\]
In this expression, every next-to-extremal vertex \(x_{a}^{(e_{1,i})}\) appears in the product over the vertices (\(\alpha\)) of diagram \(D[\gamma^{1}]\). The integral may thus be written in a more compact form by combining these contributions in the phases, using the definitions
\[E[\gamma^{1}]_{\alpha} = E_{\alpha}+\sum_{i=1}^{r_{1}}E_{b_{1,i}}\,\delta_{\alpha,x_{a}^{(b_{1,i})}}+\sum_{i=1}^{s_{1}}E_{d_{1,i}}\,\delta_{\alpha,x_{a}^{(d_{1,i})}}\,,\] \[\eta[\gamma^{1}]_{j}^{(\alpha)} = \eta_{j}^{(\alpha)}+\sum_{i=1}^{r_{1}}\eta_{j}^{(b_{1,i})}\delta_{\alpha,x_{a}^{(b_{1,i})}}+\sum_{i=1}^{s_{1}}\eta_{j}^{(d_{1,i})}\delta_{\alpha,x_{a}^{(d_{1,i})}}\,. \tag{5.18}\]
An important feature of the new incidence matrix \(\eta[\gamma^{1}]_{j}^{(\alpha)}\) is that every line \(j\) for which it takes a non-zero value is connected to vertex \(\alpha\) in \(D[\gamma^{1}]\), either directly or through
an embedded extremal vertex whose time has been integrated up to the time of vertex \(\alpha\).
We can now write the full poset integral as
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left( \{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\gamma^{1}}\,\prod_{i=1}^{r_{1}}\frac{i}{\Delta_{b_{1,i}}[ \gamma^{1}]}\,\prod_{j=1}^{s_{1}}\frac{i}{\Delta_{d_{1,j}}[\gamma^{1}]} \tag{5.19}\] \[\times \prod_{\alpha=1}^{|V[\gamma^{1}]|}\,\int_{X_{D}[\gamma^{1}]}dt_{ \alpha}e^{-i(E[\gamma^{1}]_{\alpha}+\eta[\gamma^{1}]_{j}^{(\alpha)}(\omega_{j} -i\epsilon))t_{\alpha}}\,.\]
In this form, the original integral for poset \(D\) is expressed as a sum over \(\gamma^{1}\) (which is constructed explicitly in App. B) of products of energy deficit denominators multiplied by an integral of the original form, Eq. (5.2), this time for the poset diagram \(D[\gamma^{1}]\), with a reduced number of time integrals remaining. The energy deficit denominators correspond to the cuts of \(D\) associated with each of the embedded extremal vertices in the set \(A^{1}\). Because these vertices make up an antichain, these cuts have no lines in common. Finally, we recall from Eq. (5.9) that the original list of vertices, \(V\) is given by the union of \(V[\gamma^{1}]\) with the extremal vertices,
\[V[\gamma^{1}]\,\bigcup_{i=1}^{r_{1}}b_{1,i}\,\bigcup_{j=1}^{s_{1}}d_{1,j}\ =\ V\,. \tag{5.20}\]
We will encounter generalizations of all of these features below.
Equation (5.19) already exhibits the basic structure of our results. Let us summarize its essential features. The new, _induced_, poset diagram \(D[\gamma^{1}]\), defined by Eqs. (5.8) and (5.9), is a connected diagram with a set of vertices \(V[\gamma^{1}]\), all of which are local in time. The number of vertices has decreased by identifying pairs of embedded extremal and next-to-extremal vertices, with the resulting "composite" vertices inheriting the next-to-extremal times \(t[x_{j}^{(e_{1,i})}]\). The extremal vertices of \(D[\gamma^{1}]\) can be labeled \(\{b_{2,i},d_{2,i}\}\).
For these induced, embedded minimal vertices, the new incidence matrix defined in Eq. (5.18), \(\eta[\gamma^{1}]_{j}^{(b_{2,i})}\) takes only the values \(0,-1\), with only outgoing lines, although now these lines may have been emitted originally by the first round of vertices, \(\{e_{1,i}\}\), over whose times we have already integrated. These lines thus emerge from a subdiagram of the original poset \(D\) that is connected. Similar considerations apply to maximal extrema in \(D[\gamma^{1}]\), for which \(\eta[\gamma^{1}]_{j}^{(d_{2,i})}=0,+1\). In the new diagram, by analogy to Eqs. (5.5) and (5.6) we can identify the set of next-to-extremal vertices, \(C^{2}=\{C_{e_{2,i}}\}\), which will be labeled \(x_{a}^{(e_{2,i})}\).
Examples of this process are given above in the integrations from Fig. 3 to Fig. 4a and to Fig. 4b, which correspond to different choices of \(\gamma^{1}\) in this case. The explicit denominators of Eq. (5.19) provide the results of the time integrals of the chosen antichain \(A^{1}\) of embedded extremal vertices. By construction, they correspond to non-overlapping cuts, each separating the original diagram into two connected parts. As such, they are all "physical" denominators, in the sense we have identified above. The
poset \(D[\gamma^{1}]\) and the integral over the region \(X_{D[\gamma^{1}]}\) in Eq. (5.19) have all the properties of the original poset \(D\) and integral over region \(X_{D}\) of Eq. (5.2) that we used in the foregoing analysis. In the following subsection, we use this recursive structure to derive a general expression for \(F_{D}\) in which all unphysical denominators are eliminated.
### Integrals of induced extrema
The relation Eq. (5.19) clearly carries fewer time integrals, and the remaining time integrals are of the same form that we encountered in Eq. (5.2). It is therefore straightforward to extend the reasoning recursively in order to carry out all the time integrals.
Let us assume that we are given the result after \(k-1\) iterations of the procedure outlined above. Each step consists of integration over the induced embedded extrema of an induced poset. We assume that the result takes the form
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left( \{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\gamma^{k-1}}\ \prod_{l=1}^{k-1}\prod_{i=1}^{r_{l}}\frac{i}{\Delta_{b_{l,i}}[ \gamma^{l}]}\ \prod_{j=1}^{s_{l}}\frac{i}{\Delta_{d_{l,j}}[\gamma^{l}]} \tag{5.21}\] \[\times\prod_{\alpha=1}^{|V[\gamma^{k-1}]|}\int_{X_{D}[\gamma^{k-1 }]}dt_{\alpha}e^{-i(E[\gamma^{k-1}]_{\alpha}+\eta[\gamma^{k-1}]^{(\alpha)}_{j }(\omega_{j}-i\epsilon))t_{\alpha}}\,.\]
This integral has a set of properties that hold in the initial case, Eq. (5.19), corresponding to \(k=2\), and which will remain true recursively.
(1) _Poset structure._ \(D[\gamma^{k-1}]\) is a poset with integration region \(X_{D[\gamma^{k-1}]}\), where \(\gamma^{k-1}\) is specified by a set of vertices from the original diagram \(D\), and all vertices in \(V[\gamma^{k-1}]\) are local in time. That is, each remaining vertex \(\alpha\) is associated with a time integral over \(t_{\alpha}\). As for any such diagram, among the vertices of \(D[\gamma^{k-1}]\) there is a non-empty set \(M^{k}\) of embedded extremal vertices, \(\{b_{k,i},d_{k,i}\}\), each with a corresponding covering set, \(C^{k}=\cup_{i}C^{k}_{e_{k,i}}\), of potential next-to-extremal vertices, \(C^{k}_{e_{k,i}}=\{x_{a}^{(e_{k,i})}\}\).
(2) _Inductive functional dependence._ In Eq. (5.21), the explicit denominators and phases are defined by
\[\Delta_{b_{l,i}}[\gamma^{l}] = E_{b_{l,i}}[\gamma^{l-1}]-\sum_{j}\left|\eta[\gamma^{l-1}]^{(b_{ l,i})}_{j}\right|\omega_{j}+i\epsilon\, \tag{5.22}\]
and
\[\Delta_{d_{l,i}}[\gamma^{l}] = \left(-E_{d_{l,i}}[\gamma^{l-1}]\right)-\sum_{j}\eta[\gamma^{l-1} ]^{(d_{l,i})}_{j}\omega_{j}+i\epsilon\,, \tag{5.23}\]
in terms of the inductive relations
\[E_{\alpha}[\gamma^{l}] = E_{\alpha}[\gamma^{l-1}]+\sum_{i=1}^{r_{l}}E_{b_{l,i}}[\gamma^{l-1}]\,\delta_{\alpha,x_{a}^{(b_{l,i})}}+\sum_{i=1}^{s_{l}}E_{d_{l,i}}[\gamma^{l-1}]\,\delta_{\alpha,x_{a}^{(d_{l,i})}}\,,\] \[\eta[\gamma^{l}]^{(\alpha)}_{j} = \eta[\gamma^{l-1}]^{(\alpha)}_{j}+\sum_{i=1}^{r_{l}}\eta[\gamma^{l-1}]^{(b_{l,i})}_{j}\delta_{\alpha,x_{a}^{(b_{l,i})}}+\sum_{i=1}^{s_{l}}\eta[\gamma^{l-1}]^{(d_{l,i})}_{j}\delta_{\alpha,x_{a}^{(d_{l,i})}}\,. \tag{5.24}\]
The expressions, Eqs. (5.22), (5.23) and (5.24) are direct generalizations of Eqs. (5.13), (5.15) and (5.18), with \(\gamma^{1}\) replaced by \(\gamma^{l}\), and \(e_{1,i}\) by \(e_{l,i}\).
(3) _Denominators, cuts, and merged sets._ The explicit denominators, \(\Delta_{e_{l,i}}\) (\(e=b,d\)) in Eq. (5.21), with \(l\leq k-1\), all correspond to cuts of the original diagram, \(D\), each of which separates \(D\) into exactly two connected components. One of these components is associated with an embedded extremal vertex, \(e_{l,i}\in A^{l}\), of \(D[\gamma^{l-1}]\), and with a connected set of vertices, \(\lambda[e_{l,i}]\), of the original diagram. We will refer to \(\lambda[e_{l,i}]\) as the "merged set" of vertices for \(e_{l,i}\), corresponding to one or more series of time integrals that terminate at \(t_{e_{l,i}}\) (a small bookkeeping sketch of these merged sets follows condition (4) below). It may be a minimum or maximum. In either case, its merged set is given by
\[\lambda[e_{1,i}] = e_{1,i}\,,\] \[\lambda[e_{l,i}] = \left\{e_{l,i}\,\bigcup_{e_{l^{\prime},j}}\lambda[e_{l^{\prime},j}]\,\bigg{|}\,x_{a^{\prime}}^{(e_{l^{\prime},j})}=e_{l,i}\,,\ l^{\prime}\leq l-1\right\}\,, \tag{5.25}\]
where the range of index \(i\) depends on the set \(\gamma^{l-1}\). That is, the merged set of extremal vertex \(e_{l,i}\) is found by merging all the merged sets of extremal vertices \(e_{l^{\prime},i}\), \(l^{\prime}\leq l-1\) for which the vertex \(e_{l,i}\) is the nearest vertex in its covering set. For a minimal extremal vertex in \(D[\gamma^{l-1}]\), all lines that cross the cut \(\Delta_{b_{l,i}}[\gamma^{l}]\) emerge from the minimum \(b_{l,i}\) and are absorbed in \(D[\gamma^{l-1}]\setminus b_{l,i}\). For a maximal extremal vertex, lines that cross the cut \(\Delta_{d_{l,i}}[\gamma^{l}]\) emerge from \(D[\gamma^{l-1}]\setminus d_{l,i}\) and are absorbed at \(d_{l,i}\). In both cases, in terms of the original poset diagram \(D\), the lines of each cut connect \(\lambda[e_{l,i}]\) with \(D\setminus\lambda[e_{l,i}]\), both of which are connected.
(4) _Partial nesting._ Any set of vertices picked from different \(\lambda[b_{k,i}]\) forms an antichain, and similarly for the maximal sets \(\lambda[d_{k,i}]\). Together, they satisfy the following properties, which may be described as partial nesting [18],
\[\lambda[e_{l_{1},i}]\subset\lambda[e_{l_{2},i^{\prime}}]\quad\mbox{or}\quad\lambda[e_{l_{1},i}]\cap\lambda[e_{l_{2},i^{\prime}}] = 0\,,\quad l_{1}<l_{2}\,,\] \[\lambda[e_{l,i_{1}}]\cap\lambda[e_{l,i_{2}}] = 0\,,\quad i_{1}\neq i_{2}\,,\] \[V[\gamma^{k}]\,\bigcup_{i}\lambda[b_{k,i}]\,\bigcup_{l}\lambda[d_{k,l}] = V\,. \tag{5.26}\]
Taken together, these conditions imply that the sets of vertices \(\lambda[e_{l,i}]\) are either nested, according to index \(l\), or disjoint, and that, together with \(V[\gamma^{k}]\), they include all the vertices of the original diagram \(D\). This implies that the cuts represented by the denominators \(\Delta_{e_{l,i}}\) do not cross, since this would lead to a non-nested relationship between at least two successive sets \(\lambda[e_{l,i}]\) and \(\lambda[e_{l+1,j}]\). Comparing to Eq. (5.20) above for the first set of integrals, we observe that indeed \(\lambda[b_{1,i}]\ =\ b_{1,i}\) and \(\lambda[d_{1,i}]\ =\ d_{1,i}\), as in Eq. (5.25).
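The merged-set bookkeeping in conditions (3) and (4) amounts to a single dictionary update per integration step. The sketch below is our own illustration (the vertex labels and the three-step sequence are hypothetical): when an extremal vertex is integrated, its cover absorbs everything previously merged into it, and the recorded sets come out nested or disjoint, as in Eq. (5.26).

```python
def merged_sets(vertices, contractions):
    """Track the merged sets lambda[e] of Eq. (5.25).

    `contractions` is the ordered list of (extremal vertex, chosen cover)
    pairs produced as the recursion proceeds."""
    carried = {v: {v} for v in vertices}
    lam = {}
    for e, x in contractions:
        lam[e] = set(carried[e])   # lambda[e], frozen when e is integrated
        carried[x] |= carried.pop(e)
    return lam

print(merged_sets([1, 2, 3, 4, 5], [(1, 2), (2, 3), (5, 4)]))
# -> {1: {1}, 2: {1, 2}, 5: {5}}: pairwise nested or disjoint.
```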
The specific manner in which the posets and the sets \(\gamma^{k-1}\) and \(\lambda[e_{k,i}]\) appear will emerge from the following analysis, where we describe how to carry out the \(k\)th iteration, the next set of time integrals in Eq. (5.21). We will see that all of the features (1) - (4) of \(D[\gamma^{k-1}]\) are inherited by the resulting expression in terms of a poset \(D[\gamma^{k}]\).
We begin by characterizing the extremal time integrals in Eq. (5.21). In fact, we need only repeat the steps in Eqs. (5.4) to (5.9) that we applied to the original integral form for \(F_{D}(\{E_{\alpha}\},{\cal L}_{G})\) in Eq. (5.2). We can do so because these steps depend only on the poset structure of the diagram \(D[\gamma^{k-1}]\).
To organize the \(k\)th set of integrals, given the poset \(D[\gamma^{k-1}]\), we identify the set of its embedded extrema \(M^{k}\) (See App. A). Next, we pick a maximal antichain (a set of extrema that are not related pairwise), which we label \(A^{k}\). Generically, it contains both embedded minima and embedded maxima. We denote the number of minima in \(A^{k}\) by \(r_{k}\) and the number of maxima by \(s_{k}\). Analogous to Eq. (5.4), we can write
\[A^{k} = \ \{x\in M^{k}|x,y\in A^{k}\implies x\sim y\} \tag{5.27}\] \[= \{b_{k,1},\ldots,b_{k,r_{k}}\,;\,d_{k,1},\ldots,d_{k,s_{k}}\} \ \equiv\ \{e_{k,i}\}\,\]
where, as before, in Eq. (5.27) the \(b_{k,i}\) represent the minima in \(A^{k}\) and the \(d_{k,i}\) represent the maxima in \(A^{k}\). We use the symbol \(e_{k,i}\) to represent elements of the combined set.
Next, we identify a companion set \(C^{k}\), which is the set of next-to-extremal vertices. Analogous to Eqs. (5.5) and (5.6) we define
\[C^{k} = \left\{x\in V[\gamma^{k-1}]\,\middle|\,\exists v\in A^{k}:(v\prec_{\gamma^{k-1}}x)\,{\rm or}\,(x\prec_{\gamma^{k-1}}v)\right\} = \bigcup_{i=1}^{r_{k}}C_{b_{k,i}}\bigcup_{j=1}^{s_{k}}C_{d_{k,j}}\,, \tag{5.28}\]
where we have defined the covering (or covered) set for each element in \(A^{k}\) by
\[C^{k}_{b_{k,i}} = \left\{x\in V[\gamma^{k-1}]\,\middle|\,b_{k,i}\prec_{\gamma^{k-1}}x\right\},\] \[C^{k}_{d_{k,i}} = \left\{x\in V[\gamma^{k-1}]\,\middle|\,x\prec_{\gamma^{k-1}}d_{k,i}\right\}. \tag{5.29}\]
As before, we identify every consistent set of next-to-extremal elements. We label the next-to-extremal element of \(e_{k,i}\) by \(x_{a}^{(e_{k,i})}\). As above, the index \(a\) labels different choices of the next-to-extremal element \(x\), given the choice of next-to-extremal elements for all \(e_{l,m},\ l\leq k-1,m\leq r_{l}+s_{l}\), as well as \(e_{k,m},\ m\leq i-1\) (see App. B for more details). We can now define the object \(\gamma^{k}\) representing one of the consistent choices for next-to-extremal vertices, up to the stage \(k\),
\[\gamma^{k}\ =\ \left\{\gamma^{k-1}\bigcup_{i=1}^{r_{k}+s_{k}}\,x_{a}^{(e_{k,i} )}\right\}\,. \tag{5.30}\]
We now observe that a choice in \(\gamma^{k}\) naturally induces a poset structure on the set of vertices \(V\left[\gamma^{k}\right]\), which is defined by analogy to Eq. (5.9) as
\[V\left[\gamma^{k}\right]=V\left[\gamma^{k-1}\right]\setminus\bigcup_{i=1}^{r_{k }+s_{k}}e_{k,i}. \tag{5.31}\]
The corresponding binary relationship for the set of vertices \(V[\gamma^{k}]\) is uniquely defined by requiring that the chosen covering vertex of a given minimum, \(b_{k,i}\), say \(x_{j}^{(b_{k,i})}\), is itself a minimum in the set \(C_{b_{k,i}}\) and the chosen covered vertex of a given maximum, \(d_{k,i}\), say \(x_{j}^{(d_{k,i})}\) is itself a maximum in the set \(C_{d_{k,i}}\). We may therefore conclude that the new binary relation, constructed by analogy to Eq. (5.8), and labeled \(\geq_{\gamma^{k}}\), is given by
\[\mbox{If }a\geq_{\gamma^{k-1}}b\mbox{ for }a,\,b\in V[\gamma^{k}]\mbox{, then }a\geq_{\gamma^{k}}b\,,\] \[\forall y\in C_{b_{k,i}}:y\geq_{\gamma^{k}}x_{j}^{(b_{k,i})}>_{\gamma^{k}}b_{k,i}\,,\] \[\forall y\in C_{d_{k,i}}:d_{k,i}>_{\gamma^{k}}x_{j}^{(d_{k,i})}\geq_{\gamma^{k}}y\,. \tag{5.32}\]
Together with the set of vertices \(V[\gamma^{k}]\), the binary relation \(\geq_{\gamma^{k}}\) defines a new poset \(D[\gamma^{k}]\).
With the new poset structure in place, we can carry out the time integrals corresponding to the extrema \(e_{k,i}\). To do so, we rewrite Eq. (5.21), separating out the integrals over extremal elements in \(A^{k}\), in analogy with Eq. (5.12). As before, such a decomposition enables us to carry out the extremal integrals,
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left(\{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\gamma^{k}}\,\prod_{l=1}^{k-1}\prod_{i=1}^{r_{l}}\frac{i}{\Delta_{b_{l,i}}[\gamma^{l}]}\,\prod_{j=1}^{s_{l}}\frac{i}{\Delta_{d_{l,j}}[\gamma^{l}]}\] \[\times\prod_{\alpha=1}^{|V[\gamma^{k}]|}\int_{X_{D}[\gamma^{k}]}dt_{\alpha}e^{-i(E[\gamma^{k-1}]_{\alpha}+\eta[\gamma^{k-1}]_{j}^{(\alpha)}(\omega_{j}-i\epsilon))t_{\alpha}}\] \[\times\,\prod_{i=1}^{r_{k}}\int_{-\infty}^{t[x_{a}^{(b_{k,i})}]}dt_{k,i}\,e^{-i\left(E_{b_{k,i}}[\gamma^{k-1}]+\sum_{j}\eta[\gamma^{k-1}]_{j}^{(b_{k,i})}(\omega_{j}-i\epsilon)\right)t_{k,i}}\] \[\times\,\prod_{i=1}^{s_{k}}\int_{t[x_{a}^{(d_{k,i})}]}^{\infty}dt_{k,i}\,e^{-i\left(E_{d_{k,i}}[\gamma^{k-1}]+\sum_{j}\eta[\gamma^{k-1}]_{j}^{(d_{k,i})}(\omega_{j}-i\epsilon)\right)t_{k,i}}\,. \tag{5.33}\]
We can repeat the steps from Eqs. (5.13) to (5.16) to perform extremal time integrals and define the denominators that arise at this stage of the integration procedure. The results are of exactly the same form as for the case \(k=2\), and reproduce the inductive forms of momentum dependence in denominators and phases in Eqs. (5.22) to (5.24),
\[2\pi\delta\left(\sum_{\alpha=1}^{n}E_{\alpha}\right)\,F_{D}\left( \{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\gamma^{k}}\,\prod_{l=1}^{k}\prod_{i=1}^{r_{l}}\frac{i}{ \Delta_{b_{l,i}}[\gamma^{l}]}\,\prod_{j=1}^{s_{l}}\frac{i}{\Delta_{d_{l,j}}[ \gamma^{l}]}\]
\[\times\prod_{\alpha=1}^{|V[\gamma^{k}]|}\int_{X_{D}[\gamma^{k}]}dt_{\alpha}e^{-i(E[\gamma^{k}]_{\alpha}+\eta[\gamma^{k}]^{(\alpha)}_{j}(\omega_{j}-i\epsilon))t_{\alpha}}\,. \tag{5.34}\]
Comparing the expression in Eq. (5.34) with the form of the integral after \(k-1\) time integrals in Eq. (5.21), and the four conditions that follow that expression, we can check that we have completed an inductive construction of Eq. (5.34), with the quantities \(\Delta_{b_{k,i}}[\gamma^{k}],\Delta_{d_{k,i}}[\gamma^{k}]\) and \(E_{\alpha}[\gamma^{k}],\eta[\gamma^{k}]^{(\alpha)}_{j}\) defined recursively through Eqs. (5.22), (5.23) and (5.24), respectively. We also see that the four conditions that follow Eq. (5.21) remain true after the \(k\)th integral.
(1) _Poset structure._ By construction \(D[\gamma^{k}]\) is a new poset, which again represents a diagram with all vertices local in time, with a non-empty set of embedded extremal vertices \(M^{k+1}\).
(2) _Functional dependence._ The time integrals in Eq. (5.33) leading to Eq. (5.34) give precisely the results for denominators and phases specified by Eqs. (5.22) - (5.24), with \(l=k\).
(3) _Denominators, cuts, and merged sets._ The new denominators \(\Delta_{b_{k,i}}\) and \(\Delta_{d_{k,i}}\) arise from the time integrals of embedded extremal vertices in the chosen set \(A^{k}\) of \(D[\gamma^{k-1}]\). Their explicit expressions in Eq. (5.22) show that they have the standard forms of energy deficits. We can verify that they correspond to cuts of the original diagram, \(D\), that separate \(D\) into two connected components as follows. Consider first the minimal vertices, \(b_{k,i}\), of \(D[\gamma^{k-1}]\). Each emits lines from the set of vertices in the merged set, \(\lambda[b_{k,i}]\), all of whose elements are unordered with respect to the vertices in any other merged set \(\lambda[b_{k,j}]\), \(j\neq i\). As a result, all vertices in \(\lambda[b_{k,i}]\) are connected to the remainder of the original diagram \(D\) only through lines emitted by vertex \(b_{k,i}\), which may connect to any non-minimal vertices in \(D[\gamma^{k-1}]\). Then, cutting all lines emitted by \(b_{k,i}\) separates all vertices in \(\lambda[b_{k,i}]\) from the remainder of the diagram \(D\). But \(\lambda[b_{k,i}]\) is connected by construction, and so is \(V[\gamma^{k-1}]\setminus b_{k,i}\), since \(b_{k,i}\) is by construction an embedded minimal vertex. Thus, cutting the set of lines emerging from any \(b_{k,i}\) cuts the diagram into two connected components. Identical considerations apply to the embedded maximal vertices, \(d_{k,i}\), of \(D[\gamma^{k-1}]\). Lines emerging from (for minimal) or absorbed into (for maximal) extremal vertices of the next poset \(D[\gamma^{k}]\), labeled \(e_{k+1,i}\), are emitted or absorbed by sets of merged vertices, which we label \(\lambda[e_{k+1,i}]\), of the original diagram, \(D\). These sets are defined, as in Eq. (5.25), by
\[\lambda[e_{k+1,i}]\ =\ \left\{e_{k+1,i}\,\bigcup_{e_{k^{\prime},j}}\lambda[e_{k^{\prime},j}]\,\bigg{|}\,x_{a}^{(e_{k^{\prime},j})}=e_{k+1,i}\,,\ k^{\prime}\leq k\right\}\,, \tag{5.35}\]
where index \(i\) varies over all choices of the set \(\gamma^{k}\).
(4) _Partial nesting._ Finally, the sets \(\lambda[e_{k+1,i}]\), given by Eq. (5.35), associated with posets \(D[\gamma^{k}]\), have the same properties as the \(\lambda[e_{k,i}]\) associated with the posets \(D[\gamma^{k-1}]\), as described in Eq. (5.26). The nesting features of Eq. (5.26) are inherited by the sets
defined by Eq. (5.35) precisely because the \(\lambda[e_{k+1,i}]\) are disjoint unions of smaller sets that satisfy (5.26). A consequence is that elements from different \(\lambda[b_{k+1,i}]\)s are unordered (form an antichain), and similarly for elements in different \(\lambda[d_{k+1,i}]\). In addition, the \(k+1\)st layer of merged sets satisfy the same relation as the \(k\)th, Eq. (5.26),
\[V[\gamma^{k}]\,\bigcup_{i}\lambda[b_{k+1,i}]\,\bigcup_{l}\lambda[d_{k+1,l}]\quad=\quad V\,, \tag{5.36}\]
because the difference between the set \(V[\gamma^{k}]\) and the corresponding quantity \(V[\gamma^{k-1}]\) in Eq. (5.26) is precisely the chosen set of embedded extremal vertices of \(D[\gamma^{k-1}]\), \(A^{k}\), which are identified with the new vertices in the sets \(\lambda[e_{k+1,i}]\). Thus, any vertices that are in \(V[\gamma^{k-1}]\) but not in \(V[\gamma^{k}]\) are absorbed into the union of the \(\lambda[e_{k+1,i}]\), along with all of the \(\lambda[e_{k,i}]\).
In summary, all of the recursive features of the integrals of extrema have been confirmed. In making the steps from the \((k-1)\)st to \(k\)th sets of time integrals, we used only the partial ordering and the existence of at least one embedded extremal vertex at each step. Starting with any finite-order diagram, however, it is clear that the process eventually terminates at \(k=\kappa\), when the poset \(D[\gamma^{\kappa}]\) consists of a single vertex, \(e_{\kappa+1}\). Such a vertex will connect to no internal lines (none remain) but will connect to _all_ external energies, \(E_{\alpha}\). Its time integral yields the energy-conserving delta function in Eq. (5.1).
The general form for an arbitrary poset \(D\) is thus the energy conserving delta function in Eq. (5.2), times the amplitude function,
\[F_{D}\left(\{E_{\alpha}\},{\cal L}_{G}\right) = \sum_{\gamma^{\kappa}}\,\prod_{l=1}^{\kappa}\prod_{i=1}^{r_{l}}\frac{i}{E_{b_{l,i}}[\gamma^{l-1}]-\sum_{n}\left|\eta[\gamma^{l-1}]_{n}^{(b_{l,i})}\right|\omega_{n}+i\epsilon} \tag{5.37}\] \[\times\prod_{j=1}^{s_{l}}\frac{i}{\left(-E_{d_{l,j}}[\gamma^{l-1}]\right)-\sum_{n}\eta[\gamma^{l-1}]_{n}^{(d_{l,j})}\omega_{n}+i\epsilon}\,,\]
where we have used the explicit forms of denominators in Eqs. (5.22) and (5.23). Here all denominators correspond to cuts that divide the graph into exactly two connected components, and the sum over \(\gamma^{\kappa}\) represents all complete, recursive choices of next-to-extremal sets, starting with those of the original embedded extremal vertices of \(D\). Each denominator is of the form of an energy deficit, either with respect to a sum of energies flowing out of or into the diagram. In contrast to the normal TOPT form, however, singularities associated with any diagram are all physical, in that they divide the diagram into two connected subdiagrams.
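The full recursion behind Eq. (5.37) fits in a short program. The sketch below is ours, not code from this work: diagrams are lists of (tail, head, \(\omega\)-symbol) lines, external energies are symbol lists per vertex, a single embedded extremum is integrated per step (a legal special case of the antichain choice, cf. footnote 4), and the branching over covering choices reproduces the sum over \(\gamma^{\kappa}\), one denominator string per step. All function and variable names are illustrative.

```python
def poset_denominators(edges, energies):
    """Denominator products of Eq. (5.37) for a toy poset diagram.

    edges    : list of (tail, head, omega-symbol) lines, "tail precedes head"
    energies : dict vertex -> list of external-energy symbols flowing in
    Returns one list of denominator strings per complete choice gamma^kappa.
    """
    def above(es, v):                        # strict successors of v
        out = {}
        for t, h, _ in es:
            out.setdefault(t, set()).add(h)
        seen, todo = set(), [v]
        while todo:
            for h in out.get(todo.pop(), ()):
                if h not in seen:
                    seen.add(h)
                    todo.append(h)
        return seen

    def connected(es, vs):
        if not vs:
            return True
        adj = {u: set() for u in vs}
        for t, h, _ in es:
            if t in vs and h in vs:
                adj[t].add(h)
                adj[h].add(t)
        seen, todo = set(), [next(iter(vs))]
        while todo:
            u = todo.pop()
            if u not in seen:
                seen.add(u)
                todo.extend(adj[u] - seen)
        return seen == vs

    def go(es, vs, E):
        if len(vs) == 1:                     # last integral: 2*pi*delta(sum E)
            yield []
            return
        for v in sorted(vs):                 # first embedded extremum found
            outs = [e for e in es if e[0] == v]
            ins = [e for e in es if e[1] == v]
            if (not outs) == (not ins):
                continue                     # not an extremal vertex
            if connected(es, vs - {v}):
                break                        # embedded: removal stays connected
        else:
            return                           # cannot happen (Appendix A)
        lines = outs or ins
        sign = "" if outs else "-"           # Eq. (5.22) vs. Eq. (5.23)
        ext = " + ".join(E[v]) or "0"
        den = f"{sign}({ext})" + "".join(f" - {w}" for *_, w in lines) + " + iε"
        nbrs = {h for _, h, _ in outs} | {t for t, _, _ in ins}
        for x in sorted(nbrs):               # covering (next-to-extremal) choices
            others = nbrs - {x}
            if outs and any(x in above(es, y) for y in others):
                continue                     # another neighbour lies below x
            if ins and any(y in above(es, x) for y in others):
                continue                     # another neighbour lies above x
            es2 = [((x if t == v else t), (x if h == v else h), w)
                   for t, h, w in es if not (v in (t, h) and x in (t, h))]
            E2 = {u: list(s) for u, s in E.items() if u != v}
            E2[x] = E2[x] + E[v]             # Eq. (5.18): energies merge at x
            for rest in go(es2, vs - {v}, E2):
                yield [den] + rest

    verts = {v for t, h, _ in edges for v in (t, h)}
    return list(go(edges, verts, energies))

# Toy diagram with energy E1 in at vertex 1 and E4 out at vertex 4:
edges = [(1, 2, "ω1"), (1, 3, "ω2"), (2, 3, "ω3"), (2, 4, "ω4"), (3, 4, "ω5")]
for prod in poset_denominators(edges, {1: ["E1"], 2: [], 3: [], 4: ["E4"]}):
    print(prod)
# -> ['(E1) - ω1 - ω2 + iε', '(E1) - ω2 - ω3 - ω4 + iε', '(E1) - ω4 - ω5 + iε']
```

Each printed factor corresponds to a cut that separates the toy diagram into two connected components, as required. Setting most external energies to zero, as in the vacuum polarization configuration of Sec. 6, makes the corresponding factors take the vacuum form of Eqs. (6.3) and (6.4).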
In practical cases, most external energies are zero to begin with (corresponding to collections of internal vertices). Such denominators are negative semi-definite even for massless theories. An intriguing feature of Eq. (5.37) is that denominators for which
\(E_{i}[\gamma^{k}]=0\) are universal, in the sense that they are independent of the underlying process. Such denominators describe a series of states that emerge from the vacuum, and which couple diagrammatically to the process described by amplitude \(F\) at composite vertices of the sort illustrated in Figs. 4 and 8 above. We shall not pursue this concept of universality here, and leave it as a subject for future work.
As noted above, an equivalent expression for a general amplitude has been derived in Ref. [18] by a somewhat different graphical analysis. In the next section, we apply the method developed above to study weighted cross sections in leptonic annihilation.
We close this section by remarking briefly on the application of these methods to light-cone ordered perturbation theory (LCOPT) [28, 29, 30]. In LCOPT, vacuum orderings are absent altogether. The method here may be applied, however, whenever there are more than two external momenta, which leads, as in time ordering, to pseudo-physical cuts. The result is just of the form of Eq. (5.37), with the \(E_{i}\) and \(\omega_{j}\) replaced by external and on-shell minus momenta, respectively (in \(x^{+}\) ordering). In this way, pseudo-physical cuts are again eliminated.
## 6 Vacuum polarization graphs, leptonic annihilation, and weighted cross sections
In this section, we will adapt the general result for poset contributions to amplitudes, Eq. (5.37), to lepton annihilation processes, and their weighted cross sections. The essential feature of this analysis is that it eliminates the pseudo-physical denominators altogether. A specific diagram in the form of Eq. (5.37), however, does not have a manifest ordering between potential physical cuts. To provide such an order, we will adapt our procedure to the process at hand, in this case, leptonic annihilation.
Given a vacuum polarization graph \(G\), we single out two vertices \(i\) and \(o\), where the current carries positive and negative energy into the diagram, respectively. As in the vacuum polarization diagrams of Sec. 2, the external energies of all vertices \(\alpha\) vanish, with the exception of \(i\) and \(o\),
\[E_{\alpha}\ =\ Q\,\delta_{\alpha,i}-Q^{\prime}\,\delta_{\alpha,o}\,, \tag{6.1}\]
where energy conservation will require \(Q=Q^{\prime}\). Next, we define the first set of embedded extrema \(\hat{M}^{1}\), using the extremal set \(\hat{S}=(\mbox{Min}(D)\cup\mbox{Max}(D))\setminus\{i,o\}\). We then continue with our process of integrating out the largest antichain in \(\hat{M}^{1}\). The main idea is to exclude the two extrema \(i,o\) at each stage of the procedure until all other extrema have been integrated out; their time integrations result in vacuum denominators (and soft pinches) only. This separates out, at the level of the integrand, long-time processes that carry only soft singularities in the IR from short-time processes that have the interpretation of taking place at the "hard scale". As in Sec. 5, we define a set of next-to-extremal vertices \(\gamma^{1}\)
and carry out the first set of integrals. In general, \(\hat{M}^{1}\) contains \(r_{1}\) embedded minima and \(s_{1}\) embedded maxima, and we find
\[2\pi\delta\left(Q-Q^{\prime}\right)\,\pi_{D}\left(Q,{\cal L}_{G}\right) = \sum_{\gamma^{1}}\,\prod_{i=1}^{r_{1}}\frac{i}{\tilde{\Delta}_{b_ {1,i}}[\gamma^{1}]}\,\prod_{i=1}^{s_{1}}\frac{i}{\tilde{\Delta}_{d_{1,i}}[ \gamma^{1}]} \tag{6.2}\] \[\times \prod_{\alpha=1}^{|V[\gamma^{1}]|}\,\int_{X_{D}[\gamma^{1}]}dt_{ \alpha}e^{-i(E_{\alpha}+\eta_{j}^{(\alpha)}[\gamma^{1}](\omega_{j}-i\epsilon)) t_{\alpha}},\]
where we have used notation identical to that introduced in Sec. 5. The difference from Eq. (5.17), however, is that \(E_{\alpha}[\gamma^{1}]=E_{\alpha}\), given in Eq. (6.1), by construction. This follows from our choice to integrate out a set of vertices that do not carry external energy. As a consequence, the denominators \(\tilde{\Delta}_{b_{1,i}}\) and \(\tilde{\Delta}_{d_{1,i}}\) are of the "vacuum" form,
\[\tilde{\Delta}_{b_{1,i}}[\gamma^{1}] = -\sum_{j}\left|\eta_{j}^{(b_{1,i})}\right|\omega_{j}\,, \tag{6.3}\]
for minima and
\[\tilde{\Delta}_{d_{1,i}}[\gamma^{1}] = -\sum_{j}\eta_{j}^{(d_{1,i})}\omega_{j}\,, \tag{6.4}\]
for maxima, where once again we observe that the difference from Eqs. (5.14) and (5.16) is simply the absence of external energy. These denominators therefore do not vanish anywhere except when all lines in the state carry zero momenta (which is a genuine pinch surface in massless theories). In Eqs. (6.3) and (6.4) we have explicitly dropped the \(i\epsilon\)'s to emphasize the reality of these state denominators.
We now continue the process that we have described in Sec. 5, but continue to leave out vertices \(i,o\) at each stage, until these are the only extrema remaining. We emphasize that at each stage we only generate denominators of the kind we encountered in Eqs. (6.3) and (6.4). The process terminates after \(\kappa_{1}\) steps, when the set of embedded extrema is empty, \(\hat{M}^{\kappa_{1}+1}=\emptyset\). This does not mean that all vertices have been exhausted, since we have explicitly excluded \(i\) and \(o\) from the set of extrema at each stage, and non-embedded extremal vertices may also remain. We thus can write for the poset integrand \(\pi_{D}\),
\[2\pi\delta\left(Q-Q^{\prime}\right)\,\pi_{D}\left(Q,{\cal L}_{G}\right) = \sum_{\gamma^{\kappa_{1}}}\,\prod_{l=1}^{\kappa_{1}}\prod_{i=1}^{r_{l}}\frac{i}{-\sum_{k}\left|\eta\,\left[\gamma^{l}\right]_{k}^{(b_{l,i})}\right|\omega_{k}}\,\prod_{j=1}^{s_{l}}\frac{i}{-\sum_{k}\eta\,\left[\gamma^{l}\right]_{k}^{(d_{l,j})}\omega_{k}} \tag{6.5}\] \[\times\prod_{\alpha=1}^{|V[\gamma^{\kappa_{1}}]|}\int_{X_{D}[\gamma^{\kappa_{1}}]}dt_{\alpha}e^{-i(E_{\alpha}+\eta[\gamma^{\kappa_{1}}]_{j}^{(\alpha)}(\omega_{j}-i\epsilon))t_{\alpha}}\,.\]
Having separated the vacuum denominators, we would like to order the remaining integrand of the vacuum polarization poset diagram. The poset \(D[\gamma^{\kappa_{1}}]\) has no tadpole subdiagrams, that is, subdiagrams lacking external momenta and connected to the remaining diagram by a single vertex only. This is because, as shown in App. A, any nontrivial tadpole has an embedded extremal vertex. But the times of all embedded extrema except \(i\) and \(o\) have been integrated over in arriving at Eq. (6.5).
By construction, the remaining diagram has at most two embedded extremal vertices, \(i\) and \(o\). Any additional extremal vertices must be non-embedded. Each non-embedded vertex, whether extremal or not, connects two internally connected subdiagrams, one of which must include \(i\) and the other \(o\). This is because there are no tadpole subdiagrams.
We have three possibilities,
1. \(i\leq_{\gamma^{\kappa_{1}}}o\),
2. \(o\leq_{\gamma^{\kappa_{1}}}i\),
3. \(o\sim_{\gamma^{\kappa_{1}}}i\).
These three situations have been represented diagrammatically in Fig. 9.
Let us analyze cases (1), (2) first. Since we have integrated out all embedded extrema excluding \(i,o\), one of \(i,o\) is an embedded extremum. Suppose \(i\leq_{\gamma^{\kappa_{1}}}o\), and \(i\) were not a minimum. This would imply there exists a minimal vertex \(x<_{\gamma^{\kappa_{1}}}i\) which is not embedded. By definition, the cut succeeding \(x\), \(\Delta_{x}\), would split the graph into two or more disconnected subdiagrams, say \(V_{1}\dots V_{k}\), \(k\geq 2\). Because \(x\) is a minimum, vertices in different \(V_{i}\) are all mutually unordered. Then \(i,o\) would be in the same disconnected component, say \(V_{1}\), since \(i\leq_{\gamma^{\kappa_{1}}}o\). This would therefore imply that all the other components \(V_{2}\dots V_{k}\) carry no external momenta, that is, that they are tadpole subdiagrams, which we have shown cannot be present in \(D[\gamma^{\kappa_{1}}]\). The same reasoning shows that \(i\) is itself an embedded minimum and that there are no other minima in the poset \(D[\gamma^{\kappa_{1}}]\). A similar argument reveals that \(o\) is the unique maximum in \(D[\gamma^{\kappa_{1}}]\) for case (1).
One can use similar reasoning to argue that if \(o\leq_{\gamma^{\kappa_{1}}}i\), \(i\) is the unique maximum and \(o\) is the unique minimum in \(D[\gamma^{\kappa_{1}}]\). In cases \((1),(2)\) we can either impose any total order (time order) or continue along the lines of the arguments made in Sec. 5. The remaining integrals in Eq. (6.5) can therefore be carried out either by constructing new posets or through the imposition of a sum of time orders. Every time-ordered graph in cases (1) and (2) has a unique minimum and a unique maximum. Carrying out the procedure outlined in Sec. 5 is exactly equivalent to generating all possible time orders.
Turning to case (3), we treat the possibility that \(o\sim_{\gamma^{\kappa_{1}}}i\). Using the arguments made for the first two cases, together with the absence of tadpole subgraphs, it is easy to see that both \(i,o\) are embedded extrema of the graph. Suppose \(o\) were an embedded extremum and \(i\) were not an extremal element. This means there is an \(x\in\mbox{Min}(D[\gamma^{\kappa_{1}}])\) and \(y\in\mbox{Max}(D[\gamma^{\kappa_{1}}])\) such that \(x\leq_{\gamma^{\kappa_{1}}}i\leq_{\gamma^{\kappa_{1}}}y\). By assumption, neither \(x\) nor \(y\) is embedded, and therefore, in the absence of tadpole subdiagrams, \(x\) divides the graph into precisely two additional components, \(V_{1}\) and \(V_{2}\), with \(i\) and \(y\) in the same component, say \(V_{1}\), and \(o\in V_{2}\). However, this means \(y\) cannot divide the graph into two components without
a tadpole subgraph. Thus, \(y\) must have been an embedded extremum, contrary to our assumption. Thus, \(i\) must be an embedded extremum.
In summary, we learn that in all three cases, \(i,o\) are both unique embedded extrema. In cases (1) and (2), vertices \(i\) and \(o\) are in a single, connected diagram, in which they are the only extremal vertices. That is, there are no non-embedded extrema. In these cases, any other vertex \(y\) satisfies
\[i<_{\gamma^{\kappa_{1}}}y <_{\gamma^{\kappa_{1}}}o\,,\mbox{case (1)}\,,\] \[o<_{\gamma^{\kappa_{1}}}y <_{\gamma^{\kappa_{1}}}i\,,\mbox{case (2)}\,. \tag{6.6}\]
Suppose we integrate \(t_{i}\) first (up or down) to time \(t_{x_{i^{\prime}}^{(\kappa_{1}+1)}}\). In the resulting poset,
Figure 9: (a) A poset ordered graph after \(\kappa_{1}\) steps representing the situation when \(i<o\). Here, \(G\) is any graph containing no extrema. (b) A poset ordered graph after \(\kappa_{1}\) steps representing the situation when \(o<i\). Here, \(G^{\prime}\) is any graph containing no extrema. (c) A poset ordered graph with \(i\sim o\). Here \(i\) is a minimum and \(o\) is a maximum, but \(i\nleq o\) and \(G_{1},G_{2}\) and \(G_{3}\) are subgraphs containing no embedded extrema other than possibly \(i\) or \(o\). This represents a situation with two non-embedded extrema. Generically, \(i,o\) are either minima or maxima and there may be an arbitrary number of intervening non-embedded extrema. (d) An example of a poset ordered graph with \(i\sim o\) in QED at three loops. Integrating the times of vertices \(4,1\) yields a poset of the type in Fig. 9c.
\(D[\gamma^{\kappa_{1}+1}]\), any remaining vertices \(y\) continue to satisfy the same inequalities. That is,
\[x_{i^{\prime}}^{(\kappa_{1}+1)}<_{\gamma^{\kappa_{1}+1}}y <_{\gamma^{\kappa_{1}+1}}o\,,\;\;\;\mbox{case (1)}\,,\] \[o\;<_{\gamma^{\kappa_{1}+1}}y <_{\gamma^{\kappa_{1}+1}}x_{i^{\prime}}^{(\kappa_{1}+1)}\,,\mbox{ case (2)}\,. \tag{6.7}\]
Therefore, vertex \(x_{i^{\prime}}^{(\kappa_{1}+1)}\) is now an embedded extremum. The argument that \(x_{i^{\prime}}^{(\kappa_{1}+1)}\) is also an embedded extremum for case (3) in which \(i\) and \(o\) are in different subdiagrams, follows the same pattern, using the fact that \(i\) and \(o\) are unique embedded extrema.
In order now to be able to order the resulting expressions diagrammatically, we will only integrate the embedded extremum \(i\) (and not \(o\)). Following the procedure outlined in Sec. 5, we integrate over a single-element set \(M^{\kappa+1}=\{i\}\). After integrating the time \(t_{i}\), we will find a new extremal vertex \(i^{\prime}\) with external energy \(Q\) flowing into it. Because there are no additional embedded extremal vertices other than vertices \(i\) and \(o\), this will also be an embedded extremal vertex, although it may be a minimum or a maximum. If we again choose \(\kappa\) to represent the total number of steps in the process, the procedure terminates after \(\kappa-\kappa_{1}+1\) steps (including the final integral that gives the energy delta function). At each integration step \(\kappa_{1}+1\leq l\leq\kappa\), there is a single time integral only, corresponding to the sequence that begins with vertex \(i\) at step \(\kappa_{1}+1\). This results in the formula,
\[\pi_{D}\left(Q,{\cal L}_{G}\right) = \sum_{\gamma^{\kappa_{1}}}\;\prod_{l=1}^{\kappa_{1}}\prod_{i=1}^ {r_{l}}\frac{i}{-\sum_{n}\left|\eta\;[\gamma^{l}]_{n}^{(b_{l,i})}\right| \omega_{n}}\;\prod_{j=1}^{s_{l}}\frac{i}{-\sum_{n}\eta\;[\gamma^{l}]_{n}^{(d_ {l,j})}\omega_{n}} \tag{6.8}\] \[\times\;\sum_{\gamma^{\kappa}[\gamma^{\kappa_{1}}]}\;\prod_{l= \kappa_{1}+1}^{\kappa}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]+i\epsilon}\;,\]
where the sum over \(\gamma^{\kappa}\) is restricted to sets of next-to-extremal vertices consistent with the choice \(\gamma^{\kappa_{1}}\) (see App. B). Here \(e_{l}\) is defined to be the vertex that satisfies \(E_{e_{l}}[\gamma^{l}]=Q>0\) in the notation of Eq. (5.24), and the denominator is defined by
\[\tilde{\Delta}_{e_{l}}[\gamma^{l}]=\lambda_{e_{l}}Q\;-\;\sum_{k}\left|\eta[ \gamma^{l}]_{k}^{(e_{l})}\right|\omega_{k}, \tag{6.9}\]
where \(\lambda_{e_{l}}=+1\) when vertex \(e_{l}\) is a minimum, and \(-1\) when it is a maximum. In the latter case, the state denominators represented by \(\tilde{\Delta}_{e_{l}}\) are negative definite. We note that as \(l\) increases, the denominators change sign when vertex \(e_{l}\) reaches a previously non-embedded extremal vertex (in case (3) above), which then becomes embedded. We also notice that the denominators \(\tilde{\Delta}\) in Eq. (6.9) are defined slightly differently from the definition of \(\Delta\) in Eqs. (5.14) and (5.16) in order to explicitly display the \(i\epsilon\).
Having obtained the expression for the forward scattering locally in phase space, we immediately write down the contribution of a poset \(D\) to the \(e^{+}e^{-}\) total cross section's integrand, in terms of a sum over ordered final states, \(C\),
\[\mbox{Im }[\pi_{D}(Q,{\cal L}_{G})] = \sum_{\gamma^{\kappa_{1}}}\;\prod_{l=1}^{\kappa_{1}}\prod_{i=1}^{r_{l}}\frac{i}{-\sum_{n}\left|\eta\;[\gamma^{l}]_{n}^{(b_{l,i})}\right|\omega_{n}}\;\prod_{j=1}^{s_{l}}\frac{i}{-\sum_{n}\eta\;[\gamma^{l}]_{n}^{(d_{l,j})}\omega_{n}}\] \[\times\,\sum_{\gamma^{\kappa}|\gamma^{\kappa_{1}}}\sum_{C=\kappa_{1}+1}^{\kappa}\,\prod_{l=C+1}^{\kappa}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]-i\epsilon}\,2\pi\delta(\tilde{\Delta}_{C})\,\prod_{l=\kappa_{1}+1}^{C-1}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]+i\epsilon}\;. \tag{6.10}\]
This result is equivalent to the TOPT expression in Eq. (2.7), but has been reorganized through partial ordering to eliminate all pseudo-physical cuts.
We recall that to get the total cross section as defined in Eq. (2.3), we multiply the expression in Eq. (6.10) by the poset numerator factor, sum over posets and finally over graphs, giving the integrand
\[\sigma(Q,{\cal L}_{G}) = \sum_{G}\sum_{D}{\mathbb{N}}_{D}\,\prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}}\,\sum_{\gamma^{\kappa_{1}}}\,\prod_{l=1}^{\kappa_{1}}\prod_{i=1}^{r_{l}}\frac{i}{-\sum_{n}\left|\eta\;[\gamma^{l}]^{(b_{l,i})}_{n}\right|\omega_{n}} \tag{6.11}\] \[\times\prod_{j=1}^{s_{l}}\frac{i}{-\sum_{n}\eta\;[\gamma^{l}]^{(d_{l,j})}_{n}\omega_{n}}\] \[\times\,\sum_{\gamma^{\kappa}|\gamma^{\kappa_{1}}}\sum_{C=\kappa_{1}+1}^{\kappa}\,\prod_{l=C+1}^{\kappa}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]-i\epsilon}\,\,2\pi\delta(\tilde{\Delta}_{C})\,\prod_{l=\kappa_{1}+1}^{C-1}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]+i\epsilon}\,.\]
This is the analog of the TOPT expression for the integrand of the cross section, again without pseudo-physical cuts.
It is now straightforward to obtain a new expression for the integrand of the weighted cross section defined in Eq. (2.9). The relation (6.11) is fully local in loop momenta \({\cal L}_{G}\), and hence specifies all contributions to the squared amplitude from each final state, \(C\). We thus have
\[\Sigma[f,Q] = \sum_{G}\int d{\cal L}_{G}\,\prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}}\,\,\sum_{D}{\mathbb{N}}_{D}\sum_{\gamma^{\kappa_{1}}}\,\prod_{l=1}^{\kappa_{1}}\prod_{i=1}^{r_{l}}\frac{i}{-\sum_{n}\left|\eta\;[\gamma^{l}]^{(b_{l,i})}_{n}\right|\omega_{n}} \tag{6.12}\] \[\times\prod_{j=1}^{s_{l}}\frac{i}{-\sum_{n}\eta\;[\gamma^{l}]^{(d_{l,j})}_{n}\omega_{n}}\,\,\sum_{\gamma^{\kappa}|\gamma^{\kappa_{1}}}\sum_{C=\kappa_{1}+1}^{\kappa}\,\prod_{l=C+1}^{\kappa}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]-i\epsilon}\,\,2\pi\delta(\tilde{\Delta}_{C})\] \[\times f_{C}(\vec{q}_{1}\ldots\vec{q}_{k_{C}})\,\,\left(\frac{1+\lambda_{e_{C}}}{2}\right)\,\prod_{l=\kappa_{1}+1}^{C-1}\frac{i}{\tilde{\Delta}_{e_{l}}[\gamma^{l}]+i\epsilon}\,.\]
In Eq. (6.12) we have inserted a factor of \(\left(\frac{1+\lambda_{e_{C}}}{2}\right)\) to set the weight of cuts with negative external energy, and hence \(\lambda_{e_{C}}=-1\), to zero. We could also keep them, but they would always give zero, because the argument of the energy-conservation delta function is negative definite for such states. In the sum over posets \(D\), only those posets with \(i<_{\gamma^{\kappa_{1}}}o\) or \(i\sim_{\gamma^{\kappa_{1}}}o\) can have \(\lambda_{e_{C}}=1\).
Using Eq. (6.12) and the \(\delta\) function identity in Eq. (3.1) we can collect terms with the same denominator structure to write the analog of Eq. (3.4)
\[\Sigma[f,Q] = \sum_{G}\int d{\cal L}_{G}\,\prod_{i=1}^{N_{G}}\frac{1}{2\omega_{i}}\,\sum_{D}{\mathbb{N}}_{D}\sum_{\gamma^{\kappa_{1}}}\,\prod_{l=1}^{\kappa_{1}}\prod_{i=1}^{r_{l}}\frac{i}{-\sum_{n}\left|\eta\;[\gamma^{l}]^{(b_{l,i})}_{n}\right|\omega_{n}} \tag{6.13}\] \[\times\prod_{j=1}^{s_{l}}\frac{i}{-\sum_{n}\eta\;[\gamma^{l}]^{(d_{l,j})}_{n}\omega_{n}}\;\sum_{\gamma^{\kappa}|\gamma^{\kappa_{1}}}\;\Bigg{(}\sum_{C=\kappa_{1}+1}^{\kappa-1}\prod_{j=C+1}^{\kappa-1}\frac{i}{\tilde{\Delta}_{e_{j}}[\gamma^{j}]-i\epsilon}\] \[\times\left(f_{C}(\vec{q}_{1}\ldots\vec{q}_{k_{C}})\left(\frac{1+\lambda_{e_{C}}}{2}\right)-f_{C+1}(\vec{q}_{1}\ldots\vec{q}_{k_{C+1}})\,\left(\frac{1+\lambda_{e_{C+1}}}{2}\right)\right)\;\prod_{i=\kappa_{1}+1}^{C}\;\frac{i}{\tilde{\Delta}_{e_{i}}[\gamma^{i}]+i\epsilon}\] \[-\prod_{j=\kappa_{1}+1}^{\kappa}\frac{i}{\tilde{\Delta}_{e_{j}}[\gamma^{j}]-i\epsilon}f_{1}(\vec{q}_{1}\ldots\vec{q}_{k_{1}})\left(\frac{1+\lambda_{e_{1}}}{2}\right)+\prod_{i=\kappa_{1}+1}^{\kappa}\frac{i}{\tilde{\Delta}_{e_{i}}[\gamma^{i}]+i\epsilon}f_{\kappa}(\vec{q}_{1}\ldots\vec{q}_{k_{\kappa}})\left(\frac{1+\lambda_{e_{\kappa}}}{2}\right)\Bigg{)}\,.\]
This represents our final result for leptonic annihilation cross sections. It carries no unphysical singularities and is power counting finite everywhere in the region of integration. It is a sum of terms corresponding to all cuts of a vacuum polarization diagram with, in general, elementary vertices and composite vertices that include vacuum denominators. Examples can be found in Figs. 8a and 8c of Sec. 4. As in the original form of Eq. (3.21), the final two terms in this expression only require contour deformations to manifest finiteness.
## 7 Summary
In this work, we have used TOPT to re-express infrared safe cross sections in electroweak annihilation in a form that is manifestly power-counting finite in four dimensions. We observed that contributions to cross sections in which the amplitude or its complex conjugate is disconnected vanish after a sum over appropriate time orders. Generalizing to arbitrary amplitudes, we reorganized TOPT expressions using their poset structure, deriving results similar to Ref. [18]. We then applied this formalism to electroweak annihilation, to derive a locally finite diagrammatic expression in which all pseudo-physical cuts are eliminated.
We anticipate that it will be possible to use an expression like Eq. (6.13) as the starting point for the numerical evaluation of weighted cross sections in leptonic annihilation, complementing the loop-tree duality treatments of Refs. [7]-[9]. This, of course, will require a systematic method to use contour deformation or related methods to avoid or cancel non-pinched singularities [31]. Beyond leptonic annihilation, the methods described here may complement those of Refs. [5, 6] to control infrared singularities locally in hadronic cross sections.
### Acknowledgements
We thank Charalampos Anastasiou for many helpful discussions on these and related topics. We also thank Zeno Capatti, Hoffie Hannesdottir, Valentin Hirschi, and Dario Kermanschah for very useful conversations. This work was supported in part by National Science Foundation Awards PHY-1915093 and PHY-2210533.
## A Embedded Extrema
We would like to show that every connected, finite-order poset diagram has at least one embedded extremal vertex.
First, we recall that an embedded vertex \(v\) in a poset diagram \(D\) is one whose elimination from \(D\) leaves the remaining diagram, \(D\setminus v\), connected. If, on the contrary, \(u\) is a "non-embedded" vertex, we must have \(D\setminus u=\cup_{i=1}^{p_{u}}D_{i}^{[u]}\), with \(p_{u}\) the number of subdiagrams of \(D\) that share vertex \(u\) and are otherwise disconnected. This is the case whether \(u\) is an extremum or not.
We begin our argument by picking an arbitrary extremum, \(e_{0}\), which for definiteness we may choose to be a minimum. Say \(e_{0}\) is not embedded. Then, starting at \(e_{0}\), we choose arbitrarily a path \(x_{1},x_{2}\dots\) that follows the relation \(x_{j+1}>x_{j}\). We refer to this as an "increasing" path. Because \(D\) is finite-order, this path must terminate at some maximum vertex, \(e_{1}\). Either \(e_{1}\) is embedded, or not. If not, we know that \(D\setminus e_{1}=\cup_{i=1}^{p_{e_{1}}}D_{i}^{[e_{1}]}\). We repeat the process for vertex \(e_{1}\) in a subdiagram \(D_{i}^{[e_{1}]}\subset D\), chosen arbitrarily, where \(e_{0}\not\in D_{i}^{[e_{1}]}\). If \(e_{1}\) is not a maximum in \(D_{i}^{[e_{1}]}\), we can again follow an increasing path in \(D_{i}^{[e_{1}]}\) to a second maximum. If \(e_{1}\) happens to be a maximum in \(D_{i}^{[e_{1}]}\), we can follow a decreasing path until we reach a minimum. In either case, the extremum \(e_{2}\) at which we arrive may or may not be embedded. If it is not embedded, we repeat the process by choosing any of the subdiagrams \(D_{j}^{[e_{2}]}\subset D_{i}^{[e_{1}]}\), \(e_{1}\not\in D_{j}^{[e_{2}]}\). Because the complete poset \(D\) is of finite order,
\[0\,<\,|D_{j}^{[e_{2}]}|\,<\,|D_{i}^{[e_{1}]}|\,,\] (A.1)
and this sequence of steps must terminate at an extremal vertex \(e_{n}\) in some subdiagram \(D_{k}^{[e_{n-1}]}\), whose removal does not disconnect \(D_{k}^{[e_{n-1}]}\). This is because Eq. (A.1) represents the first two terms in a sequence of strictly decreasing positive integers, and such a sequence must terminate. The extremal vertex at which the sequence terminates is embedded.5
Footnote 5: Notice that extremum \(e_{n-1}\) is considered part of \(D_{k}^{[e_{n-1}]}\).
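For intuition, the property in question can be checked directly on small diagrams. The following minimal Python sketch (our own construction, with hypothetical helper names; the diagram is given as an adjacency dictionary) tests whether deleting a vertex disconnects a diagram, and scans the extremal vertices for an embedded one, whose existence the argument above guarantees.

```python
def connected_without(adj, v):
    """Return True if the diagram stays connected after deleting vertex v."""
    rest = set(adj) - {v}
    if not rest:
        return True
    stack, seen = [next(iter(rest))], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(x for x in adj[u] if x != v and x not in seen)
    return seen == rest

def embedded_extremum(adj, extrema):
    """Scan the extremal vertices for one whose removal keeps the diagram connected."""
    for e in extrema:
        if connected_without(adj, e):
            return e
    return None  # never reached for a connected, finite-order poset diagram

# Example: a three-vertex chain a < b < c; the extrema a and c are embedded.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(embedded_extremum(adj, ["a", "c"]))  # -> "a"
```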
We conclude by observing that the arguments above apply to diagrams of the tadpole topology: no external momenta, and connected to the remainder of the diagram by only a single vertex, which may or may not be extremal. Thus, as observed in Sec. 6, every tadpole has an embedded extremal vertex, by definition without an external momentum. In the intermediate expression of Eq. (6.5), all such time integrals have been carried out, and in the process all tadpole subdiagrams have been eliminated in the poset \(D[\gamma^{\kappa_{1}}]\).
## B An explicit algorithm to identify \(\gamma^{k+1}\)
In this Appendix, we would like to present one explicit algorithm to identify all the possible consistent choices of \(\gamma^{k+1}\), as defined for instance in Eq. (5.7), given a poset \(D\left[\gamma^{k}\right]=\{V\left[\gamma^{k}\right],\geq_{\gamma^{k}}\}\) and a choice of maximal embedded anti-chain \(A^{k+1}=\{b_{k+1,1}\ldots b_{k+1,r_{k+1}},d_{k+1,1}\ldots d_{k+1,s_{k+1}}\}=\{e_{k+1,j}\}\). We emphasize that the ordering of extremal elements in \(A^{k+1}\) is arbitrary. As in Sec. 5.1 we define next-to-extremal sets,
\[C_{b_{k+1,j}}=\{x\in V\left[\gamma^{k}\right]\biggm{|}b_{k+1,j} \prec_{\gamma^{k}}x\},\] \[C_{d_{k+1,j}}=\{x\in V\left[\gamma^{k}\right]\biggm{|}x\prec_{ \gamma^{k}}d_{k+1,j}\},\] \[C^{k+1}=\bigcup_{j=1}^{r_{k+1}}C_{b_{k+1,j}}\bigcup_{l=1}^{s_{k +1}}C_{d_{k+1,l}}.\] (B.1)
We also define a set of next-to-extremal vertices,
\[\gamma^{k}_{i} \equiv \bigcup_{q\in[1,k]}\bigcup_{a\in[1,\hat{r}_{q}(k,i)]}x_{j_{q,a}} ^{(e_{q,a})}\,,\] (B.2)
where
\[\hat{r}_{q}(k,i) = r_{q}+s_{q}\ \text{if}\ q\in[1,k-1]\,,\] \[\hat{r}_{q}(k,i) = i\ \text{if}\ q=k\,.\] (B.3)
In the notation of Eq. (B.3), \(D\left[\gamma^{k}_{0}\right]=D\left[\gamma^{k}\right]\), with \(D[\gamma^{k}]\) defined in terms of the binary relation after the \(k\)th set of time integrals, Eq. (5.32). We start with \(D[\gamma^{k}]\) and construct the full explicit sum over \(D[\gamma^{k+1}]\), one extremal vertex at a time. We will iteratively define the sum over \(D\left[\gamma^{k+1}\right]\) in terms of a sum over intermediate posets leading to \(D\left[\gamma^{k}_{r_{k+1}+s_{k+1}}\right]\).
Suppose we are given the poset \(D\left[\gamma^{k}_{i}\right]\), which is, as usual, a set of vertices \(V\left[\gamma^{k}_{i}\right]\) and a binary relation \(\geq_{\gamma^{k}_{i}}\), defined by direct analogy to Eq. (5.32). To identify the poset \(D\left[\gamma^{k}_{i+1}\right]\), we would like the minimum (maximum) \(e_{k+1,i+1}\) to be covered by (to cover) a unique element \(x_{j}^{(e_{k+1,i+1})}\). To do this, we find the set
\[\tilde{C}_{e_{k+1,i+1}} = \{x\in V\left[\gamma^{k}_{i}\right]\biggm{|}e_{k+1,i+1}\prec_{\gamma^{k}_{i}}x\}\ \text{if}\ \ \ i+1\leq r_{k+1}\] \[\tilde{C}_{e_{k+1,i+1}} = \{x\in V\left[\gamma^{k}_{i}\right]\biggm{|}x\prec_{\gamma^{k}_{i}}e_{k+1,i+1}\}\ \text{if}\ \ \ i+1>r_{k+1}.\] (B.4) \[= \{x_{1}^{(e_{k+1,i+1})},x_{2}^{(e_{k+1,i+1})}\ldots x_{c_{k+1,i+1}}^{(e_{k+1,i+1})}\}.\]
We notice that generically, \(\tilde{C}_{e}\) is distinct from \(C_{e}\) because we use the binary \(\geq_{\gamma^{k}}\) in Eq. (B.1) while we use the binary \(\geq_{\gamma^{k}_{i}}\) in Eq. (B.4). In fact, one generally finds \(\tilde{C}_{e}\subseteq C_{e}\) since we use the more restrictive binary \(\geq_{\gamma^{k}_{i}}\). We understand this to follow from the
observation that, having identified unique covering (covered) vertices for the extrema \(e_{k+1,j}\), \(j\in[1,i]\), there are fewer consistent covering (covered) vertices for \(e_{k+1,i+1}\). We now define a new binary relation \(\gamma_{i+1}^{k}\) that satisfies
* If \(a\geq_{\gamma_{i}^{k}}b\), then \(a\geq_{\gamma_{i+1}^{k}}b\)
* If \(i+1\leq r_{k+1}\), we choose the earliest element \(x_{j_{k+1,i+1}}^{(e_{k+1,i+1})}\in\tilde{C}_{e_{k+1,i+1}}:y\geq_{\gamma_{i+1}^{k}} x_{j_{k+1,i+1}}^{(e_{k+1,i+1})},\forall y\in\tilde{C}_{e_{k+1,i+1}}\),
* If \(i+1>r_{k+1}\), we choose the latest element \(x_{j_{k+1,i+1}}^{(e_{k+1,i+1})}\in\tilde{C}_{e_{k+1,i+1}}:y\leq_{\gamma_{i+1}^{k}} x_{j_{k+1,i+1}}^{(e_{k+1,i+1})},\forall y\in\tilde{C}_{e_{k+1,i+1}}\).
With this definition of the binary relation \(\gamma_{i+1}^{k}\), we have identified a unique element that provides a limit of integration for the extremal element \(e_{k+1,i+1}\). We may now define an intermediate poset \(D\left[\gamma_{i+1}^{k}\right]\),
\[D\left[\gamma_{i+1}^{k}\right]=\left\{V\left[\gamma^{k}\right]\setminus\left(\bigcup_{j=1}^{i+1}e_{k+1,j}\right),\geq_{\gamma_{i+1}^{k}}\right\}.\] (B.5)
Using this definition, we can start from \(\gamma_{0}^{k}\) and recursively find \(\gamma^{k+1}=\gamma_{r_{k+1}+s_{k+1}}^{k}\). It is clear from these definitions that every consistent set of next-to-extremal vertices is generated. An inductive proof follows from assuming the result for \(\gamma_{i}^{k}\) and demonstrating that it holds for \(\gamma_{i+1}^{k}\).
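As an illustration of the selection rules above, the following schematic Python sketch (our own naming, not from the text) processes the extrema one at a time and, for each, picks the earliest (latest) element of its candidate set. For brevity it assumes that the relation already renders each candidate set comparable, which the iterative redefinition of \(\geq_{\gamma^{k}_{i}}\) is designed to arrange; it also omits the explicit strengthening of the relation after each choice.

```python
def choose_limits(V, leq, minima, maxima):
    """For each extremum, pick the unique element supplying its limit of time integration.

    leq(a, b) encodes the current binary relation a <= b; the minima are processed
    first and then the maxima, mirroring i+1 <= r_{k+1} versus i+1 > r_{k+1}."""
    chosen = {}
    for e in minima:    # candidates lying above the minimum e
        cands = [x for x in V if x != e and leq(e, x)]
        # "earliest": the candidate that lies below every other candidate
        chosen[e] = next(x for x in cands if all(leq(x, y) for y in cands))
    for e in maxima:    # candidates lying below the maximum e
        cands = [x for x in V if x != e and leq(x, e)]
        # "latest": the candidate that lies above every other candidate
        chosen[e] = next(x for x in cands if all(leq(y, x) for y in cands))
    return chosen
```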
It is now possible to define a recursive notation that makes explicit the many sums over sets \(\gamma^{k}\) in the main text,
\[\sum_{\gamma^{k}}F\left(\left\{\bigcup_{m=1}^{k}\bigcup_{l=1}^{r_{k}+s_{k}}x_ {j_{m,l}}^{(e_{m,l})}\right\}\right)=\sum_{j_{k,r_{k}+s_{k}}=1}^{c_{k,r_{k}+s_{ k}}}\ldots\sum_{j_{k,2}=1}^{c_{k,2}}\sum_{j_{k,1}=1}^{c_{k,1}}\sum_{\gamma^{k-1}}F \left(\left\{\bigcup_{m=1}^{k}\bigcup_{l=1}^{r_{k}+s_{k}}x_{j_{m,l}}^{(e_{m,l })}\right\}\right).\] (B.6)
In Eq. (B.6) \(F\) is any function of the next-to-extremal vertices and \(c_{k,i}\) is the number of elements in the companion set \(\tilde{C}_{k,i}\) as in Eq. (B.4). Notice that the sum in Eq. (B.6) is a nested sum where the outer summations depend on the inner summations. The summation over \(\gamma^{k}\) depends also on our choice in anti-chain \(A^{k}\).
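The dependence of each summation range on choices made at other levels makes the nested sum of Eq. (B.6) most naturally a recursion. A minimal Python sketch (our own naming) follows, where companion_size plays the role of \(c_{k,i}\) and may inspect the indices already fixed.

```python
def nested_sum(F, n_levels, companion_size):
    """Evaluate a nested sum whose i-th range may depend on indices chosen at other levels."""
    def rec(prefix):
        if len(prefix) == n_levels:
            return F(prefix)
        return sum(rec(prefix + (j,)) for j in range(1, companion_size(prefix) + 1))
    return rec(())

# Toy check: three levels with constant ranges 2, 3, 2 and F = 1 counts 2*3*2 = 12 terms.
print(nested_sum(lambda js: 1, 3, lambda prefix: [2, 3, 2][len(prefix)]))  # -> 12
```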
|
2309.15452 | On invariance of John domains under quasisymmetric mappings | In this paper, we prove that if a homeomorphism is quasisymmetric relative to
the boundary of the domain then it maps a length John domain to a diameter John
domain. Moreover, we prove a necessary and sufficient condition for a diameter
John domain to be length John and thereby prove that if $f:G\rightarrow G'$ is
$(M, C)-$CQH map, where $G$ is a John domain, and the map extends to the
boundary such that the extension is $\eta-$QS relative to $\delta G$ then $G'$
is a John domain. In addition, we characterize distance John domains using the
weak minimizing property. | Vasudevarao Allu, Alan P Jose | 2023-09-27T07:38:24Z | http://arxiv.org/abs/2309.15452v2 | # On invariance of John domains under quasisymmetric mappings
###### Abstract.
In this paper, we prove that if a homeomorphism is quasisymmetric relative to the boundary of the domain then it maps a length John domain to a diameter John domain. As a consequence, we prove that bounded John domains in Euclidean spaces are preserved under quasisymmetric maps relative to the boundary with a dimension free control function. In addition, we provide a characterization for distance John domains.
Key words and phrases:Free quasiworld, John domains, Quasisymmetric mappings, Uniform domains 2010 Mathematics Subject Classification: 30C45, 30C50, 30C55
## 1. Introduction
The concept of quasiconformality in the plane was introduced by Grotzsch in 1928. The idea was later extended by a number of authors to \(\mathbb{R}^{n}\) for \(n\geq 3\). Later, Gehring and his students Palka and Osgood [1, 2] introduced the quasihyperbolic metric, which is the generalization of the classical hyperbolic metric of the unit disk. Since then, the quasihyperbolic metric has become an important tool in the study of quasiconformal maps in Euclidean spaces. In the 1990s, Vaisala, through a series of papers, see [9, 10, 11], began his systematic investigation of quasiconformality in Banach spaces of infinite dimension, which he termed the free quasiworld. Since in infinite-dimensional Banach spaces many of the methods and tools that are helpful in \(\mathbb{R}^{n}\) are unavailable, he constructed the theory using just elementary arguments.
John domains were introduced by F. John [4] in 1961 in connection with elasticity theory. Later, in 1978, O. Martio and J. Sarvas [5] coined the term "John domain" and used it to define uniform domains. These two classes were used to study injectivity properties of locally quasiconformal and locally bilipschitz maps.
In this paper, \(E\) and \(E^{\prime}\) will always denote real Banach spaces of dimension at least two. Furthermore, \(G\subset E\), \(G^{\prime}\subset E^{\prime}\) will always be domains (open connected sets). For notations and definitions we refer to [12]. In [10], Vaisala obtained the following result relating the class of uniform domains and quasimobius maps in Banach spaces, which is the generalization of the corresponding result in \(\mathbb{R}^{n}\), see [7].
**Theorem 1.1** ([10]).: _Suppose that \(G\) is a \(c\)-uniform domain and that \(f:G\to G^{\prime}\) is freely \(\psi\)-quasiconformal, briefly \(\psi\)-FQC, where \(G\) and \(G^{\prime}\) are domains in the Banach spaces \(E\) and \(E^{\prime}\), respectively. Then the following conditions are quantitatively equivalent:_
1. \(G^{\prime}\) _is_ \(c_{1}\)_-uniform,_
2. \(f\) _is_ \(\eta\)_-quasimobius._
Later, many inquired whether the result holds if one replaces \(\psi\)-FQC maps with \((M,c)\)-coarsely quasihyperbolic, briefly \((M,c)\)-CQH, maps and \(\eta\)-quasimobius with \(\eta\)-quasimobius rel \(\delta G\) [3] or \(\eta\)-quasimobius on \(\delta G\) [14].
In 2021, Zhou and Rasila [15] considered the following question:
**Question 1.2**.: Suppose that \(G\) is a \(c\)-uniform domain and that a homeomorphism \(f:\bar{G}\to\bar{G}^{\prime}\) is \(\eta\)-quasimobius relative to \(\delta G\) and maps \(G\) onto \(G^{\prime}\). Is \(G^{\prime}\) then \(c^{\prime}\)-uniform with \(c^{\prime}=c^{\prime}(c,\eta)\)?
Studying this problem, Zhou and Rasila [15] obtained the following result concerning the implications among uniform domains, diameter uniform domains, and the min-max property:
**Theorem 1.3**.: _Suppose that \(G\subset E\) is a domain. Then we have the following implications: \(G\) is \(c\)-uniform \(\implies\)\(G\) satisfies the min-max property \(\implies\)\(G\) is diameter \(c_{1}\)-uniform \(\iff\)\(G\) is \(\delta\)-uniform for some \(0<\delta<1\)._
Zhou and Rasila [15] have obtained the following result as a consequence of Theorem 1.3.
**Theorem 1.4**.: _Let \(G\) and \(G^{\prime}\) be proper domains in Banach spaces \(E\) and \(E^{\prime}\), respectively. Suppose that \(G\) is \(c\)-uniform and that a homeomorphism \(f:\bar{G}\to\bar{G}^{\prime}\) is \(\eta\)-quasimobius relative to \(\delta G\) and maps \(G\) onto \(G^{\prime}\). Then \(G^{\prime}\) is diameter \(c^{\prime}\)-uniform with some \(c^{\prime}=c^{\prime}(c,\eta)\)._
In 2022, Ouyang, Rasila and Guan [6] characterized diameter uniform domains using the weak min-max property and showed that diameter uniform domains are invariant under quasimobius maps relative to the boundary. Since every uniform domain is a John domain, it is natural to ask whether Theorem 1.4 holds if we replace uniform domains by John domains.
But if \(G=\mathbb{D}\setminus[-\frac{1}{2},1)\), where \(\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\) and \([-\frac{1}{2},1)\) denotes the line segment joining \(-\frac{1}{2}\) and \(1\), and \(f:G\to\mathbb{C}\) is defined by \(f(z)=z/|z|^{2}\), then \(f\) is \(\eta\)-quasimobius with \(\eta(t)=81t\). Nevertheless, \(f\) maps a John domain onto a domain which is not John.
Therefore, it is reasonable to ask the following question:
**Question 1.5**.: Let \(E\) and \(E^{\prime}\) be Banach spaces of dimension \(n\geq 2\) and let \(G\subset E\), \(G^{\prime}\subset E^{\prime}\) be domains. Further, suppose that \(G\) is a \(c\)-John domain. If \(f:G\to G^{\prime}\) is a homeomorphism, what conditions does \(f\) need to satisfy so that \(G^{\prime}\) is a \(c^{\prime}\)-John domain?
In 2019, Li, Vuorinen and Zhou [13] characterized John domains with a center in a locally \((\lambda,c)\)-quasiconvex, rectifiably connected, non-complete metric space and thereby obtained that an \(\eta\)-quasisymmetric homeomorphism maps a John domain with center \(x_{0}\) to a John domain with center \(f(x_{0})\).
**Theorem 1.6**.: _Suppose \(f:D\to D^{\prime}\) is an \(\eta\)- quasisymmetric homeomorphism between two locally \((\lambda,c)\)- quasiconvex, rectifiably connected, non-complete metric spaces. If \(D\) is a length \(a\)-John space with center \(x_{0}\), then \(D^{\prime}\) is a length \(a^{\prime}\) -John space with center \(x_{0}^{\prime}=f(x_{0})\), where \(a^{\prime}\) depends on \(\lambda,c,\eta\) and \(a\)._
Now we introduce the minimizing and weak minimizing properties for a domain \(G\subset E\).
**Definition 1.7**.: _Let \(G\subset E\) be a domain. We say that \(G\) has the minimizing property if there exists a family of curves \(\Gamma\) in \(G\) and a constant \(c\) such that each pair of points in \(G\) can be joined by a curve \(\gamma\in\Gamma\) satisfying_
\[\min_{j=1,2}|x_{j}-y|\leq c|x-y| \tag{1.1}\]
_for each ordered triple of points \(x_{1},x,x_{2}\in\gamma\) and each \(y\in\delta G\)._
**Definition 1.8**.: _Let \(G\) be a domain. If there exists a family of curves \(\Gamma\) in \(G\) and a constant \(c\) such that for each \(x_{1},x_{2}\in G\) there exists a curve \(\gamma:x_{1}\curvearrowright x_{2}\) in \(\Gamma\) such that_
\[\min_{j=1,2}|x_{j}-y|\leq c|x-y| \tag{1.2}\]
_for each \(x\in\gamma\) and \(y\in\delta G\), then we say that \(G\) has the weak minimizing property._
Now we state our main results.
**Theorem 1.9**.: _Let \(G\subset E\) be a domain. Then the following implications hold: \(G\) is a \(c\)-John domain \(\implies\)\(G\) has the minimizing property \(\iff\)\(G\) is diameter \(c_{1}\)-John._
As a consequence, we partially answer Question 1.5 as follows:
**Theorem 1.10**.: _Let \(G\) and \(G^{\prime}\) be proper domains in Banach spaces \(E\) and \(E^{\prime}\), respectively. Suppose that \(G\) is a \(c\)-John domain and that \(f:\bar{G}\to\bar{G}^{\prime}\) is a homeomorphism that maps \(G\) onto \(G^{\prime}\) and is \(\eta\)-quasisymmetric relative to \(\delta G\). Then \(G^{\prime}\) is a diameter \(c^{\prime}\)-John domain with \(c^{\prime}=c^{\prime}(c,\eta)\)._
We prove that the weak minimizing property characterizes distance John domains.
**Theorem 1.11**.: _Let \(G\subset E\) be a domain, where \(E\) is a Banach space of dimension \(n\geq 2\). Then \(G\) is a distance \(c\)-John domain if, and only if, \(G\) satisfies the weak minimizing property._
Similarly to Theorem 1.10, we get the following result corresponding to distance John domains.
**Theorem 1.12**.: _If \(f:\bar{G}\to\bar{G}^{\prime}\) is \(\eta\)-quasisymmetric relative to \(\delta G\), maps \(G\) onto \(G^{\prime}\), and \(G\) is a distance \(c\)-John domain, then \(G^{\prime}\) is a distance \(c^{\prime}\)-John domain, where \(c^{\prime}=c^{\prime}(c,\eta)\)._
## 2. Preliminary Results
In this section we present the basic definitions and preliminary results.
### Notations
The norm of a vector \(x\) in \(E\) is denoted as \(|x|\) and \([x,y]\) denotes the closed line segment with endpoints \(x\) and \(y\).
For a set \(D\subset E\), \(\bar{D}\) denotes the closure of \(D\) and \(\delta D=\bar{D}\setminus D\) be its boundary in the norm topology. For an element \(x\in D\), let \(\delta_{D}(x)\) denote the distance of the point \(x\) to the boundary of \(D\), _i.e.,_\(\delta_{D}(x)=\inf\left\{|x-y|:y\in\delta D\right\}\). For a bounded set \(D\) in \(E\), \(\operatorname{diam}(D)\) is the diameter of \(D\). A curve is a continuous function \(\gamma:[a,b]\to E\). The length of \(\gamma\) is defined by
\[l(\gamma)=\sup\left\{\sum_{i=1}^{n}|\gamma(t_{i})-\gamma(t_{i-1})|\right\},\]
where the supremum is taken over all partitions \(a=t_{0}<t_{1}<\cdots<t_{n}=b\).
### John domains
**Definition 2.1**.: _Let \(G\subsetneq E\) be a domain and let \(c\geq 1\). Then \(G\) is called \(c\)-John, if each pair of points \(x_{1},x_{2}\) in \(G\) can be joined by a curve \(\gamma\) in \(G\) satisfying the following:_
\[\min_{j=1,2}l(\gamma[x_{j},x])\leq c\delta_{G}(x)\text{ for all }x\in\gamma, \tag{2.1}\]
_where \(\gamma[x_{j},x]\) denotes the part of the curve \(\gamma\) from \(x_{j}\) to \(x\). We say that such a curve is a \(c\)-cone arc. This definition is known as the cigar definition of John domains._
If the length of the curve in (2.1) of Definition 2.1 is replaced by \(\operatorname{diam}(\gamma[x_{j},x])\), then the domain is said to be a diameter \(c\)-John domain. We say that \(G\) is distance \(c\)-John if (2.1) is replaced by
\[\min_{j=1,2}|x_{j}-x|\leq c\delta_{G}(x),\]
for all \(x\in\gamma\).
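To make condition (2.1) concrete, here is a minimal numerical sketch in Python (our own construction, not from the text) that checks the cigar condition along a curve sampled at finitely many points, given the constant \(c\) and a function returning the distance to the boundary; it is, of course, only a discrete approximation of (2.1).

```python
import numpy as np

def is_cone_arc(points, dist_to_boundary, c):
    """Discretely check min_j l(gamma[x_j, x]) <= c * delta_G(x) at each sample point."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])   # arc length from the first endpoint
    total = cum[-1]
    return all(min(lx, total - lx) <= c * dist_to_boundary(x)
               for x, lx in zip(points, cum))

# Example: a diameter of the unit disk is a 1-cone arc.
ts = np.linspace(-0.9, 0.9, 50)
curve = np.stack([ts, np.zeros_like(ts)], axis=1)
disk_dist = lambda x: 1.0 - np.linalg.norm(x)      # distance to the unit circle
print(is_cone_arc(curve, disk_dist, c=1.0))        # -> True
```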
**Definition 2.2**.: _For \(c\geq 1\), a non complete metric space \((D,d)\) is called \(c\)-John domain, if there is \(x_{0}\in D\), called the John center of \(D\), such that every point \(x\) in \(D\) can be joined to \(x_{0}\) by a curve \(\alpha\) satisfying_
\[l\left(\alpha[x,z]\right)\leq c\delta_{D}(z),\]
_for each \(z\in\alpha\), and we say that \(\alpha\) is a \(c\)-carrot arc._
We define diameter \(c\)-John with center and distance \(c\)-John with center similarly as we have defined for general John domains.
**Remark 2.3**.: It is easy to see that if \(D\) is a John domain with center \(x_{0}\), then \(D\subset B(x_{0},c\delta_{D}(x_{0}))\): for any \(x\in D\) and a \(c\)-carrot arc \(\alpha\) from \(x\) to \(x_{0}\), we have \(|x-x_{0}|\leq l(\alpha)\leq c\delta_{D}(x_{0})\). In particular, every John domain with center \(x_{0}\) is bounded. If the domain \(D\subset\mathbb{R}^{n}\) is bounded, then Definitions 2.1 and 2.2 are quantitatively equivalent (see [8]). In this article, we focus primarily on domains that are John in the cigar sense.
It is easy to see that length \(c\)-John \(\implies\) diameter \(c\)-John \(\implies\) distance \(c\)-John. In \(\mathbb{R}^{n}\), Martio and Sarvas [5] have shown that for a John domain \(D\subset\mathbb{R}^{n}\) with center \(x_{0}\), diameter \(c\)-John implies length \(c\)-John. In [8], Vaisala proved that for a bounded domain \(D\subset\mathbb{R}^{n}\), diameter \(c\)-John and distance \(c\)-John are quantitatively equivalent. Vaisala also constructed an example of a diameter John domain in an infinite-dimensional Hilbert space which is not a length John domain.
### Quasisymmetric mappings
Let \(X\) be a metric space. By a triple in \(X\) we mean an ordered sequence \(T=(x,y,z)\) of three distinct points in \(X\). The ratio of \(T\) is the number
\[\rho(T)=\frac{|y-x|}{|z-x|}.\]
If \(f:X\to Y\) is an injective map, the image of a triple \(T=(x,y,z)\) is the triple \(f(T)=(f(x),f(y),f(z)).\) Suppose that \(A\subset X\). A triple \(T=(x,y,z)\) in \(X\) is said to be a triple in the pair \((X,A)\) if \(x\in A\) or if \(\{y,z\}\subset A\). Equivalently, both \(|y-x|\) and \(|z-x|\) are distances from a point in \(A\).
**Definition 2.4**.: _Let \(X\) and \(Y\) be metric spaces an embedding \(f:X\to Y\) is said to be \(\eta\)-quasisymmetric if there is a homeomorphism \(\eta:[0,\infty)\to[0,\infty)\) such that_
\[\rho(fT)\leq\eta(\rho(T)) \tag{2.2}\]
_holds for each triple \(T\) in \(X\)._
_Let \(A\subset X\), then \(f\) is said to be \(\eta\) -quasisymmetric relative to \(A\), abbreviated \(\eta\)-QS rel \(A\), if the condition (2.2) holds for every triple \(T\) in \((X,A)\). Thus quasisymmetry rel \(X\) is equivalent to ordinary quasisymmetry._
Note that an embedding \(f:X\to Y\) is \(\eta\)-QS rel \(A\) if, and only if, \(\rho(T)\leq t\) implies that \(\rho(f(T))\leq\eta(t)\) for each triple \(T\) in \((X,A)\) and \(t\geq 0\).
In 1999, Vaisala [12] proved that (2.2) implies that \(f\) is an embedding unless \(f\) is constant.
## 3. Main Results
The proof of Theorem 1.9 is divided into the following two lemmas.
**Lemma 3.1**.: _Suppose \(G\subset E\) is a \(c\)- John domain, then \(G\) satisfies the minimizing property._
Proof.: Let
\[\Gamma\coloneqq\left\{\gamma:x\curvearrowright y:\gamma\text{ is a $c$-cone arc joining $x$ and $y$ }\right\}.\]
Since \(G\) is \(c\)-John, \(\Gamma\) is not empty. Now, for \(u,v\in G\) there exists a \(c\)-cone arc \(\gamma\in\Gamma\) connecting \(u\) and \(v\). For a fixed \(\gamma:u\curvearrowright v\) in \(\Gamma\) and for an ordered triple of points \(x_{1},x,x_{2}\in\gamma\) and \(y\in\delta G\), it suffices to verify (1.1) for these points. We first show that the sub-arc \(\gamma[x_{1},x_{2}]\) is again a \(c\) -cone arc. Let \(w\in\gamma\) be such that \(l(\gamma[u,w])=l\left(\gamma[w,v]\right).\)
We divide this into three cases according to the position of the points \(x_{1}\) and \(x_{2}\) in \(\gamma\) with respect to \(w\).
_Case I_: If \(x_{1},x_{2}\in\gamma[u,w]\), then
\[\min\left\{l\left(\gamma[x_{1},x]\right),l\left(\gamma[x,x_{2}]\right)\right\} =l\left(\gamma[x_{1},x]\right)\leq l\left(\gamma[u,x]\right)\leq c\delta_{G}(x).\]
_Case II_: If \(x_{1},x_{2}\in\gamma[w,v]\), then
\[\min\left\{l\left(\gamma[x_{1},x]\right),l\left(\gamma[x,x_{2}]\right)\right\} =l\left(\gamma[x,x_{2}]\right)\leq l\left(\gamma[x,v]\right)\leq c\delta_{G}(x).\]
_Case III_: \(\gamma[x_{1},x_{2}]=\gamma[x_{1},w]\cup\gamma[w,x_{2}]\)
If \(x\in\gamma[x_{1},w]\), then similarly as in Case I we obtain
\[\min\left\{l\left(\gamma[x_{1},x]\right),l\left(\gamma[x,x_{2}]\right)\right\} \leq c\delta_{G}(x).\]
If \(x\in\gamma[w,x_{2}]\), then similarly as in Case II we obtain
\[\min\left\{l\left(\gamma[x_{1},x]\right),l\left(\gamma[x,x_{2}]\right)\right\} \leq c\delta_{G}(x).\]
Therefore, we have,
\[\min_{j=1,2}|x_{j}-x|\leq\min_{j=1,2}l\left(\gamma[x_{j},x]\right)\leq c\delta _{G}(x)\leq c|x-y|.\]
Thus, we obtain
\[\min_{j=1,2}|x_{j}-y|\leq\min_{j=1,2}|x_{j}-x|+|x-y|\leq(1+c)|x-y|. \tag{3.1}\]
Hence \(G\) satisfies the minimizing property for the family \(\Gamma\).
**Lemma 3.2**.: _Suppose that a domain \(G\subset E\) satisfies the minimizing property with some constant \(c\). Then \(G\) is diameter \(c^{\prime}\)-John, where \(c^{\prime}=c^{\prime}(c)\)._
Proof.: Let \(x_{1},x_{2}\in G\); then there exists \(\gamma:x_{1}\curvearrowright x_{2}\) satisfying (1.1). Let \(x\in\gamma\), and choose a point \(z\in\delta G\) such that \(|x-z|\leq 2\delta_{G}(x)\). For \(j=1\) or \(j=2\), we claim that
\[\gamma[x_{j},x]\subseteq B\left(z,2c\delta_{G}(x)\right). \tag{3.2}\]
If it is not the case, then there exists \(u_{j}\in\gamma[x_{j},x]\) such that \(|u_{j}-z|>2c\delta_{G}(x)\) for \(j=1,2\), which implies that
\[c|x-z|<\min_{j=1,2}|u_{j}-z|,\]
which contradicts (1.1). From (3.2), it follows that
\[\min_{j=1,2}\text{diam }(\gamma[x_{j},x])\leq c^{\prime}\delta_{G}(x),\]
where \(c^{\prime}=4c\). Hence \(\gamma\) is the desired curve and the result follows.
Proof of Theorem 1.10.: Let \(G\) and \(G^{\prime}\) be proper domains in Banach spaces \(E\) and \(E^{\prime}\), respectively. Suppose \(G\) has the minimizing property, and that \(f:\bar{G}\to\bar{G}^{\prime}\) is \(\eta\)-QS rel \(\delta G\) and maps \(G\) onto \(G^{\prime}\). We show that \(G^{\prime}\) has the minimizing property. Suppose \(\Gamma\) is the family of curves in \(G\) such that any two given points can be joined by a curve \(\gamma\in\Gamma\) satisfying (1.1) with constant \(c\) for every ordered triple of points \(x_{1},x,x_{2}\in\gamma\) and every \(y\in\delta G\). We claim that \(\Gamma^{\prime}=f(\Gamma)\) is the required family of curves in \(G^{\prime}\).
Let \(\gamma^{\prime}\) be a curve in \(\Gamma^{\prime}\), fix an ordered triple of points \(x_{1}^{\prime}=f(x_{1}),x^{\prime}=f(x),x_{2}^{\prime}=f(x_{2})\) in \(\gamma^{\prime}\), and let \(y^{\prime}=f(y)\) be in \(\delta G^{\prime}\). Then \(x_{1},x,x_{2}\) is an ordered triple of points in \(\gamma\) and \(y\) is in \(\delta G\). Without loss of generality, assume that \(\min_{j=1,2}|x_{j}-y|=|x_{1}-y|\). Then by the minimizing property of \(G\) we obtain \(|x_{1}-y|\leq c|x-y|\), which implies \(|f(x_{1})-f(y)|\leq\eta(c)|f(x)-f(y)|\). Hence \(G^{\prime}\) satisfies the minimizing property, and Theorem 1.10 follows from Theorem 1.9.
Now we provide the proof of Theorem 1.11.
Proof of Theorem 1.11.: Suppose that \(G\subset E\) is a distance \(c\)-John domain. Set
\[\Gamma=\left\{\gamma:x_{1}\curvearrowright x_{2}:\gamma\text{ is a distance }c\text{-John arc},x_{1},x_{2}\in G\right\}.\]
Fix \(x_{1},x_{2}\in G\) and \(\gamma:x_{1}\curvearrowright x_{2}\) in \(\Gamma\). Then for \(x\in\gamma\) and \(y\in\delta G\), we have
\[\min_{j=1,2}|x_{j}-y|\leq\min_{j=1,2}|x_{j}-x|+|x-y|\leq c\delta_{G}(x)+|x-y| \leq(1+c)|x-y|. \tag{3.3}\]
Therefore, \(G\) satisfies the weak minimizing property with the family of curves \(\Gamma\) and constant \(1+c\).
Conversely, suppose \(G\) has the weak minimizing property with \(\Gamma^{\prime}\) as the family of curves and \(c\) as the constant satisfying (1.2). Let \(x_{1},x_{2}\in G\) and let \(\alpha:x_{1}\curvearrowright x_{2}\) be the corresponding curve in \(\Gamma^{\prime}\). For \(x\in\alpha\), choose \(z\in\delta G\) with \(|x-z|\leq 2\delta_{G}(x)\). Then by (1.2), we have
\[\min_{j=1,2}|x_{j}-x|\leq\min_{j=1,2}|x_{j}-z|+|x-z|\leq(c+1)|z-x|\leq 2(c+1) \delta_{G}(x).\]
Therefore, \(G\) is distance \(c^{\prime}\)-John, where \(c^{\prime}=2(c+1)\).
In light of Theorem 1.11, it suffices to prove the following lemma to prove Theorem 1.12.
**Lemma 3.3**.: _Suppose \(G\subset E\) and \(G^{\prime}\subset E^{\prime}\) be domains. Let \(f:\bar{G}\to\bar{G}^{\prime}\) be \(\eta\)-QS rel \(\delta G\) and maps \(G\) onto \(G^{\prime}\). Further, suppose that \(G\) has the weak minimizing property, then \(G^{\prime}\) also has the weak minimizing property._
Proof.: Fix \(x_{1}^{\prime},x_{2}^{\prime}\in G^{\prime}\) and \(y^{\prime}\in\delta G^{\prime}\). Then there exist \(x_{1},x_{2}\in G\) and \(y\in\delta G\) such that \(f(x_{i})=x_{i}^{\prime}\) for \(i=1,2\) and \(f(y)=y^{\prime}\). Further, there exists a curve \(\gamma:x_{1}\curvearrowright x_{2}\) in \(G\) and a constant \(c\) such that for \(x\in\gamma\),
\[\min_{j=1,2}|x_{j}-y|\leq c|x-y|.\]
Without loss of generality, assume that \(\min_{j=1,2}|x_{j}-y|=|x_{1}-y|\). Thus, by the quasisymmetry of \(f\), we obtain
\[|x_{1}^{\prime}-y^{\prime}|\leq\eta(c)|f(x)-y^{\prime}|.\]
Hence, it follows that \(f(\gamma)\) is a curve joining \(x_{1}^{\prime}\) and \(x_{2}^{\prime}\) satisfying (1.2). Therefore, \(G^{\prime}\) has the weak minimizing property with constant \(c^{\prime}=\eta(c)\).
Since, for bounded domains in \(\mathbb{R}^{n}\), every distance John domain is a length John domain, we have the following easy consequence of Theorem 1.12.
**Corollary 3.4**.: _If \(D\subset\mathbb{R}^{n}\) is a John domain in the sense Definition 2.2 and \(f:\bar{D}\to\bar{D}^{\prime}\subset\mathbb{R}^{n}\) is \(\eta\)-QS rel \(\delta D\), then \(D^{\prime}\) is also John domain as in Definition 2.2._
In [13, Corollary 3.2], Li, Vuorinen, and Zhou have shown that John domains in \(\mathbb{R}^{n}\) are preserved under quasisymmetric homeomorphisms with a dimension-free control function. Since every quasisymmetric map defined on a domain in a Banach space extends to the boundary of the domain (see [12, Theorem 6.12]), Corollary 3.4 is an improvement of the result of Li, Vuorinen, and Zhou [13].
**Acknowledgement.** The first author thanks SERB-CRG; the second author's research work is supported by CSIR-UGC.
|
2309.08832 | SLIDE: Reference-free Evaluation for Machine Translation using a Sliding
Document Window | Reference-based metrics that operate at the sentence-level typically
outperform quality estimation metrics, which have access only to the source and
system output. This is unsurprising, since references resolve ambiguities that
may be present in the source. In this paper, we investigate whether additional
source context can effectively substitute for a reference. We present a metric
named SLIDE (SLIding Document Evaluator), which operates on blocks of
sentences. SLIDE leverages a moving window that slides over each document in
the test set, feeding each chunk of sentences into an unmodified, off-the-shelf
quality estimation model. We find that SLIDE obtains significantly higher
pairwise system accuracy than its sentence-level baseline, in some cases even
eliminating the gap with reference-base metrics. This suggests that source
context may provide the same information as a human reference in disambiguating
source ambiguities. This finding is especially pertinent for reference-free
document-level evaluation, wherein SLIDE could provide higher-quality pairwise
system assessments while only requiring document boundary annotations. | Vikas Raunak, Tom Kocmi, Matt Post | 2023-09-16T01:30:58Z | http://arxiv.org/abs/2309.08832v2 | # SLIDE: Reference-free Evaluation for Machine Translation using a Sliding Document Window
###### Abstract
Reference-based metrics that operate at the sentence level typically outperform quality estimation metrics, which have access only to the source and system output. This is unsurprising, since references resolve ambiguities that may be present in the source. We investigate whether additional source context can effectively substitute for a reference. We present a metric, SLIDE (SLiding Document Evaluator), which operates on blocks of sentences using a window that slides over each document in the test set, feeding each chunk into an unmodified, off-the-shelf quality estimation model. We find that SLIDE obtains significantly higher pairwise system accuracy than its sentence-level baseline, in some cases even eliminating the gap with reference-based metrics. This suggests that source context may provide the same information as a human reference.
## 1 Introduction
The prevailing approach for neural machine translation metrics is to work at the sentence level, constructing sequences of contextualized encoder states from the source sentence, a reference translation, and a system output. The specific mechanics vary by metric, but a general approach, employed by COMET (Rei et al., 2020), is to pool these encodings into separate sentence-level embeddings, concatenate them, and feed them into a regressor, which is trained against human annotations. Quality Estimation (QE) approaches work similarly, but do not have access to a reference translation.
QE metrics typically trail their reference-based counterparts (Freitag et al., 2022), for obvious reasons. The default evaluation setting for QE is at the sentence level. But just as there exist many linguistic phenomena that cannot be translated without context, these same phenomena also cannot be properly evaluated in isolation. As an example, consider the following English sentences _with context_ and their translations into German.
1. _I need my hat_. Where is **it?**
2. _Ich brauche meinen Hut_. Wo ist **er?**
Reference-based evaluation is aided by the fact that the human translation, presumably produced in context, resolves the ambiguity. QE approaches (operating at the sentence level), on the other hand, cannot correctly score this translation. There are many other document-level phenomena that might also be captured by references, many of them subtle and hard to measure (Maruf et al., 2019).
It therefore stands to reason that providing this context could be a useful extension to (reference-free) QE metrics. Indeed, we already have some evidence for this from Vernikos et al. (2022), who altered COMET to make use of context when producing sentence encodings; this context is then removed prior to passing each sentence to the COMET regression module, for both reference-based evaluation and quality estimation. We are motivated by different but related ideas: (i) neural metrics often make use of underlying language models trained on wider contexts, which means there is no real impediment to feeding them multiple sentences, and (ii) a sentence's evaluation will differ based on its order in a block of sentences, so it may be helpful to evaluate each sentence in multiple different contexts. In this work, we therefore experiment with a **strided window** approach applied to COMET, whose underlying encoder is InfoXLM (Lample and Conneau, 2019; Chi et al., 2021), trained on wide contexts. We apply a fixed-width sentence window and slide it across the documents within a test set, accumulating scores of each chunk in normal COMET fashion. We experiment with various windows and strides, and find that COMET-QE employed in this fashion outperforms its sentence-level reference-based counterparts in many settings. We conclude that this simple extension to QE might be profitably engaged wherever document annotations are available.
## 2 Does COMET-QE work with context?
Deutsch et al. (2023) explored a related question, looking into whether COMET and other neural metrics trained at the sentence level could be employed to evaluate paragraphs instead. Their results were positive: sentence-level metrics work just as well with paragraph input, and retraining with paragraphs was not helpful. Their evaluation, however, was limited to reference-based metrics. In this section we describe an experiment designed to test whether source context can be helpful when no reference is available.
ContraPro (Muller et al., 2018) is a large (12k) collection of English-German parallel sentences, each containing one of the three German subject pronouns: _er_, _es_, and _sie_. The data is surfaced automatically from an NLP pipeline applied to OpenSubtitles (Lison et al., 2018), and each instance is paired with _contrastive_ examples that swap out the correct pronoun for an incorrect one. The dataset's task is to evaluate a model by determining whether it assigns higher (better) scores to the correct, attested sentence pair than to all of its incorrect contrastive variants.
Post and Junczys-Dowmunt (2023) note the inability of contrastive test sets like ContraPro (Muller et al., 2018) to discriminate contextual machine translation systems from sentence-based ones. They found that discriminative ability did not necessarily imply generative ability, suggesting also that evaluation is easier than generation. However, the task of discriminating between good outputs from bad or better ones is precisely the job of a metric, making these test sets well-suited to _metric evaluation_. We can do this by treating the reference sentences (both the original correct one, and the contrastive variants) as system outputs, providing all sentences with the relevant context.
Vernikos et al. (2022) investigated the ability of a modified COMET model on this very discriminative task. Their approach (DocCOMET) used the context of each sentence in the computation of the sentence encodings, but removed this context prior to COMET's concatenation and regression steps. We repeat this experiment using their code,1 but explore other amounts of context beyond the two used in their results. Curious whether a simpler approach would work, we conducted a proof-of-concept experiment that compared DocCOMET results to a COMET model variant that simply concatenates sentences in the source and hypothesis, treating them as a single undifferentiated unit to be scored by an unmodified COMET model. We do this with two COMET models:
Footnote 1: [https://github.com/amazon-science/doc-mt-metrics](https://github.com/amazon-science/doc-mt-metrics)
* **COMET20-QE**(Rei et al., 2020): wmt20-comet-qe-da
* **COMETKiwi**(Rei et al., 2022): wmt22-cometkiwi-da
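Concretely, the concatenation scheme can be sketched as follows (a minimal Python sketch; qe_score stands for any of the QE models above, e.g. as exposed by the unbabel-comet package, and the function names are ours):

```python
def with_context(context_sents, sentence, k):
    """Prepend the last k context sentences to the payload sentence as one flat string."""
    return " ".join(list(context_sents[-k:]) + [sentence]) if k > 0 else sentence

def contrapro_correct(qe_score, src_ctx, src, tgt_ctx, good, contrastive, k):
    """True iff the correct pronoun outscores every contrastive variant at context size k."""
    src_in = with_context(src_ctx, src, k)
    good_score = qe_score(src_in, with_context(tgt_ctx, good, k))
    return all(good_score > qe_score(src_in, with_context(tgt_ctx, bad, k))
               for bad in contrastive)
```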
The results can be found in Table 1. We recover Vernikos et al.'s numbers nearly exactly (at context=2, 74.1 vs. their 74.2), and also show improvement with their method on this task with longer contexts. This corroborates a note from their paper, that longer contexts help improve the encoding of the _payload_ sentence, and quantifies it. For the COMET models employed with no special tweaks, we see improvement linear in the context size for COMET20-QE, and scores that peak at context=2 for COMETKiwi. The numbers for the former system, with the older underlying model, are always higher than those for DocCOMET-QE. The COMETKiwi model, on the other hand, achieves accuracies that are significantly higher than either model, but its peak is at a context size of two sentences, after which it degrades. We don't have an explanation for this, but note that scoring the entire pseudo-document is unnatural for this task, since the only change is a single word in the final sentence. In any case, the good performance passes our test, and we take this as proof-of-concept that contextualized COMET can perform well with simple concatenation, with no architecture modifications.
| ctx | DocCOMET-QE | COMET20-QE | COMETKiwi |
| ---: | ---: | ---: | ---: |
| 0 | 54.5 | 54.6 | 48.4 |
| 1 | 71.5 | 73.8 | 81.8 |
| 2 | 74.1 | 75.5 | 83.1 |
| 3 | 75.1 | 76.6 | 82.8 |
| 4 | 75.9 | 77.0 | 81.4 |
| 5 | 76.0 | 77.2 | 79.4 |
| 6 | 76.3 | 77.4 | 77.6 |
| 7 | 76.2 | 78.1 | 76.3 |
| 8 | 76.6 | 78.0 | 75.8 |
| 9 | 77.0 | 78.5 | 74.4 |

Table 1: Sentence-level contrastive accuracies on the ContraPro dataset as a function of context length (in sentences), comparing three source-based (QE) metrics.
## 3 The SLIDE Approach
Our main focus is the "to ship or not to ship" question (Kocmi et al., 2021): given two systems, which one is the better one? We turn next to this task.
Our task is to score system output over a test set comprising multiple documents. We define a window size, \(w\), specifying how many sentences to include as input, and a stride, \(s{\leq}w\), defining how many sentences to advance the window. For a given \((w,s)\) setting, the window is placed at the beginning of the document covering \(w\) sentences, and the chunk is sent to COMET in the guise of a single input. The window is then incremented by \(s\) sentences and a new value is computed. This proceeds over all documents in a test set. We allow a final, smaller window to be produced if the document size is not divisible by \(w-s\). Finally, we accumulate all the scores and return their average as the system-level score, in typical COMET fashion.
These variables define two scenarios:
* _Overlapped_ (\(s<w\)). Each sentence in a document has the opportunity to contribute multiple times to the overall score, playing a role in different positions in the input.
* _Chunked_ (\(s=w\)). There is no overlap, so each sentence participates in the overall score only once.
We call our metric SLIDE, for SLiding Document Evaluator. Note that the chunked version of SLIDE presents a natural ablation of the second assumption behind our metric (Section 1).
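A minimal Python sketch of the procedure follows (our own naming; qe_score stands for any chunk-level QE scorer, such as one of the COMET QE models above applied to a concatenated chunk):

```python
def slide_windows(n_sents, w, s):
    """Start indices of the strided windows over a document of n_sents sentences,
    including a final, smaller window when the tail is not covered exactly."""
    if n_sents <= 0:
        return []
    starts = list(range(0, max(n_sents - w, 0) + 1, s))
    if starts[-1] + w < n_sents:
        starts.append(starts[-1] + s)
    return starts

def slide_score(docs, qe_score, w=6, s=6):
    """System-level SLIDE score: the average QE score over all chunks of all documents.

    docs is a list of (src_sentences, mt_sentences) pairs, one pair per document."""
    scores = []
    for src, mt in docs:
        for i in slide_windows(len(src), w, s):
            scores.append(qe_score(" ".join(src[i:i + w]), " ".join(mt[i:i + w])))
    return sum(scores) / len(scores)
```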
## 4 Experiments
We experiment with window values \(1{\leq}w{\leq}10\) and strides of \(1{\leq}s{\leq}w\). Our evaluation metric is the percentage of system pairs that a metric correctly distinguishes, using the WMT22-MQM labels (Freitag et al., 2022), which cover a number of systems in three language pairs: English-German, English-Russian, and Chinese-English.
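For clarity, the pairwise accuracy can be computed as in the following sketch (ties count as disagreements here; the official WMT computation may differ in details):

```python
from itertools import combinations

def pairwise_accuracy(metric, human):
    """Fraction of system pairs whose metric ranking agrees with the human (MQM) ranking.

    metric and human each map a system name to its system-level score."""
    pairs = list(combinations(metric, 2))
    agree = sum((metric[a] - metric[b]) * (human[a] - human[b]) > 0 for a, b in pairs)
    return agree / len(pairs)
```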
The heatmap in Figure 1 plots pairwise system-level accuracies for each \((w,s)\) value for three models. The first two are COMET20-QE and COMETKiwi, as defined above. We also include the reference-based @COMET22 model.2,3
Footnote 2: As a service to the reader, we annotate reference-using models with @.
Footnote 3: COMET model wmt22-comet-da
Next, Table 2 displays the pairwise accuracy of a number of metrics from the WMT22 metrics task, including the best metrics and other metrics of interest. We include the best variant of SLIDE, the worst variant, and, in a nod towards model selection, the best variant with a stride of 1.
## 5 Discussion
From the heatmaps in Figure 1, we can easily see a number of trends. COMET20-QE does not fare well with context, with very few points improving over the sentence-level baseline (point \((1,1)\)) at all, and not in any consistent fashion. We see the same thing for model @COMET22, corroborating the finding of Deutsch et al. (2023). For the reference-based model, an explanation may be that context is redundant with the reference. As for why there are no improvements with the COMET20-QE model, we have no insights at the moment.
With the COMETKiwi model, however, the results become quite interesting. Adding context helps significantly; the worst point in the grid improves by 3.3 points over the no-context baseline. Performance seems to rise as more context is added,
| Metric | MQM |
| --- | ---: |
| @metricx\_xl\_DA\_2019 | 0.865 |
| @metricx\_xxl\_MQM\_2020 | 0.850 |
| @BLEURT-20 | 0.847 |
| @metricx\_xl\_MQM\_2020 | 0.843 |
| **SLIDE**(6,6) | 0.843 |
| @COMET-22 | 0.839 |
| **SLIDE**(7,1) | 0.839 |
| @COMET-20 | 0.836 |
| @Doc-COMET | 0.836 |
| @UniTE | 0.828 |
| @MS-COMET-22 | 0.828 |
| @UniTE-ref | 0.818 |
| @MATESE | 0.810 |
| **SLIDE**(2,1) | 0.807 |
| @YiSi-1 | 0.792 |
| COMETKiwi | 0.788 |
| Doc-COMET | 0.737 |
| @chrF | 0.734 |
| @BLEU | 0.708 |

Table 2: Pairwise system accuracy against the WMT22-MQM annotations. Metrics that use a reference are marked with @. Our entries are of the form SLIDE \((w,s)\). We have retained many other WMT22 scores for comparison purposes.
although once too much context is used, the points begin to fall again. Within a particular window (row), it does not seem to matter too much which stride is used.
Looking at chunked (\(w=s\)) vs. overlapped (\(w>s\)) values, we also see no particular pattern. Increasing the context size (up to _some_ point) matters, but overlapping the document chunks neither helps nor hurts, on average.
Finally, together with the results in Table 1, we note that SLIDE propels quality estimation up the chart, where it competes with reference-based versions of the underlying evaluators. In particular, SLIDE \((6,6)\) outperforms @COMET22, and a number of variants are at the same level as @COMET20. Of course, this doesn't answer the difficult question of model selection, but the trends are informative, and we note again that even the worst SLIDE model is already quite an improvement over the baseline.
For comparison purposes, we also looked into adding context to reference-based COMET in the same manner. This provided only very small improvements over sentence-based COMET, much smaller than the multi-point gains from the QE variants.
Finally, we note that it is not obvious that adding context to sentences should improve metric performance, when one considers that the underlying systems are all sentence-based translations. Any information that is newly available to the evaluator would therefore _not_ have been available to the underlying translation engine. As a result, the improvements the metric is picking up on must come from another source. An interesting further evaluation would throw some document-context translation engines into the mix; contextualized quality estimation (like that provided by SLIDE) should help discriminate those systems, too.
## 6 Related Work
The vast majority of work in machine translation metrics is focused on sentence-level evaluation. At a high level, it is useful to distinguish two uses of context for machine translation evaluation.
In the first setting, contextualized metrics are designed to test document-translation capabilities of a model. Chief among these are contrastive test sets, in which a model is tested in its ability to rank good translations from bad ones in a contextualized manner (Muller et al., 2018; Bawden et al., 2018; Voita et al., 2019; Lopes et al., 2020). Other approaches have employed test-time NLP tool chains to target and reward correct prediction of discourse phenomena (Jiang et al., 2022; Fernandes et al., 2023).
In the second setting, metrics make use of context to provide or refine their model of what makes a good translation. This is useful even for ranking sentence-based translation systems. The first such system may be DocCOMET (Vernikos et al., 2022), which used context to modify the encodings of sentences, but then removed that context before invoking COMET's classifier. They looked at both reference-based and QE metrics, and evaluated on Pearson's correlation against human scores. Context-COMET (Hendy et al., 2023) took an approach similar to that described here, but was not evaluated at all. More recently, Deutsch et al. (2023) showed that paragraph-level evaluation works just as well for reference-based metrics, even with underlying metrics trained in a sentence-level fashion.
Figure 1: Plot of window vs. stride for COMET20-QE, COMETKiwi, and @COMET22 models on the WMT22 MQM task (en-de, en-ru, and zh-en respectively). Neither COMET20-QE nor @COMET22 sees much improvement from adding context, but COMETKiwi does. |
2309.10099 | Predictions on the execution of a CNOT gate of the reverse of the arrow
of time | Recently it has been pointed out that an outstanding application of an IBM
quantum computer is to reverse the arrow of time [Lesovik et al. Sci. Rep. 9, 1
(2019)]. The issue of the consequences of the reversal of the arrow of time on
the execution of a CNOT gate by a quantum device is addressed. It is predicted
that if the CNOT gate is executed towards the future then this was previously
executed in the past. The above might confirm the paradigm that an event in the
present time influences another event in the past time [2]. Such a physical
phenomena is called retrocausality. | G. Morales, M. Ávila, F. Soberanes | 2023-09-18T19:17:42Z | http://arxiv.org/abs/2309.10099v1 | # Predictions on the execution of a CNOT gate of the reverse of the arrow of time
###### Abstract
Recently it has been pointed out that an outstanding application of an IBM quantum computer is to reverse the arrow of time [Lesovik et al. Sci. Rep. **9**, 1 (2019)]. The issue of the consequences of the reversal of the arrow of time on the execution of a CNOT gate by a quantum device is addressed. It is predicted that if the CNOT gate is executed towards the future, then it was previously executed in the past. The above might confirm the paradigm that an event in the present time influences another event in the past time [2]. Such a physical phenomenon is called retrocausality.
A considerable amount of interest has been paid to the crucial observation made by Lesovik _et al._ [1] that the arrow of time can be inverted in an IBM quantum computer. Undoubtedly, the above is one of the most outstanding applications of a quantum computer. It is also worth observing that the result of Ref. [1] sheds light on the enigma of the correlation between the time-symmetric fundamental laws of Physics and the irreversibility of the world [3]-[4]. Furthermore, it paves the way for further revolutionary technological applications of a quantum computer. However, so far the consequences of reversing the arrow of time on the execution of a basic quantum gate by a quantum device have not been explored. Such an issue is addressed in the present work. In order to proceed, we take the quantum device to be a diamond quantum computer, a very promising technology with outstanding capabilities [7]-[8]. We first verify that this quantum computer executes the CNOT gate towards the future. After that, the arrow of time is inverted towards the past and the consequences of the above on the execution of the CNOT gate are studied. It is predicted that such a quantum gate was previously executed in the past. At this stage it is important to observe that the above result might confirm the old paradigm of retrocausality.
A quantum computer works by executing primitive operations named quantum gates. A CNOT gate [5] is one of the most primitive operations that can be performed by a quantum computer. It is worth observing that a quantum computer architecture is often described by a Schroedinger equation with a suitable Hamiltonian [6]-[10]. In particular, the execution of several different quantum protocols in a diamond quantum computer can be performed with the use of a Schroedinger equation with an RF-based Hamiltonian [7]-[8]. Within the above approach, we first verify that a diamond quantum computer can execute a CNOT gate towards the future, in the sense of the Second Law of Thermodynamics [11]. Then the time is inverted and the consequences of the above on the execution of the CNOT gate are studied. The present work is organized as follows: first, the Hamiltonian describing a diamond quantum computer is introduced; then it is proved that the diamond quantum computer executes a CNOT gate from the present time towards the past time; finally, a series of conclusions is given.
The corresponding Schroedinger equation associated with the diamond quantum computer is [7]
\[i\hbar\frac{\partial}{\partial t}\mid\psi(t)\rangle = \left\{-\hbar\left(w_{1}S_{1}^{Z}+w_{2}S_{2}^{Z}\right)-\frac{J}{ \hbar}\left(S_{1}^{Z}S_{2}^{Z}\right)+\right. \tag{1}\] \[\left.\frac{\Omega}{2}\bigg{[}\Big{(}e^{i\theta_{1}}S_{1}^{-}+e^{ -i\theta_{1}}S_{1}^{+}\Big{)}+\right.\] \[\left.\left.\left(e^{i\theta_{2}}S_{2}^{-}+e^{-i\theta_{2}}S_{2}^ {+}\right)\right]\right\}\mid\psi(t)\rangle,\]
where \(\hbar=1\) is Planck's constant, \(w_{1}\) (\(w_{2}\)) are Larmor frequencies, \(S_{1}^{Z}\) (\(S_{2}^{Z}\)) are the \(z\)-components of the spin of qubit 1 (qubit 2), \(S_{1}^{-}\) (\(S_{2}^{-}\)) are the lowering spin operators of qubit 1 (qubit 2), \(S_{1}^{+}\) (\(S_{2}^{+}\)) are the raising spin operators of qubit 1 (qubit 2), \(J\) is a coupling constant1, \(\Omega\) is the Rabi frequency, and \(\theta_{i}=w_{i}t+\varphi_{i}\) (\(i=1,2\)), with \(\varphi_{i}\) (\(i=1,2\)) arbitrary phases.
Footnote 1: If \(J>0\) the diamond quantum computer is in an antiferromagnetic regime, but if \(J<0\) it is in a ferromagnetic regime [12].
By observing that a general two-qubit state can be written as
\[\mid\psi(t)\rangle=C_{0}(t)\mid 00\rangle+C_{1}(t)\mid 01\rangle+C_{2}(t)\mid 1 0\rangle+C_{3}(t)\mid 11\rangle, \tag{2}\]
where
\[\mid C_{0}(t)\mid^{2}+\mid C_{1}(t)\mid^{2}+\mid C_{2}(t)\mid^{2}+\mid C_{3}( t)\mid^{2}=1, \tag{3}\]
then Eq. (1) can also be written as
\[i\frac{dC_{0}(t)}{dt}=\left[-\left(w_{1}+w_{2}\right)-J\right]C_ {0}(t)+\left(\frac{\Omega}{2}\cdot e^{-i\theta_{2}}\right)C_{1}(t)+\left(\frac {\Omega}{2}\cdot e^{-i\theta_{1}}\right)C_{2}(t),\] \[i\frac{dC_{1}(t)}{dt}=\left(\frac{\Omega}{2}\cdot e^{i\theta_{2} }\right)C_{0}(t)+\left[-(w_{1}-w_{2})+J\right]C_{1}(t)+\left(\frac{\Omega}{2} \cdot e^{-i\theta_{1}}\right)C_{3}(t), \tag{4}\] \[i\frac{dC_{2}(t)}{dt}=\left(\frac{\Omega}{2}\cdot e^{i\theta_{1} }\right)C_{0}(t)+\left[-(-w_{1}+w_{2})+J\right]C_{2}(t)+\left(\frac{\Omega}{2} \cdot e^{-i\theta_{2}}\right)C_{3}(t),\] \[i\frac{dC_{3}(t)}{dt}=\left(\frac{\Omega}{2}\cdot e^{i\theta_{1} }\right)C_{1}(t)+\left(\frac{\Omega}{2}\cdot e^{i\theta_{2}}\right)C_{2}(t)+ \left[-(-w_{1}-w_{2})-J\right]C_{3}(t).\]
- _Execution of a CNOT gate towards the future_
In this case the arrow of time is taken according to the Second Law of Thermodynamics, i.e., from the past towards the future.
Given the initial conditions
\[\begin{array}{l}C_{0}(t=0)=0,\\ C_{1}(t=0)=0,\\ C_{2}(t=0)=1,\\ C_{3}(t=0)=0,\end{array} \tag{5}\]

the diamond quantum computer described by Eq. (1) is said to execute a CNOT gate on a time scale \(t=T\) (\(T>0\)) if

\[\begin{array}{l}|C_{0}(t=T)|^{2}=0,\\ |C_{1}(t=T)|^{2}=0,\\ |C_{2}(t=T)|^{2}=0,\\ |C_{3}(t=T)|^{2}=1.\end{array} \tag{6}\]
To verify the conservation of probability, Eq. (3), towards the future (\(t>0\)), the Schroedinger equation is first solved numerically. The results are shown in Figure 1. It is also verified that a diamond quantum computer executes the CNOT gate of Eq. (6) towards the future, as shown in Figures 2, 3, 4, and 5. The values employed for the constants are \(J=0.0015\), \(\Omega=0.01\), \(\varphi_{1}=\pi/2\), \(\varphi_{2}=\pi/4\), \(w_{1}=0.2\), and \(w_{2}=0.0015\).
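For reference, a minimal sketch of this numerical integration of Eq. (4) with SciPy is given below, using the constants quoted above. The gate time scale \(T\) used here is a hypothetical placeholder, since its value is read off from the figures rather than stated explicitly; passing a negative final time covers the reversed arrow of time discussed next.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constants quoted in the text (hbar = 1)
J, Omega = 0.0015, 0.01
phi1, phi2 = np.pi / 2, np.pi / 4
w1, w2 = 0.2, 0.0015

def rhs(t, C):
    """Eq. (4) in matrix form: i dC/dt = M(t) C, hence dC/dt = -i M(t) C."""
    th1, th2 = w1 * t + phi1, w2 * t + phi2
    h = Omega / 2
    M = np.array([
        [-(w1 + w2) - J, h * np.exp(-1j * th2), h * np.exp(-1j * th1), 0],
        [h * np.exp(1j * th2), -(w1 - w2) + J, 0, h * np.exp(-1j * th1)],
        [h * np.exp(1j * th1), 0, (w1 - w2) + J, h * np.exp(-1j * th2)],
        [0, h * np.exp(1j * th1), h * np.exp(1j * th2), (w1 + w2) - J],
    ])
    return -1j * (M @ C)

C_init = np.array([0, 0, 1, 0], dtype=complex)  # state |10>, Eq. (5)
T = 500.0                                       # hypothetical gate time scale
for t_final in (T, -T):                         # towards the future and the past
    sol = solve_ivp(rhs, (0.0, t_final), C_init, rtol=1e-10, atol=1e-12)
    probs = np.abs(sol.y[:, -1]) ** 2
    print(f"t = {t_final:+.0f}: |C|^2 = {probs}, norm = {probs.sum():.6f}")
```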
- _Consequences of the reverse of the arrow of time on the execution of a CNOT gate_
The diamond quantum computer is said to execute the CNOT gate towards the past if, given the initial conditions of Eq. (5),2

Footnote 2: It is clear that negative times do not exist in nature. We employ this notation mathematically as a way of expressing the reversal of the arrow of time.

\[\begin{array}{l}|C_{0}(t=-T)|^{2}=0,\\ |C_{1}(t=-T)|^{2}=0,\\ |C_{2}(t=-T)|^{2}=0,\\ |C_{3}(t=-T)|^{2}=1.\end{array} \tag{7}\]

By numerically solving the Schroedinger equation, Eq. (4), for negative times, it is verified that the conservation of probability, Eq. (3), is also preserved in the past. The respective results are shown in Figure 1. It is also shown that the diamond quantum computer executed the CNOT gate of Eq. (7) in the past, as can be seen from Figures 2, 3, 4, and 5. The values employed for the constants are \(J=0.0015\), \(\Omega=0.01\), \(\varphi_{1}=\pi/2\), \(\varphi_{2}=\pi/4\), \(w_{1}=0.2\), and \(w_{2}=0.0015\).
Being one of the most promising quantum computing technologies, the diamond quantum computer has been taken as the basis of our study. We first checked that such a computer executes a CNOT gate with the arrow of time taken conventionally, according to the Second Law of Thermodynamics. We then inverted the arrow of time and showed that the CNOT gate was executed in the past, provided it was executed towards the future. From the present findings it is predicted that if a CNOT gate is executed towards the future, then it was previously executed in the past. Such a result might confirm the physical phenomenon of retrocausality.
Figure 3: The coefficient \(C_{1}(t)\) satisfies the conditions of a CNOT gate of Eqs. (6) and (7) for both cases towards the future (\(t>0\)) and towards the past (\(t<0\)).
Figure 2: The coefficient \(C_{0}(t)\) satisfies the conditions of a CNOT gate of Eqs. (6) and (7) for both cases towards the future (\(t>0\)) and towards the past (\(t<0\)).
Figure 4: The coefficient \(C_{2}(t)\) satisfies the conditions of a CNOT gate of Eqs. (6) and (7) for both cases towards the future (\(t>0\)) and towards the past (\(t<0\)).
Figure 5: The coefficient \(C_{3}(t)\) satisfies the conditions of a CNOT gate of Eqs. (6) and (7) for both cases towards the future (\(t>0\)) and towards the past (\(t<0\)).
## Acknowledgments
We acknowledge support from an SNI-Conacyt grant.
|
2309.12856 | Robotic Handling of Compliant Food Objects by Robust Learning from
Demonstration | The robotic handling of compliant and deformable food raw materials,
characterized by high biological variation, complex geometrical 3D shapes, and
mechanical structures and texture, is currently in huge demand in the ocean
space, agricultural, and food industries. Many tasks in these industries are
performed manually by human operators who, due to the laborious and tedious
nature of their tasks, exhibit high variability in execution, with variable
outcomes. The introduction of robotic automation for most complex processing
tasks has been challenging due to current robot learning policies. A more
consistent learning policy involving skilled operators is desired. In this
paper, we address the problem of robot learning when presented with
inconsistent demonstrations. To this end, we propose a robust learning policy
based on Learning from Demonstration (LfD) for robotic grasping of food
compliant objects. The approach uses a merging of RGB-D images and tactile data
in order to estimate the necessary pose of the gripper, gripper finger
configuration and forces exerted on the object in order to achieve effective
robot handling. During LfD training, the gripper pose, finger configurations
and tactile values for the fingers, as well as RGB-D images are saved. We
present an LfD learning policy that automatically removes inconsistent
demonstrations, and estimates the teacher's intended policy. The performance of
our approach is validated and demonstrated for fragile and compliant food
objects with complex 3D shapes. The proposed approach has a vast range of
potential applications in the aforementioned industry sectors. | Ekrem Misimi, Alexander Olofsson, Aleksander Eilertsen, Elling Ruud Øye, John Reidar Mathiassen | 2023-09-22T13:30:26Z | http://arxiv.org/abs/2309.12856v1 | # Robotic Handling of Compliant Food Objects by Robust Learning
###### Abstract
The robotic handling of compliant and deformable food raw materials, characterized by high biological variation, complex geometrical 3D shapes, and mechanical structures and texture, is currently in huge demand in the ocean space, agricultural, and food industries. Many tasks in these industries are performed manually by human operators who, due to the laborious and tedious nature of their tasks, exhibit high variability in execution, with variable outcomes. The introduction of robotic automation for most complex processing tasks has been challenging due to current robot learning policies. A more consistent learning policy involving skilled operators is desired. In this paper, we address the problem of robot learning when presented with inconsistent demonstrations. To this end, we propose a robust learning policy based on Learning from Demonstration (_LfD_) for robotic grasping of food compliant objects. The approach uses a merging of RGB-D images and tactile data in order to estimate the necessary pose of the gripper, gripper finger configuration and forces exerted on the object in order to achieve effective robot handling. During _LfD_ training, the gripper pose, finger configurations and tactile values for the fingers, as well as RGB-D images are saved. We present an _LfD_ learning policy that automatically removes inconsistent demonstrations, and estimates the teacher's intended policy. The performance of our approach is validated and demonstrated for fragile and compliant food objects with complex 3D shapes. The proposed approach has a vast range of potential applications in the aforementioned industry sectors.
_Index Terms_ - Compliant food objects, Learning from Demonstration, Robotic handling, Multifingered gripper.
## Supplementary Material
For supplementary video see: [https://youtu.be/cafFC9HgaFI](https://youtu.be/cafFC9HgaFI)
## I Introduction
Contact processing, or the interaction of a robot with objects that require manipulation, is one of the greatest challenges facing robotics today. Most approaches to robotic grasping and manipulation are purely vision-based. They either require a 3D model of the object, or attempt to build such models by full 3D reconstruction utilizing vision [1]. In many areas, knowledge of the 3D model of an object can be sufficient for a robot to achieve optimal grasping [2]. However, a lack of detailed information about the object's surface, and mechanical and textural properties, can affect grasping accuracy, causing handling to be much more challenging.
These challenges become even greater if the objects requiring manipulation are compliant, because there is an additional requirement not to degrade their quality. These issues are even greater if the objects also require dynamic dexterous manipulation, such as when they are moving. This is highlighted in particular when it comes to handling and grasping movements during harvesting, post-harvesting and production line processing operations across entire value chains involving fragile and compliant raw materials of oceanic and agricultural origin. Compliancy challenges both the classical approaches based on rigid-body assumptions [3], as well as reliance on purely vision-based approaches to robotic grasping and handling. Recently, the ocean space, agriculture and food sectors have increased their interest in flexible, robot-based, automation systems, suitable for small-scale production volumes and able to adapt to natural biological variation in food raw materials [4]. For example, the food processing industry is characterised by many manual tasks and complex processing operations that rely heavily on the dexterity of a human hand. Optimal grasping, manipulation and imitation of the complex manual dexterity of skilled human operators by robots is thus prerequisite for greater uptake and utilisation of robotic automation by the ocean space, agricultural and food industries [4, 5].
In some of the most complex processing operations, such as manually-based meat or fish harvesting, cutting or trimming, the skilled human operator uses a number of visual and tactile senses, and combines these with a task description and previous experience [4, 5]. For example, during cutting, a skilled operator will utilise his visual and tactile senses to adapt his efforts and make adjustments in order to execute essential changes in his hand position and the force he exerts to complete the task. This is an important skill for a robotic system to learn to be able to perform the same task with similar efficiency. This example illustrates the way in which humans
Figure 1: a) _Learning from Demonstration_: Teacher demonstrating grasps of compliant objects by remote control of the 6-DOF robot arm and gripper; b) _Autonomous grasping_: after learning the grasping skills from the teacher, the robot autonomously grasps compliant objects.
approach solving grasping and handling problems by using multi-modal perception combined with both their visual and tactile senses [6]. Both are essential to optimal grasping and handling, and are particularly important when grasping fragile and compliant objects such as food raw materials. Combining two senses is essential to a grasping action that is both gentle and efficient. A further challenge facing the food processing and similar sectors, where skilled operators perform complex operations, is the inconsistency of the operators' performance. In a scenario in which a robot has to be taught by humans to perform such complex operations, the accuracy of robotic manipulation will vary depending on the teacher and the high levels of variability involved in the teaching approach. Our approach here is motivated by the need to develop a consistent learning policy that is independent of teacher variance. This in turn alleviates the burden of accuracy on the teachers.
The key contributions of this work are: a) an _LfD_ learning policy that automatically discards the inconsistent demonstrations by human teachers, addressing the problem of learning when presented with inconsistent human demonstrations; b) the integration of visual (RGB-D) and tactile data into space-action pairs for the grasping of compliant objects using the aforementioned _LfD_ learning policy, thus enabling the automatic rejection of inconsistent data and the effective autonomous grasping of fragile and deformable food objects. Based on visual input (RGB-D images), we estimate the pose (6 DOF) and gripper configuration for grasping and, based on tactile data, we estimate the forces necessary to achieve the optimal grasping of previously unseen compliant objects. Both visual input and tactile data are gathered during the training stage implemented using an _LfD_ approach that is in turn facilitated by teleoperation control of the robotic grasping action using STEM controllers. This paper demonstrates the validity of the approach by illustrating how a robot is taught to grasp a fragile and compliant food object, in this case lettuces.
## II Related Work
Robotic grasping and handling of objects has been studied extensively and a wide variety of approaches can be found in the literature. Most of the research reported in the field is dedicated to the robotic grasping and handling of rigid bodies [1, 7, 17], but recently research has also focused on deformable objects [3, 8], as well as compliant objects of different origins, with application domains including the ocean space, agriculture and food processing industries [4, 5, 9, 10]. Most of the reported research on robotic grasping and manipulation is also based on developing models that estimate the grasping of objects solely from visual input data [7, 10, 11]. While vision enables 3D localization of the objects of interest, the benefit of using tactile data lies in the accurate perception of the compliancy of the objects to be manipulated. This is particularly important when manipulating food objects, whose quality cannot be degraded once contact with the robot is established and during the interaction [4, 5]. In addition, robotic handling approaches based purely on vision are limited because food raw materials come with a high biological variation, and although they may have the same visual properties, they can exhibit varying material and mechanical properties depending on how they are treated during production in the value chain. Although the use of tactile sensors has been demonstrated for different robotic manipulation applications [12, 13], including the grasping of fragile compliant objects [14] and food objects [9], the adoption of tactile sensors for robotic manipulation applications has been slow. This has been due both to the difficulty of integrating tactile sensors into standard schemes and to the challenges of capturing and modeling the tactile readings [3, 15]. Even scarcer are cases of multi-modal perception combining visual and tactile information, mainly focused on the robotic grasping of standard rigid objects [15]. For example, Motamedi et al. [16] explore the use of haptic tactile and visual feedback during object-manipulation tasks assisted by human participants to shed light on how the human approach to grasping combines tactile and visual feedback. In both [15, 16], although for rigid objects, it is demonstrated how incorporating tactile sensing may improve the robotic grasping of objects.
One aspect of our work is inspired by the existing literature and by the lack of multi-modal visual and tactile perception for the robotic manipulation of highly fragile and compliant food raw materials, especially regarding the use of RGB-D image data with tactile data, differing from [15] in that they use pure 2D images in combination with tactile data for the grasping of rigid objects. Our approach combines visual and tactile sensing and feeds these inputs to a robust learning policy based on _LfD_. This policy enables the estimation not only of the 6DOF grasping pose, by means of visual input, but also of the necessary gripper configuration and the forces exerted on the food object to accomplish handling without quality degradation.
One particular learning policy to use when endowing robots with the autonomy to perform manipulation tasks on compliant food objects is Learning from Demonstration (_LfD_). An _LfD_ policy is learned from examples, or demonstrations, executed by a human teacher, and these examples constitute the state-action pairs that are recorded during the teacher's demonstrations [18]. This learning policy has been used to enable robust learning using regularized and maximum
Figure 2: Illustration of consistent and inconsistent demonstrations, and the three relevant regions in the demonstration (state-action) space – (1) state-action inliers, (2) state outliers and (3) state-action outliers that are not state outliers. Demonstrations in regions (1) and (2) are consistent.
margin methods [19], but has also been applied to robotic grasping [20], more specifically to robot grasp pose estimation [21] and grasping based on input from 3D or depth images [11, 22]. A characteristic of _LfD_ is that it relies on successful demonstrations of the desired task(s) from a human teacher, and that researchers usually manually discard the poor demonstrations. Recently, research has been reported investigating the possibility of learning from so-called poor demonstrations [23, 24], namely also learning what not to imitate. In the context of these previous studies, and in the context of the outlined challenge regarding the need to alleviate the accuracy burden placed on the human teacher during the demonstration, our work differs from related work in that it presents an _LfD_ algorithm that is able to automatically discard inconsistent demonstration examples prior to learning the teacher's _intended_ policy. This enables the learner (robot) to act more consistently and with less variability in task performance compared to the teacher.
## III Problem Statement
The goal of this work is to learn a robust policy, in a teleoperated _LfD_ context, which is applicable to tactile-sensitive grasping of compliant objects based on visual input. Grasping tasks have a few unique challenges, including the possibly high-dimensional visual state space and the difficulty of achieving accurate teleoperation. Many visual features may be needed in order to represent the object state in sufficient detail to be able to predict a grasp. The number of training examples needed to create an accurate prediction model may be many times the number of features. The possibly tedious nature of one or more human teachers providing hundreds of grasp demonstration examples may lead to some inconsistencies and errors made by the teacher(s). Additionally, the act of using teleoperation for grasping may in itself create additional variance and inconsistencies in the demonstration examples. This motivates us to explicitly develop a robust policy learning algorithm that derives a policy only from consistent demonstrations. We consider a demonstration set \(D=(S,A)=\{\mathbf{d}_{i}\}_{i=1}^{n}\) of \(n\) demonstrations \(\mathbf{d}_{i}=(\mathbf{s}_{i},\mathbf{a}_{i})\in\mathcal{D}\), each consisting of a state \(\mathbf{s}_{i}\in\mathcal{S}\) and an action \(\mathbf{a}_{i}\in\mathcal{A}\), where \(\mathcal{D}=\mathcal{S}\times\mathcal{A}\) is the demonstration space, \(\mathcal{S}\) is the state space and \(\mathcal{A}\) is the action space.
Using the mapping function approach to LfD, we seek to learn from the demonstrations \(D\) a parameterized policy of the form \(\mathbf{\pi}_{\boldsymbol{\theta}}:\mathcal{S}\mapsto\mathcal{A}\), where \(\boldsymbol{\theta}\) is the parameter vector associated with the policy. We let the teacher's _executed_ policy be denoted by \(\mathbf{\vec{\pi}}:\mathcal{S}\mapsto\mathcal{A}\) and _intended_ policy be denoted by \(\mathbf{\vec{\pi}}^{*}:\mathcal{S}\mapsto\mathcal{A}\). The executed policy may contain unintended demonstration examples, inaccuracies and inconsistencies, and may be seen as a corrupted version of the teacher's intended policy. Let \(D^{*}=(\mathcal{S}^{*},A^{*})\) be the subset of the demonstration examples for which the teacher followed its intended policy. Despite following its intended policy for these demonstration examples, there may still be some noise (e.g. jitter in the hand of the teacher or measurement noise). Therefore, although \(\mathbf{\vec{\pi}}^{*}\) exists, we can only measure noisy realizations \(\mathbf{\vec{\pi}}^{*}(\mathbf{s})\in A^{*},\mathbf{s}\in\mathcal{S}^{*}\) of this intended policy. From these realizations, we find an optimal parameterized policy \(\mathbf{\vec{\pi}}_{\boldsymbol{\theta}}^{*}\) that estimates the teacher's intended policy \(\mathbf{\vec{\pi}}^{*}\). This is done by minimizing the expected loss function \(\ell:\mathcal{A}\times\mathcal{A}\mapsto\mathbb{R}\) over the probability distribution induced by the states \(\mathbf{s}\in\mathcal{S}^{*}\). The optimal parameters are found by solving
\[\min_{\boldsymbol{\theta}}\,\mathbb{E}_{\mathbf{s}\in\mathcal{S}^{*}}\left[\ell(\boldsymbol{\pi}_{\boldsymbol{\theta}}(\mathbf{s}),\vec{\boldsymbol{\pi}}^{*}(\mathbf{s}))\right]. \tag{1}\]
For our use, we assume that the state distribution is stationary, and that the parameters therefore can be found using supervised learning in offline mode under that assumption.
## IV Robust Policy Learning
### _Discovering Demonstrations Consistent with the Teacher's Intended Policy_
In this section we will describe a method for discovering demonstrations consistent with the teacher's intended policy \(\mathbf{\vec{\pi}}^{*}\), from the executed policy \(\mathbf{\vec{\pi}}\) given by the demonstrations \(D\). This requires the assumption that the teacher actually has an intended policy. We let a consistent policy imply that, for a given state, the same action or set of actions will be taken. Besides noise, the executed policy may have two main types of deviations from this consistency: 1) deviation in intention, 2) deviation in execution. The first deviation type simply implies that the teacher made a mistake in what he intended - e.g. intending to pull the nail out of the board instead of intending to hammer it into the board. The second deviation implies a mistake in execution - e.g. intending to hit the nail, but missing it and hitting the board instead. If we can discover the demonstration examples representing these types of deviations from consistency, then we can possibly learn a consistent policy from the remaining demonstrations.
Formally, we define a demonstration \(\mathbf{d}_{i}=(\mathbf{s}_{i},\mathbf{a}_{i})\) to be consistent if its probability in demonstration space satisfies \(P_{D}(\mathbf{d}_{i})>\rho_{D}\), _or_ its probability in state space satisfies \(P_{S}(\mathbf{s}_{i})<\rho_{S}\), where \(P_{D}\) and \(P_{S}\) are probability densities induced by the demonstrations \(D\) and states \(S\), respectively. Here, \(\rho_{D},\rho_{S}\in[0,1]\) are respective probability thresholds. Assuming we have access to the exact empirical probability density functions \(P_{D}\) and \(P_{S}\), the consistent demonstrations are then found exactly as
\[D_{exact}^{*}=\{\mathbf{d}_{i}\in D:P_{D}(\mathbf{d}_{i})>\rho_{D}\;\vee\;P_{S}(\mathbf{s}_{i})<\rho_{S}\}. \tag{2}\]
We can gain an intuitive understanding of this from Figure 2. The consistent demonstrations are found in two regions in the demonstration (state-action) space. The first region corresponds to the demonstrations that are in a high-probability region of the _demonstration space_. The second region corresponds to those demonstrations that are of low probability in the _state_ space, and for which one therefore cannot conclude whether they are consistent or not. One may argue that these outliers in state space could simply be removed from the data set; however, in some settings one may wish to keep as many demonstrations as possible due to the small size of the data set. The biggest reason for keeping the state outliers, though, is to allow for a policy that is valid in the entire sampled _state_ space, including extreme states. In some cases, there may be unwanted consequences if extreme states cannot be handled, such as strong wind gusts and actuator failure when flying an autonomous drone. In these cases, it is better to have a potentially inconsistent policy than no policy at all. Keeping state outliers also enables us to use larger values of
\(\rho_{D}\), to remove as many inconsistent examples as possible in more populated regions of the demonstration space.
In practice, it is difficult to compute (2), since this requires the exact empirical probability density functions \(P_{D}\) and \(P_{S}\). Even obtaining estimates of these density functions may require a number of samples (i.e. demonstrations) that scales exponentially in the number of dimensions [25], which in our case is the number of dimensions in the demonstration and state spaces. An alternative to using an exact empirical probability density function and a given probability threshold is to estimate point membership decision functions that approximate a given quantile level set, such as the one-class support vector machine (SVM) method proposed in [26, 27]. This approach has previously been successfully used in an _LfD_ context [23] to determine point membership in an approximate quantile level set in the _state_ space. We use the one-class SVM method to compute two decision functions \(g_{D,v_{D}}:\mathcal{D}\mapsto\{-1,1\}\) and \(g_{S,v_{S}}:\mathcal{S}\mapsto\{-1,1\}\), using their respective one-class SVM optimization parameters \(v_{D},v_{S}\in[0,1]\) that specify the given quantile and thus the fraction of 'outliers' that result in a negative sign on the decision function. Based on this, we compute an approximate set of consistent demonstrations by

\[D^{*}=\{\mathbf{d}_{i}\in D:g_{D,v_{D}}(\mathbf{d}_{i})>0\;\vee\;g_{S,v_{S}}(\mathbf{s}_{i})<0\}. \tag{3}\]
The one-class SVM is applicable to high-dimensional spaces with nonlinear decision boundaries by using a feature map \(\Phi:\mathcal{X}\mapsto F\) from some set \(\mathcal{X}\) (in our case, \(\mathcal{D}\) or \(S\)) into a dot product space \(F\), such that the dot product in that space can be computed using a simple kernel [28, 29] \(k(\mathbf{x},\mathbf{y})=\Phi(\mathbf{x})\cdot\Phi(\mathbf{y})\), such as the radial basis function kernel

\[k(\mathbf{x},\mathbf{y})=e^{-\gamma\|\mathbf{x}-\mathbf{y}\|_{2}^{2}}.\]

Given a set of \(l\) feature vectors in \(\mathcal{X}\), the solution of the dual problem of the one-class SVM for a given parameter \(v\) provides us with the coefficient vector \(\mathbf{\alpha}\) and bias \(\rho\) that, together with the kernel, define the decision function

\[g_{v}(\mathbf{x})=\text{sgn}\left(\sum_{i=1}^{l}\alpha_{i}\,k(\mathbf{x}_{i},\mathbf{x})-\rho\right),\]

where the coefficients \(\alpha\) are found by solving the optimization problem

\[\min_{\alpha}\frac{1}{2}\sum_{i,j}\alpha_{i}\alpha_{j}\,k(\mathbf{x}_{i},\mathbf{x}_{j})\] \[s.\,t.\;\;0\leq\alpha_{i}\leq\frac{1}{vl},\quad\sum_{i}\alpha_{i}=1,\]

and where the bias is recovered by computing \(\rho=\sum_{j}\alpha_{j}\,k(\mathbf{x}_{i},\mathbf{x}_{j})\), where \(i\) is any index for which the corresponding coefficient satisfies \(0<\alpha_{i}<\frac{1}{vl}\), i.e. it is non-zero and not equal to the upper bound.
The optimal value of \(\gamma\) is found by a one-dimensional cross-validated search for the value that minimizes the number of predicted outliers.
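As an illustration, the consistency filter of Eq. (3) maps directly onto off-the-shelf one-class SVM implementations. The following is a minimal sketch using scikit-learn, where the function name, the array layout, and the values of the quantile parameters (playing the role of \(v_{D}\), \(v_{S}\)) are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def consistent_demonstrations(S, A, nu_D=0.1, nu_S=0.05):
    """Approximate D* of Eq. (3): keep a demonstration if it is an inlier
    of the one-class SVM in demonstration (state-action) space, OR an
    outlier of the one-class SVM in state space.
    S: (n, dim_s) array of states; A: (n, dim_a) array of actions."""
    D = np.hstack([S, A])
    # predict() returns +1 for inliers and -1 for outliers
    g_D = OneClassSVM(kernel="rbf", nu=nu_D, gamma="scale").fit(D).predict(D)
    g_S = OneClassSVM(kernel="rbf", nu=nu_S, gamma="scale").fit(S).predict(S)
    keep = (g_D > 0) | (g_S < 0)
    return S[keep], A[keep]
```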
### Policy Learning using \(\varepsilon\)-insensitive Support Vector Regression
Using the approximate set of consistent demonstrations \(D^{*}=(S^{*},A^{*})\) found in (3), we will now present a method for learning the policy \(\mathbf{\pi}_{\theta}^{*}\) by solving (1). For this purpose, we use \(\varepsilon\)-insensitive support vector regression (SVR), which has previously been used to predict grasps from 3D shape [30]. As a regression method, SVR has similar robustness to outliers and good generalization ability [28], as is also the case with SVM classification [27, 28].
The loss function for which \(\varepsilon\)-insensitive SVR is a minimization algorithm is the \(\varepsilon\)-insensitive loss function for scalar action arguments

\[l_{\varepsilon}(a,a^{*})=\begin{cases}0,&|a-a^{*}|\leq\varepsilon\\ |a-a^{*}|-\varepsilon,&\text{otherwise}.\end{cases}\]
In the case of \(d\)-dimensional vectors of scalar actions, we define the loss function
\[l_{\varepsilon}(\mathbf{a},\mathbf{a}^{*})=\sum_{k=1}^{d}l_{\varepsilon_{k}}( a_{k},a_{k}^{*}) \tag{4}\]
We let \(\pi_{\mathbf{\theta}_{k}}(\mathbf{s})\) denote the policy for determining the scalar action \(a_{k}\). The parameter vector \(\mathbf{\theta}_{k}\) contains the parameters that define the soft-margin radial basis function kernel SVR model [31], which we write as
\[\mathbf{\theta}_{k}=[\mathbf{\alpha}\quad b\quad\mathbf{S}\quad\gamma]_{k},\]
Figure 3: Generation of the dataset for the training phase. a) The setup with RGB-D sensor, robot arm with ReFlex TakkTile gripper, and a lettuce as an example of a fragile compliant food object placed in the grasping area. b) Human teacher teleoperating the robot arm with STEM controllers to demonstrate the grasps. c) During the generation of the dataset, 525 training examples were generated by random placement of different lettuces. Lettuces of different sizes and shapes were placed at random positions and orientations in the grasping area. d) For the demonstrated grasps, the following data were saved, constituting the state space: RGB-D image, 6DOF pose of the gripper hand, finger configuration, and tactile significant thresholds for the gripper fingers.
where \(\alpha\) are the \(n_{sv,k}\) non-zero coefficients, \(b\) is the bias, \(\mathbf{S}\) contains the \(n_{sv,k}\) support vectors, and \(\gamma\) is the radial basis function kernel scale parameter. An additional parameter, \(\mathcal{C}\), is the box constraint used in SVR. The optimal values of \(\mathcal{C}\) and \(\gamma\) are found by performing a cross-validated grid-search [32] that minimizes the expected loss function on the validation set.
```
Input: Demonstration set \(D\), one-class SVM parameters \(\nu_{D},\nu_{S}\), and SVR parameter \(\varepsilon\).
1. Compute demonstration set \(D^{*}\) consistent with teacher's intended policy, using equation (3) and one-class SVM.
2. Find teacher's intended policy by minimizing equation (1) using \(D^{*}\) and the loss function \(l_{\varepsilon}(\mathbf{a},\mathbf{a}^{*})\) from equation (4) using SVR. Output: Parameters \(\mathbf{\theta}_{k}=[\alpha\quad b\quad\mathbf{S}\quad\gamma]_{k}\) defining the policy \(\pi_{\mathbf{\theta}_{k}}(\mathbf{s})\), defined by equation (5).
```
**Algorithm 1** Learning teacher's intended policy
Using the SVR parameters, we obtain the following expression for the policy
\[\pi_{\mathbf{\theta}_{k}}(\mathbf{s})=\sum_{i=1}^{n_{sv,k}}\alpha_{i,k}\,e^{-\gamma_{k}\|\mathbf{S}_{i,k}-\mathbf{s}\|_{2}^{2}}-b_{k}, \tag{5}\]
where \(\mathbf{S}_{i,k}\) denotes the support vector indexed by \(i\) in parameter vector \(\mathbf{\theta}_{k}\). The resulting _LfD_-based learning policy, which automatically discards inconsistent teacher demonstrations, is given in Algorithm 1.
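Step 2 of Algorithm 1 likewise maps onto standard \(\varepsilon\)-insensitive SVR. The sketch below fits one regressor per scalar action dimension, with the cross-validated grid of \(\mathcal{C}\) and \(\gamma\) values and the \(\varepsilon\) default chosen purely for illustration.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def learn_intended_policy(S_star, A_star, epsilon=0.01):
    """Step 2 of Algorithm 1: fit one epsilon-insensitive RBF-kernel SVR
    per scalar action dimension a_k, choosing C and gamma by a
    cross-validated grid search (grid values are illustrative)."""
    grid = {"C": [0.1, 1.0, 10.0, 100.0], "gamma": [0.01, 0.1, 1.0]}
    models = []
    for k in range(A_star.shape[1]):
        search = GridSearchCV(SVR(kernel="rbf", epsilon=epsilon), grid, cv=5)
        search.fit(S_star, A_star[:, k])
        models.append(search.best_estimator_)
    return models

# Prediction: the action vector for a new state s (shape (dim_s,)) is
#   a = [m.predict(s.reshape(1, -1))[0] for m in models]
```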
## V Implementation Details
### _Robot platform and Software System_
Our experiments were carried out on a 6-DOF Denso VS 087 robot arm mounted on a steel platform. The robot arm was controlled with software written in LabVIEW, using functions that control the robot arm through Cartesian position coordinates and two vectors describing the orientation of two of the axes of the end effector relative to the robot's base coordinates. For developing a learning policy based on _LfD_, the teleoperation of the robot arm was done using hand-held Sixense STEM motion tracker controllers. The STEM system consisted of five trackers and one base station. Each controller is equipped with a joystick and several buttons. This allowed for wireless feedback not only of the controllers' pose but also enabled the triggering of various events. The pose of the motion trackers was given relative to the position and orientation of the base station, so it was necessary to keep the base station in a fixed orientation relative to the robot arm. For gripping, we used the ReFlex TakkTile hand from Right Hand Robotics. The hand has 3 fingers with joint feedback, one fixed, and two fingers that can rotate to alter their pose. Each finger is able to bend at a finger joint and is equipped with 9 tactile sensors. The hand is by default controlled in a ROS environment. To control the hand via LabVIEW, a simple Python hand control script was run on a Raspberry Pi 3 (RPi3), and LabVIEW was then connected to the RPi3. The RPi3 is a credit-card-sized mini-computer with capabilities similar to a PC. Our Python scripts work by importing rospy, a pure Python client for ROS, which is known to work with the example code supplied alongside the ReFlex hand. After each run, the position and orientation values of the robot arm were saved. In addition, we saved the tactile values for each finger when grasping the compliant object, and the angles of each joint (Fig. 3). The tactile values were used during grasping to calculate how hard to close the hand and what forces should be exerted on the object for a successful grasp. A Kinect v2 camera was mounted on a rod, looking perpendicularly at the robot arm and the grasping scene where the compliant objects were placed for grasping (Fig. 3). A full HD (High Definition) colour image with \(1920\times 1080\) resolution, and IR and depth images of \(512\times 424\) resolution, were registered for each grasping example (Fig. 3). These images were taken several times for each example: once at the beginning, saving the initial position of the object; again once the hand had been moved to the proper position and the object was grasped, just before the object was to be released; and finally once the arm had been moved back to the initial position. The last image is significant because it is used to determine whether the grasping was a success. These images were used for feature extraction (Figure 3) while developing the learning policy for autonomous grasping based on _LfD_.
### _Dataset and Data Augmentation_
Since an _LfD_ policy is learned from examples, or demonstrations, executed by a human teacher, these examples were generated as described in Fig. 3. To generate the examples, each object to be grasped was randomly placed in the grasping area (Fig. 2(a)) before the grasp was demonstrated via teleoperation (Fig. 2(b)). 20 different lettuces were used to generate 525 examples evenly spread over the grasping area (Fig. 2(c)). For each example, a set of data is saved consisting of the RGB-D image, gripper finger configuration, and tactile values for the 3 gripper fingers. A data augmentation procedure was used over the generated examples to increase the dataset. Data augmentation was performed by translation of the coordinate space and flipping the gripper by 180 degrees, randomly perturbing the states \(x_{a}\), \(y_{a}\), and \(z_{a}\) with zero-mean Gaussian noise with a standard deviation of 25 mm. Training of the policy was done using a split of 75% of the data, and validation was done on the remaining 25%. Validation and demonstration of our _LfD_ approach was done using a batch of green lettuce. From the grasping and handling point of view, a lettuce is an irregular, fragile, compliant object of variable shape and size. Lettuce is a highly fragile product that requires adaptability when it comes to tactile sensing and the exertion of forces during grasping. From the visual point of view, a lettuce is a 3D model-free object and
Figure 4: Close-up of the colour (left) and depth (right) images of a lettuce, with overlaid extracted visual features on the depth image.
has a biological variability that makes it a relevant and interesting object with which to test our _LfD_ approach based on RGB-D images and tactile perception. The lettuce blade has a compliancy and variability in structure that changes from one end to the other, making the lettuce a representative and challenging compliant object to work with from both the visual and tactile points of view.
### _State and action spaces_
The state space consists of vectors \(\mathbf{s}\) describing visual features of the lettuce. These visual features are illustrated in Fig. 4. All features are extracted from the depth image of the Kinect v2 camera. After subtracting a reference image from the depth image, we obtain a height image that is approximately zero for the surface on which the lettuce is placed, and greater than zero for the lettuce. The lettuce is found by thresholding the height image, resulting in a binary mask. The first set of features is the centroid of the binary mask, which we denote by \(a\) in Fig. 4, for which we compute the coordinates \((x_{a},y_{a},z_{a})\) in the robot reference frame. The value of the height image at the centroid is denoted by \(h_{a}\). The major and minor axes of the binary mask give us the angle \(\theta\) from the horizontal axis in the image, and the lengths of these axes give us the lettuce length \(l_{a}\) and width \(w_{a}\), respectively. To enable convex regression of \(\theta\), we include both components of the unit-length direction vector in the state space vector. To estimate the leaf end, leaf spread and height of the lettuce, we compute two additional points \(b\) and \(c\), as seen in Fig. 4, halfway between the centroid \(a\) and the respective end points of the major axis. The heights \(h_{b}\) and \(h_{c}\) are computed at these halfway points, as well as the widths \(w_{b}\) and \(w_{c}\). Concatenating all these features gives us the state vector
\[\mathbf{s}=[x_{a}\ \ y_{a}\ \ z_{a}\ \ h_{a}\ \ l_{a}\ \ w_{a}\ \ \cos\theta\ \ \sin\theta\ \ h_{b}\ \ w_{b}\ \ h_{c}\ \ w_{c}].\]
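To make the geometry concrete, the sketch below shows one way a few of these features (the centroid \(a\), the height \(h_{a}\), and the major-axis direction \((\cos\theta,\sin\theta)\)) could be computed from the height image using OpenCV image moments. The function name, the threshold value, and the omitted conversion to the robot reference frame are our own illustrative assumptions.

```python
import cv2
import numpy as np

def visual_state_sketch(depth, reference, height_thresh=10):
    """Illustrative computation of a subset of the state vector s:
    height image, binary mask, centroid a, height h_a, and major-axis
    direction via image moments. Assumes the mask is non-empty."""
    height = reference.astype(np.float32) - depth.astype(np.float32)
    mask = (height > height_thresh).astype(np.uint8)
    m = cv2.moments(mask, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid a (pixels)
    h_a = height[int(round(cy)), int(round(cx))]
    # orientation of the major axis from the central second moments
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return cx, cy, h_a, np.cos(theta), np.sin(theta)
```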
The action space consists of vectors \(\mathbf{a}\) describing the lettuce grasp to be performed by the robot and its gripper. The robot end-effector pose is described by its position \(\mathbf{p}=[p_{x}\ \ p_{y}\ \ p_{z}]\), and its orientation is described by two orthonormal vectors \(\mathbf{d}^{1}\) and \(\mathbf{d}^{2}\) and an additional vector \(\mathbf{d}^{3}\), each vector being of the form \(\mathbf{d}^{k}=\begin{bmatrix}d_{x}^{k}&d_{y}^{k}&d_{z}^{k}\end{bmatrix}\), \(k\in\{1,2,3\}\). Although two orthonormal vectors suffice to uniquely describe an orientation in \(\mathbb{R}^{3}\), the three vectors provide redundancy for a more robust regression when two ambiguous grips exist. The gripper configuration is described by a finger figure \(f\) and preshape \(pre\). The gripper continues to close its grip until certain significant pressure thresholds \(spt_{1}\), \(spt_{2}\) and \(spt_{3}\) are obtained in the tactile sensors in each of the three respective fingers. Each significant pressure threshold for one finger represents the mean of all tactile sensor values that are over 70% of the maximum value. When the significant pressure threshold is met in _one_ of its fingers, the gripper stops closing. Concatenating these parameters gives us the following action vector:
\[\mathbf{a}=[\mathbf{p}\ \ \mathbf{d}^{1}\ \ \mathbf{d}^{2}\ \ \mathbf{d}^{3}\ \ f\ \ pre\ \ spt_{1}\ \ spt_{2}\ \ spt_{3}].\]
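Since the stopping rule is stated precisely, the significant pressure threshold of a finger can be written down directly; the following sketch assumes a finger's reading arrives as an array of its 9 tactile sensor values.

```python
import numpy as np

def significant_pressure_threshold(tactile, frac=0.7):
    """Significant pressure threshold for one finger, as defined in the
    text: mean of all tactile sensor values over 70% of the finger's
    maximum reading. `tactile` is the array of the finger's 9 sensors."""
    tactile = np.asarray(tactile, dtype=float)
    return tactile[tactile > frac * tactile.max()].mean()
```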
## VI Results and Discussion
In this section, we present the experimental results of our _LfD_ learning policy through a quantitative and qualitative assessment. The entire system, including robot arm, gripper, depth camera and learning algorithm, was tested in a teaching phase (Fig. 3) and an execution phase (Fig. 5). In the teaching phase, a user provided recorded demonstrations of the form described in Section V, which were then used to derive a learning policy as described in Section IV. In the execution phase, the learning policy was validated by having the robot grasp lettuce based on visual and tactile information.
The teaching phase consisted of 525 demonstrations (Fig. 3a, b, c), where a human operator used the STEM controllers to tele-operate the robot to place the gripper around the lettuce.
Figure 5: The gripping sequence shown in RGB (top row) and depth (bottom row) images based on our trained _LfD_ learning policy, where in a) an initial image is acquired and the visual state of the lettuce is computed, b) the robot places a grasp on the lettuce according to the action derived from the visual state, c) the robot moves and releases the lettuce to a predefined target point, d) the robot moves out of the way, enabling visual confirmation of whether the grasping sequence succeeded.
In step a), an initial image was acquired with the Kinect v2 camera. The coloured (RGB) depth image was used to extract visual states constituting the state space vector in both the teaching and execution phases. Although colour was not used, it is useful information for visual tracking in food production lines with moving objects. In step b), the robot moved based on the learnt policy in the execution phase to assume the gripper pose for grasping, while in step c), the lettuce was grasped, lifted (Fig. 8), and moved to a target drop point, whereupon in step d) the gripper released the lettuce and the robot returned to its starting position. Images were recorded in steps c) and d) for verification of success, both in the teaching phase and in the execution phase after learning. In our experiments, 497 of the total 525 demonstrations from the human teacher were successful, resulting in 28 inconsistent examples that were automatically removed by the _LfD_ algorithm. After analysis, the inconsistent examples consisted of either a bad orientation of the gripper when attempting to grasp, or a bad grip during the demonstration phase by the teacher. Fig. 6 shows a visualization of the gripper pose in the demonstration-training phase (green gripper) versus the execution phase (red gripper), highlighting the orientation errors characteristic of the inconsistent demonstrations. In Fig. 7, we show the average success rate for the autonomous grasping of lettuces on the test set, resulting in an average accuracy of 75%. Here, 14 lettuces of different sizes and shapes constituted the test set, and each lettuce was grasped 5 times at random positions in the grasping area. The failures were associated with either a bad prediction of the gripper orientation or insufficient grasping force from the fingers, resulting in a bad grip and consequent slip. Lettuces are characterized by high biological variation in size and shape; in addition, their compliancy varies strongly from the leaf side towards the root. Some of the grasp failures were associated with correctly predicted finger pressures but resulted in a bad grip due to an incorrect orientation of the gripper, positioned more on the leaf side of the lettuce.
In Table I, we show the accuracy of the learned policy by comparing the accuracy obtained when the inconsistent human teacher demonstrations are included versus when they are automatically discarded. The experimental results show a higher value of the coefficient R\({}^{2}\), related to the position (\(p_{x}\), \(p_{y}\), \(p_{z}\)), orientation (\(R_{x}\), \(R_{y}\), \(R_{z}\)), and tactile values (\(spt_{1}\), \(spt_{2}\), \(spt_{3}\)), generated by the learned policy when the inconsistent demonstrations were automatically discarded.
Explicit enforcement of few support vectors is also seen as a way to assist in acquiring a more robust learning policy. A focus on more cases, where the human teacher has a greater challenge in providing accurate demonstrations, so that we may explore the potential of our method for discovering the teacher's intended policy, may be a natural extension of the current study. Examples of such challenging demonstration
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline _LfD_ policy & **Correlation R\({}^{2}\)** & **STD** \\ \hline Consistent only & 0.60 & 0.31 \\ \hline Including inconsistent & 0.545 & 0.33 \\ \hline \end{tabular}
\end{table} TABLE I: Overall comparison of the _LfD_ learning policy that automatically discards the inconsistent demonstrations of the human teacher. It is seen that automatic discarding of the inconsistent examples results in higher accuracy, given here by the average regression coefficient and standard deviation for gripper position (\(p_{x}\), \(p_{y}\), \(p_{z}\)), orientation (\(R_{x}\), \(R_{y}\), \(R_{z}\)), and fingers’ significant pressure/tactile threshold (\(spt_{1}\), \(spt_{2}\), \(spt_{3}\)).
Figure 6: Visualization of the gripper pose during demonstration (green) and, after training, during autonomous grasping based on a learned policy (red), to qualitatively highlight some of the challenges in the estimation of the correct pose. While there is generally good estimation of the gripper position for robotic grasping (a, b), we also show an example of failure to predict the correct orientation of the gripper (c).
Figure 7: Average success rate for autonomous grasping on the test set after policy learning. A total of 14 lettuces of different sizes and shapes were used to test the learned policy, and each lettuce was grasped 5 times from random positions. The overall size of the testing set was 70 grasps, resulting in a 75% average success rate. The number on each bar is the number of successful grasps out of 5 attempts for each lettuce randomly placed in the grasping area.
tasks where the human teacher may have difficulty providing good demonstrations are: the grasping of tiny portion-bits of fish fillets, which are challenging to grasp even with a human hand due to their texture and stickiness towards the surface on which they are placed; and the cutting of ham carcasses with the purpose of teaching the robot to accomplish the same. Cutting ham carcasses is a very challenging and complex task requiring human experience and involving both visual and tactile perception and a high degree of dexterity.
## VII Conclusions
In this paper, we proposed a robust learning policy based on Learning from Demonstration (_LfD_), addressing the problem of learning when presented with inconsistent human demonstrations. We presented an _LfD_ approach to the automatic rejection of inconsistent demonstrations by human teachers. The approach is validated by successfully teaching the robot to grasp fragile and compliant food objects such as lettuces. It utilises the merging of RGB-D images and tactile data in order to estimate the pose of the gripper, the gripper's finger configuration, and the forces exerted on the object in order to achieve successful grasping. The robust policy learning method presented here will enable the learner (robot) to act more consistently and with less variance than the teacher (human). Such a robust learning algorithm also alleviates the burden of accuracy on the human teacher. The proposed approach has a vast range of potential applications in the ocean space, agriculture, and food industries, where the manual nature of processes leads to high variation in the way skilled operators perform complex processing and handling tasks.
## Acknowledgment
The work is supported by The Research Council of Norway in iProcess - 255596 project.
|
2303.17975 | Topological paramagnetic excitons of localized f electrons on the
honeycomb lattice | We investigate the dispersive paramagnetic excitons on the honeycomb lattice
that originate from the crystalline-electric field (CEF) split localized
f-electron states in the paramagnetic state due to intersite exchange. We start
with a symmetry analysis of possible Ising-type singlet-singlet and xy-type
singlet-doublet models. The former supports only symmetric intersite-exchange
while the latter additionally allows for antisymmetric Dzyaloshinski-Moriya
(DM) exchange interactions. We calculate the closed expressions for magnetic
exciton dispersion using both response function formalism and the bosonic
Bogoliubov approach. We do this for the most general model that shows inversion
symmetry breaking on the honeycomb lattice but also discuss interesting special
cases. By calculating Berry curvatures and Chern numbers of paramagnetic
excitons we show that the xy model supports nontrivial topological states in a
wide range of parameters. This leads to the existence of excitonic topological
edge states with Dirac dispersion lying in the zone boundary gap without the
presence of magnetic order. | Alireza Akbari, Burkhard Schmidt, Peter Thalmeier | 2023-03-31T11:21:18Z | http://arxiv.org/abs/2303.17975v2 | # Topological paramagnetic excitons of localized f electrons on the honeycomb lattice
###### Abstract
We investigate the dispersive paramagnetic excitons on the honeycomb lattice that originate from the crystalline-electric field (CEF) split localized f-electron states in the _paramagnetic_ state due to intersite exchange. We start with a symmetry analysis of possible Ising-type singlet-singlet and \(xy\)-type singlet-doublet models. The former supports only symmetric intersite-exchange while the latter additionally allows for antisymmetric Dzyaloshinski-Moriya (DM) exchange interactions. We calculate the closed expressions for magnetic exciton dispersion using both response function formalism and bosonic Bogoliubov approach. We do this for the most general model that shows inversion symmetry breaking on the honeycomb lattice but also discuss interesting special cases. By calculating Berry curvatures and Chern numbers of paramagnetic excitons we show that the \(xy\) model supports nontrivial topological states in a wide range of parameters. This leads to the existence of excitonic topological edge states with Dirac dispersion lying in the zone boundary gap without the presence of magnetic order.
## I Introduction
Localized f-electron states with integer total angular momentum on a lattice may have nonmagnetic singlet ground states or a sequence of low energy singlets if the crystalline electric field (CEF) site symmetry is low enough [1]. The effective intersite exchange interaction turns the localized CEF excitations into bands of magnetic excitons which may be observed in inelastic neutron scattering (INS) experiments. For low temperatures these are bosonic excitations like magnons, except that they do not require magnetic order with time reversal symmetry breaking but appear already in the _paramagnetic_ state.
The aim of this work is twofold. Firstly we want to give a complete theory of magnetic excitons in the paramagnetic state for CEF split f-electrons on the honeycomb lattice comprising two trigonal sublattices A,B and \(C_{3v}\) site symmetry. This is relevant for various f-electron compounds like Na\({}_{2}\)PrO\({}_{3}\)[2], TmNi\({}_{3}\)Al\({}_{9}\)[3] with integer total angular momentum \(J\). For concreteness we focus on \(J=4\), realized in trivalent Pr(4f\({}^{2}\)) and possibly U(5f\({}^{2}\)) magnetic ions, but our theory may also be applicable to trivalent Tm and Tb with \(J=6\). We focus on two representative cases for \(C_{3v}\) CEF states: An Ising-type singlet-singlet system and an xy-type singlet-doublet level scheme. Thereby we make the most general assumption that inversion symmetry is broken, leading to inequivalent CEF splitting and interaction parameters for sublattices A,B. This may be realized by integrating the 2D honeycomb lattice in a larger 3D structure such that the local environments of A,B are not the same.
The Ising-type model is convenient for demonstrating the two techniques of calculating the magnetic exciton modes, namely the RPA response function and bosonic Bogoliubov quasiparticle techniques. We will show that indeed they give equivalent results. Applied to the Ising case we calculate the dispersion and intensity of the two modes symmetrically split by the inter-sublattice interactions and an additional contribution resulting from the intra-sublattice terms. For equivalent sublattices the modes will be degenerate at specific zone boundary points \(\mathbf{K}_{\pm}\) and we demonstrate how they will be split when inversion symmetry breaking occurs. Using the same techniques we investigate the richer singlet-doublet xy-type model. Because of nonzero diagonal matrix elements for both \(J_{x},J_{y}\) total angular momentum components an asymmetric DM interaction is possible for the intra-sublattice exchange. Due to the doublet degeneracy four magnetic exciton modes exist in principle. For equivalent sublattices they consist of a pair of twofold degenerate modes which can develop a gap at the \(\mathbf{K}_{\pm}\) zone boundary due to the presence of the DM interaction. A further splitting into four modes occurs when the sublattices become inequivalent. This theory is sufficiently general to be used for modeling INS experiments for all possible singlet-singlet and singlet-doublet CEF systems on compounds with f-electrons located on the honeycomb lattice.
Secondly we show that in the xy-type model the DM term may lead to interesting nontrivial topology of the magnetic exciton bands. We stress that this happens in the _paramagnetic_ state of f electrons on the honeycomb lattice. It is already well known that in the ferromagnetic (FM) or antiferromagnetic (AFM) ordered honeycomb lattice magnon bands may become topologically nontrivial and support magnonic edge modes within the gap of split 2D bulk magnon modes [4; 5; 6; 7; 8; 9; 10; 11]. It is our intention to demonstrate that magnetic order is not a prerequisite for the existence of topological magnetic excitations and corresponding edge modes. For this purpose we investigate the behaviour of the Berry curvature and associated Chern numbers of paramagnetic exciton bands and discuss their model parameter dependence. We show that, as a function of the size of the inversion symmetry breaking, transitions from zero to integer Chern numbers are possible. In the latter case we also derive the existence of the boundary magnetic exciton modes in a continuum approximation around the Dirac points \(\mathbf{K}_{\pm}\). Finally we discuss that, in contrast to topological magnons in a FM,
the paramagnetic topological magnetic excitons do not lead to a thermal Hall effect, as is indeed required by the absence of time reversal symmetry breaking.
In Sec. II we give a brief introduction to f-electron CEF states in the less common \(C_{3v}\) symmetry, with details relegated to Appendix A. Then Sec. III discusses the Ising-type models using various techniques and the principle of induced magnetic order. In Sec. IV the xy-type model, its characteristic four dispersion branches and their topological properties, including edge modes, are investigated. Sec. VI discusses some numerical results and finally Sec. VII gives the summary and conclusion.
## II CEF states on the honeycomb lattice, singlet-singlet and singlet doublet models
The point group symmetry for the sites on the 2D honeycomb lattice with two basis atoms (A,B) is \(C_{3v}\), composed of threefold rotations and reflections on perpendicular planes \(120^{\circ}\) apart (Fig.1). The A,B sublattice sites have no inversion symmetry in \(C_{3v}\). The honeycomb space group \(P6/mcc\), however, contains the inversion with centers given by the midpoint of bonds and the center of hexagons. The point group symmetry leads to a CEF potential (restricted to the lowest \(J\)-multiplet) given as a sum of Stevens operators \(O_{n}^{m}({\bf J})\) (\(m\leq n\leq 6\)) (see detailed analysis in Appendix A).
In this work we are interested exclusively in f-electron shells with integer \(J\) to have the possibility of a nonmagnetic singlet CEF ground state \(|0\rangle\) with \(\langle 0|{\bf J}|0\rangle=0\). Among the trivalent rare earth (RE) ions this is possible for \(J=4\) (Pr), \(J=6\) (Tb,Tm) and \(J=8\) (Ho). We will restrict to the simplest case of \(J=4\). The complete characterization of CEF energies and states in \(C_{3v}\) symmetry is given in Appendix A. In this group the \(J=4\) space decomposes into irreducible representations \(2\Gamma_{1}\oplus\Gamma_{2}\oplus 3\Gamma_{3}\), i.e. three singlets (\(\Gamma_{1}^{a,b},\Gamma_{2}\)) and three doublets (\(\Gamma_{3}^{a,b,c}\)) which are linear combinations of free ion states \(|J,M\rangle\) (\(|M|\leq J\)). The two \(\Gamma_{1}^{a,b}\) singlets are characterized by one (\(\theta\)) and the three \(\Gamma_{3}^{a,b,c}\) doublets by generally three (\(\chi,\phi,\alpha\)) mixing angles determined by the set of CEF parameters \(B_{m}^{n}\) in Eq. (10) while the unique \(\Gamma_{2}\) is fully determined by \(C_{3v}\) symmetry. Explicitly the full orthonormal CEF state basis is given in Appendix A. Here we list only the singlets and one representative doublet \(\Gamma_{3}^{a}\) necessary for the following analysis:
\[\begin{split}|\Gamma_{1a}\rangle&=\quad\cos\theta| 4,0\rangle+\frac{1}{\sqrt{2}}\sin\theta(|4,3\rangle-|4,-3\rangle),\\ |\Gamma_{1b}\rangle&=-\sin\theta|4,0\rangle+\frac{1} {\sqrt{2}}\cos\theta(|4,3\rangle-|4,-3\rangle),\\ |\Gamma_{2}\rangle&=\frac{1}{\sqrt{2}}(|4,3\rangle +|4,-3\rangle),\\ |\Gamma_{3a}^{\pm}\rangle&=\sin\chi(\cos\phi|4,\pm 4 \rangle+\sin\phi|4,\mp 2\rangle)\\ &\pm\cos\chi|4,\pm 1\rangle.\end{split} \tag{1}\]
The CEF energies \(E_{\Gamma}\) of these eigenstates are complicated combinations of the \(B_{m}^{m}\) (Appendix A). Because there are six independent parameters and six irreducible representations the energy levels can in principle take any ordering.
For investigating the magnetic exciton modes it is important to calculate the dipolar matrix elements between the CEF states. The \(J_{\alpha}\) (\(\alpha=x,y,z\)) operators connect states with \(M^{\prime}=M,M\pm 1\) and their full structure in the space of \(J=4\) CEF states listed above is given in Appendix A. Here we restrict to two important cases discussed in detail in the following: The singlet-singlet \(\Gamma_{1a,b}\)-\(\Gamma_{2}\) subspaces and the singlet-doublet \(\Gamma_{2}\)-\(\Gamma_{3a}\) subspaces. Their dipolar matrix elements are given by
\[\begin{split}\langle\Gamma_{2}|J_{z}|\Gamma_{1}\rangle=& m ;\\ \langle\Gamma_{2}|J_{x}|\Gamma_{3}^{\pm}\rangle=& \tilde{m}/\sqrt{2};\quad\langle\Gamma_{2}|J_{y}|\Gamma_{3}^{\pm}\rangle=\pm \tilde{m}/\sqrt{2},\end{split} \tag{2}\]
where we defined \(m=3\sin\theta\) or \(m=3\cos\theta\) for the \(\Gamma_{1a,b}\) singlets, respectively, and \(\tilde{m}=(1/\sqrt{2})\sin\chi(\sqrt{7}\sin\phi+2\cos\phi)\) for \(\Gamma_{3a}\). All other \(J_{\alpha}\) matrix elements are zero in these subspaces. Therefore the singlet-singlet \(\Gamma_{1}\)-\(\Gamma_{2}\) model is of the Ising type while the singlet-doublet model \(\Gamma_{2}\)-\(\Gamma_{3}\) is of the xy type. The latter would also be realized in a \(\Gamma_{1}\)-\(\Gamma_{3}\) type model. These selection rules follow also directly from the group multiplication table [12] of \(C_{3v}\) considering the fact that \(J_{z}\) transforms like \(\Gamma_{2}\) and \((J_{x},J_{y})\) transform like \(\Gamma_{3}\).
To devise suitably general models for both cases in the following sections we start from two basic observations on the honeycomb structure: Firstly, the center of a \(2^{\rm nd}\) neighbor bond (A-A, B-B) is not an inversion center. Therefore, in addition to symmetric exchange, asymmetric Dzyaloshinskii-Moriya (DM) exchange between \(2^{\rm nd}\) neighbors (dashed lines in Fig. 1) may be present. Secondly, when the 2D honeycomb lattice is placed into a 3D crystal the chemical environment of the basis atoms A, B may be different. Therefore CEF potentials (multiplet splittings) and interactions on the A, B sublattices may also be generally different. This possibility should be incorporated in both models. It means that inversion symmetry with respect to the centers of \(1^{\rm st}\) neighbor A-B bonds and hexagon centers is also broken. Such models have also been investigated for the FM ordered honeycomb lattice [13].
## III The singlet-singlet Ising-type model
First we address the simpler and more instructive case of the singlet-singlet CEF model. Our calculations of exciton modes will be based on RPA response function theory as well as on a Bogoliubov transformation approach. The former can also be applied at finite temperatures while the latter allows us to address topological properties of the modes due to the bosonic representation used for the local CEF excitations.
For concreteness we assume \(\Gamma_{2}\) to be the ground state and one of the \(\Gamma_{1a,b}\) the excited state; the inverted scheme leads to identical results. Furthermore we do not distinguish between the a, b representations and denote by \(m=m_{a},m_{b}\) either of the two matrix elements between ground and excited state. The singlet-singlet CEF Hamiltonian is then given by
\[H=\sum_{\Gamma\sigma i}E_{\Gamma}^{\sigma}|\Gamma_{\sigma i}\rangle\langle\Gamma _{\sigma i}|-I\sum_{\langle ij\rangle}J_{iA}^{z}J_{jB}^{z}-\sum_{\langle\langle ij \rangle\rangle\sigma}I_{2}^{\sigma}J_{i\sigma}^{z}J_{j\sigma}^{z}. \tag{3}\]
Here \(\sigma=A,B\) denotes the two sublattices, \(i,j\) the 1st neighbor lattice sites on each of them and \(\Gamma=\Gamma_{2},\Gamma_{1}\) the two singlet states. In the first term the CEF energies \(E_{\Gamma\sigma}\) (and the \(\Gamma_{1a,b}\) excited states) may depend on the sublattice A, B, and similarly for the exchange terms. We fix \(E_{\Gamma_{2}}^{\sigma}=0\) on each sublattice and denote the relative excited state energy by \(\Delta_{\sigma}=E_{\Gamma_{1}\sigma}\) (we suppress the a,b index of both possible \(\Gamma_{1a,b}\) representations from now on). The second and third terms describe the symmetric exchange _between_ the A, B sublattices (1st neighbors) and _within_ the A and B sublattices (2nd neighbors), respectively. Note that in this model only \(J_{z}\) has nonzero matrix elements (Eq. (2)). Therefore it is of the Ising type and in particular no DM exchange is supported because this needs at least two components of \(\mathbf{J}\) to have nonzero matrix elements (Sec. IV).
### Response functions and magnetic exciton modes
The interaction terms in the Hamiltonian of Eq. (3) allow the \(\Gamma_{2}\leftrightarrow\Gamma_{1}\) excitations of the paramagnetic state to propagate from site to site and thus acquire a dispersion. They are commonly designated 'magnetic excitons' to distinguish them from magnons, which require a _magnetically ordered_ ground state with broken time reversal symmetry. The most convenient way to obtain the dispersion of magnetic excitons is the calculation of the dynamic magnetic susceptibility \(\hat{\chi}(\mathbf{k},i\omega_{n})\) in RPA. It is given by the \(2\times 2\) sublattice-space matrix
\[\hat{\chi}(\mathbf{k},i\omega_{n})=[1-\hat{I}(\mathbf{k})\hat{u}(i\omega_{n})] ^{-1}\hat{u}(i\omega_{n}), \tag{4}\]
where
\[\hat{u}(i\omega_{n})=\left(\begin{array}{cc}u_{A}(i\omega_{n})&0\\ 0&u_{B}(i\omega_{n})\end{array}\right), \tag{5}\]
and
\[\hat{I}(\mathbf{k})=\left(\begin{array}{cc}z_{2}I_{2}^{A}\gamma_{2}(\mathbf{ k})&zI\gamma(\mathbf{k})\\ zI\gamma^{*}(\mathbf{k})&z_{2}I_{2}^{B}\gamma_{2}(\mathbf{k})\end{array}\right) \tag{6}\]
are the single ion susceptibility and exchange matrices, respectively. In the latter \(z=3\) and \(z_{2}=6\) are the first and second neighbor coordination numbers and \(\gamma(\mathbf{k}),\gamma_{2}(\mathbf{k})\) the corresponding structure functions of the honeycomb lattice (Eq. (2)). Furthermore in the singlet-singlet model we have (\(\sigma=A,B\)):
\[u_{\sigma}(i\omega_{n})=\frac{2m_{\sigma}^{2}\Delta_{\sigma}P_{\sigma}(T)}{ \Delta_{\sigma}^{2}-(i\omega_{n})^{2}}. \tag{7}\]
The temperature dependent factor \(P_{\sigma}(T)=\tanh\frac{\Delta_{\sigma}}{2T}\) in the numerator is equal to the difference of the thermal occupations of ground and excited singlet state, and \(\Delta_{\sigma}\) and \(m_{\sigma}\) are the (generally different) singlet-singlet splittings and matrix elements. The magnetic exciton bands (there are two (\(\kappa=\pm\)) due to the A,B sublattices) are then obtained as the collective modes, i.e. the singularities of the dynamic susceptibility as determined by \(\det[1-\hat{I}(\mathbf{k})\hat{u}(i\omega_{n})]=0\). Solving this equation, a closed expression for the magnetic exciton dispersions \(\omega_{\kappa}(\mathbf{k})\) may be evaluated:
\[\omega_{\pm}^{2}(\mathbf{k})= \frac{1}{2}[\omega_{A}^{2}(\mathbf{k})+\omega_{B}^{2}(\mathbf{k}) ]\pm\Big{[}\frac{1}{4}(\omega_{A}^{2}(\mathbf{k})-\omega_{B}^{2}(\mathbf{k})) ^{2}+ \tag{8}\] \[4m_{A}^{2}m_{B}^{2}\Delta_{A}\Delta_{B}P_{A}P_{B}|I_{N}(\mathbf{ k})|^{2}\Big{]}^{\frac{1}{2}};\] \[\omega_{\sigma}^{2}(\mathbf{k})= \Delta_{\sigma}[\Delta_{\sigma}-2m_{\sigma}^{2}P_{\sigma}I_{D}^{ \sigma}(\mathbf{k})].\]
Here we use the abbreviations \(I_{D}^{\sigma}(\mathbf{k})=(z_{2}I_{2}^{\sigma})\gamma_{2}(\mathbf{k})\) and \(I_{N}(\mathbf{k})=(zI)\gamma(\mathbf{k})\) for diagonal (D) and nondiagonal (N) intra- and inter-sublattice exchange in Eq. (6), respectively. Furthermore the \(\omega_{A,B}(\mathbf{k})\) may be interpreted as the separate mode dispersions on \(\sigma=\)A,B sublattices when the nearest neighbor inter-sublattice coupling \(I_{N}(\mathbf{k})\) is set to zero. Explicitly this formula may also be
written as
\[\omega_{\pm}^{2}(\mathbf{k})= \tag{9}\] \[\frac{1}{2}(\Delta_{A}^{2}+\Delta_{B}^{2})-[m_{A}^{2}\Delta_{A}P_{A }I_{D}^{A}(\mathbf{k})+m_{B}^{2}\Delta_{B}P_{B}I_{D}^{B}(\mathbf{k})]\pm\] \[\Big{\{}\big{[}\frac{1}{2}(\Delta_{A}^{2}-\Delta_{B}^{2})-(m_{A}^ {2}\Delta_{A}P_{A}I_{D}^{A}(\mathbf{k})-m_{B}^{2}\Delta_{B}P_{B}I_{D}^{B}( \mathbf{k}))\big{]}^{2}\] \[+4m_{A}^{2}m_{B}^{2}\Delta_{A}\Delta_{B}P_{A}P_{B}|I_{N}(\mathbf{ k})|^{2}\Big{\}}^{\frac{1}{2}}.\]
For numerical calculations it is convenient to use three model parameters (with dimension of energy) \(v_{s}=(m_{A}m_{B}I)\) and \(v_{2}^{\sigma}=(m_{\sigma}^{2}I_{2}^{\sigma})\) and likewise \(|\bar{I}_{N}(\mathbf{k})|=m_{A}m_{B}|I_{N}(\mathbf{k})|=(zv_{s})|\gamma(\mathbf{k})|\) and \(\bar{I}_{D}^{\sigma}(\mathbf{k})=m_{\sigma}^{2}I_{D}^{\sigma}(\mathbf{k})=(z_{2}v_{2}^{\sigma})\gamma_{2}(\mathbf{k})\) (see also Appendix B). At low temperatures \(T/\Delta_{\sigma}\ll 1\) we may replace \(P_{\sigma}(T)\to 1\). The dispersion simplifies further if the intra-sublattice exchange \(I_{D}^{\sigma}(\mathbf{k})\) is absent. Then we get
\[\omega_{\pm}^{2}(\mathbf{k})=\frac{1}{2}(\Delta_{A}^{2}+\Delta_{ B}^{2})\pm \tag{10}\] \[\big{[}\frac{1}{4}(\Delta_{A}^{2}-\Delta_{B}^{2})^{2}+4m_{A}^{2} m_{B}^{2}\Delta_{A}\Delta_{B}P_{A}P_{B}|I_{N}(\mathbf{k})|^{2}\big{]}^{\frac{1}{2}}.\]
On the other hand, if both \(1^{\text{st}}\) and \(2^{\text{nd}}\) neighbour exchange are kept but the two sublattice sites are assumed equivalent, with \(\Delta_{A}=\Delta_{B}=\Delta\) and likewise \(I_{D}^{A}=I_{D}^{B}=I_{D}\), Eq. (9) reduces to
\[\omega_{\pm}^{2}(\mathbf{k})=\Delta\big{[}\Delta-2m^{2}(I_{D}(\mathbf{k})\mp| I_{N}(\mathbf{k})|)\tanh\frac{\Delta}{2T}\big{]}. \tag{11}\]
Here the mode splitting of \(\omega_{\kappa}(\mathbf{k})\) can be seen to be directly associated with the inter-sublattice coupling. The splitting vanishes at the \(\mathbf{K}_{\pm}\) zone boundary points in this special case. In the general case described by Eq. (9) the criterion for opening a gap at \(\mathbf{K}_{\pm}\) may be identified as follows: i) for \(\Delta_{A}\neq\Delta_{B}\) the gap is always present and ii) for \(\Delta_{A}=\Delta_{B}\) one must have \(I_{D}^{A}\neq I_{D}^{B}\) for the intra-sublattice exchange.
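As a minimal numerical sketch, Eq. (11) can be evaluated directly. The honeycomb structure functions below are the standard normalized forms (an assumption here; the precise conventions are fixed in the appendices), and the parameter values loosely follow Fig. 3:

```python
import numpy as np

a = 1.0
# 1st-neighbor (A->B) and 2nd-neighbor vectors of the honeycomb lattice (assumed geometry)
delta = a*np.array([[0.0, 1/np.sqrt(3)], [0.5, -1/(2*np.sqrt(3))], [-0.5, -1/(2*np.sqrt(3))]])
rho   = a*np.array([[1.0, 0.0], [0.5, np.sqrt(3)/2], [-0.5, np.sqrt(3)/2]])

def gamma(k):   # inter-sublattice structure function (z = 3)
    return np.exp(1j*(delta @ k)).sum()/3

def gamma2(k):  # intra-sublattice structure function (z2 = 6, +/- pairs)
    return np.cos(rho @ k).sum()/3

def omega_ising(k, Delta=1.0, v_s=-0.10, v_2=-0.11, T=0.1):
    """Eq. (11) with m^2 I_D(k) = z2 v_2 gamma2(k) and m^2 |I_N(k)| = |z v_s gamma(k)|."""
    P  = np.tanh(Delta/(2*T))
    ID = 6*v_2*gamma2(k)
    IN = abs(3*v_s*gamma(k))
    return (np.sqrt(Delta*(Delta - 2*(ID - IN)*P)),   # omega_+
            np.sqrt(Delta*(Delta - 2*(ID + IN)*P)))   # omega_-

G = np.array([0.0, 0.0])
K = np.array([4*np.pi/(3*a), 0.0])      # zone boundary point K_+
print(omega_ising(G), omega_ising(K))   # split at Gamma, degenerate at K_+ (gamma(K_+) = 0)
```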
If the interactions are strong enough the lower mode \(\omega_{-}(\mathbf{k})\) may become soft at a specific, generally incommensurate wave vector \(\mathbf{k}=\mathbf{Q}\), and this heralds a spontaneous induced magnetic order with modulation wave vector \(\mathbf{Q}\) of the singlet-singlet system, although both CEF singlets are nonmagnetic with \(\langle\Gamma_{\alpha}|J_{z}|\Gamma_{\alpha}\rangle=0\) (\(\alpha=1,2\)). In the above equivalent sublattice case this occurs when the control parameter
\[\xi=\frac{2m^{2}I(\mathbf{Q})}{\Delta}>1, \tag{12}\]
where \(I(\mathbf{Q})=I_{D}(\mathbf{Q})+|I_{N}(\mathbf{Q})|\) is the total exchange Fourier transform. For \(\xi>1\) the transition temperature \(T_{m}\) to the induced moment phase and the size of the induced moment \(M_{\mathbf{Q}}=\langle J_{z}\rangle\) (in units of \(\mu_{B}\)) along z are given by [15]
\[T_{m}\simeq\frac{\Delta}{2\tanh^{-1}\big{(}\frac{1}{\xi}\big{)}}\simeq\frac{\Delta}{|\ln\xi^{\prime}|}; \tag{13}\] \[M_{\mathbf{Q}}/m=\frac{1}{\xi}(\xi^{2}-1)^{\frac{1}{2}}\simeq\big{(}2\xi^{\prime}\big{)}^{\frac{1}{2}},\]
where the approximate expressions hold close to the critical control parameter, i.e. \(\xi\simeq 1+\xi^{\prime}\) with \(\xi^{\prime}\ll 1\). Both quantities increase with infinite slope above \(\xi=1\) (Fig. 2). This Ising-type two-singlet induced moment system has also been generalized to the frequently occurring three-singlet model in low symmetry 4f and 5f materials [1]. In the present case, when the incipient soft mode (\(\xi<1\)) appears at the \(\mathbf{Q}=\mathbf{K}_{\pm}\) zone boundary positions as in Fig. 3, the magnetic order at critical \(\xi=1\) would correspond to a \(120^{\circ}\) commensurate spiral structure on each triangular sublattice A,B, coupled ferro- or antiferromagnetically depending on the sign of the intersublattice coupling \(I\) in Eq. (3).
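The closed expressions in Eq. (13) are easily tabulated; a short sketch with \(\Delta\) and \(m\) set to unity:

```python
import numpy as np

def induced_order(xi, Delta=1.0, m=1.0):
    """T_m and T = 0 induced moment from Eq. (13); zero in the paramagnetic regime."""
    if xi <= 1.0:
        return 0.0, 0.0
    Tm = Delta/(2*np.arctanh(1.0/xi))       # exact expression for T_m
    MQ = (m/xi)*np.sqrt(xi**2 - 1.0)        # induced moment M_Q at T = 0
    return Tm, MQ

for xi in (1.01, 1.1, 1.5, 2.0):
    Tm, MQ = induced_order(xi)
    print(f"xi = {xi:4.2f}:  T_m/Delta = {Tm:6.3f},  M_Q/m = {MQ:5.3f}")
```

Both quantities rise steeply just above \(\xi=1\), reproducing the infinite initial slope visible in Fig. 2.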
This type of induced singlet-singlet magnetic order [16] is found e.g. in Pr metal (under pressure) [17; 18] and Pr compounds like PrSb [19] and Pr\({}_{3}\)Tl [20] and also in U-compounds [14; 15; 21]. We note that common magnetic order involves the spontaneous alignment of pre-existing magnetic moments and can be understood in a classical or quasi-classical way. On the other hand, induced order is a true quantum effect where moments and their ordering are spontaneously created by the superposition of nearby nonmagnetic states that have non-diagonal matrix elements of the moment operators. This mechanism is not restricted to dipolar magnetism; for example in YbRu\({}_{2}\)Ge\({}_{2}\) the lowest \(J=\frac{7}{2}\) Kramers doublets form a quasi-quartet that supports induced quadrupolar order due to non-diagonal quadrupole matrix elements [22; 23] between them.
In this work, however, we restrict the investigation to the _paramagnetic_ phase for both CEF models. In the response function formalism it is also straightforward to
Figure 2: Ising-type model induced order characteristics signified by control-parameter \(\xi\)-dependence of ground state moment \(\langle J_{z}\rangle\) (normalized to \(m\)), magnetic ordering temperature T\({}_{m}\) (normalized to CEF splitting \(\Delta\)) and their ratio, see also Ref. [14].
calculate the momentum and temperature dependence of the intensity of the paramagnetic exciton modes, which is essential for the interpretation of INS data. It is given by the dynamical structure function
\[S(\mathbf{k},\omega)=\frac{1}{\pi}\Big{[}\text{Im}\hat{\chi}_{AA}(\mathbf{k}, \omega)+\text{Im}\hat{\chi}_{BB}(\mathbf{k},\omega)\Big{]}. \tag{14}\]
This may be evaluated as
\[\begin{split} S(\mathbf{k},\omega>0)&=\sum_{\kappa= \pm}I_{\kappa}(\mathbf{k})\delta(\omega-\omega_{\kappa}(\mathbf{k}));\\ I_{+}(\mathbf{k})&=\frac{\sum_{\sigma=A,B}m_{ \sigma}^{2}\Delta_{\sigma}P_{\sigma}(\omega_{+}^{2}-\omega_{\sigma}^{2})}{ \omega_{+}(\mathbf{k})(\omega_{+}^{2}(\mathbf{k})-\omega_{-}^{2}(\mathbf{k}) )};\\ I_{-}(\mathbf{k})&=\frac{\sum_{\sigma=A,B}m_{ \sigma}^{2}\Delta_{\sigma}P_{\sigma}(\omega_{\bar{\sigma}}^{2}-\omega_{-}^{2} )}{\omega_{-}(\mathbf{k})(\omega_{+}^{2}(\mathbf{k})-\omega_{-}^{2}(\mathbf{k }))},\end{split} \tag{15}\]
with
\[\begin{split}&\omega_{+}^{2}(\mathbf{k})-\omega_{-}^{2}(\mathbf{k })=\\ & 2\Big{[}\frac{1}{4}(\Delta_{A}^{2}-\Delta_{B}^{2})^{2}+4m_{A}^{2 }m_{B}^{2}\Delta_{A}\Delta_{B}|I_{N}(\mathbf{k})|^{2}\Big{]}^{\frac{1}{2}}, \end{split} \tag{16}\]
where \(\bar{\sigma}=B,A\) for \(\sigma=A,B\). Here \(I_{\kappa}(\mathbf{k})\) denotes the bare intensity of each mode in the INS scattering without Bose-, polarization- and atomic form factors [24].
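A compact numerical transcription of Eqs. (8) and (15) may look as follows. The structure-function values \(\gamma(\mathbf{k}),\gamma_{2}(\mathbf{k})\) are passed in as plain numbers (e.g. from the sketch above), and the matrix elements \(m_{\sigma}^{2}\) are absorbed into the \(v\) parameters:

```python
import numpy as np

def modes_and_weights(g, g2, DA=1.07, DB=0.93, v_s=-0.10, v2A=-0.11, v2B=-0.11, T=0.1):
    """Exciton energies (Eq. (8)) and bare INS weights (Eq. (15))."""
    PA, PB = np.tanh(DA/(2*T)), np.tanh(DB/(2*T))
    IDA, IDB = 6*v2A*g2, 6*v2B*g2                  # m_sigma^2 I_D^sigma(k)
    IN2 = abs(3*v_s*g)**2                          # m_A^2 m_B^2 |I_N(k)|^2
    wA2 = DA*(DA - 2*PA*IDA)                       # sublattice frequencies
    wB2 = DB*(DB - 2*PB*IDB)
    root = np.sqrt(0.25*(wA2 - wB2)**2 + 4*DA*DB*PA*PB*IN2)
    wp2, wm2 = 0.5*(wA2 + wB2) + root, 0.5*(wA2 + wB2) - root
    wp, wm = np.sqrt(wp2), np.sqrt(wm2)
    Ip = (DA*PA*(wp2 - wA2) + DB*PB*(wp2 - wB2))/(wp*(wp2 - wm2))
    Im = (DA*PA*(wB2 - wm2) + DB*PB*(wA2 - wm2))/(wm*(wp2 - wm2))
    return (wp, Ip), (wm, Im)

print(modes_and_weights(g=0.5 + 0.2j, g2=0.4))     # illustrative k-point values
```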
### Bosonic representation of interacting CEF excitations
An alternative approach to the magnetic exciton problem is provided by a bosonic representation of the Hamiltonian and a subsequent application of the Bogoliubov technique for diagonalisation [25]. It has the advantage of providing not only the dispersion but also the eigenvectors or Bloch states of the magnetic exciton modes. On the other hand it can only be used at temperatures low compared to the CEF splitting. We first apply it to the simple singlet-singlet system, restricting for simplicity to \(1^{\text{st}}\) neighbor interactions, in order to use it as a guidance for the more complicated singlet-doublet system.
In the restricted \(\Gamma_{2}-\Gamma_{1}\) space, considering Eq. (2) we may replace the angular momentum component \(J_{z}\) by sublattice bosonic operators according to
\[J_{iA}^{z}=m_{A}(a_{i}^{\dagger}+a_{i});\quad J_{iB}^{z}=m_{B}(b_{i}^{\dagger }+b_{i}), \tag{17}\]
where the \(a_{i},b_{i}\) and \(a_{i}^{\dagger},b_{i}^{\dagger}\) satisfy the usual bosonic commutation rules. This replacement produces the proper matrix elements but is restricted to low T because of the different commutation rules and statistics [25; 26]. Introducing Fourier transforms like \(a_{\mathbf{k}}=(1/\sqrt{N})\sum_{i}\exp(i\mathbf{k}\mathbf{R}_{i})a_{i}\) etc. and rearranging terms in the \(1^{\text{st}}\) neighbor exchange Hamiltonian in Eq. (3) we arrive at
\[\hat{H}=\frac{1}{2}\sum_{\mathbf{k}}\phi_{\mathbf{k}}^{\dagger}\hat{h}_{ \mathbf{k}}\phi_{\mathbf{k}}+E_{0};\quad\text{with}\quad\phi_{\mathbf{k}}=(a_ {\mathbf{k}},b_{\mathbf{k}},a_{-\mathbf{k}}^{\dagger},b_{-\mathbf{k}}^{ \dagger})^{T}, \tag{18}\]
Here \(E_{0}=(N/2)(\Delta_{A}+\Delta_{B})\). The components of this four-spinor satisfy the bosonic commutation relations \([\phi_{n}(\mathbf{k}),\phi_{m}^{\dagger}(\mathbf{k}^{\prime})]=\Sigma_{z}^{nm}\delta_{\mathbf{k}\mathbf{k}^{\prime}}\) where \(\Sigma_{z}=\tau_{z}\otimes 1_{2}=\text{diag}(1_{2},-1_{2})\) is composed of the \(2\times 2\) unit matrix \(1_{2}\). In this representation we can express
\[\hat{h}_{\mathbf{k}}=\left(\begin{array}{cccc}\Delta_{A}&-\bar{I}_{N}^{*}( \mathbf{k})&0&-\bar{I}_{N}^{*}(\mathbf{k})\\ -\bar{I}_{N}(\mathbf{k})&\Delta_{B}&-\bar{I}_{N}(\mathbf{k})&0\\ 0&-\bar{I}_{N}(-\mathbf{k})&\Delta_{A}&-\bar{I}_{N}(-\mathbf{k})\\ -\bar{I}_{N}^{*}(-\mathbf{k})&0&-\bar{I}_{N}^{*}(-\mathbf{k})&\Delta_{B}\\ \end{array}\right), \tag{19}\]
where we used \(\bar{I}_{N}(\mathbf{k})=(m_{A}m_{B})I_{N}(\mathbf{k})=(zv_{s})\gamma_{\mathbf{k}}\) which satisfies \(\bar{I}_{N}(-\mathbf{k})=\bar{I}_{N}(\mathbf{k})^{*}\) (Eq. (14)). The magnetic exciton modes may be obtained by a paraunitary Bogoliubov transformation. The dispersions are then given by the eigenvalues obtained from the secular equation \(|\Sigma_{z}\hat{h}_{\mathbf{k}}-\omega 1|=0\). The solution of this equation leads to the \(T=0\) exciton modes
\[\begin{split}\omega_{\pm}^{2}(\mathbf{k})&=\frac{1}{2}(\Delta_{A }^{2}+\Delta_{B}^{2})\\ &\quad\pm\big{[}\frac{1}{4}(\Delta_{A}^{2}-\Delta_{B}^{2})^{2}+4m_{A}^{2 }m_{B}^{2}\Delta_{A}\Delta_{B}|I_{N}(\mathbf{k})|^{2}\big{]}^{\frac{1}{2}}. \end{split} \tag{20}\]
Figure 3: Typical cases of the Ising model magnetic exciton dispersions. (a) High temperature case \(T=1.0\) with equal \(\Delta_{A,B}=1\), \(v_{2}^{A,B}=-0.11\) and \(v_{s}=-0.10\) shows moderate dispersion. (b) Same parameters but the low temperature case exhibits large dispersion due to the increased thermal population difference of the \(\Gamma_{2},\Gamma_{1}\) levels. Because of A,B equivalent interaction constants \(K_{+}\) (and also \(K_{-}\)) is a Dirac point with degenerate and linearly dispersive exciton modes. The splitting of modes for all other \(\mathbf{k}\)-values is due to the inter-sublattice interaction \(I_{N}(\mathbf{k})\sim v_{s}\). (c) \(T=0.1\) case now with distinct \(\Delta_{A,B}=\Delta(1\pm\epsilon)\) where \(\Delta=1\) and \(\epsilon=0.07\) and other constants as in (a,b). Now the degeneracy at \(K_{\pm}\) is removed. This case shows incipient soft mode behaviour around \(K_{+}\) indicating closeness to commensurate spiral order. (d) Same case but with small \(v_{2}^{A,B}=-0.02\) which reduces the overall dispersion.
This is identical to the RPA result for zero temperature (\(P_{A}=P_{B}=1\)) obtained before in Eq. (10). Therefore on the RPA level one may say that temperature enters the theory just as a parametric change of the effective exchange coupling, by modification of the matrix elements to effective ones with the replacement \(m_{\sigma}^{2}\to P_{\sigma}(T)m_{\sigma}^{2}\). In the case of equivalent sublattices A,B the above equation reproduces the \(T=0\) case of Eq. (11).
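This equivalence can be verified numerically: diagonalizing \(\Sigma_{z}\hat{h}_{\mathbf{k}}\) with \(\hat{h}_{\mathbf{k}}\) assembled as in Eq. (19) reproduces the closed form Eq. (20). A minimal sketch, where the value of \(\bar{I}_{N}(\mathbf{k})\) is an arbitrary illustrative input:

```python
import numpy as np

def bogoliubov_modes(IN, DA=1.07, DB=0.93):
    """Positive eigenvalues of Sigma_z h_k, with h_k from Eq. (19); IN = bar I_N(k)."""
    c = np.conj(IN)
    h = np.array([[DA,  -c,   0,  -c],
                  [-IN,  DB, -IN,  0],
                  [0,   -c,  DA,  -c],
                  [-IN,  0, -IN,  DB]], dtype=complex)
    Sz = np.diag([1.0, 1.0, -1.0, -1.0])
    w = np.linalg.eigvals(Sz @ h).real
    return np.sort(w[w > 0])

def closed_form(IN, DA=1.07, DB=0.93):
    """Eq. (20) at T = 0."""
    s = 0.5*(DA**2 + DB**2)
    r = np.sqrt(0.25*(DA**2 - DB**2)**2 + 4*DA*DB*abs(IN)**2)
    return np.sqrt([s - r, s + r])

IN = 0.2*np.exp(0.3j)          # arbitrary test value of bar I_N(k)
print(bogoliubov_modes(IN))    # the two outputs agree to machine precision
print(closed_form(IN))
```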
At this point, to obtain a preliminary impression of the behaviour of magnetic excitons on the honeycomb lattice, we discuss the results for the Ising-type model as presented in Fig. 3. In (a,b) the symmetric case \(\Delta_{A}=\Delta_{B}\) is shown for elevated (a) and low (b) temperature. In the former a moderate dispersion exists due to the small thermal population difference \(P_{A,B}\) in Eq. (9) or Eq. (11), which becomes larger in the low temperature case. The dispersion of the modes is controlled both by the intra- (\(v_{2}\)) and inter- (\(v_{s}\)) sublattice interaction strength, while the mode splitting is only due to the latter (for \(v_{2}^{\sigma}=v_{2}\)). At the \({\bf K}_{\pm}\) zone boundary points, however, they become degenerate because \(\gamma({\bf K}_{\pm})=0\) (Appendix D). This degeneracy is lifted by introducing inequivalent A,B CEF splittings as demonstrated in (c,d) for two cases with different strength of the intra-sublattice coupling \(v_{2}\). A similar removal of the degeneracy at \({\bf K}_{\pm}\) occurs if the splittings are kept equal but the intra-sublattice couplings \(v_{2}^{A,B}\) become inequivalent.
The consistent results of two different techniques in this Section encourage us to consider the more involved and richer singlet-doublet xy-type model. It may also be treated within the response function approach by a simple extension (App. D). It has the drawback of giving only the spectral density of the magnetic excitons but not the composition of the eigenmodes which is important for discussing topological properties relevant in the \(xy\)-model. Therefore, in this case we employ the bosonic technique in the following.
## IV The singlet-doublet xy-type model
The alternative singlet-doublet model for honeycomb magnetic excitons leads to additional possibilities because of its xy-type exchange structure as enforced by the selection rules of Eq. (2). They show that in this model two of the total angular momentum operators \(J_{x},J_{y}\) have nonzero matrix elements, complementary to the previous singlet-singlet case that involves only \(J_{z}\). Because the centers of \(2^{\rm nd}\) neighbor bonds are not inversion centers in any case, this opens the possibility for asymmetric DM exchange \(H_{DM}=\sum_{\langle\langle ij\rangle\rangle}\nu_{ij}D_{J}(J_{ix}J_{jy}-J_{iy}J_{jx})\) according to the Moriya rules [27]. Here we defined \(D_{J}=(g_{J}-1)^{2}D\) (\(g_{J}=\) Landé factor) as the original DM spin-exchange constant \(D\) projected to the lowest angular momentum multiplet (\(J=4\)) considered in this work. It has to be staggered along each bond direction as expressed by \(\nu_{ij}=\pm 1\), i.e. \(2^{\rm nd}\) neighbors (\(-\tilde{\delta}_{i},\tilde{\delta}_{i}\)) (\(i=1-3\)) have DM exchange (\(-D_{J}^{A},D_{J}^{A}\)) on the A sublattice and conversely (\(D_{J}^{B},-D_{J}^{B}\)) on the B sublattice (Fig. 1). The total Hamiltonian in the \(\Gamma_{2}-\Gamma_{3}\) model is then given by
\[H= \sum_{\Gamma\sigma i}E_{\Gamma}^{\sigma}|\Gamma_{\sigma i}\rangle\langle\Gamma_{\sigma i}|-I\sum_{\langle ij\rangle}(J_{iA}^{x}J_{jB}^{x}+J_{iA}^{y}J_{jB}^{y}) \tag{21}\] \[-\sum_{\langle\langle ij\rangle\rangle\sigma}I_{2}^{\sigma}(J_{i\sigma}^{x}J_{j\sigma}^{x}+J_{i\sigma}^{y}J_{j\sigma}^{y})\] \[+\sum_{\langle\langle ij\rangle\rangle\sigma}\nu_{ij}D_{J}^{\sigma}(J_{i\sigma}^{x}J_{j\sigma}^{y}-J_{i\sigma}^{y}J_{j\sigma}^{x}).\]
Here we formulated the most general case of the model with \(1^{\rm st}\) (\(\langle ij\rangle\)) and \(2^{\rm nd}\) (\(\langle\langle ij\rangle\rangle\)) neighbour exchange: the CEF splittings as well as the three types of interactions are assumed to be sublattice dependent. As in the Ising case this may be caused by a different chemical environment of the two sublattice sites when the bare 2D honeycomb lattice of 4f ions is integrated into a larger 3D structure. We treat this model again by using the bosonic representation which is now defined by (\(J_{\pm}=J_{x}\pm iJ_{y}\))
\[J_{+}^{iA}=\sqrt{2}\tilde{m}_{A}(a_{i+}^{\dagger}+a_{i-});\quad J _{+}^{iB}=\sqrt{2}\tilde{m}_{B}(b_{i+}^{\dagger}+b_{i-}); \tag{22}\] \[J_{-}^{iA}=\sqrt{2}\tilde{m}_{A}(a_{i-}^{\dagger}+a_{i+});\quad J _{-}^{iB}=\sqrt{2}\tilde{m}_{B}(b_{i-}^{\dagger}+b_{i+}).\]
We notice that there is an additional degree of freedom \(\lambda=\pm\) corresponding to the two doublet components \(|\Gamma_{3}^{\lambda}\rangle\) represented by the \(a_{i\lambda}^{\dagger},b_{i\lambda}^{\dagger}\) creation operators. Only in some special cases will this remain a degeneracy index throughout the Brillouin zone (BZ) for the diagonalised excitonic eigenmodes.
Now again we introduce the Fourier transformed bosonic operators \(a_{{\bf k}\lambda},b_{{\bf k}\lambda}\) and their conjugates and express the Hamiltonian of Eq. (21) through them by using Eq. (22). We finally obtain
\[\hat{H}=\frac{1}{2}\sum_{{\bf k}\lambda}\phi_{{\bf k}\lambda}^{\dagger}\hat{h}_ {{\bf k}\lambda}\phi_{{\bf k}\lambda}+E_{0}, \tag{23}\]
with \(\phi_{{\bf k}\lambda}=(a_{{\bf k}\lambda},b_{{\bf k}\lambda},a_{-{\bf k}\bar{\lambda}}^{\dagger},b_{-{\bf k}\bar{\lambda}}^{\dagger})^{T}\). Here we defined \(\bar{\lambda}=-\lambda\) and \(E_{0}=N(\Delta_{A}+\Delta_{B})\). Similar to the Ising-type model, the four-spinor components satisfy bosonic commutation relations \([\phi_{n}({\bf k}\lambda),\phi_{m}^{\dagger}({\bf k}^{\prime}\lambda^{\prime})]=\Sigma_{z}^{nm}\delta_{{\bf k}{\bf k}^{\prime}}\delta_{\lambda\lambda^{\prime}}\). In this representation we now have
\[\hat{h}_{\mathbf{k}\lambda}=\left(\begin{array}{cccc}\Delta_{A}-\bar{I}_{D}^{A}(\mathbf{k}\lambda)&-\bar{I}_{N}^{*}(\mathbf{k})&-\bar{I}_{D}^{A}(\mathbf{k}\lambda)&-\bar{I}_{N}^{*}(\mathbf{k})\\ -\bar{I}_{N}(\mathbf{k})&\Delta_{B}-\bar{I}_{D}^{B}(\mathbf{k}\lambda)&-\bar{I}_{N}(\mathbf{k})&-\bar{I}_{D}^{B}(\mathbf{k}\lambda)\\ -\bar{I}_{D}^{A}(-\mathbf{k}\bar{\lambda})&-\bar{I}_{N}(-\mathbf{k})&\Delta_{A}-\bar{I}_{D}^{A}(-\mathbf{k}\bar{\lambda})&-\bar{I}_{N}(-\mathbf{k})\\ -\bar{I}_{N}^{*}(-\mathbf{k})&-\bar{I}_{D}^{B}(-\mathbf{k}\bar{\lambda})&-\bar{I}_{N}^{*}(-\mathbf{k})&\Delta_{B}-\bar{I}_{D}^{B}(-\mathbf{k}\bar{\lambda})\end{array}\right). \tag{24}\]
Here the intra- (D) and inter- (N) sublattice interactions are defined by
\[\begin{split}\bar{I}_{D}^{A}(\mathbf{k}\lambda)=&\ \tilde{m}_{A}^{2}I_{D}^{A}(\mathbf{k}\lambda);\\ I_{D}^{A}(\mathbf{k}\lambda)=&\ (z_{2}I_{2}^{A})\gamma_{2}(\mathbf{k})+\lambda(z_{2}D_{J}^{A})\tilde{\gamma}_{D}(\mathbf{k})=I_{D}^{A}(-\mathbf{k}\bar{\lambda});\\ \bar{I}_{D}^{B}(\mathbf{k}\lambda)=&\ \tilde{m}_{B}^{2}I_{D}^{B}(\mathbf{k}\lambda);\\ I_{D}^{B}(\mathbf{k}\lambda)=&\ (z_{2}I_{2}^{B})\gamma_{2}(\mathbf{k})-\lambda(z_{2}D_{J}^{B})\tilde{\gamma}_{D}(\mathbf{k})=I_{D}^{B}(-\mathbf{k}\bar{\lambda});\\ \bar{I}_{N}(\mathbf{k})=&\ \tilde{m}_{A}\tilde{m}_{B}I_{N}(\mathbf{k});\quad I_{N}(\mathbf{k})=(zI)\gamma(\mathbf{k}).\end{split} \tag{25}\]
Again for numerical computation it is convenient to use (now generally five) model parameters \(v_{s}=(\tilde{m}_{A}\tilde{m}_{B}I)\), \(v_{2}^{\sigma}=(\tilde{m}_{\sigma}^{2}I_{2}^{\sigma})\) and \(v_{D}^{\sigma}=(\tilde{m}_{\sigma}^{2}D_{J}^{\sigma})\) and likewise \(|\bar{I}_{N}(\mathbf{k})|=(zv_{s})|\gamma(\mathbf{k})|\), \(\bar{I}_{D}^{A}(\mathbf{k}\lambda)=(z_{2}v_{2}^{A})\gamma_{2}(\mathbf{k})+\lambda(z_{2}v_{D}^{A})\tilde{\gamma}_{D}(\mathbf{k})\) and \(\bar{I}_{D}^{B}(\mathbf{k}\lambda)=(z_{2}v_{2}^{B})\gamma_{2}(\mathbf{k})-\lambda(z_{2}v_{D}^{B})\tilde{\gamma}_{D}(\mathbf{k})\) (see also Appendix B). Note that the sign of the DM term changes with sublattice inversion and \(\Gamma_{3}\) degeneracy index, which leads to the symmetry \(\bar{\lambda}\tilde{\gamma}_{D}(-\mathbf{k})=\lambda\tilde{\gamma}_{D}(\mathbf{k})\) used in the construction of the Hamiltonian matrix Eq. (24). The excitonic eigenmodes in the present general model are then, similarly as in the previous section, obtained by solving \(|\Sigma_{z}\hat{h}_{\mathbf{k}}-\omega 1|=0\). The solution leads to a closed form of their dispersions \(\omega_{\kappa}^{2}(\mathbf{k}\lambda)\) (\(\kappa=\pm\)), given by a formally similar expression as Eq. (9) in the zero temperature limit:
\[\omega_{\pm}^{2}(\mathbf{k}\lambda)=\frac{1}{2}(\omega_{A}^{2}( \mathbf{k}\lambda)+\omega_{B}^{2}(\mathbf{k}\lambda))\pm\\ \big{[}\frac{1}{4}(\omega_{A}^{2}(\mathbf{k}\lambda)-\omega_{B}^{2}( \mathbf{k}\lambda))^{2}+4\tilde{m}_{A}^{2}\tilde{m}_{B}^{2}\Delta_{A}\Delta_{B} |I_{N}(\mathbf{k})|^{2}\big{]}^{\frac{1}{2}}; \tag{26}\]
with
\[\omega_{\sigma}^{2}(\mathbf{k}\lambda)=\Delta_{\sigma}[\Delta_{\sigma}-2\tilde {m}_{\sigma}^{2}I_{D}^{\sigma}(\mathbf{k}\lambda)].\]
It is, however, distinct from the singlet-singlet model in the following aspects. Firstly, in contrast to the latter, the singlet-doublet model can realize the presence of a DM interaction in the intra-sublattice part because two components \(J_{x},J_{y}\) have nonzero matrix elements \(\tilde{m}\) between \(\Gamma_{2}\) and \(\Gamma_{3}\). Secondly, due to the excited state \(\Gamma_{3}\) being a doublet (\(\lambda=\pm\)), the number of modes doubles to four. They are still degenerate at each \(\mathbf{k}\)-point for zero DM interaction. For nonzero \(D_{J}^{\sigma}\) the modes still fulfill the symmetry relation \(\omega_{\pm}^{2}(\mathbf{k}\lambda)=\omega_{\pm}^{2}(-\mathbf{k}\bar{\lambda})\). Furthermore the matrix elements \(\tilde{m}_{\sigma}\) are different from those of the singlet-singlet model (\(m_{\sigma}\)), see Eq. (2). Similarly as in Sec. III.1, the above exciton dispersion \(\omega_{\kappa}^{2}(\mathbf{k}\lambda)\) (\(\kappa=\pm\)) can be written more explicitly as
\[\omega_{\pm}^{2}(\mathbf{k}\lambda)= \tag{27}\] \[\frac{1}{2}(\Delta_{A}^{2}+\Delta_{B}^{2})-[\tilde{m}_{A}^{2} \Delta_{A}I_{D}^{A}(\mathbf{k}\lambda)+\tilde{m}_{B}^{2}\Delta_{B}I_{D}^{B}( \mathbf{k}\lambda)]\pm\] \[\Big{\{}\big{[}\frac{1}{2}(\Delta_{A}^{2}-\Delta_{B}^{2})-( \tilde{m}_{A}^{2}\Delta_{A}I_{D}^{A}(\mathbf{k}\lambda)-\tilde{m}_{B}^{2} \Delta_{B}I_{D}^{B}(\mathbf{k}\lambda))\big{]}^{2}\] \[+4\tilde{m}_{A}^{2}\tilde{m}_{B}^{2}\Delta_{A}\Delta_{B}|I_{N}( \mathbf{k})|^{2}\Big{\}}^{\frac{1}{2}}.\]
When the DM interaction is set to zero, \(\tilde{m}_{\sigma}\) is replaced by \(m_{\sigma}\) and the degeneracy in the \(\Gamma_{3}^{\pm}\) index \(\lambda\) is ignored, this becomes equivalent to the general case of the Ising-type singlet-singlet model (Eq. (9)). The temperature dependence of the dispersions can be incorporated by recalling (Sec. III.2) that it enters in a parametric way by introducing effective matrix elements \(\tilde{m}_{\sigma}^{2}\to\tilde{m}_{\sigma}^{2}\tanh\frac{\Delta_{\sigma}}{2T}(1+f_{\sigma})^{-1}\) where the correction factor with \(f_{\sigma}=\frac{1}{2}(1-\tanh\frac{\Delta_{\sigma}}{2T})\) is due to the twofold degeneracy of the \(\Gamma_{3}\) doublet. This may be concluded from the complementary RPA approach for the xy-type model (Appendix C).
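For concreteness, the following sketch assembles \(\Sigma_{z}\hat{h}_{\mathbf{k}\lambda}\) from Eq. (24) in terms of the five parameters and extracts the four exciton branches numerically. Here \(\tilde{\gamma}_{D}\) is an assumed odd second-neighbor structure function, normalized such that \(z_{2}\tilde{\gamma}_{D}(\mathbf{K}_{+})=-\sqrt{3}\); the exact convention is fixed in Appendix E:

```python
import numpy as np

a = 1.0
delta = a*np.array([[0.0, 1/np.sqrt(3)], [0.5, -1/(2*np.sqrt(3))], [-0.5, -1/(2*np.sqrt(3))]])
rho   = a*np.array([[1.0, 0.0], [0.5, np.sqrt(3)/2], [-0.5, np.sqrt(3)/2]])
gamma  = lambda k: np.exp(1j*(delta @ k)).sum()/3
gamma2 = lambda k: np.cos(rho @ k).sum()/3
gammaD = lambda k: np.sin(rho @ k).sum()/3     # assumed DM structure function (odd in k)

def xy_modes(k, lam, DA=1.07, DB=0.93, v_s=-0.10, v2=(-0.02, -0.02), vD=(0.05, 0.05)):
    g  = 3*v_s*gamma(k)                               # bar I_N(k)
    dA = 6*v2[0]*gamma2(k) + lam*6*vD[0]*gammaD(k)    # bar I_D^A(k, lambda)
    dB = 6*v2[1]*gamma2(k) - lam*6*vD[1]*gammaD(k)    # bar I_D^B(k, lambda)
    A = np.array([[DA - dA, -np.conj(g)], [-g, DB - dB]])
    B = np.array([[-dA, -np.conj(g)], [-g, -dB]])
    w = np.linalg.eigvals(np.block([[A, B], [-B, -A]])).real   # Sigma_z h_{k lambda}
    return np.sort(w[w > 1e-9])                       # the two positive branches

K = np.array([4*np.pi/(3*a), 0.0])
print(xy_modes(K, +1), xy_modes(K, -1))               # four branches, split by the DM term
```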
### Approximate dispersions from a reduced Hamiltonian
The exact expressions for the exciton dispersions of the \(4\times 4\) Hamiltonian in Eq. (24), as given by Eq. (26), exhibit the redundancy or doubling which is typical for the Bogoliubov technique, i.e. they appear in pairs \((+\omega_{\kappa},-\omega_{\kappa})\) (in the RPA response function technique they correspond to poles at positive and negative frequencies). These expressions may be considerably simplified if certain conditions are fulfilled: i) The dispersion width is small compared to the CEF excitation energy \(\Delta\), which means that throughout the BZ it is far from soft mode behaviour. This requires \(\tilde{m}_{\sigma}^{2}I_{D}^{\sigma}(\mathbf{k}\lambda)/\Delta_{\sigma}\ll 1\). In this case the \((+\omega_{\kappa},-\omega_{\kappa})\) pairs are sufficiently far apart, which means they correspond approximately to the solutions of the diagonal \(2\times 2\) blocks in \(\Sigma_{z}\hat{h}_{\mathbf{k}\lambda}\). ii) The \(\Delta_{A,B}\) CEF splittings are not too different. More precisely, if we define the various averages \(\Delta_{av}=\frac{1}{2}(\Delta_{A}+\Delta_{B}),\bar{\Delta}=(\Delta_{A}\Delta_{B})^{\frac{1}{2}},\Delta_{m}=[\frac{1}{2}(\Delta_{A}^{2}+\Delta_{B}^{2})]^{\frac{1}{2}}\), the conditions \(\bar{\Delta}/\Delta_{av}\simeq 1,\Delta_{av}/\Delta_{m}\simeq 1\) should be respected. For \(\Delta_{A}=\Delta_{B}\) they hold identically. With these premises the exact dispersions of Eq. (27) may be approximated
by the (positive) exciton energies
\[\omega^{r}_{\pm}(\mathbf{k}\lambda)=\frac{1}{2}(\omega_{A0}( \mathbf{k}\lambda)+\omega_{B0}(\mathbf{k}\lambda)) \tag{28}\] \[\pm\frac{1}{2}\big{[}(\omega_{A0}(\mathbf{k}\lambda)-\omega_{B0}( \mathbf{k}\lambda))^{2}+4\tilde{m}_{A}^{2}\tilde{m}_{B}^{2}|I_{N}(\mathbf{k})| ^{2}\big{]}^{\frac{1}{2}},\] \[\omega_{\sigma 0}(\mathbf{k}\lambda)=\Delta_{\sigma}-\tilde{m}_{ \sigma}^{2}I_{D}^{\sigma}(\mathbf{k}\lambda).\]
It can be seen easily that these modes correspond directly to the eigenvalues of the reduced \(2\times 2\) Hamiltonian
\[\hat{h}^{r}_{\mathbf{k}\lambda}=\left(\begin{array}{cc}\Delta_{A}-\bar{I}^{A}_{D}(\mathbf{k}\lambda)&-\bar{I}^{*}_{N}(\mathbf{k})\\ -\bar{I}_{N}(\mathbf{k})&\Delta_{B}-\bar{I}^{B}_{D}(\mathbf{k}\lambda)\end{array} \right), \tag{29}\]
which corresponds only to the diagonal blocks in the \(4\times 4\) Hamiltonian Eq. (24). Effectively the non-diagonal blocks in \(\Sigma_{z}\hat{h}_{\mathbf{k}\lambda}\) have the effect of coupling the positive and negative frequency solutions \(\pm\omega^{r}_{\kappa}(\mathbf{k}\lambda)\) (\(\kappa=\pm\)) of the two diagonal blocks and produce the exact solutions \(\pm\omega_{\kappa}(\mathbf{k}\lambda)\) of Eq. (26) or Eq. (27).
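A quick numerical check of this correspondence, with the coupling values at a given \((\mathbf{k},\lambda)\) supplied as plain numbers:

```python
import numpy as np

def reduced_vs_exact(dA, dB, g, DA=1.0, DB=1.0):
    """Eigenvalues of the reduced 2x2 block, Eq. (29), against the exact Eq. (26).
    dA, dB are bar I_D^{A,B}(k, lambda) (real); g is bar I_N(k) (complex)."""
    hr = np.array([[DA - dA, -np.conj(g)], [-g, DB - dB]])
    w_red = np.sort(np.linalg.eigvalsh(hr))
    wA2, wB2 = DA*(DA - 2*dA), DB*(DB - 2*dB)
    root = np.sqrt(0.25*(wA2 - wB2)**2 + 4*DA*DB*abs(g)**2)
    w_ex = np.sqrt([0.5*(wA2 + wB2) - root, 0.5*(wA2 + wB2) + root])
    return w_red, w_ex

print(reduced_vs_exact(dA=0.02, dB=-0.02, g=0.03 + 0.01j))  # nearly identical pairs
```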
### Special cases of the singlet-doublet model
Now we return to the exact and general dispersion model Eqs. (26,27). We will discuss a few interesting special cases which have either fewer coupling terms and/or more sublattice equivalences of the model parameters.
_i)_ Absence of symmetric \(2^{\text{nd}}\) neighbor exchange and sublattice equivalence of DM terms: \(I_{2}^{\sigma}=0\), \(D_{J}^{\sigma}=D_{J}\).
In this case Eq. (27) reduces to the simpler form
\[\omega^{2}_{\pm}(\mathbf{k}\lambda)=\frac{1}{2}(\Delta_{A}^{2}+ \Delta_{B}^{2})-\lambda(z_{2}v_{D})(\Delta_{A}-\Delta_{B})\tilde{\gamma}_{D}( \mathbf{k}) \tag{30}\] \[\pm\Big{\{}\frac{1}{4}(\Delta_{A}+\Delta_{B})^{2}[(\Delta_{A}- \Delta_{B})-2\lambda(z_{2}v_{D})\tilde{\gamma}_{D}(\mathbf{k})]^{2}\] \[+4\Delta_{A}\Delta_{B}(zv_{s})^{2}|\gamma(\mathbf{k})|^{2}\Big{\}} ^{\frac{1}{2}},\]
where we introduced the abbreviations \(v_{s}=\tilde{m}^{2}I\) and \(v_{D}=\tilde{m}^{2}D_{J}\). This form gives convenient access to the mode dispersions around the inequivalent zone boundary points \(\mathbf{K}_{\pm}\). The essential part is the 'mass term' (first term in the curly brackets) given by
\[M(\mathbf{K}_{\pm},\lambda)= \tag{31}\] \[\frac{1}{2}(\Delta_{A}+\Delta_{B})[(\Delta_{A}-\Delta_{B})-2 \lambda(z_{2}v_{D})\tilde{\gamma}_{D}(\mathbf{K}_{\pm})],\]
which may be positive or negative depending on conditions and valley position \(\mathbf{K}_{\pm}\) (Secs. IV.2, V). The above equation shows that in general the \(\lambda\)-degeneracy resulting from the \(\Gamma_{3}^{\pm}\) doublet is lifted if, firstly, the CEF splittings are inequivalent and, secondly, the DM term is nonzero. This becomes also clear from the next special case:
_ii)_ Additional equivalence \(\Delta_{A}=\Delta_{B}=\Delta\) in case i):
Then we obtain the further simplified dispersion form
\[\omega^{2}_{\pm}(\mathbf{k})= \Delta\big{\{}\Delta\pm 2[(z_{2}v_{D})^{2}\tilde{\gamma}_{D}( \mathbf{k})^{2}+(zv_{s})^{2}|\gamma(\mathbf{k})|^{2}]^{\frac{1}{2}}\big{\}}. \tag{32}\]
Due to the equivalent CEF splittings the dispersions now retain the twofold degeneracy (\(\lambda=\pm\)) throughout the BZ, therefore this index has been suppressed. As a result only two dispersion curves (\(\kappa=\pm\) due to the two sublattices) are present. We also give the simplified dispersion of the reduced model from Eq. (28) for the same special case:
\[\omega^{r}_{\pm}(\mathbf{k})=\Delta\pm[(z_{2}v_{D})^{2}\tilde{\gamma}_{D}( \mathbf{k})^{2}+(zv_{s})^{2}|\gamma(\mathbf{k})|^{2}]^{\frac{1}{2}}. \tag{33}\]
It is obviously the approximation to Eq. (32) for moderate dispersion (\(v_{s},v_{D}\ll\Delta\)) far from the soft mode regime.
### Expansions of magnetic exciton dispersion around \(\mathbf{K}_{\pm}\) valleys
It is important to understand the behaviour of the exciton bands around the inequivalent valley points \(\mathbf{K}_{\pm}\) because it influences their topological character. It is largely determined by the expansion of the structure functions in Appendix E. For the general case of Eq. (27) we obtain the following result (now \(\kappa=\pm\) and \(\lambda=\pm\) label the two mode pairs, and \(\mathbf{K}_{\pm}\) refers to the two boundary points):
\[\omega_{D}^{\kappa 2}(\mathbf{K}_{\pm},\lambda,\hat{q}) =\omega_{D0}^{2}(\mathbf{K}_{\pm},\lambda) \tag{34}\] \[\quad+\kappa\{M(\mathbf{K}_{\pm},\lambda)^{2}+3\pi^{2}\Delta_{A} \Delta_{B}(v_{s}\hat{q})^{2}\}^{\frac{1}{2}},\]
where we use the scaled momentum \(\hat{q}=(q_{x}^{2}+q_{y}^{2})^{\frac{1}{2}}/(\pi/a)\) with respect to the \(\mathbf{K}_{\pm}\) Dirac points, i.e. \(\mathbf{k}=\mathbf{K}_{\pm}+\mathbf{q}\). The generally distinct energies of the latter are given by \((v_{\sigma}=\tilde{m}_{\sigma}^{2}I_{2}^{\sigma},v_{D}^{\sigma}=\tilde{m}_{ \sigma}^{2}D_{J}^{\sigma}\) and \(\sigma=A,B)\)
\[\omega_{D0}^{2}(\mathbf{K}_{\pm},\lambda)= \frac{1}{2}(\Delta_{A}^{2}+\Delta_{B}^{2})+3(\Delta_{A}v_{A}+ \Delta_{B}v_{B}) \tag{35}\] \[\pm\lambda\sqrt{3}(\Delta_{A}v_{D}^{A}-\Delta_{B}v_{D}^{B}),\]
and depend on valley (\(\pm\)) and \(\Gamma_{3}\) degeneracy index \(\lambda\). The splitting of bands at \(\mathbf{K}_{\pm}\) is determined by the mass term of the square root in Eq. (34) given by
\[M(\mathbf{K}_{\pm},\lambda)= \frac{1}{2}(\Delta_{A}^{2}-\Delta_{B}^{2})+3(\Delta_{A}v_{A}- \Delta_{B}v_{B}) \tag{36}\] \[\pm\lambda\sqrt{3}(\Delta_{A}v_{D}^{A}+\Delta_{B}v_{D}^{B}),\]
The last term leads to different mass values and (generally) splittings at \(\mathbf{K}_{\pm}\) due to its different signs. This originates in the different signs of the DM structure function \(\tilde{\gamma}_{D}(\mathbf{K}_{\pm})=\mp\frac{3\sqrt{2}}{z_{2}}\) (Appendix E). If the mass term vanishes, the exciton bands are all degenerate at \(\mathbf{K}_{\pm}\) and show a linear dispersion around it due to the last term in Eq. (34).
Obviously interchanging the valley position \(\mathbf{K}_{\mp}\) and simultaneously the \(\Gamma_{3}\) states \(\lambda=\pm\) leaves the Dirac point energy and mass term invariant, i.e., they fulfil the symmetry \(\omega_{D0}(\mathbf{K}_{\pm},\lambda)=\omega_{D0}(\mathbf{K}_{\mp},-\lambda)\) and \(M(\mathbf{K}_{\pm},\lambda)=M(\mathbf{K}_{\mp},-\lambda)\).
As in the previous subsection it is again useful to consider the special case \(i)\) where only the CEF splittings are different on A,B. Then, defining the average gap by \(\Delta_{av}=\frac{1}{2}(\Delta_{A}+\Delta_{B})\), we have
\[\omega_{D0}^{2}(\mathbf{K}_{\pm},\lambda)= \frac{1}{2}(\Delta_{A}^{2}+\Delta_{B}^{2})\pm\lambda\sqrt{3}v_{D} (\Delta_{A}-\Delta_{B}), \tag{37}\] \[M(\mathbf{K}_{\pm},\lambda)= \Delta_{av}\big{[}(\Delta_{A}-\Delta_{B})\pm\lambda 2\sqrt{3}v_{D} \big{]}.\]
The square of the exciton dispersion is then given by
\[\omega_{D}^{\kappa 2}(\mathbf{K}_{\pm},\lambda,\hat{q}) =\omega_{D0}^{2}(\mathbf{K}_{\pm},\lambda)+\kappa[M(\mathbf{K}_{ \pm},\lambda)^{2}+D_{0}^{2}\hat{q}^{2}]^{\frac{1}{2}}; \tag{38}\] \[D_{0} =\sqrt{3}\pi(\Delta_{A}\Delta_{B})^{\frac{1}{2}}v_{s}.\]
It is instructive to evaluate directly the dispersion \(\omega_{D}^{\kappa}(\mathbf{K}_{\pm},\lambda,\hat{q})\) at small \(\hat{q}\) for the case of finite mass term
\[\omega_{D}^{\kappa}(\mathbf{K}_{\pm},\lambda,\hat{q}) =\omega_{D0}^{\kappa}(\mathbf{K}_{\pm},\lambda)+\kappa\frac{D_{0 }^{2}}{4|M|\omega_{D0}}\hat{q}^{2}; \tag{39}\] \[\omega_{D0}^{\kappa}(\mathbf{K}_{\pm},\lambda) =\omega_{D0}(\mathbf{K}_{\pm},\lambda)+\kappa\frac{|M|}{2\omega_{D 0}}.\]
The first term describes the split energies at the Dirac points or valleys \(\mathbf{K}_{\pm}\) (first of Eq. (37)). For \(\Delta_{A}\neq\Delta_{B}\) in Eq. (37) there are four distinct energies at each \(\mathbf{K}_{\pm}\) indexed by \((\kappa,\lambda)\) and four corresponding split parabolic exciton bands around them (Fig. 4(b-d)). When we consider the special case \(ii)\) with equal CEF splittings \(\Delta\) on both A,B sublattice these expressions further simplify in an obvious manner with \(\omega_{D0}=\Delta,M(\mathbf{K}_{\pm},\lambda)=\pm\lambda 2\sqrt{3}\Delta v_{D}\) and \(D_{0}=\sqrt{3}\pi\Delta v_{s}\) which results in two degenerate (\(\lambda=\pm\)) pairs of modes. If we turn off the DM interaction (\(v_{D}=0\)) the mass term vanishes and we have to go back to Eq. (38) which then leads to
\[\omega_{D}^{\kappa}(\hat{q})=\Delta+\kappa\frac{\sqrt{3}}{2}\pi v_{s}|\hat{q}|, \tag{40}\]
which describes two Dirac half cone (\(\kappa=\pm\)) exciton dispersions centered around the CEF excitation energy \(\Delta\)
which are identical for \({\bf K}_{\pm}\) and retain the twofold degeneracy with respect to \(\Gamma_{3}\) index \(\lambda\).
## V Topological properties of magnetic exciton modes
Like any kind of dispersive modes, in particular magnons on the ferromagnetically ordered honeycomb lattice, the paramagnetic exciton bands studied here can be characterized according to their topological properties. For 2D systems the relevant quantities to investigate for this purpose are the Berry curvature and the associated Chern number topological invariant.
### Berry curvature and Chern numbers
The topological character of the magnetic exciton bands is determined by the Berry curvature obtained from the effective Hamiltonian matrix \(\tilde{h}({\bf k}\lambda)=\Sigma_{z}\hat{h}({\bf k}\lambda)\) (Eq. (24)) which has, for each \(\lambda=\pm\), two positive \(\omega_{\kappa}({\bf k},\lambda)\) (\(\kappa=\pm,\tau=+\)) and two negative \(-\omega_{\kappa}({\bf k},\lambda)\) (\(\kappa=\pm,\tau=-\)) eigenvalues (from Eq. (27)). The latter are a result of the doubling of degrees of freedom in the Bogoliubov method [28]. The index \(\tau=\pm\) corresponds to the positive or negative set (the sign \(\tau\) in front of \(\omega_{\kappa}({\bf k},\lambda)\)). Then we may combine positive and negative solutions into a single index \(n=(\kappa,\tau)=1-4\) resulting from the sublattice degree of freedom and the Bogoliubov doubling. This is done for each \(\lambda=\pm\) subspace resulting from the \(\Gamma_{3}\) CEF degrees of freedom. The index \(\lambda\) is suppressed as a dummy index in the following; it simply refers to two different sets of bands (which may be completely degenerate in the BZ, as discussed before for special cases). Physically relevant excitations are only the positive energy solutions. The negative solutions, however, do appear in the calculation of the topological quantities.
The topological properties of these bands are described by the Berry curvature given by
\[\Omega_{n}({\bf k})=\nabla_{\bf k}\times i\langle n({\bf k})|\nabla_{\bf k}|n( {\bf k})\rangle, \tag{41}\]
where \(|n({\bf k})\rangle\) denote the eigenvectors corresponding to the eigenvalue equation \(\tilde{h}({\bf k})|n({\bf k})\rangle=\omega_{n}({\bf k})|n({\bf k})\rangle\). This may also be written as (\(\omega_{n}({\bf k})>0\)) [29]:
\[\Omega_{n}({\bf k})=i\sum_{m\neq n}\langle m{\bf k}|\Sigma_{z}\nabla_{\bf k}|n {\bf k}\rangle^{*}\Sigma_{z}^{mm}\times\langle m{\bf k}|\Sigma_{z}\nabla_{\bf k }|n{\bf k}\rangle. \tag{42}\]
An alternative expression more useful for numerical computation is given by [29]
\[\Omega_{n}({\bf k})=\sum_{m\neq n}\frac{i\langle n{\bf k}|\nabla_{\bf k}\hat{ h}_{\bf k}|m{\bf k}\rangle\Sigma_{z}^{mm}\times\langle m{\bf k}|\nabla_{\bf k} \hat{h}_{\bf k}|n{\bf k}\rangle}{(\omega_{n}({\bf k})-\omega_{m}({\bf k}))^{2 }}, \tag{43}\]
where the sum over \(m\) runs over eigenstates with positive _and_ negative energies \(\omega_{m}({\bf k})\). Using the explicit expression of \(\hat{h}_{\bf k}\) and its gradient \(\nabla_{\bf k}\hat{h}_{\bf k}\) as well as the eigenvalues and -vectors of \(\tilde{h}_{\bf k}=\Sigma_{z}\hat{h}_{\bf k}\), the Berry curvature \(\Omega_{n}({\bf k})\) may be computed numerically from the above expression. For the 2D honeycomb models only the \(\Omega_{n}^{z}({\bf k})\) component is nonzero. Explicitly it is given by
\[\Omega_{n}^{z}({\bf k})=\sum_{m\neq n}\frac{i[\langle n{\bf k}|\hat{h}_{\bf k}^{x}|m{\bf k}\rangle\Sigma_{z}^{mm}\langle m{\bf k}|\hat{h}_{\bf k}^{y}|n{\bf k}\rangle-\langle n{\bf k}|\hat{h}_{\bf k}^{y}|m{\bf k}\rangle\Sigma_{z}^{mm}\langle m{\bf k}|\hat{h}_{\bf k}^{x}|n{\bf k}\rangle]}{(\omega_{n}({\bf k})-\omega_{m}({\bf k}))^{2}}. \tag{44}\]
The Chern number characterizing the topological character of magnetic exciton bands (reintroducing now the \(\Gamma_{3}\) index \(\lambda\)) is then obtained by (\(n=(\kappa,\tau)\))
\[C_{n}(\lambda)=\frac{1}{2\pi}\int_{BZ}d{\bf k}\Omega_{n}^{z}({\bf k},\lambda). \tag{45}\]
The \({\bf k}\)-dependence of \(\hat{h}_{\bf k}\) in Eqs. (19,24) stems entirely from that of the structure functions. Therefore the gradients \(\hat{h}_{\bf k\lambda}^{\alpha}=\partial\hat{h}_{\bf k}/\partial k_{\alpha}\) (\(\alpha=x,y\)) required in Eq. (44) may be computed analytically (Appendix F). Because the eigenvectors in Eq. (44) have to be obtained numerically, this is also necessary for the Berry curvature. It is shown in Fig. 6 for some typical parameters for the positive energy bands in the irreducible BZ and will be discussed in more detail in Sec. VI. There are two typical cases to be observed, with the Berry curvature maximum (or negative minimum) located either at the \({\bf K}_{\pm}\) zone boundary symmetry points or at three (\(C_{3v}\) equivalent) off-symmetry points. Whether the Chern number (i.e. the integral of the Berry curvature over the irreducible BZ) is zero (topologically trivial exciton bands) or a nonzero integer (topologically nontrivial exciton bands) depends to some extent on the amount of inversion symmetry breaking (difference of the \(\sigma=A,B\) sublattice parameters \(\Delta_{\sigma},v_{2}^{\sigma},v_{D}^{\sigma}\)), as discussed in Sec. VI. For the sublattice equivalent case when they are all equal, the Chern numbers are all \(\pm 1\) for the four bands and therefore each of them is topologically nontrivial, which should entail the existence of gapless 1D excitonic edge states inside the 2D bulk DM gap at \({\bf K}_{\pm}\). The symmetric case is conveniently accessible by a continuum approximation, i.e. a small momentum approximation around \({\bf K}_{\pm}\), which indeed predicts the existence of edge states as we shall show now.
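The following sketch is a literal transcription of Eqs. (44),(45) for the sublattice-equivalent xy model: the eigenvectors of \(\Sigma_{z}\hat{h}_{\mathbf{k}}\) are para-normalized, \(\Sigma_{z}^{mm}\) is taken as the para-norm sign of state \(m\), the gradients are computed analytically from the (assumed) structure functions of the earlier sketches, and the BZ integral is done on a uniform grid; the conventions may need adjustment against Appendices E,F:

```python
import numpy as np

a = 1.0
delta = a*np.array([[0.0, 1/np.sqrt(3)], [0.5, -1/(2*np.sqrt(3))], [-0.5, -1/(2*np.sqrt(3))]])
rho   = a*np.array([[1.0, 0.0], [0.5, np.sqrt(3)/2], [-0.5, np.sqrt(3)/2]])
b1 = (2*np.pi/a)*np.array([1.0, -1/np.sqrt(3)])    # reciprocal basis vectors of the
b2 = (2*np.pi/a)*np.array([0.0,  2/np.sqrt(3)])    # underlying triangular Bravais lattice
Sz = np.diag([1.0, 1.0, -1.0, -1.0])

def h_and_grads(k, lam, D=1.0, v_s=-0.10, v2=0.0, vD=0.05):
    e, c, s = np.exp(1j*(delta @ k)), np.cos(rho @ k), np.sin(rho @ k)
    g,  dg  = v_s*e.sum(),      v_s*(1j*delta*e[:, None]).sum(0)   # (z v_s) gamma, gradient
    d2, dd2 = 2*v2*c.sum(),     2*v2*(-rho*s[:, None]).sum(0)      # (z2 v2) gamma2, gradient
    dD, ddD = 2*lam*vD*s.sum(), 2*lam*vD*(rho*c[:, None]).sum(0)   # lam (z2 vD) gammaD, gradient
    def asm(gv, dAv, dBv, diag=0.0):
        A = np.array([[diag - dAv, -np.conj(gv)], [-gv, diag - dBv]])
        B = np.array([[-dAv, -np.conj(gv)], [-gv, -dBv]])
        return np.block([[A, B], [B, A]])                          # Hermitian h of Eq. (24)
    return (asm(g, d2 + dD, d2 - dD, diag=D),
            asm(dg[0], dd2[0] + ddD[0], dd2[0] - ddD[0]),
            asm(dg[1], dd2[1] + ddD[1], dd2[1] - ddD[1]))

def berry(k, lam, **p):
    h, hx, hy = h_and_grads(k, lam, **p)
    w, V = np.linalg.eig(Sz @ h)
    order = np.argsort(-w.real); w, V = w.real[order], V[:, order]
    nrm = np.real(np.einsum('in,ij,jn->n', V.conj(), Sz, V))
    V, sgn = V/np.sqrt(np.abs(nrm)), np.sign(nrm)                  # para-normalization
    Om = np.zeros(4)
    for n in range(4):
        for m in range(4):
            if m == n: continue
            z = (V[:, n].conj() @ hx @ V[:, m])*(V[:, m].conj() @ hy @ V[:, n])
            Om[n] += -2.0*sgn[m]*z.imag/(w[n] - w[m])**2           # Eq. (44)
    return Om

def chern(lam=+1, N=48, **p):
    dS = abs(b1[0]*b2[1] - b1[1]*b2[0])/N**2                       # BZ area element
    tot = np.zeros(4)
    for i in range(N):
        for j in range(N):
            tot += berry(((i + 0.5)/N)*b1 + ((j + 0.5)/N)*b2, lam, **p)
    return tot*dS/(2*np.pi)                                        # Eq. (45)

print(np.round(chern(+1), 2))   # near-integer values; +/-1 in the nontrivial regime
```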
### Topological edge modes in continuum approximation
An alternative and direct way to approach the nontrivial topology is provided by the explicit construction of excitonic magnetic edge states within the 2D bulk gap at the \(\mathbf{K}_{\pm}\) valleys which decay exponentially into the bulk. We demonstrate this in the simplified approach mentioned before that neglects the interaction of the \(\pm\omega_{n}(\mathbf{k}\lambda)\) modes in the secular equation. This is acceptable as long as one is not too close to a soft mode situation. It amounts to considering only the reduced \(2\times 2\) Hamiltonian of Eq. (29). For the reduced model we apply the continuum approximation around \(\mathbf{K}_{\pm}\) by setting \(\mathbf{k}=\mathbf{K}_{\pm}+\mathbf{q}^{\prime}\) where \(\mathbf{q}^{\prime}\) is expressed in the rotated Cartesian coordinate systems defined in Appendix E. We first focus on \(\mathbf{K}_{+}\). The
Figure 6: Density plot of the Berry curvature in the BZ corresponding to bands \((\kappa,\lambda)=(+,+),(-,+),(+,-),(-,-)\) from left to right in each row, according to decreasing energy for each \(\lambda\). Parameters are the same as in Fig. 5(a). First we show two cases according to the symbols \((\star,\diamond)\) in Fig. 5(a) to the right and left of the topological boundary. (a-d) Chern number \(0\) at the \(\star\) point \(\epsilon=0.2\) and \(\epsilon_{D}=0.2\) in Fig. 5(a); (e-h) Chern number \(\pm 1\) at the \(\diamond\) left boundary point \(\epsilon=0.175\) and \(\epsilon_{D}=0.2\) in Fig. 5(a). The Berry curvature has both \(\pm\) signs in each panel of (a-d) so the integration gives a zero Chern number; however, it has only positive or only negative values in each panel of (e-h) and therefore a finite Chern number \(\pm 1\). In (i-l) \(\epsilon=0.15\) is relatively small compared to \(\epsilon_{D}\), leading to a shift of the Berry curvature extrema to three \(C_{3v}\) equivalent incommensurate positions closer to the M-point; however the Chern number is still \(\pm 1\), corresponding to \(\bullet\) in Fig. 5(a).
\(q^{\prime}_{x}\) direction corresponds to the zigzag chain direction in real space, which we consider as an edge of the semi-infinite honeycomb lattice. Then we have to replace the perpendicular coordinate according to \(q^{\prime}_{y}\rightarrow-i\partial_{y^{\prime}}\) in the reduced Hamiltonian above. For the simplified equivalent sublattice case \(ii)\) of Sec. IV.2 (\(v_{2}=0\)) we obtain
\[\hat{h}^{r}(q^{\prime}_{x}\lambda,y)=\left(\begin{array}{cc}\Delta+\lambda \delta_{D}&(zv_{s})\xi(q^{\prime}_{x}-\partial_{y^{\prime}})\\ (zv_{s})\xi(q^{\prime}_{x}+\partial_{y^{\prime}})&\Delta-\lambda\delta_{D} \end{array}\right), \tag{46}\]
where \(\xi=\frac{a}{2\sqrt{3}}\) and \(\lambda\delta_{D}=\lambda 3\sqrt{2}v_{D}\) describes the effect of the DM interaction which importantly has _opposite_ sign on the two sublattices. As an ansatz wave function for the excitonic edge eigenstate we use \({\bf w}(q^{\prime}_{x},y)={\bf w}_{0}e^{iq^{\prime}_{x}x^{\prime}}e^{-\kappa _{D}y^{\prime}}\). The corresponding eigenvalue equation \(\hat{h}^{r}(q^{\prime}_{x}\lambda,y){\bf w}(q^{\prime}_{x},y)=\omega{\bf w}( q^{\prime}_{x},y)\) then leads to the secular equation
\[\left|\begin{array}{cc}\Delta+\lambda\delta_{D}-\omega&(zv_{s})\xi(q^{ \prime}_{x}+\kappa_{D})\\ (zv_{s})\xi(q^{\prime}_{x}-\kappa_{D})&\Delta-\lambda\delta_{D}-\omega\end{array} \right|=0, \tag{47}\]
which has \(\lambda\) degenerate solutions
\[\omega_{\pm}=\Delta\pm\big{[}(\delta_{D}^{2}-(zv_{s})^{2}\xi^{2}\kappa_{D}^{2 })+\xi^{2}(zv_{s})^{2}q^{{}^{\prime}2}_{x}\big{]}^{\frac{1}{2}}. \tag{48}\]
Choosing \(\kappa_{D}=\frac{|\delta_{D}|}{zv_{s}\xi}=\frac{\sqrt{3}}{2}\frac{v_{D}}{v_{s}}\) we obtain gapless edge mode dispersions (\(\kappa=\pm\))
\[\omega_{\kappa}(q^{\prime}_{x})=\Delta+\kappa(zv_{s})\xi|q^{\prime}_{x}|= \Delta\pm\frac{\sqrt{3}}{2}\pi v_{s}|\hat{q}_{x}|, \tag{49}\]
where \(\hat{q}_{x}=q^{\prime}_{x}/(\pi/a)\). This describes a 1D Dirac cone of excitonic edge modes emerging from the Dirac point \(\omega_{D0}=\Delta\) with momentum oriented along the zigzag chain direction. The calculation is equivalent for the \({\bf K}_{-}\) value with the replacement \((\delta_{D},\kappa_{D})\rightarrow(-\delta_{D},-\kappa_{D})\) in Eq. (47) leading to the same dispersion for the edge modes around \({\bf K}_{-}\). The edge mode dispersion approaches asymptotically the gapped bulk mode dispersion (for \(\Delta_{A,B}=\Delta\)) of Eq. (38) for \(q^{\prime}_{y}=0\) and becomes identical to this mode (Eq. (40)) when the gap closes (\(v_{D}\to 0\) and \(\kappa_{D}\to 0\)).
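The cancellation that produces the gapless branch can be checked directly: with \(\kappa_{D}=|\delta_{D}|/(zv_{s}\xi)\) the secular determinant of Eq. (47) vanishes identically on the dispersion Eq. (49). A small sketch with illustrative parameter values:

```python
import numpy as np

Delta, v_s, v_D, a = 1.0, 0.10, 0.03, 1.0
z, xi = 3, a/(2*np.sqrt(3))
delta_D = 3*np.sqrt(2)*v_D
kappa_D = abs(delta_D)/(z*v_s*xi)            # inverse decay length of the edge state

def edge_det(qx, omega):
    """Determinant of Eq. (47); independent of lambda since (D+d-w)(D-d-w) = (D-w)^2 - d^2."""
    return ((Delta - omega)**2 - delta_D**2
            - (z*v_s*xi)**2*(qx + kappa_D)*(qx - kappa_D))

for qx in (0.0, 0.5, 1.0):
    omega = Delta + z*v_s*xi*abs(qx)         # kappa = +1 branch of Eq. (49)
    print(qx, edge_det(qx, omega))           # vanishes up to rounding
```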
It is interesting to consider an alternative case of the simplified model without the \(2^{\rm nd}\) neighbor DM exchange (\(v_{D}=0\)) but instead including the \(2^{\rm nd}\) neighbor symmetric exchange (\(v_{2}\neq 0\)). In this case the essential difference from Eq. (47) is the lack of a sign change in \(\delta_{2}=3\sqrt{2}v_{2}\) between the sublattices, leading simply to a renormalization of the CEF splitting \(\tilde{\Delta}=\Delta+\delta_{2}\). Therefore the secular equation has no solution for edge states for \(q^{\prime}_{x}\to 0\) and only bulk states are present. We conclude that the general structure of the magnetic exciton models discussed here always requires a nonzero DM interaction for the existence of topological edge states.
## VI Discussion of numerical results for the xy-type model
We already discussed the magnetic excitons in the simple Ising type model (Sec. III) and now focus on the more intricate results of the xy-type model (Sec. IV).
For a first impression one may restrict to the special models of Sec. IV.2. The restricted parameter set is then given by the CEF splitting energies \(\Delta_{A},\Delta_{B}\) and the sublattice-equivalent interaction energy parameters \(v_{s}=\tilde{m}^{2}I\) and \(v_{D}=\tilde{m}^{2}D_{J}\) corresponding to \(1^{\rm st}\) neighbor (\(z=3\)) (A-B) symmetric exchange and \(2^{\rm nd}\) neighbor (\(z_{2}=6\)) (A-A, B-B) DM exchange. The energy unit for these parameters may be chosen as the average \(\Delta\) and we use the representation \(\Delta_{A,B}=\Delta(1\pm\epsilon)\) etc. (Appendix B).
Some representative dispersion results for these special cases of the xy model are shown in Fig. 4. In (a,b) we also set the intra-sublattice \(v_{2}=0\), therefore the splitting of modes caused by the inter-sublattice interaction \(v_{s}\) is nearly symmetric around \(\Delta\). In (a), when \(\epsilon=0\), the upper and lower modes (\(\kappa=\pm\)) inherit the twofold degeneracy with respect to the CEF \(\Gamma_{3}\) index \(\lambda=\pm\). If the DM interaction vanishes (\(v_{D}=0\)) the two pairs of modes are fully degenerate at the \({\bf K}_{\pm}\) zone boundary points (dashed lines), but for non-vanishing \(v_{D}\) the degeneracy is lifted and a gap appears. The gap persists in the case of inequivalent CEF splittings (\(\epsilon\neq 0\)) (b). Now the fourfold degeneracy is completely removed because the \(\lambda=\pm\) modes are no longer degenerate. This is also true when a finite \(v_{2}\) is included, which removes the approximate reflection symmetry of the lower and upper branches (d). Note the important point that in both cases the ordering of the \(\lambda=\pm\) modes (corresponding to the coloring green/red) is interchanged at \({\bf K}_{\pm}\). This is due to the symmetry \(\omega_{\kappa}({\bf k},\lambda)=\omega_{\kappa}(-{\bf k},\bar{\lambda})\) and the fact that \({\bf K}_{-}\) is equivalent to \(-{\bf K}_{+}\). For comparison we also show a case where the CEF splittings are equivalent (\(\epsilon=0\)) but the DM coupling strengths \(v_{D}^{A,B}\) are not, again with \(v_{2}=0\) (c). It looks similar to (b) but the band ordering is changed such that the gaps at \({\bf K}_{\pm}\) do not depend on \(\lambda\), in contrast to (b).
Now we discuss the topological properties of the magnetic exciton bands. The crucial role there is played by the DM interaction which opens the necessary gap at \({\bf K}_{\pm}\) for nontrivial topology (nonzero Chern number). In the inversion symmetric case with all A,B sublattice parameters equivalent the Chern number is always nonzero in the (\(v_{s},v_{D}\)) plane, as shown in Fig. 5. This agrees with the fact that in the inversion symmetric case the continuum approximation demonstrates the existence of edge modes, as shown in the previous section. The introduction of A,B sublattice asymmetry, e.g. by assuming different CEF splittings \(\Delta_{A,B}=\Delta(1\pm\epsilon)\), can destroy the topological state, leading to a vanishing Chern number as shown by the example denoted by \(\star\) in Fig. 5; this shows that the inequivalence of \(\Delta_{A,B}\) should stay below a threshold to achieve topologically nontrivial bands with \(C=\pm 1\).
To obtain an intuition for how the vanishing and non-zero Chern numbers arise we also plot the Berry curvature \(\Omega_{n}^{z}({\bf k},\lambda)\) in the irreducible wedge of the BZ for the different sets of (positive energy) bands \(\omega_{n}({\bf k},\lambda)\) with \(n=(\kappa,\tau=+)\), leading to four panels in each row corresponding to all four choices of \((\lambda=\pm,\kappa=\pm)\). We show these four panels for three cases corresponding to the trivial (a-d) (\(C_{n}(\lambda)=0\)) and nontrivial (e-h, i-l) (\(C_{n}(\lambda)=\pm 1\)) regions of Fig. 5, marked by the symbols \(\star,\diamond,\bullet\), respectively. According to Eq. (44) the extremum of the Berry curvature occurs close to the points where the exciton band gap is smallest. This naturally happens at \({\bf K}_{\pm}\) unless the splitting is dominated by the DM interaction, as discussed below. From the dispersion plots in Fig. 4 it is seen that for a given \(\lambda\) the gaps at \({\bf K}_{\pm}\) are unequal, with an inverted order for the opposite \(\lambda\). This means the main extremum is situated either on \({\bf K}_{-}\) or \({\bf K}_{+}\) for a given \(\lambda\). In the trivial case (Fig. 6(a-d)) the (absolutely) large Berry curvature values at the extrema are compensated by opposite sign values in their surroundings in the irreducible sector, integrating to a zero Chern number. In the nontrivial case (Fig. 6(e-h)) the sign is the same everywhere and the integration leads to Chern numbers \(\pm 1\). Depending on parameters, in particular when the DM interaction \(v_{D}\) is large, the minimum gap may shift from \({\bf K}_{\pm}\) to other (\(C_{3v}\) equivalent) incommensurate positions closer to the M point in the irreducible BZ sector. Such a case is presented in the Berry curvature plot of Fig. 6(i-l). However the Chern number is still C=\(\pm 1\) since one stays in the nontrivial regime of Fig. 5.
Finally we comment on the absence of a thermal Hall effect in the present paramagnetic case. The thermal Hall effect has been proposed and investigated many times [4; 6; 9; 30; 31; 32] for the FM ordered honeycomb lattice. In this case time reversal symmetry is broken and an intrinsic nonzero thermal Hall current carried by the topological magnonic edge states may appear. It vanishes, however, on the antiferromagnetic honeycomb lattice [33] due to the twofold degeneracy of magnon modes caused by a symmetry operation consisting of the product of time reversal and inversion [34]. The situation is similar here in the equivalent sublattice model due to the \(\lambda\) degeneracy of the magnetic excitons. But even in the asymmetric case when all modes are split we have the symmetry \(\Omega_{z}({\bf k},\lambda)=-\Omega_{z}(-{\bf k},\bar{\lambda})\), which can be seen from Fig. 6. Since the thermal Hall conductivity involves a summation over \({\bf k},\lambda\) it will vanish also for the most general case of exciton bands, which is consistent with the paramagnetic state.
## VII Summary and conclusion
In this work we have developed a comprehensive theory of paramagnetic excitons on the honeycomb lattice originating from the localized CEF excitations of 4f electrons of lanthanide elements on the two sublattice sites. We assumed a general case where the inversion symmetry may be broken due to different chemical environments of the sublattices. We focused on a model _without magnetic order_, which may be realized for integer-\(J\) lanthanide ions like Pr or Tm where the CEF ground state can be a nonmagnetic singlet. Specifically we treated the \(J=4\) based case of an Ising-type singlet-singlet model and an xy-type singlet-doublet model allowed by the \(C_{3v}\) site symmetry and with CEF splitting energies \(\Delta_{A,B}\). The effective inter-site interactions comprise symmetric intra- and inter-sublattice exchange in both models, as well as DM-type asymmetric exchange for the xy-type model, allowed by the lack of an inversion center on \(2^{\rm nd}\)-neighbor A-A, B-B bonds. These interactions lead to dispersive magnetic excitons in the paramagnetic state with characteristic properties enforced by the underlying honeycomb symmetry. The dispersion increases with decreasing temperature due to the thermal population effect of the CEF levels.
In the Ising-type case there are two modes which are split by the A,B inter-sublattice exchange. If inversion symmetry is present, the honeycomb structure enforces the degeneracy of these modes at the zone boundary \({\bf K}_{\pm}\) points. This degeneracy is lifted if the two sublattices become inequivalent (e.g. have different splittings \(\Delta_{A,B}\)). For sufficiently strong exchange interactions one mode may turn into a precursor soft mode for an induced magnetic order of the spiral type. The Ising-type model cannot support a DM asymmetric exchange and therefore its magnetic excitons are topologically trivial.
This changes in the xy-type singlet-doublet model, which supports the DM exchange term. The Fourier transform of the asymmetric exchange is non-vanishing at the \({\bf K}_{\pm}\) points. Due to the doublet degeneracy there are now generally four modes present. The symmetric intersite exchange splits them only into two pairs if the A,B sublattices are still equivalent; however, even in this case the gap caused by the DM term is preserved at \({\bf K}_{\pm}\). The remaining pair degeneracy is lifted throughout the BZ for sublattice-inequivalent CEF splitting or exchange, except along the \(\Gamma M\) symmetry direction.
The xy-type model with a nonzero DM term can support topologically nontrivial magnetic excitons even though there is _no magnetic order_ present. This distinguishes the present model from all previously investigated magnetic honeycomb models [4], which all use (anti-)ferromagnetic order as a precondition to obtain topological magnon states. We have shown that indeed the nonzero Chern numbers of topological paramagnetic excitons are stable over a wide range of parameter space, in particular for all parameters in the A,B sublattice-equivalent case. The peculiar structure of the underlying Berry curvature in the irreducible BZ sector has been mapped out. Furthermore we have shown, within a continuum approximation for the sublattice-symmetric case, that magnetic exciton edge modes inside the 2D bulk magnetic exciton gap at \({\bf K}_{\pm}\) exist and that their decay length is governed by the ratio of asymmetric DM exchange to symmetric inter-sublattice exchange. This suggests extending the present analysis with an investigation of edge states of the xy-type magnetic exciton model within a numerical diagonalization approach for various edge and stripe geometries of the honeycomb lattice. Because of the paramagnetic state, time reversal symmetry is not broken, and as a consequence these edge modes do not support a thermal Hall effect, another distinction from the magnon topological excitations in the magnetically ordered honeycomb lattice. However it is possible that, as in the magnetically ordered honeycomb models, a finite-temperature (pseudo-spin) Nernst effect [7; 33; 34] may exist in the paramagnetic exciton case, which should be investigated based on the analysis in this work.
###### Acknowledgements.
A.A. gratefully acknowledges Ali G. Moghaddam for useful discussions.
## Appendix A CEF potential with \(\mathbf{C_{3v}}\) symmetry, levels and eigenstates
Here we discuss in some detail the \(J=4\) CEF states for the less common \(C_{3v}\) symmetry of the crystalline electric field potential on the honeycomb lattice. The corresponding CEF Hamiltonian is given in terms of Stevens operators \(O_{n}^{m}(\mathbf{J})\) of the ground state J-multiplet, which are polynomials of \((n-m)^{\text{th}}\) order in \(J_{z}\) and \(m^{\text{th}}\) order in \(J_{\pm}\):
\[H_{\text{CEF}} =B_{2}^{0}O_{2}^{0}+B_{4}^{0}O_{4}^{0}+B_{6}^{0}O_{6}^{0}\] \[+B_{4}^{3}O_{4}^{3}+B_{6}^{3}O_{6}^{3}+B_{6}^{6}O_{6}^{6}. \tag{10}\]
It may be represented as a \((2J+1)\times(2J+1)\) matrix in the space spanned by free ion states \(|J,M\rangle\) (\(|M|\leq J\)). If we rearrange the natural sequence (decreasing \(M\)) of \(|J,M\rangle\) states suitably \(H_{\text{CEF}}\) can be written in block-diagonal form according to
\[H_{\text{CEF}}=\left(\begin{array}{c|cccccccccc}&3&0&-3&4&1&-2&2&-1&-4\\ \hline 3&d_{3}&m_{30}&m_{33}&0&0&0&0&0&0\\ 0&m_{30}&d_{0}&-m_{30}&0&0&0&0&0&0\\ -3&m_{33}&-m_{30}&d_{3}&0&0&0&0&0&0\\ 4&0&0&0&d_{4}&m_{41}&m_{42}&0&0&0\\ 1&0&0&0&m_{41}&d_{1}&-m_{21}&0&0&0\\ -2&0&0&0&m_{42}&-m_{21}&d_{2}&0&0&0\\ 2&0&0&0&0&0&0&d_{2}&m_{21}&m_{42}\\ -1&0&0&0&0&0&0&m_{21}&d_{1}&-m_{41}\\ -4&0&0&0&0&0&0&m_{42}&-m_{41}&d_{4}\end{array}\right), \tag{11}\]
where the first row and column denote the free ion \(M\) value. In terms of the CEF parameters \(B_{n}^{m}\) the matrix entries are given by
\[\begin{split}& d_{4}&=28\left[B_{2}^{0}+30\left(B_{4}^{0}+6B_{6 }^{0}\right)\right],\\ & d_{3}&=7\left[B_{2}^{0}-180\left(B_{4}^{0}+17B_{6}^{0}\right) \right],\\ & d_{2}&=-8B_{2}^{0}-660\left(B_{4}^{0}-42B_{6}^{0}\right),\\ & d_{1}&=-17B_{2}^{0}+180\left(3B_{4}^{0}+7B_{6}^{0}\right),\\ & d_{0}&=-20\left(B_{2}^{0}-54B_{4}^{0}+1260B_{6}^{0}\right),\\ & m_{41}&=15\sqrt{14}\left(B_{4}^{3}+24B_{6}^{0}\right),\\ & m_{42}&=720\sqrt{7}B_{6}^{6},\\ & m_{30}&=9\sqrt{35}\left(B_{4}^{3}-20B_{6}^{3}\right),\\ & m_{33}&=2520B_{6}^{6},\\ & m_{21}&=15\sqrt{2}\left(B_{4}^{3}-42B_{6}^{3}\right).\end{split} \tag{12}\]
For the eigenvalues and eigenvectors of the three singlets (\(\Gamma_{1a,b},\Gamma_{2}\)) we obtain
\[\begin{split}& E_{1a}=\frac{1}{2}\left(\beta-\sqrt{8\gamma^{2}+ \delta^{2}}\right),\\ & E_{1b}=\frac{1}{2}\left(\beta+\sqrt{8\gamma^{2}+\delta^{2}} \right),\\ & E_{2}=\alpha.\end{split} \tag{13}\]
where we defined
\[\begin{split}&\alpha=d_{3}+m_{33},\\ &\beta=d_{0}+d_{3}-m_{33},\\ &\gamma=m_{30},\\ &\delta=d_{0}-(d_{3}-m_{33}).\end{split} \tag{14}\]
The corresponding singlet eigenfunctions are given by
\[\begin{split}\ket{\Gamma_{1a}}&=\cos\theta\ket{4,0}+ \sin\theta\frac{1}{\sqrt{2}}\left(\ket{4,3}-\ket{4,-3}\right),\\ \ket{\Gamma_{1b}}&=-\sin\theta\ket{4,0}+\cos\theta \frac{1}{\sqrt{2}}\left(\ket{4,3}-\ket{4,-3}\right),\\ \ket{\Gamma_{2}}&=\frac{1}{\sqrt{2}}\left(\ket{4,3}+ \ket{4,-3}\right),\\ \cos\theta&=\frac{1}{\sqrt{2}}\sqrt{1+\frac{1}{ \sqrt{1+t^{2}}}},\\ \sin\theta&=\frac{1}{\sqrt{2}}\sqrt{1-\frac{1}{ \sqrt{1+t^{2}}}},\\ & t:=\tan(2\theta)=\frac{2\sqrt{2}\gamma}{\delta},\quad 0\leq \theta\leq\frac{\pi}{4}.\end{split} \tag{10}\]
We note that the _anti_symmetric linear combination of the \(\ket{4,\pm 3}\) states belongs to the totally symmetric \(\Gamma_{1}\) representation while the symmetric linear combination belongs to \(\Gamma_{2}\).
Because \(\Gamma_{2}\) is determined by symmetry alone, the eigenvalues and -vectors of the remaining singlets \(\Gamma_{1a,b}\) are obtained as explicit solutions of a quadratic equation. This factorisation of the original \(3\times 3\) matrix problem (upper left block in \(H_{\text{CEF}}\)) is due to the fact that the two entries \((d_{3},d_{3})\) appear pairwise. In the second and third blocks (which give the twofold degenerate levels of the three doublets), however, the equivalent entries \((d_{2},d_{4})\) are generally different; therefore the eigenvalues and -vectors result from a true cubic equation. It is too tedious and not useful to give their explicit expressions. In the special case when the CEF parameters fulfil a constraint such that \(d_{2}=d_{4}\), the three doublet eigenvalues also factorize into one isolated value and a pair resulting from a quadratic equation.
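As a quick sanity check on the singlet sector, the minimal numpy sketch below diagonalizes the upper-left \(3\times 3\) block of \(H_{\text{CEF}}\) and compares the spectrum with the closed-form \(E_{1a},E_{1b},E_{2}\) above; the numerical values of \(d_{0},d_{3},m_{30},m_{33}\) are arbitrary placeholders, not fitted CEF parameters.

```
import numpy as np

d3, d0, m30, m33 = 0.7, -0.3, 0.4, 0.2            # arbitrary test entries

# Upper-left block of H_CEF in the rearranged basis (|4,3>, |4,0>, |4,-3>).
H = np.array([[d3,   m30,  m33],
              [m30,  d0,  -m30],
              [m33, -m30,  d3]])

alpha = d3 + m33
beta  = d0 + d3 - m33
gamma = m30
delta = d0 - (d3 - m33)
E1a = 0.5 * (beta - np.sqrt(8 * gamma**2 + delta**2))
E1b = 0.5 * (beta + np.sqrt(8 * gamma**2 + delta**2))

print(np.sort(np.linalg.eigvalsh(H)))             # numerical spectrum
print(np.sort([E1a, alpha, E1b]))                 # closed-form spectrum agrees
```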
Nevertheless it is possible to parameterize the form of the doublet eigenfunctions. From the second and third block of the matrix representation of \(H_{\text{CEF}}\) in Eq. (11) we can read off that they correspond to superpositions like
\[\ket{\Gamma_{3}^{\pm}}=u|4,\pm 4\rangle+v|4,\mp 2\rangle\pm w|4,\pm 1\rangle \tag{11}\]
with normalized coefficients \(u,v,w\) which we interpret as coordinates of a point on the surface of a 3d unit sphere spanned by the \(\ket{J,\pm M}\) states. Orthonormality is ensured by writing the doublets in the form
\[\begin{split}\ket{\Gamma_{3a}^{\pm}}&=\sin\chi \left(\cos\phi\ket{4,\pm 4}+\sin\phi\ket{4,\mp 2}\right)\\ &\pm\cos\chi\ket{4,\pm 1},\\ \ket{\Gamma_{3b}^{\pm}}&=\left(\cos\alpha\cos\chi\cos\phi- \sin\alpha\sin\phi\right)\ket{4,\pm 4}\\ &+\left(\cos\alpha\cos\chi\sin\phi+\sin\alpha\cos\phi\right)\ket{4,\mp 2}\\ &\mp\cos\alpha\sin\chi\ket{4,\pm 1},\\ \ket{\Gamma_{3c}^{\pm}}&=\left(-\sin\alpha\cos\chi\cos \phi-\cos\alpha\sin\phi\right)\ket{4,\pm 4}\\ &+\left(-\sin\alpha\cos\chi\sin\phi+\cos\alpha\cos\phi\right)\ket{4,\mp 2}\\ &\pm\sin\alpha\sin\chi\ket{4,\pm 1}.\end{split} \tag{12}\]
The three independent angles \(\chi\), \(\phi\), and \(\alpha\) are determined by the three roots of the secular equation of the Hamiltonian doublet block submatrix. The coefficients of the \(\ket{\Gamma_{3x}^{+}}\) states turn out to be nothing else than the columns of the Euler-angle parametrization of the 3d rotation matrix, associating \(\alpha_{\text{Euler}}\rightarrow\phi\), \(\beta_{\text{Euler}}\rightarrow\chi\), \(\gamma_{\text{Euler}}\rightarrow\alpha\), and the columns like \(1\rightarrow\ket{\Gamma_{3b}^{+}}\), \(2\rightarrow\ket{\Gamma_{3c}^{+}}\), \(3\rightarrow\ket{\Gamma_{3a}^{+}}\). This holds equivalently with \(\beta_{\text{Euler}}\rightarrow\pi-\chi\) for the \(\ket{\Gamma_{3x}^{-}}\) states.
## Appendix B Collection of parameters for numerical calculations
We use parameters that absorb the matrix elements \(m_{\sigma}\) and \(\tilde{m}_{\sigma}\) of the Ising and \(xy\) cases, respectively, into the interaction parameters so that the matrix elements do not appear explicitly. This is done by defining the quantities (dimension of energy) \(v_{s},v_{2}^{\sigma}\) and \(v_{D}^{\sigma}\) (\(\sigma=\)A,B sublattice); for brevity we also use the notation \(v_{2}^{A,B}=v_{2}(1\pm\epsilon_{2})\) and \(v_{D}^{A,B}=v_{D}(1\pm\epsilon_{D})\) in the same manner as we have used \(\Delta^{A,B}=\Delta(1\pm\epsilon)\) before. Here \(\epsilon,\epsilon_{2}\) and \(\epsilon_{D}\) characterize the amount of inversion symmetry breaking between the sublattices. There are three (five) possible Ising (\(xy\)) model parameters given by (coordination numbers \(z=3\), \(z_{2}=6\)):
_Ising-type model:_
\[\begin{split} v_{s}=&(m_{A}m_{B}I);\\ v_{2}^{\sigma}=&(m_{\sigma}^{2}I_{2}^{\sigma}), \end{split} \tag{13}\]
leading to
\[\begin{split} m_{A}m_{B}|I_{N}(\mathbf{k})|=&(zv_{s})|\gamma(\mathbf{k})|\\ m_{\sigma}^{2}I_{D}^{\sigma}(\mathbf{k})=&(z_{2}v_{2}^{\sigma})\gamma_{2}(\mathbf{k}).\end{split} \tag{14}\]
_xy-type model:_
\[\begin{split} v_{s}=&(\tilde{m}_{A}\tilde{m}_{B}I);\\ v_{2}^{\sigma}=&(\tilde{m}_{\sigma}^{2}I_{2}^{ \sigma});\\ v_{D}^{\sigma}=&(\tilde{m}_{\sigma}^{2}I_{D}^{\sigma}), \end{split} \tag{15}\]
leading to
\[\begin{split}\tilde{m}_{A}\tilde{m}_{B}|I_{N}(\mathbf{k})|=|\tilde{I}_{N}(\mathbf{k})|=(zv_{s})|\gamma(\mathbf{k})|,\\ \tilde{m}_{A}^{2}I_{D}^{A}(\mathbf{k}\lambda)=\tilde{I}_{D}^{A}(\mathbf{k}\lambda)=(z_{2}v_{2}^{A})\gamma_{2}(\mathbf{k})+\lambda(z_{2}v_{D}^{A})\tilde{\gamma}_{D}(\mathbf{k}),\\ \tilde{m}_{B}^{2}I_{D}^{B}(\mathbf{k}\lambda)=\tilde{I}_{D}^{B}(\mathbf{k}\lambda)=(z_{2}v_{2}^{B})\gamma_{2}(\mathbf{k})-\lambda(z_{2}v_{D}^{B})\tilde{\gamma}_{D}(\mathbf{k}).\end{split} \tag{16}\]
It is clear that a full exploration of the model in the five-dimensional parameter space would be impractical. Therefore only typical cases will be considered, with some sublattice parameters set equal and/or some parameters set to zero.
In the definition of the Hamiltonians we choose the convention that positive \(I,I_{2}^{\sigma}\) correspond to FM exchange and negative ones to AF exchange. The same convention then applies to \(v_{s}\) and \(v_{2}^{\sigma}\) if we make the reasonable restriction that the \(m_{A}\) and \(m_{B}\) matrix elements have the same sign. The sign of \(I_{D}^{\sigma}\) is not essential, as the DM interaction alternates from bond to bond and from A to B. A change in the sign of \(I_{D}^{\sigma}\) or \(v_{D}^{\sigma}\) just means a redefinition \(\lambda\rightarrow-\lambda\) of the notation in the exciton bands.
## Appendix C RPA response function approach for the xy-type model
In this model the twofold \(\Gamma_{3}^{\pm}\) excited state degeneracy (\(\lambda=\pm\)) and the two sublattices lead in principle to a \(4\times 4\) susceptibility matrix, which however is the direct sum of \(2\times 2\) matrices, so that instead of Eq. (6) we now have
\[\begin{split}\hat{\chi}(\mathbf{k},\lambda,i\omega_{n})& =[1-\hat{I}(\mathbf{k}\lambda)\hat{u}(i\omega_{n})]^{-1}\hat{u}(i \omega_{n});\\ \hat{u}(i\omega_{n})=&\left(\begin{array}{cc}u_{A }(i\omega_{n})&0\\ 0&u_{B}(i\omega_{n})\end{array}\right);\\ \hat{I}(\mathbf{k}\lambda)=&\left(\begin{array}{cc}I_{D}^{ \mathrm{A}}(\mathbf{k}\lambda)&I_{N}(\mathbf{k})\\ I_{N}^{\ast}(\mathbf{k})&I_{D}^{B}(\mathbf{k}\lambda)\end{array}\right),\end{split} \tag{28}\]
where the exchange matrix elements are defined in Appendix B above. The single ion susceptibility (the sum of \(xx\) and \(yy\) components) is given by
\[u_{\sigma}(i\omega_{n})=\frac{2\tilde{m}_{\sigma}^{2}\Delta_{\sigma}P_{\sigma }(T)}{\Delta_{\sigma}^{2}-(i\omega_{n})^{2}}. \tag{29}\]
Now the thermal population factor for the singlet-doublet case is \(P_{\sigma}(T)=\tanh\frac{\Delta_{\sigma}}{2T}(1+f_{\sigma})^{-1}\) where \(f_{\sigma}=\frac{1}{2}(1-\tanh\frac{\Delta_{\sigma}}{2T})\). The poles of the dynamical susceptibility associated with magnetic exciton modes may then be obtained in a completely analogous way to the Ising model case, except for the additional mode index \(\lambda\) resulting from the \(\Gamma_{3}\) degeneracy:
\[\begin{split}\omega_{\pm}^{2}(\mathbf{k}\lambda)=& \frac{1}{2}(\omega_{A}^{2}(\mathbf{k}\lambda)+\omega_{B}^{2}( \mathbf{k}\lambda))\\ &\pm\Big{[}\frac{1}{4}(\omega_{A}^{2}(\mathbf{k}\lambda)-\omega_{ B}^{2}(\mathbf{k}\lambda))^{2}\\ &+4\tilde{m}_{A}^{2}\tilde{m}_{B}^{2}\Delta_{A}\Delta_{B}P_{A}P_{B }|I_{N}(\mathbf{k})|^{2}\Big{]}^{\frac{1}{2}};\\ \omega_{\sigma}^{2}(\mathbf{k}\lambda)=&\Delta_{ \sigma}[\Delta_{\sigma}-2\tilde{m}_{\sigma}^{2}P_{\sigma}I_{D}^{\sigma}( \mathbf{k}\lambda)].\end{split} \tag{30}\]
In the zero temperature limit \(P_{\sigma}\to 1\) and the above expression is completely equivalent to the xy-model exciton dispersions obtained from the Bogoliubov approach (Eq. (26)). Likewise the spectral function of the magnetic response is given in an obvious generalization as
\[S(\mathbf{k},\omega)=\frac{1}{\pi}\sum_{\lambda}\Bigl{(}\mathrm{Im}\hat{\chi}_ {AA}(\mathbf{k}\lambda,\omega)+\mathrm{Im}\hat{\chi}_{BB}(\mathbf{k}\lambda, \omega)\Bigr{)}. \tag{31}\]
## Appendix D Geometric properties of honeycomb lattice and Brillouin zone
The honeycomb lattice (Fig. 1) has two basis atoms denoted by A,B a distance \(d\) apart (n.n. distance A-B). The lattice constant is denoted by \(a\) (n.n.n. distance A-A or B-B). They are related by \(d=a/\sqrt{3}\). We generally use the lattice constant \(a\) in the direct lattice and \(2\pi/a\) in the reciprocal lattice as units. The three vectors to n.n. sites \(\delta_{i}\) and the six vectors to n.n.n. sites \(\pm\tilde{\delta}_{i}\) (i=1-3) are given by
\[\begin{split}\delta_{1}=&(\frac{\sqrt{3}}{6},\frac{ 1}{2})a;\ \ \delta_{2}=(\frac{\sqrt{3}}{6},-\frac{1}{2})a;\ \ \delta_{3}=(-\frac{\sqrt{3}}{3},0)a;\\ \tilde{\delta}_{1}=&(\frac{\sqrt{3}}{2},\frac{1}{2})a; \ \ \tilde{\delta}_{2}=(-\frac{\sqrt{3}}{2},\frac{1}{2})a;\ \ \ \tilde{\delta}_{3}=(0,-1)a.\end{split} \tag{32}\]
As basis vectors of the unit cell and lattice we may use \(\mathbf{v}_{1}=-\tilde{\delta}_{2},\mathbf{v}_{2}=\tilde{\delta}_{1}\). The reciprocal lattice vectors \(\mathbf{G}_{1},\mathbf{G}_{2}\) are then defined via \(\mathbf{v}_{i}\cdot\mathbf{G}_{j}=2\pi\delta_{ij}\) (\(i,j=1,2\)). Explicitly we have
\[\begin{split}\mathbf{v}_{1}=&-\tilde{\delta}_{2}=( \frac{\sqrt{3}}{2},-\frac{1}{2})a;\ \ \mathbf{v}_{2}=\tilde{\delta}_{1}=\bigl{(}\frac{\sqrt{3}}{2},\frac{1}{2} \bigr{)}a;\\ \mathbf{G}_{1}=&(\frac{\sqrt{3}}{3},-1)\frac{2\pi}{a };\ \ \mathbf{G}_{2}=\bigl{(}\frac{\sqrt{3}}{3},1)\frac{2\pi}{a}.\end{split} \tag{33}\]
For the direct unit cell volume we have \(V_{c}=|\mathbf{v}_{1}\times\mathbf{v}_{2}|=\frac{\sqrt{3}}{2}a^{2}\) and likewise for the reciprocal cell volume \(\Omega_{c}=|\mathbf{G}_{1}\times\mathbf{G}_{2}|=\frac{2}{\sqrt{3}}\bigl{(}\frac{2 \pi}{a}\bigr{)}^{2}\) which fulfil the relation \(V_{c}\cdot\Omega_{c}=(2\pi)^{2}\). The inequivalent zone boundary vectors \(\mathbf{K}_{\pm}\) are given by
\[\begin{split}\mathbf{K}_{+}=&\frac{1}{3}[\mathbf{G}_{ 1}+2\mathbf{G}_{2}]=\bigl{(}\frac{\sqrt{3}}{3},\frac{1}{3}\bigr{)}\frac{2\pi}{a },\\ \mathbf{K}_{-}=&\frac{1}{3}[2\mathbf{G}_{1}+\mathbf{G}_{ 2}]=\bigl{(}\frac{\sqrt{3}}{3},-\frac{1}{3}\bigr{)}\frac{2\pi}{a}.\end{split} \tag{34}\]
## Appendix E Properties of momentum dependent honeycomb structure functions
The momentum dependence and, in particular, the existence of gaps of the exciton modes at the zone boundary are determined by the structure functions of the nearest- and next-nearest-neighbor interactions depicted in Fig. 1. They are given by
\[\begin{split}\gamma(\mathbf{k})=\frac{1}{z}\sum_{\delta}\exp(i\mathbf{k}\cdot\delta);\ \ \gamma_{2}(\mathbf{k})=\frac{1}{z_{2}}\sum_{\tilde{\delta}}\exp(i\mathbf{k}\cdot\tilde{\delta});\\ \gamma_{D}^{A,B}(\mathbf{k})=\frac{1}{z_{2}}\sum_{\tilde{\delta}}\nu_{\tilde{\delta}}^{A,B}\exp(i\mathbf{k}\cdot\tilde{\delta})=:\mp i\tilde{\gamma}_{D}(\mathbf{k}),\end{split} \tag{35}\]
where \(\gamma(\mathbf{k})\) and \(\gamma_{2}(\mathbf{k})\) correspond to the symmetric \(1^{\mathrm{st}}(\delta)\) and \(2^{\mathrm{nd}}(\tilde{\delta})\) neighbor exchange, with coordination numbers \(z=3\) and \(z_{2}=6\), respectively, whereas \(\tilde{\gamma}_{D}(\mathbf{k})\) is associated with the asymmetric DM exchange with second neighbors. The first is complex with \(\gamma(-{\bf k})=\gamma^{*}({\bf k})\); the second is real and even, \(\gamma_{2}(-{\bf k})=\gamma_{2}({\bf k})\); while the latter is real and odd, \(\tilde{\gamma}_{D}(-{\bf k})=-\tilde{\gamma}_{D}({\bf k})\), under inversion. The oddness is due to the staggered nature of the DM interaction, leading to \(\nu_{\tilde{\delta}}=-\nu_{-\tilde{\delta}}=\pm 1\) and \(\nu_{\tilde{\delta}}^{B}=-\nu_{\tilde{\delta}}^{A}\). Explicitly we have, from Fig. 1:
\[\begin{split}\gamma({\bf k})=&\frac{1}{3}\bigl{[} \exp i(\frac{\sqrt{3}}{6}ak_{x}+\frac{1}{2}ak_{y})\\ &+\exp i(\frac{\sqrt{3}}{6}ak_{x}-\frac{1}{2}ak_{y})+\exp{(-i \frac{\sqrt{3}}{3}ak_{x})}\bigr{]},\\ \gamma_{2}({\bf k})=&\frac{1}{3}\bigl{[}\cos(\frac {\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})\\ &+\cos(-\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})+\cos{ak_{y} }\bigr{]},\\ \tilde{\gamma}_{D}({\bf k})=&\frac{1}{3}\bigl{[} \sin(\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})+\sin(-\frac{\sqrt{3}}{2}ak_{x }+\frac{1}{2}ak_{y})\\ &-\sin{ak_{y}}\bigr{]}.\end{split} \tag{100}\]
It is important to know the behaviour of the structure functions around the zone boundary valleys \({\bf K}_{\pm}=\bigl{(}\frac{\sqrt{3}}{3},\pm\frac{1}{3}\bigr{)}\frac{2\pi}{a}\). We express the momentum by \({\bf k}={\bf K}_{\pm}+{\bf q}\) with \(|{\bf q}|\ll\frac{\pi}{a}\). Then the structure functions in Eq. (100) may be expanded in terms of \({\bf q}\) to lowest order. It is more convenient to use hexagonal coordinates \((q_{x}^{\prime},q_{y}^{\prime})\) instead of the Cartesian \((q_{x},q_{y})\). The transformations between them, for each \({\bf K}_{\pm}\) are given by
\[\begin{split}{\bf K}_{+}:q_{x}^{\prime}=&\frac{1}{2 }(\sqrt{3}q_{x}+q_{y});\quad q_{y}^{\prime}=-\frac{1}{2}(q_{x}-\sqrt{3}q_{y}); \\ {\bf K}_{-}:q_{x}^{\prime}=&\frac{1}{2}(\sqrt{3}q_{x }-q_{y});\quad q_{y}^{\prime}=\frac{1}{2}(q_{x}+\sqrt{3}q_{y}).\end{split} \tag{101}\]
Then the expansion leads to
\[\begin{split}\gamma({\bf k})=&\gamma({\bf K}_{\pm}+{\bf q})=-\frac{a}{2\sqrt{3}}(q_{x}^{\prime}\pm iq_{y}^{\prime}),\\ |\gamma({\bf k})|^{2}=&\frac{a^{2}}{12}(q_{x}^{2}+q_{y}^{2})=\frac{a^{2}}{12}({q_{x}^{\prime}}^{2}+{q_{y}^{\prime}}^{2})=\frac{\pi^{2}}{12}\hat{q}^{2},\\ \gamma_{2}({\bf k})=&\gamma_{2}({\bf K}_{\pm})=-\frac{3}{z_{2}},\\ \tilde{\gamma}_{D}({\bf k})=&\tilde{\gamma}_{D}({\bf K}_{\pm})=\mp\frac{3\sqrt{3}}{z_{2}},\end{split} \tag{102}\]
where we defined \(\hat{q}=(q_{x}^{2}+q_{y}^{2})^{\frac{1}{2}}/(\pi/a)\). The lowest order term in \(\gamma({\bf k})\) is the term linear in \({\bf q}\) because \(\gamma({\bf K}_{\pm})=0\). On the other hand \(\gamma_{2}({\bf k})\) and \(\tilde{\gamma}_{D}({\bf k})\) have finite values at \({\bf K}_{\pm}\) and no linear terms in \({\bf q}\). Note that importantly \(\tilde{\gamma}_{D}({\bf K}_{\pm})\) changes sign between the nonequivalent BZ boundary points.
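A minimal numpy sketch (units \(a=1\), with \({\bf K}_{\pm}\) taken from Appendix D) that evaluates the explicit structure functions above and reproduces the quoted zone-boundary values, \(|\gamma({\bf K}_{\pm})|=0\), \(\gamma_{2}({\bf K}_{\pm})=-1/2\), and \(\tilde{\gamma}_{D}({\bf K}_{\pm})=\mp\sqrt{3}/2\) for \(z_{2}=6\), is given below.

```
import numpy as np

a = 1.0

def gamma(kx, ky):        # 1st-neighbor structure function (complex)
    return (np.exp(1j * (np.sqrt(3)/6 * a * kx + 0.5 * a * ky))
          + np.exp(1j * (np.sqrt(3)/6 * a * kx - 0.5 * a * ky))
          + np.exp(-1j * np.sqrt(3)/3 * a * kx)) / 3.0

def gamma2(kx, ky):       # 2nd-neighbor, symmetric (real and even)
    return (np.cos(np.sqrt(3)/2 * a * kx + 0.5 * a * ky)
          + np.cos(-np.sqrt(3)/2 * a * kx + 0.5 * a * ky)
          + np.cos(a * ky)) / 3.0

def gammaD(kx, ky):       # 2nd-neighbor, DM (real and odd)
    return (np.sin(np.sqrt(3)/2 * a * kx + 0.5 * a * ky)
          + np.sin(-np.sqrt(3)/2 * a * kx + 0.5 * a * ky)
          - np.sin(a * ky)) / 3.0

for s, name in [(+1, "K+"), (-1, "K-")]:
    kx, ky = np.sqrt(3)/3 * 2*np.pi/a, s/3 * 2*np.pi/a
    print(name, abs(gamma(kx, ky)), gamma2(kx, ky), gammaD(kx, ky))
    # expect |gamma| = 0, gamma2 = -1/2, gammaD = -/+ sqrt(3)/2
```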
## Appendix F Momentum gradients of structure functions and Hamiltonian
The Hamiltonian gradients \(\hat{h}^{\alpha}_{{\bf k}\lambda}=\partial\hat{h}_{{\bf k}}/\partial k_{\alpha}\) (\(\alpha=x,y\)) appearing in the matrix elements for the Berry curvature \(\Omega^{z}_{n}({\bf k})\) of Eq. (44) are entirely determined by those of the structure functions \(\gamma^{\alpha}({\bf k})=\partial\gamma({\bf k})/\partial k_{\alpha}\) and likewise for \(\gamma_{2}({\bf k})\) and \(\tilde{\gamma}_{D}({\bf k})\). From Eq. (100) we get for first neighbors
\[\begin{split}\gamma^{x}({\bf k})=& i\bigl{(}\frac{a \sqrt{3}}{6}\bigr{)}\frac{1}{3}\bigl{[}\exp i(\frac{\sqrt{3}}{6}ak_{x}+\frac{ 1}{2}ak_{y})\\ &+\exp i(\frac{\sqrt{3}}{6}ak_{x}-\frac{1}{2}ak_{y})-2\exp{(-i \frac{\sqrt{3}}{3}ak_{x})}\bigr{]},\\ \gamma^{y}({\bf k})=& i\bigl{(}\frac{a}{2}\bigr{)} \frac{1}{3}\bigl{[}\exp i(\frac{\sqrt{3}}{6}ak_{x}+\frac{1}{2}ak_{y})\\ &-\exp i(\frac{\sqrt{3}}{6}ak_{x}-\frac{1}{2}ak_{y})\bigr{]},\end{split} \tag{103}\]
and for second neighbors
\[\begin{split}\gamma^{x}_{2}({\bf k})=&\bigl{(}-\frac{ \sqrt{3}a}{2}\bigr{)}\frac{1}{3}\bigl{[}\sin(\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2} ak_{y})\\ &-\sin(-\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})\bigr{]},\\ \gamma^{y}_{2}({\bf k})=&\bigl{(}-\frac{a}{2}\bigr{)} \frac{1}{3}\bigl{[}\sin(\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})\\ &+\sin(-\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})+2\sin(ak_{y}) \bigr{]},\\ \tilde{\gamma}^{x}_{D}({\bf k})=&\bigl{(}\frac{\sqrt{3}a }{2}\bigr{)}\frac{1}{3}\bigl{[}\cos(\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y}) \\ &-\cos(-\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})\bigr{]},\\ \tilde{\gamma}^{y}_{D}({\bf k})=&\bigl{(}\frac{a}{2} \bigr{)}\frac{1}{3}\bigl{[}\cos(\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y}) \\ &+\cos(-\frac{\sqrt{3}}{2}ak_{x}+\frac{1}{2}ak_{y})-2\cos(ak_{y}) \bigr{]}.\end{split} \tag{104}\]
Then, using Eq. (24) we obtain the Hamiltonian gradients as
\[\hat{h}^{\alpha}_{{\bf k}\lambda}=\left(\begin{array}{cccc}-\bar{I}^{A\alpha}_{D}({\bf k}\lambda)&-\bar{I}^{\alpha*}_{N}({\bf k})&-\bar{I}^{A\alpha}_{D}({\bf k}\lambda)&-\bar{I}^{\alpha*}_{N}({\bf k})\\ -\bar{I}^{\alpha}_{N}({\bf k})&-\bar{I}^{B\alpha}_{D}({\bf k}\lambda)&-\bar{I}^{\alpha}_{N}({\bf k})&-\bar{I}^{B\alpha}_{D}({\bf k}\lambda)\\ \bar{I}^{A\alpha}_{D}(-{\bf k}\bar{\lambda})&\bar{I}^{\alpha}_{N}(-{\bf k})&\bar{I}^{A\alpha}_{D}(-{\bf k}\bar{\lambda})&\bar{I}^{\alpha}_{N}(-{\bf k})\\ \bar{I}^{\alpha*}_{N}(-{\bf k})&\bar{I}^{B\alpha}_{D}(-{\bf k}\bar{\lambda})&\bar{I}^{\alpha*}_{N}(-{\bf k})&\bar{I}^{B\alpha}_{D}(-{\bf k}\bar{\lambda})\end{array}\right), \tag{105}\]
and the interaction derivatives are obtained from Eq. (25) as
\[\begin{split}\bar{I}^{A\alpha}_{D}({\bf k}\lambda)=&\tilde{m}^{2}_{A}I^{A\alpha}_{D}({\bf k}\lambda),\\ I^{A\alpha}_{D}({\bf k}\lambda)=&(z_{2}I^{A}_{2})\gamma^{\alpha}_{2}({\bf k})+\lambda(z_{2}D^{A}_{2})\tilde{\gamma}^{\alpha}_{D}({\bf k})\\ =&-I^{A\alpha}_{D}(-{\bf k}\bar{\lambda})=-I^{B\alpha}_{D}(-{\bf k}\lambda),\\ \bar{I}^{B\alpha}_{D}({\bf k}\lambda)=&\tilde{m}^{2}_{B}I^{B\alpha}_{D}({\bf k}\lambda).\end{split} \tag{106}\]
2309.15687 | Breaking On-Chip Communication Anonymity using Flow Correlation Attacks | Network-on-Chip (NoC) is widely used to facilitate communication between
components in sophisticated System-on-Chip (SoC) designs. Security of the
on-chip communication is crucial because exploiting any vulnerability in shared
NoC would be a goldmine for an attacker that puts the entire computing
infrastructure at risk. NoC security relies on effective countermeasures
against diverse attacks, including attacks on anonymity. We investigate the
security strength of existing anonymous routing protocols in NoC architectures.
Specifically, this paper makes two important contributions. We show that the
existing anonymous routing is vulnerable to machine learning (ML) based flow
correlation attacks on NoCs. We propose lightweight anonymous routing with
traffic obfuscation techniques to defend against ML-based flow correlation
attacks. Experimental studies using both real and synthetic traffic reveal that
our proposed attack is successful against state-of-the-art anonymous routing in
NoC architectures with high accuracy (up to 99%) for diverse traffic patterns,
while our lightweight countermeasure can defend against ML-based attacks with
minor hardware and performance overhead. | Hansika Weerasena, Prabhat Mishra | 2023-09-27T14:32:39Z | http://arxiv.org/abs/2309.15687v2 | # Breaking NoC Anonymity using Flow Correlation Attack
###### Abstract
Network-on-Chip (NoC) is widely used as the internal communication fabric in today's multicore System-on-Chip (SoC) designs. Security of the on-chip communication is crucial because exploiting any vulnerability in shared NoC would be a goldmine for an attacker. NoC security relies on effective countermeasures against diverse attacks. We investigate the security strength of existing anonymous routing protocols in NoC architectures. Specifically, this paper makes two important contributions. We show that the existing anonymous routing is vulnerable to machine learning (ML) based flow correlation attacks on NoCs. We propose a lightweight anonymous routing that use traffic obfuscation techniques which can defend against ML-based flow correlation attacks. Experimental studies using both real and synthetic traffic reveal that our proposed attack is successful against state-of-the-art anonymous routing in NoC architectures with a high accuracy (up to 99%) for diverse traffic patterns, while our lightweight countermeasure can defend against ML-based attacks with minor hardware and performance overhead.
On-Chip Communication, Network-on-Chip Security, Anonymous Routing, Flow Correlation, Machine Learning
## I Introduction
Advanced manufacturing technology allows the integration of heterogeneous Intellectual Property (IP) cores on a single System-on-Chip (SoC). Commercial SoCs, such as the Intel "Xeon Phi" series [1] and Tilera "TILE-Gx" family [2], feature up to 72 cores. Traditional bus architectures fail to scale with the communication requirements of the increasing number of IP cores. Network-on-Chip (NoC) is the preferred communication fabric to meet the high throughput and scalability requirements between these cores. Due to time-to-market constraints and cost-effectiveness, SoC manufacturers tend to use third-party vendors and services from the global supply chain [3].
Typically only a few IP cores are designed in-house, while others are reusable IPs from third-party vendors. For example, the FlexNoC interconnect is used by four out of the top five fabless companies to facilitate their on-chip communication [4]. A long and potentially untrusted supply chain can lead to the introduction of malicious implants through various avenues, such as untrusted CAD tools, rogue designers, or the foundry. Furthermore, these sophisticated SoC designs make complete security verification harder [5]. While the design of energy-efficient NoC is the primary objective today, the security of the NoC is also crucial, since exploiting the NoC would be a goldmine for an attacker seeking access to communication between various IP cores.
Figure 1 shows a \(4\times 4\) mesh NoC; mesh is the most commonly used NoC topology among the many available. A single tile consists of an IP core, a Network Interface (NI), and a Router. Security issues in a typical NoC can be classified as eavesdropping, spoofing, denial-of-service, buffer overflow, and side-channel attacks [6]. Efficient techniques exist for detecting and mitigating such security vulnerabilities in NoC-based SoCs [6, 7, 8, 9, 10]. Anonymity ensures that there is no unauthorized disclosure of information about communicating parties. In a typical NoC, to enable fast packet forwarding, the header information is kept as plaintext while the packet data is encrypted. An adversary can implant a hardware Trojan in a router (\(R_{8}\) in Figure 1), which can collect packets from the same source and launch complex cryptanalysis attacks. For example, imagine a source node (\(IP_{S}\)) is a cryptographic accelerator that needs to communicate with a memory controller, the destination node (\(IP_{D}\)), to facilitate memory requests for the cryptographic operation. An adversary can use a malicious router in the middle to collect packets between \(IP_{S}\) and \(IP_{D}\) over a period of time and recover the key by launching a ciphertext-only attack. A collection of packets belonging to the same communication session can also be analyzed to discover what program is running at \(IP_{S}\) or to reverse engineer the architectural design [11, 12]. Charles et al. [7] presented an anonymous routing solution (ARNoC) for NoC based on onion routing [13] to ensure the anonymity of a communication session.
### _Research Contributions_
In this paper, we evaluate the security strength of the anonymous routing protocols in NoCs. Specifically, this paper makes the following major contributions.
* We propose an attack on existing anonymous routing by correlating NoC traffic flow using machine learning.
* We demonstrate that the proposed machine learning (ML)-based attack can break the anonymity provided by the state-of-the-art anonymous routing (ARNoC [7]) in different configurations and traffic patterns.
* We propose a lightweight countermeasure with traffic obfuscation to defend against ML-based attacks with minor hardware and performance overhead.

Fig. 1: NoC with 4x4 mesh topology. Each IP is connected to the NoC via a network interface followed by a router. A malicious router in the middle can collect packets when \(IP_{S}\) communicates with \(IP_{D}\).

The remainder of this paper is organized as follows. Section II provides relevant background and surveys the related efforts. Section III describes our ML-based attack on anonymous routing. Section IV proposes a lightweight solution that can defend against ML-based attacks. Section V presents the experimental results and evaluation. Finally, the paper is concluded in Section VI.
## II Background and Related Work
This section provides the relevant background and surveys the related efforts to highlight the novelty of this work.
### _Network-on-Chip (NoC) Traffic_
NoC enables communication by routing packets through a series of nodes. There are two types of packets injected into the network: control and data packets. Consider an example where a processor (\(IP_{S}\)) wants to load data through a particular memory controller (\(IP_{D}\)): it issues a control packet requesting the data from memory. The packet travels via routers based on a predefined routing protocol. Once the destination IP receives the control packet, it replies with a data packet containing the requested data.
In general, header information is kept as plaintext and the payload data is encrypted. At each source NI, the packets are divided into fixed-size flits, the smallest unit used for flow control. There is a head flit followed by multiple body flits and a tail flit. Routing in NoC can be either deterministic or adaptive; both approaches use header information to make routing decisions at each router. XY routing is the most commonly used routing in mesh-based electrical NoCs; it routes a packet along the X dimension first and then along the Y dimension. Links connect the different components of the interconnect and can be either internal or boundary links. A boundary link refers to an outbound or inbound link that connects a router to a network interface, while internal links connect two routers. Our ML-based attack on anonymous routing makes use of the flow of flits (inter-flit delays), whereas our countermeasure manipulates routing decisions to create virtual tunnels.
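As an illustration of the deterministic routing just described, a minimal sketch of the XY decision logic is given below; the port names and coordinates are illustrative, not tied to a specific router microarchitecture.

```
# Minimal sketch of deterministic XY routing on a mesh: a router at (x, y)
# forwards a flit east/west until the destination column is reached, then
# north/south.
def xy_route(cur, dst):
    """Return the output port for one hop of XY routing."""
    (cx, cy), (dx, dy) = cur, dst
    if cx != dx:                       # resolve the X dimension first
        return "EAST" if dx > cx else "WEST"
    if cy != dy:                       # then the Y dimension
        return "NORTH" if dy > cy else "SOUTH"
    return "LOCAL"                     # deliver to the attached NI

# Example: hop-by-hop path from router (0, 0) to router (2, 3).
cur, dst, path = (0, 0), (2, 3), []
while cur != dst:
    port = xy_route(cur, dst)
    path.append(port)
    dx = {"EAST": 1, "WEST": -1}.get(port, 0)
    dy = {"NORTH": 1, "SOUTH": -1}.get(port, 0)
    cur = (cur[0] + dx, cur[1] + dy)
print(path)   # ['EAST', 'EAST', 'NORTH', 'NORTH', 'NORTH']
```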
### _Anonymous Routing_
Anonymity hides the identity of the communicating pair from anyone listening in the middle. The Tor network [13] (which runs on top of onion routing) and the I2P network [14] (which runs on top of garlic routing) are popular anonymous routing examples in traditional computer networks. Onion routing builds tunnels through a series of hops, and the source applies layered encryption on the message, where the number of layers equals the number of hops; one layer of encryption is peeled off at each hop to reveal the original message. Garlic routing is an extension of onion routing where multiple messages are bundled and encrypted together, similar to garlic cloves. There are a wide variety of attacks to break the anonymity of the Tor network, including flow correlation attacks [15]. Such an attack cannot be directly applied to NoC for the following three reasons. (i) Traffic characteristics of NoC and traditional networks are significantly different because NoC traffic primarily consists of cache coherence messages. (ii) The existing attack relies heavily on packet size as a feature, whereas NoC flits are the fundamental unit of flow control and are of fixed size. (iii) All NoC nodes act as onion routers, whereas in the traditional network there is a mixture of both normal and onion routers.
### _Related Work_
Anonymity is critical for secure on-chip communication; however, the solutions used in traditional networks are too expensive for resource-constrained NoCs. Sarihi et al. [16] presented an anonymous routing scheme that requires NoC packets to be identified as secure or non-secure. ARNoC [7] presented a lightweight anonymous routing protocol that considers all NoC packets. It creates an on-demand anonymous tunnel from the source to the destination, where intermediate nodes know only about the preceding and succeeding nodes. Our proposed ML-based attack can break the anonymity of ARNoC.
A threat model based on the insertion of Trojans in network links is addressed in [17, 18]. Qiaoyan et al. [17] show that Trojans inserted in boundary links and center links can flip bits in the packet header, which can lead to deadlock, livelock, and packet loss. Boraten et al. [18] discuss denial-of-service (DoS) attacks that can be launched by malicious links. This specific Trojan injects packet faults at the links, which triggers re-transmissions from the error-correcting mechanism. Frequently injecting faults leads to multiple re-transmissions and may eventually create traffic hot spots in the NoC. These Trojan-based attacks are hard to detect. Our proposed attack also assumes malicious links at the points of data collection.
ML-based techniques have been used to detect and mitigate attacks on NoCs in [19, 20, 21]. Sudusinghe et al. [19] used several ML techniques to detect DoS attacks on NoC traffic. Reinforcement learning is used by [20] to detect hardware Trojans in NoC at run time. Sinha et al. [21] use an ML-based approach to localize flooding-based DoS attacks. None of these approaches consider attacks or countermeasures related to anonymous routing in NoC architectures. _To the best of our knowledge, our proposed ML-based flow correlation attack is the first attempt on breaking anonymity in NoC-based SoCs._
### _Flow Correlation Challenges_
NoC traffic flow can be considered time series data: an array of values ordered by increasing timestamps. For example, in a communication session, we can consider the array of time differences between consecutive packets arriving at a node as a flow. Flow correlation takes two such flows and determines whether they are correlated in some manner. For example, on a network link, the flows of inter-flit delays entering and leaving the link are correlated. Though correlating outgoing and incoming traffic on a single link seems straightforward, correlating traffic between two nodes in a large network with multiple hops in the NoC is extremely difficult for the following reasons:
* The queuing delay at each hop is unpredictable and can interfere with traffic flow characteristics.
* A pair of correlated nodes may communicate with other nodes, which is considered noise.
* The communication path of the correlated pair may be shared by other nodes in the SoC, which will interfere with the traffic flow characteristics between the correlated pair.
## III ML-based Attack on Anonymous Routing
We first outline the threat model used in the proposed attack. Next, we describe our data collection, training, and application of the ML model to accomplish the attack.
### _Threat Model_
The threat model considers an NoC that uses encrypted traffic and anonymous routing using ARNoC. Therefore, when we consider an individual packet, an attacker cannot recover the payload because it is encrypted, and cannot recover the sender/receiver identity because they are hidden via ARNoC. The threat model to break anonymity consists of two major components: i) a malicious NoC and ii) two malicious programs (the _collector_ and the ML model). The malicious NoC has malicious boundary links with a Hardware Trojan (HT). The HT is capable of counting the number of cycles between flits (the inter-flit delay). Specifically, the HT can count the inter-flit delays of incoming and outgoing flits to/from an IP. After specific intervals, the HT gathers all inter-flit delays into an array and sends it to the IP where the malicious program (_collector_) is running. A similar threat model of inserting HTs at NoC links has been discussed in [17, 18]. Note that the area and power overhead of an HT with a small counter is insignificant in a large MPSoC [11].
The first malicious program (_collector_) runs in one of the nodes of the SoC, and it activates/deactivates the HT to keep it hidden from any run-time HT detection mechanisms. The main functionality of the _collector_ is to collect inter-flit delays from HT-infected links and send them to the ML model. The second malicious program is a pre-trained ML model that runs on a remote server/cloud controlled by the adversary. The ML model operates in two phases (training and attacking); the flow correlation attack uses the attacking phase. The training phase is discussed in detail in Section III-D. The model is trained to classify whether two inter-flit delay arrays are correlated. Figure 2 shows a high-level overview of the proposed flow correlation attack categorized from the perspective of the ML model. The training phase is responsible for collecting data for training and conducting the training of the ML model. The training is performed before the attack, and the pre-trained model predicts the correlation of inter-flit delays collected at runtime.
Figure 3 shows an example of the attacking phase on ARNoC. In ARNoC, a tunnel exists between the source and destination routers if their associated IPs are in a communication session. ARNoC forms the tunnel to ensure anonymity by hiding the headers. The HTs in the links are in the inactive state by default. The _collector_ periodically checks the state of all infected boundary links and flags communicating links as suspicious. Imagine a scenario where the adversary gets suspicious of ongoing communication between the source (\(IP_{S}\)) and destination (\(IP_{D}\)); the _collector_ activates the HTs associated with the boundary links of \(IP_{S}\) and \(IP_{D}\). On activation, the HTs start sending periodic inter-flit delay arrays to the collector. More specifically, the Trojans observe and leak both the outbound (\(IFD_{S}^{o}\)) and inbound (\(IFD_{D}^{i}\)) traffic flows. Here, \(IFD_{S}^{o}\) refers to the outbound inter-flit delay arrays from the source IP, and \(IFD_{D}^{i}\) refers to the inbound inter-flit delay arrays at the destination IP. Upon receiving the inter-flit delay arrays, the _collector_ forwards the collected data to the ML model. The adversary uses the ML model to pinpoint the two specific nodes that are communicating, thereby breaking the anonymity.
### _Collecting Data for Training_
Algorithm 1 outlines the training data collection when running ARNoC. We collect inbound and outbound inter-flit delays for all source and destination IPs (line 4). Then, we label each flow pair as either '1' or '0' according to the ground truth (line 5). If the \(IP_{S}\) and \(IP_{D}\) of a flow pair \(\{IFD_{S}^{o},IFD_{D}^{i}\}\) are correlated (\(IP_{S}\) and \(IP_{D}\) communicate in a session), the flow pair is tagged as '1', and otherwise '0'. These tagged flow pairs are utilized as the training set. Note that only the first \(l\) elements of each flow of a flow pair (\(\{IFD_{S}^{o},IFD_{D}^{i}\}\)) are used in training and testing. We account for the interference of external traffic (via shared paths or shared resources) with the correlated traffic flow characteristics by having other nodes communicate simultaneously with the correlated pair. To evaluate the applicability of our model, we use a separate and unseen pair set during the testing phase. We utilize a deep neural network (DNN) as the ML model for our proposed flow correlation attack. In order to collect sufficient data to train the DNN and make the dataset generic, we conduct multiple iterations of the data collection (Algorithm 1), changing the mapping of correlated pairs to different NoC nodes each time. Section V elaborates on synthetic and real traffic data collection.

Fig. 2: Overview of our proposed ML-based attack that consists of two phases (training and attacking).

Fig. 3: Malicious boundary links outside the anonymous tunnel extract the flow pair (\(IFD_{S}^{o}\), \(IFD_{D}^{i}\)) and send it to the collector. Then, the _collector_ sends it to the ML model.
```
1:X, Y \(\leftarrow\)\(\emptyset\)
2:procedure CollectData()
3:for \(\forall\ (s,\ d)\in(S,\ D)\) do
4:\(X\gets X\ \cup\ \{\ IFD_{s}^{o},\ IFD_{d}^{i}\ \}\)
5:\(Y\gets Y\ \cup\ c:\ c\in\{\ 0,\ 1\ \}\)
6:return \(X\), \(Y\)
```
**Algorithm 1** Data Collection
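A hedged Python rendering of Algorithm 1 is shown below; `out_flows`, `in_flows`, and `sessions` are hypothetical containers standing in for the collector's output and the ground truth, and the flow length `l` is a placeholder.

```
# Pair each source's outbound inter-flit delay array with each destination's
# inbound array, truncate to the first l elements, and label the pair 1 when
# the ground truth says the two IPs share a communication session.
from itertools import product

def collect_data(out_flows, in_flows, sessions, l=300):
    """out_flows[s] / in_flows[d]: inter-flit delay lists per IP;
    sessions: set of (s, d) pairs that truly communicate (ground truth)."""
    X, Y = [], []
    for s, d in product(out_flows, in_flows):
        pair = (out_flows[s][:l], in_flows[d][:l])
        if len(pair[0]) < l or len(pair[1]) < l:
            continue                     # skip flows shorter than l
        X.append(pair)
        Y.append(1 if (s, d) in sessions else 0)
    return X, Y
```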
### _DNN Architecture_
We carefully examined various configurations and arrived at the final DNN architecture shown in Figure 4. We selected Convolutional Neural Networks (CNN) [22] as our model architecture for the following reasons. First, since multivariate time series have the same 2-dimensional data structure as images, CNNs designed for analyzing images are suitable for handling multivariate time series [23]. Second, recently published works using CNNs for flow correlation [15, 24] have shown promising results. Inspired by these existing efforts, our final architecture has two convolution layers followed by three fully connected layers. The first convolution layer (C1) has \(k_{1}\) kernels of size (2, \(w_{1}\)). The second convolution layer (C2) has \(k_{2}\) kernels of size (2, \(w_{2}\)). The main intuition of C1 is to identify and extract the relationship between the two traffic flows (\(IFD_{S}^{o},\ IFD_{D}^{i}\)), while we assign the task of refining the features to C2. In our approach, both C1 and C2 have a stride of (2, 1). A max-pooling layer immediately follows each convolution layer. Max pooling uses a max operation to reduce the dimension of the features, which also reduces overfitting. Finally, the result of C2 is flattened and fed to a fully connected network with three layers. Additionally, the set (\(k_{1}\), \(k_{2}\), \(w_{1}\), \(w_{2}\)) is treated as hyper-parameters. We provide details on hyper-parameter tuning in Section V-B. We use ReLU as the activation function for all convolution and fully connected layers to avoid the vanishing gradient problem and improve performance. Since our task is binary classification, we apply a _sigmoid_ function in the last output layer to produce predictions.
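A minimal PyTorch sketch of this architecture is given below. The two inter-flit delay flows enter as two input channels, so the first `Conv1d` acts as the (2, \(w_{1}\)) kernel with stride (2, 1) described above; the kernel counts, widths, flow length, and fully connected layer sizes are placeholder hyper-parameters rather than the tuned values of Section V, and the final sigmoid is deferred to the loss function in the training sketch that follows.

```
import torch
import torch.nn as nn

class FlowCorrCNN(nn.Module):
    def __init__(self, l=300, k1=32, k2=64, w1=10, w2=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, k1, kernel_size=w1),   # C1: relate the two flows
            nn.ReLU(),
            nn.MaxPool1d(2),                    # reduce dimension/overfitting
            nn.Conv1d(k1, k2, kernel_size=w2),  # C2: refine the features
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        with torch.no_grad():                   # infer the flattened size
            n = self.features(torch.zeros(1, 2, l)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n, 256), nn.ReLU(),       # FC1
            nn.Linear(256, 64), nn.ReLU(),      # FC2
            nn.Linear(64, 1),                   # FC3; sigmoid applied in loss
        )

    def forward(self, x):                       # x: (batch, 2, l)
        return self.classifier(self.features(x)).squeeze(-1)
```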
### _Training the DNN Model_
Algorithm 2 outlines the major steps in the training process of the ML model. Specific sizes and parameters used in training are outlined in Section V. We train the DNN for multiple epochs (line 6) for better model performance. We train the DNN by providing labeled inter-flit delay distributions. During the training phase, the stochastic gradient descent (_sgd_) optimizer minimizes the loss and updates the weights in the DNN (line 10). To achieve this binary classification, the results from the last fully connected layer pass through a _sigmoid_ layer [25] (line 8) to produce classification labels.
```
1:\(X\) : [\(x_{1}\),..., \(x_{j}\),..., \(x_{N}\)] where \(x_{j}=\{\ IFD_{s}^{o},\ IFD_{d}^{i}\ \}_{j}\)
2:\(Y\) : [\(y_{1}\),..., \(y_{j}\),..., \(y_{N}\)] where \(y_{j}\in\{0,1\}\)
3:procedure TrainModel(\(X\), \(Y\))
4:\(\triangleright\) samples \(X\) and labels \(Y\)
5:Model \(M_{\Theta}\) initialization
6:for \(epoch\in[1,\ \ldots,\ NoOfEpochs]\) do
7:for \(x_{j}\in X\) and \(y_{j}\in Y\) do
8:\(out_{j}\) = \(sigmoid\)( \(M_{\Theta}(x_{j})\) )
9:\(loss\) = \(\sum\limits_{j}\) cross_entropy(\(out_{j},y_{j}\))
10:\(\Theta\) = sgd(\(\Theta,\nabla loss\))
11: Return \(M_{\Theta}\)
```
**Algorithm 2** ML Model Training
Formally, the _sigmoid_ layer is the logistic function \(f(x)=\frac{1}{1+e^{-x}}\), which maps the given value to a probability in \([0,1]\). The output of the last layer is the predicted label \(p(y)\), which can be denoted as:
\[p(y)=\frac{1}{1+e^{-(M(s,d))}}\]
where \(s\) and \(d\) denote the source and destination input distribution respectively, and \(M\) denotes a function map for the entire DNN model.
Since it is a binary classification task, for given input \((s,d)\) pairs the label probability distributions are either \((1,0)\) for 'true' (correlated) or \((0,1)\) for 'false' (uncorrelated). Therefore, we choose _binary cross-entropy_ (line 9) as the loss function:
\[loss(p(y))=-\frac{1}{N}\sum\limits_{i=1}^{N}y_{i}\cdot log(p(y_ {i}))\\ +(1-y_{i})\cdot log(1-p(y_{i}))\]
where \(y\) is the label (1 for correlated pairs and 0 for uncorrelated pairs), and \(N\) is the total number of training samples.
Fig. 4: DNN architecture has two convolution layers (C1, C2) and three fully connected layers (FC1, FC2, FC3).
The goal of model training is to minimize the loss function by gradient descent over multiple iterations, where in each step the model parameters \(\Theta\) are updated by \(\Theta^{\prime}=\Theta-\eta\nabla loss(p(y))\), with \(\eta\) the learning rate.
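A hedged PyTorch sketch of this training loop is given below; the learning rate, batch size, and epoch count are illustrative, and `BCEWithLogitsLoss` folds the final sigmoid of Fig. 4 into the binary cross-entropy for numerical stability.

```
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, X, Y, epochs=20, lr=0.01, batch=64):
    """X: float tensor (N, 2, l) of flow pairs; Y: float tensor (N,) labels."""
    loader = DataLoader(TensorDataset(X, Y), batch_size=batch, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()     # binary cross-entropy on logits
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()                   # Theta <- Theta - lr * grad(loss)
    return model
```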
### _Predicting Correlation_
The trained model can be used for automatic correlation classification. The attacking phase is simple and straightforward, as shown in Algorithm 3. During the attacking phase, we feed the two inter-flit delay arrays from a suspicious source (\(S\)) and destination (\(D\)) of an ongoing communication session to the ML model (lines 4-5). The ML model outputs 1 if the source and destination are communicating, and 0 otherwise (line 5). If \(S\) and \(D\) are communicating and the ML model output is 1, our attack has successfully broken the anonymity.
```
1:\(IFD_{S}^{o}\) : outbound inter-flit delay array of \(S\)
2:\(IFD_{D}^{i}\) : inbound inter-flit delay array of \(D\)
3:\(M_{\Theta}\) : pre-trained model
4:procedure Attack(\(\{IFD_{S}^{o},IFD_{D}^{i}\}\), \(M_{\Theta}\))
5:\(p(y)\leftarrow\textit{predict}(\{IFD_{S}^{o},IFD_{D}^{i}\},\ M_{\Theta})\)
6:return \(p(y)\)
```
**Algorithm 3** Attack on Anonymous Routing
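A corresponding sketch of the attacking phase of Algorithm 3 is shown below, using the `FlowCorrCNN` model sketched earlier; the 0.5 decision threshold on the sigmoid output is an assumption for illustration.

```
import torch

def attack(model, ifd_s_out, ifd_d_in):
    """ifd_s_out / ifd_d_in: length-l lists of inter-flit delays."""
    pair = torch.tensor([ifd_s_out, ifd_d_in], dtype=torch.float32)
    with torch.no_grad():
        p = torch.sigmoid(model(pair.unsqueeze(0)))  # predicted p(y)
    return int(p.item() > 0.5)          # 1: communicating, 0: otherwise
```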
## IV Defending against ML-based Attacks
In this section, we propose a lightweight anonymous routing that can defend against the ML-based attack described in Section III. Figure 5 shows an overview of our proposed anonymous routing that consists of two phases: i) outbound tunnel creation and ii) data transfer with traffic obfuscation. We utilize two traffic obfuscation techniques (chaffing of flits and random delays).
### _Outbound Tunnel Creation_
An outbound tunnel (\(OT_{S}^{i}\)) is a route created from the source router (\(S\)) of the tunnel to an arbitrary router called the _tunnel endpoint_ (\(E_{S}^{i}\)). Here, \(i\) indexes the tunnel instances. Figure 6 shows how the outbound tunnels \(OT_{S}^{i}\) and \(OT_{D}^{i}\) are used when \(IP_{S}\) and \(IP_{D}\) inject packets into the network. It is important to highlight that these \(OT^{i}\)s are bound only to their source router and are independent of any communication session. Each tunnel is associated with a timeout bound. After the timeout, the tunnel that belongs to a particular source \(S\) ceases to exist and a new tunnel is created with a different endpoint (\(E_{S}^{i+1}\)). The endpoint \(E_{S}^{i}\) of an \(OT_{S}^{i}\) is randomly selected from the routers that are \(h_{min}\) to \(h_{max}\) hops away from the source of the tunnel. We use \(h_{min}=3\) because a minimum of three nodes is needed for anonymous routing, and increasing it further negatively affects performance [13]. \(h_{max}\) can be configured to balance the performance and the number of endpoint choices.
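A minimal sketch of this endpoint selection is given below: the source draws \(E\) uniformly from all mesh routers whose hop (Manhattan) distance lies in \([h_{min},h_{max}]\); the grid size and the \(h_{max}\) value are illustrative.

```
import random

def pick_endpoint(src, dim=4, h_min=3, h_max=5):
    # Candidate endpoints: all routers between h_min and h_max hops away.
    sx, sy = src
    candidates = [(x, y) for x in range(dim) for y in range(dim)
                  if h_min <= abs(x - sx) + abs(y - sy) <= h_max]
    return random.choice(candidates)

print(pick_endpoint((0, 0)))   # e.g. (2, 1) or (3, 2): 3 to 5 hops away
```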
Figure 7 zooms into the tunnel creation phase. A summary of the notations used in tunnel creation can be found in Table I. Tunnel creation is a three-way handshake process. The source broadcasts a Tunnel Initialization (TI) packet to all the routers, and only \(E_{S}^{i}\) responds back to the source with a Tunnel Acceptance (TA) packet. Once the source receives this acknowledgment from \(E_{S}^{i}\), it sends the Tunnel Confirmation (TC) packet to \(E_{S}^{i}\). After these three steps, each router in the tunnel has two random Virtual Circuit Identifiers (VCIs) saved in its routing table to define the succeeding and preceding hops representing the tunnel. For the rest of the section, we refer to \(E_{S}^{i}\) as just \(E\).
#### Iv-A1 Tunnel Initialization
In the example (Figure 6), \(S\) sends a TI packet as:
\[\{TI||OPuK_{S}^{i}||E\hat{n}_{PuK_{E}}(OPuK_{S}^{i}||r)||TPuK_{S}^{i}\} \tag{1}\]
Fig. 5: Overview of the proposed lightweight anonymous routing to defend against flow correlation attack. It has two phases: tunnel creation and data transfer with traffic obfuscation.
Fig. 6: Two separate outbound tunnels \(OT_{S}^{i}\) and \(OT_{D}^{i}\) are used by \(IP_{S}\) and \(IP_{D}\) for communication. In \(IP_{S}\) to \(IP_{D}\) communication, chaffed flit is inserted at \(NI_{S}\) and winnowed at \(E_{S}^{i}\) (\(R_{4}\)). \(E_{S}^{i}\) adds random delay to the flit sequence. The packet follows normal routing after an outbound tunnel ends.
Fig. 7: Message transfer in a three-way handshake to create an outbound tunnel between router \(R_{1}\) and \(R_{4}\) and final state routing tables of each router representing the outbound tunnel.
\(TI\) identifies the packet as a Tunnel Initialization packet. \(OPuK_{S}^{i}\) is the source's one-time public key for the \(i^{th}\) tunnel, and \(OPrK_{S}^{i}\) is the corresponding private key. In other words, an \(OT_{S}^{i}\) can be uniquely identified by this key pair. \(PuK_{E}\) and \(PrK_{E}\) are the global public and private keys of \(E\), respectively; they do not change with each tunnel creation. \(OPuK_{S}^{i}\) and a randomly generated value \(r\) are concatenated and encrypted through public-key encryption using the key \(PuK_{E}\) (\(E\hat{n}_{PuK_{E}}\)). Only \(E\) can decrypt this ciphertext because only \(E\) holds the corresponding private key (\(PrK_{E}\)). Finally, the temporary public key (\(TPuK_{S}^{i}\)) is concatenated at the end of the packet. The TI packet is broadcast rather than routed directly so that anonymity is not broken at its birth.
Any router (\(R\)) receiving a TI packet follows Algorithm 4. The Tunnel Lookup (TL) table has a unique entry for every TI packet that arrives at the router. First, \(R\) tries to match \(OPuK_{S}^{i}\) with the existing entries in the TL table. On a match, the message is discarded to avoid any duplication due to TI packet broadcasting (line 4). Otherwise, \(OPuK_{S}^{i}\) and \(TPuK_{pre(R)}^{i}\) are stored in the TL table (line 6). Next, \(R\) tries to decrypt the message; if it is successful, it recognizes itself as the intended endpoint and runs Algorithm 5 (line 8). If not, \(R\) replaces \(TPuK_{pre(R)}^{i}\) with its own temporary key \(TPuK_{R}^{i}\) and forwards the TI packet to the next hop \(next(R)\) (lines 10 and 11). For example, in Figure 6, after receiving a TI packet from \(R_{2}\), \(R_{3}\) will generate and forward the following TI packet to \(R_{4}\):
\[\{TI||OPuK_{S}^{i}||E\hat{n}_{PuK_{E}}(OPuK_{S}^{i}||r)||TPuK_{R_{3}}^{i}\} \tag{2}\]
```
1:\(pkt\) : A TI packet
2:procedure HandleTI(\(pkt\))
3:if \(OPuK_{S}^{i}\) in \(\textit{TL}\) table then
4: discard \(pkt\)
5:else
6: store \(OPuK_{S}^{i}\) and \(TPuK_{pre(R)}^{i}\)
7:if \(D\hat{e}_{PrK_{R}}(pkt[3])\) is successful then
8: GenerateTA (\(D\hat{e}_{PrK_{R}}(pkt[3])\), \(pkt[4]\))
9:else
10:\(pkt[4]\gets TPuK_{R}^{i}\)
11: forward \(pkt\)
```
**Algorithm 4** TI Packet handling at \(R\)
#### Iv-A2 Tunnel Acceptance
Upon receiving the TI packet, \(E\) first runs Algorithm 4 and then calls Algorithm 5 as the endpoint of the tunnel. Algorithm 5 outlines TA packet generation at an endpoint router (\(E\)). First, \(E\) validates the integrity of the packet by comparing the decrypted \(OPuK_{S}^{i}\) value with the plaintext \(OPuK_{S}^{i}\) value (line 3). If the packet passes the integrity check, Algorithm 5 executes the following steps. First, it generates a random nonce \(n_{E}\), which will be used as a VCI. Next, it generates a symmetric key \(K_{S-E}\) to use between \(S\) and \(E\). It then logs both \(n_{E}\) and \(K_{S-E}\) in the TL table and \(n_{E}\) in the _routing table_ as an indexed VCI (line 6). Next, it encrypts the concatenation of \(n_{E}\), \(K_{S-E}\) and \(r\) using the key \(OPuK_{S}^{i}\), which allows only \(S\) to decrypt the content (line 7). Finally, the result is encrypted again with \(TPuK_{pre(E)}^{i}\), the stored temporary key of the preceding hop (line 7). In Figure 6, \(E_{S}^{i}\) will generate the following TA packet:
\[\{TA||E\hat{n}_{TPuK_{R_{3}}^{i}}(E\hat{n}_{OPuK_{S}^{i}}(r||n_{E}||K_{S-E}))\} \tag{3}\]
When a router \(R\) receives a TA packet, it executes Algorithm 6. If the router is the source of the \(OT^{i}\), it will execute Algorithm 7 (line 4). Otherwise, it goes through the following steps. First, it decrypts the packet using the temporary private key (\(TPrK_{R}^{i}\)) (line 6) and generates a random nonce and symmetric key (\(n_{R}\), \(K_{S-R}\)). The generated \(n_{R}\) and \(K_{S-R}\) are stored in \(R\)'s TL table (line 7). The nonce and symmetric key pair is concatenated to the decrypted packet (\(dct\)) (lines 7 and 8), which is encrypted using the source public key (\(OPuK_{S}^{i}\)) to add another layer of security (line 8). Finally, \(R\) encrypts the content with the stored temporary public key of the preceding router \(pre(R)\), i.e., the next hop on the TA packet's path back to \(S\) (line 9).
```
1:\(pkt\) : A TA packet
2:procedure HandleTA(\(pkt\))
3:if \(R\) is \(S\) of \(OT^{i}\) then
4: GenerateTC (\(pkt\))
5:else
6:\(dct\gets D\hat{e}_{TPrK_{R}^{i}}(pkt[2])\)
7: generate and store \(n_{R}\) and \(K_{S-R}\)
8:\(enc\gets E\hat{n}_{OPuK_{S}^{i}}(dct||n_{R}||K_{S-R})\)
9:\(enc\gets E\hat{n}_{TPuK_{pre(R)}^{i}}(enc)\)
10:return \(\{TA||enc\}\)
```
**Algorithm 6** TA packet handling at \(R\)
\begin{table}
\begin{tabular}{c c} \hline \(E\hat{n}_{K}\) & Encrypts message \(M\) using key \(K\) \\ \(D\hat{e}_{K}\) & Decrypts message \(M\) using key \(K\) \\ \(OPuK_{S}^{i}\) & One-time public key used by source \(S\) \\ \(OPrK_{S}^{i}\) & Corresponding private key to \(OPuK_{S}^{i}\) \\ \(PuK_{E}\) & Global public key of \(E\) \\ \(PrK_{E}\) & Corresponding private key to \(PuK_{E}\) \\ \(TPuK_{R}^{i}\) & Temporary public key of node \(R\) \\ \(TPrK_{R}^{i}\) & Corresponding private key to \(TPuK_{R}^{i}\) \\ \(K_{S-R}\) & Symmetric key shared between \(S\) and \(R\) \\ \(n_{R}\) & Random nonce generated by node \(R\) \\ \(r\) & Random number generated by \(S\) \\ \(pkt[i]\) & \(i^{th}\) element of a packet \(pkt\) \\ \(prev(R)\) & Previous router (in upstream direction) \\ \(next(R)\) & Next router (in downstream direction) \\ \(rand(a,b)\) & Generates a random number between \(a\) and \(b\) \\ \hline \end{tabular}
\end{table} TABLE I: Notations used in tunnel creation.
\[\{TA||E\hat{n}_{TPuK_{R_{1}}^{i}}(E\hat{n}_{OPuK_{S}^{i}}(E\hat{n}_{OPuK_{S}^{i}}(E\hat{n}_{OPuK_{S}^{i}}(r||n_{E}||K_{S-E})||n_{R_{3}}||K_{S-R_{3}})||n_{R_{2}}||K_{S-R_{2}}))\} \tag{4}\]
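To make the layering concrete, the following minimal Python sketch shows how the TA packet's onion layers accumulate on the way back to \(S\) (cf. Algorithms 5 and 6 and Eq. (4)). The `encrypt`/`decrypt` helpers are toy stand-ins that merely tag a payload with a key identifier; a real implementation would use the asymmetric primitives of Table I, with distinct public/private key halves. All function names here are ours, for illustration only.

```
import random

def encrypt(key_id, payload):
    # Toy stand-in for public-key encryption: tag the payload with the key id.
    return ("enc", key_id, payload)

def decrypt(key_id, ct):
    tag, kid, payload = ct
    assert tag == "enc" and kid == key_id, "wrong key"
    return payload

def generate_ta(opuk_s, tpuk_next, r):
    """TA generation at the endpoint E (Algorithm 5, sketched)."""
    n_e, k_se = random.getrandbits(32), random.getrandbits(128)
    inner = encrypt(opuk_s, (r, n_e, k_se))    # readable only by S
    return ("TA", encrypt(tpuk_next, inner))   # outer layer for the next hop

def handle_ta(pkt, tprk_r, opuk_s, tpuk_next):
    """TA handling at an intermediate router R (Algorithm 6, sketched)."""
    dct = decrypt(tprk_r, pkt[1])              # peel this router's layer
    n_r, k_sr = random.getrandbits(32), random.getrandbits(128)  # to TL table
    enc = encrypt(opuk_s, (dct, n_r, k_sr))    # re-wrap so only S can read it
    return ("TA", encrypt(tpuk_next, enc))     # outer layer for the next hop

pkt = generate_ta("OPuK_S", "TPK_R3", r=7)            # at E
pkt = handle_ta(pkt, "TPK_R3", "OPuK_S", "TPK_R2")    # at R_3
pkt = handle_ta(pkt, "TPK_R2", "OPuK_S", "TPK_R1")    # at R_2, cf. Eq. (4)
```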
When the input queue of the source NI receives a packet (line 14), the first scenario is considered (lines 15-21). The intuition behind this scenario is to hinder the ML model's ability to exploit bursts of small inter-flit delays of inbound and outbound flows in heavy traffic. The example shown in Figure 6 demonstrates chaffing in the second scenario and the removal of that chaff. Here, \(P_{c}\) limits the number of packets being obfuscated (lines 16-17). If a packet is chosen to be obfuscated, chaff is inserted in the middle of its legitimate flits at a random position (lines 18-21). \(K_{S-E}\) is used to encrypt _chaffId_, which represents the position of the chaff flit and is used by the endpoint to filter out the chaff flit.
A random number generator is already present in the NI for cryptographic processing. Therefore, the same generator is used for random number generation in lines 8, 10, and 16. If the _cltag_ variable (lines 3, 6, 7, 15, and 23) is true, it indicates that the current gap between flits was already checked for insertion of a packet. It is important to note that (1) the dummy flits are added only when the outbound link is idle, so they have little impact on the program running on the source IP, and (2) the dummy flits impact at most 3 internal links associated with the tunnel, so they have little impact on the other traffic in the network. Scenario two inserts relatively few dummy flits, and they likewise impact at most 3 internal links. Experimental results in Section V-H validate that our obfuscation technique results in only negligible overhead.
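As a rough Python sketch of the chaffing decision in scenario two (the helper names and the exact packet layout are our assumptions, not the paper's): \(P_{c}\) gates whether a packet is obfuscated, the chaff position is drawn at random, and _chaffId_ is encrypted under \(K_{S-E}\) so that only the tunnel endpoint can strip the chaff.

```
import random

P_C = 0.5  # chaffing rate (Table II)

def maybe_chaff(flits, k_se, encrypt):
    """Possibly insert one chaff flit into a packet's flit list.

    `encrypt(key, value)` is a placeholder for symmetric encryption
    under the S-E session key K_{S-E}.
    """
    if random.random() >= P_C:
        return flits                            # packet not obfuscated
    pos = random.randint(1, len(flits) - 1)     # middle of legitimate flits
    chaff_id = encrypt(k_se, pos)               # lets the endpoint filter it
    return flits[:pos] + [("CHAFF", chaff_id)] + flits[pos:]

# Example with a trivial stand-in cipher:
obfuscated = maybe_chaff(["head", "b1", "b2", "tail"], "K_SE",
                         lambda k, v: (k, v))
```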
#### IV-B2 Obfuscation with Random Delay
The second obfuscation technique adds random delays to selected flits, tampering with the timing aspect of the traffic flow. Flits belonging to only \(P_{d}\) percent of packets are subject to added delays. The tunnel endpoint is responsible for adding the delays. Traveling through the rest of the hops, the flit propagates the delay to the destination, tampering with the timing features of the inbound flow (\(IFD_{D}^{i}\)). Figure 6 demonstrates the effect of added delay in traffic flows. It is clear that chaffing and random delays obfuscate the actual traffic between source and destination. Both of these techniques can be used simultaneously or in a standalone manner, depending on the requirement. Experimental results (Tables IX and X) demonstrate that both of these techniques are beneficial in defending against ML-based flow correlation attacks.
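The delay obfuscation admits an equally small sketch. Whether each flit of a selected packet receives an independent delay or the whole packet a single one is not specified, so the per-flit choice below is our assumption; the 1-5 cycle range anticipates the bound used in the evaluation (Section V-A).

```
import random

P_D = 0.5        # delay addition rate (Table II)
MAX_DELAY = 5    # cycles; larger delays risk unacceptable overhead

def added_delays(num_flits):
    """Per-flit extra delays (cycles) for one packet at the tunnel endpoint."""
    if random.random() >= P_D:
        return [0] * num_flits                   # packet left untouched
    return [random.randint(1, MAX_DELAY) for _ in range(num_flits)]
```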
## V Experimental Evaluation
We model our proposed ML-based attack and countermeasures on Gem5 [26], a cycle-accurate multi-core simulator, with Garnet 2.0 [27] for the interconnection network modeling. We use a 64-core system; the detailed system configuration is given in Table II. Splash-2 [28] benchmark applications as well as multiple synthetic traffic patterns were used in the evaluation. We used the PyTorch library to implement the proposed DNN architecture. In order to evaluate the area and energy overhead of our approach against ARNoC, we implemented both designs (ARNoC and our approach) in Verilog and synthesized them using the Synopsys Design Compiler with the "lsi 10k" library. First, we show the results of the flow correlation attack on ARNoC [7]. Then, we show the robustness of the proposed countermeasure in preventing the attack.
### _Data Collection_
This section describes the data collection on Gem5 for training the DNN. Although the input to the DNN has the same structure in both cases, the inherent differences between synthetic traffic and real benchmarks led us to two ways of collecting flow pairs for training.
#### V-A1 Synthetic Traffic
We performed data collection using Uniform-Random synthetic traffic with the following modification. All IPs send packets to randomly selected IPs except two (\(IP_{S}\) and \(IP_{D}\)). These two IPs are the correlated pair communicating in a session. Of all the packets injected from the source IP (\(IP_{S}\)), only \(p\) percent are sent to the destination IP (\(IP_{D}\)), and the remaining packets (\((100-p)\)%) are sent to other nodes. For example, \(p=80\%\) means 80% of the total outbound packets from \(IP_{S}\) have \(IP_{D}\) as the destination, while the other 20% can have any IP except \(IP_{S}\) and \(IP_{D}\) as the destination. Note that this 20% can be viewed as noise from the perspective of the communication between \(IP_{S}\) and \(IP_{D}\).
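The destination-selection rule can be summarized in a few lines of Python. The node ids below are one arbitrary mapping, and the exclusion of \(IP_{S}\) and \(IP_{D}\) as background destinations follows the description above; the function itself is ours, for illustration.

```
import random

NUM_NODES, IP_S, IP_D = 64, 5, 42   # one of the 8064 possible mappings
p = 0.85                            # traffic distribution

def pick_destination(src):
    """Uniform-Random traffic with one correlated pair (IP_S -> IP_D)."""
    if src == IP_S and random.random() < p:
        return IP_D                                  # correlated traffic
    candidates = [n for n in range(NUM_NODES)
                  if n not in (src, IP_S, IP_D)]     # noise traffic
    return random.choice(candidates)
```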
To make the dataset generic, for a single \(p\) value, we conduct experiments covering all possible mappings of correlated pairs to NoC nodes, i.e., 8064 mappings (64\(\times\)63\(\times\)2). We
\begin{table}
\begin{tabular}{|l|l|} \hline
**Parameter** & **Details** \\ \hline Processor configurations & X86, 2GHz \\ \hline L1 I \& D cache & 1KB, 1KB (64B block size) \\ \hline Coherency Protocol & MI \\ \hline Topology & 8\(\times\)8 Mesh \\ \hline Chaffing rate (\(P_{c}\)) & 50\% \\ \hline Delay addition rate (\(P_{d}\)) & 50\% \\ \hline \end{tabular}
\end{table} TABLE II: System and interconnect configuration
consider four traffic distributions with \(p\) values of 95%, 90%, 85%, and 80%. In other words, we consider four different noise levels (5%, 10%, 15%, and 20%) in our data collection simulations. The full dataset for a given \(p\) value contains 24192 flow pairs (\(\{IFD_{S}^{o}\), \(IFD_{D}^{i}\}\)), consisting of 8064 correlated traffic flow pairs and 16128 uncorrelated traffic flow pairs. Note that for each correlated flow pair, we selected two arbitrary uncorrelated flow pairs.
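Assembling the labeled dataset then amounts to pairing each correlated flow pair with two arbitrary uncorrelated ones; a sketch (function and variable names are ours):

```
import random

def build_dataset(correlated, uncorrelated_pool):
    """Label flow pairs: 1 = correlated, 0 = uncorrelated (two per correlated)."""
    samples = [(fp, 1) for fp in correlated]
    for _ in correlated:
        samples += [(random.choice(uncorrelated_pool), 0),
                    (random.choice(uncorrelated_pool), 0)]
    random.shuffle(samples)
    return samples   # e.g., 8064 + 16128 = 24192 samples per p value
```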
To evaluate our countermeasures, when collecting obfuscated traffic we kept both \(P_{c}\) and \(P_{d}\) at 50% to ensure a uniform distribution of obfuscation. When obfuscating traffic using added delay, we vary the delay between 1-5 cycles, because a higher delay may lead to unacceptable performance overhead. We collected three categories of datasets: one with chaffing only, one with random delay only, and one applying both chaffing and delay simultaneously.
#### V-A2 Real Traffic
We collected data for five Splash-2 benchmark application pairs running on two processors (\(P_{1}\) and \(P_{2}\)), with two memory controllers (\(MC_{1}\) and \(MC_{2}\)) serving memory requests. The benchmark pairs used are \(\{\textit{fft},\textit{fmm}\}\), \(\{\textit{fmm},\textit{lu}\}\), \(\{\textit{lu},\textit{barnes}\}\), \(\{\textit{barnes},\textit{radix}\}\), and \(\{\textit{radix},\textit{fft}\}\), where the first benchmark runs on \(P_{1}\) and the second runs on \(P_{2}\). The selected benchmarks have enough diversity to make the dataset generic (for example, _fft_ and _radix_ are significantly different [29]). The address space of the benchmark running on \(P_{1}\) is mapped only to \(MC_{1}\). Therefore, \(P_{1}\) only talks with \(MC_{1}\), and they are the correlated pair. The address space of the benchmark running on \(P_{2}\) is assigned to both \(MC_{1}\) and \(MC_{2}\) such that the ratio between memory requests received by \(MC_{1}\) from \(P_{1}\) and memory requests received by \(MC_{1}\) from \(P_{2}\) is \(p:(100-p)\). This percentage \(p\) is analogous to that of synthetic traffic, and \((100-p)\%\) is the noise. For example, when \(p=85\%\), \(MC_{1}\) serves 15% of its packets to \(P_{2}\) while it serves 85% to \(P_{1}\).
Similar to synthetic traffic, we considered four values for \(p\): 95%, 90%, 85%, and 80%. For a single \(p\) value and a single benchmark pair, we conducted experiments covering all possible mappings of correlated pairs to NoC nodes, i.e., 4032 mappings (64\(\times\)63). \(MC_{2}\) and \(P_{2}\) were randomly chosen in all these mappings. The full dataset for a given \(p\) value and benchmark pair contains 16128 flow pairs (4032 correlated pairs and 12096 uncorrelated pairs). To evaluate our countermeasures, we collect obfuscated data in the same way as for synthetic traffic.
### _Hyperparameter Tuning_
Hyperparameters are the components fixed by the user before the actual training of the model begins; they are tuned to achieve the highest possible accuracy on the given dataset. We exhaustively tested different combinations of hyperparameters to attain state-of-the-art attack success rates. The training process consists of 10 epochs with a constant learning rate of 0.0001. We performed batch normalization and set the batch size to 10 for the training set. As for the convolution layers (C1 and C2 in Figure 4), the channel sizes are selected as \(k_{1}=1000\) and \(k_{2}=2000\), with \(w_{1}=5\) and \(w_{2}=30\), for C1 and C2, respectively. As for the fully connected layers, the sizes are selected as 3000, 800, and 100 for FC1, FC2, and FC3, respectively.
Tuning involves many challenges, since the finalized parameters reflect a trade-off between cost and effectiveness. First, the learning rate of the training was reduced from 0.001 to 0.0001, which increases the training time but successfully avoids the local-minima problem. Also, we halved the number of training epochs from 20 to 10 to avoid overfitting. The batch size was also decreased from 50 to 10. In this way, fewer samples are provided per training iteration, but the stability of the training progress improves. Additionally, the parameters of the convolution layers were selected to match their responsibilities. As discussed in Section III-C, C1 focuses on extracting rough relationships while C2 refines the extracted features. Therefore, C2 has twice as many channels as C1 and a wider stride (30:5) to improve efficiency.
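For concreteness, below is a PyTorch sketch of a network matching the stated sizes. Only the layer sizes, learning rate, and batch-related settings come from the text; the input layout, the ReLU activations, the reading of \(w_{1}\), \(w_{2}\) as both kernel width and stride (one plausible interpretation of the "30:5" stride remark), and the final sigmoid output are our assumptions.

```
import torch
import torch.nn as nn

class FlowCorrelationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: (batch, 2, 250) -- the {IFD_S^o, IFD_D^i} pair stacked.
        self.c1 = nn.Conv1d(2, 1000, kernel_size=5, stride=5)       # k1, w1
        self.c2 = nn.Conv1d(1000, 2000, kernel_size=30, stride=30)  # k2, w2
        self.fc = nn.Sequential(
            nn.Linear(2000, 3000), nn.ReLU(),   # FC1
            nn.Linear(3000, 800), nn.ReLU(),    # FC2
            nn.Linear(800, 100), nn.ReLU(),     # FC3
            nn.Linear(100, 1),
        )

    def forward(self, x):
        h = torch.relu(self.c1(x))      # -> (batch, 1000, 50)
        h = torch.relu(self.c2(h))      # -> (batch, 2000, 1)
        return torch.sigmoid(self.fc(h.flatten(1)))

model = FlowCorrelationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # 0.0001, as tuned
loss_fn = nn.BCELoss()   # labels: 1 = correlated, 0 = uncorrelated
```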
### _Training and Testing_
In our work, the total dataset of flow pairs for a given configuration was randomly split in a proportion of 2:1 into training and testing sets. Correlated flow pairs were labeled as '1' and uncorrelated pairs as '0'. We use the following four evaluation metrics in our experimental evaluation.
* Accuracy: \(\frac{tp+tn}{tp+tn+fp+fn}\)
* Recall: \(\frac{tp}{tp+fn}\)
* Precision: \(\frac{tp}{tp+fp}\)
* F1 Score: \(2\cdot\frac{Precision\cdot Recall}{Precision+Recall}\).
Here, \(tp\), \(tn\), \(fp\), and \(fn\) represent true positives, true negatives, false positives, and false negatives, respectively. Intuitively, recall is a measure of a classifier's completeness, while precision is a measure of its exactness, and the F1 score is the harmonic mean of recall and precision. The reason for using these metrics comes from the limitations of accuracy. For imbalanced test cases (e.g., \(>90\%\) positive labels), a naive ML model that always outputs 'true' can reach \(>90\%\) accuracy. The goal of the attacker is to identify correlated node pairs and launch complex attacks. Here, a \(fn\) occurs when an actually correlated pair is tagged as non-correlated by the DNN, and a \(fp\) occurs when an actually non-correlated pair is tagged as correlated. From an attacker's perspective, the negative impact of wasting time launching an unsuccessful attack on a \(fp\) is relatively low compared to missing a chance to launch an attack due to a \(fn\). Therefore, recall is the most critical of these metrics when evaluating this flow correlation attack.
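These metrics are straightforward to compute from the confusion counts; the counts in the example call are illustrative only, not results from our experiments.

```
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)        # completeness: correlated pairs found
    precision = tp / (tp + fp)     # exactness: flagged pairs truly correlated
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

print(metrics(tp=90, tn=150, fp=10, fn=20))  # illustrative counts
```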
### _ML-based Attack on Synthetic Traffic_
We evaluated the proposed attack for all four traffic distributions. The traffic injection rate was fixed at 0.01 and the IFD array size at 250. Table III summarizes the results of the attack. All the considered traffic distributions show good metric values. We can see a minor reduction in performance with decreasing \(p\). This is expected because of the increase in the number of uncorrelated packets in correlated flow pairs, which makes the correlation harder to detect. Even for the lowest traffic distribution of 80% between the two correlating
pairs, the attacking DNN is able to identify correlated and uncorrelated flow pairs successfully, with good metric values. The slight reduction in accuracy and recall shows the impact of the 20% noise from other uncorrelated traffic. This confirms that our attack is realistic and can be applied to state-of-the-art anonymous routing (ARNoC) to break anonymity. More importantly, the ML model performs well in distinguishing flow correlation across different traffic characteristics with varying noise.
### _Stability of ML-based Attack on Synthetic Traffic_
In this section, we evaluate the attack with varying configurable parameters to further confirm the stability of the proposed ML-based attack. For the experiments in this section, we use a \(p\) value of 85% and keep the rest of the parameters as discussed in the experimental setup, except for the parameter being varied.
#### V-E1 Varying traffic injection rates (TIR)
We collected traffic data for four traffic injection rates: 0.001, 0.005, 0.01, and 0.05, and conducted the attack on existing anonymous routing. Table IV provides detailed results on the metrics over the selected values. We can see a small reduction in the overall metrics, including recall, with increasing injection rate. This is because higher injection rates create more congestion and buffering delays in NoC traffic. The indirect noise from congestion and buffering delays makes flow correlation slightly harder for the ML model. Overall, our proposed ML model performs well at different injection rates, since all the metrics show good performance.
#### V-E2 Varying IFD Array Size
We collected traffic data by varying the IFD array size (\(l\)) in the range of 50 to 550 and conducted the attack on existing anonymous routing. Table V shows detailed results on the metrics over the selected values. For a lower number of flits, the relative values of recall and the other metrics are low. However, with an increasing number of flits, the accuracy improves until the length reaches 250, because with a longer IFD array the ML model has more features for flow correlation. Beyond 250, the accuracy saturates at around 94.5%. In subsequent experiments, we kept \(l\) at 250 because the ML-based attack performs relatively well with this shorter monitoring time.
#### V-E3 Varying Network Size
To evaluate the stability of the ML model on varying network sizes, we analyzed the model on a 16-core system with a 4\(\times\)4 mesh, a 64-core system with an 8\(\times\)8 mesh, and a 256-core system with a 16\(\times\)16 mesh topology. Table VI shows the performance results of the ML model for the different network sizes. The attack on the 4\(\times\)4 mesh shows slightly better metric values than on the 8\(\times\)8 mesh. The attack on the 16\(\times\)16 mesh shows relatively low accuracy and recall, since a large network tends to alter the temporal features of the traffic due to increased congestion and hop counts. Given the good accuracy and other metric values, our ML-based attack is stable across different mesh sizes.
### _ML-based Attack on Real Benchmarks_
We trained and tested the model using two techniques. In the first technique, we merge the datasets of a single \(p\) value across all 5 benchmark combinations outlined in Section V-A to create the total dataset. Therefore, the total dataset has 80640 flow pairs before the 2:1 train-to-test split. Table VII summarizes the results of the first technique across all \(p\) values. Good metric values across all traffic distributions show the generality of the model across different benchmarks. In other words, our attack works well across multiple benchmarks simultaneously. Even with 20% noise (\(p=80\)), the recall value is just below 99%, making the attack favorable to an attacker, as discussed in Section V-C. We can see a minor reduction in performance with decreasing \(p\), which is expected due to the increased noise in the correlated flow pairs.
When we compare the performance of the attack on real traffic against synthetic traffic (Table III), the attack on real traffic shows better performance. This is primarily for two reasons. (a) The synthetic traffic generation is totally random. More precisely,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**IFD Array** & **Accuracy** & **Recall** & **Precision** & **F1 Score** \\ \hline
50 & 83.53\% & 96.45\% & 67.96\% & 79.74\% \\ \hline
100 & 90.92\% & 96.17\% & 80.28\% & 87.51\% \\ \hline
150 & 90.93\% & 74.10\% & 98.32\% & 84.51\% \\ \hline
250 & 94.64\% & 91.32\% & 92.30\% & 91.81\% \\ \hline
350 & 94.71\% & 86.21\% & 97.39\% & 91.46\% \\ \hline
450 & 94.58\% & 93.21\% & 90.58\% & 91.87\% \\ \hline
550 & 94.66\% & 88.97\% & 94.30\% & 91.56\% \\ \hline \end{tabular}
\end{table} TABLE V: Proposed ML-based attack on existing anonymous routing for varying number of flits.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**TIR** & **Accuracy** & **Recall** & **Precision** & **F1 Score** \\ \hline
0.001 & 95.32\% & 92.29\% & 93.51\% & 92.89\% \\ \hline
0.005 & 94.72\% & 90.14\% & 93.98\% & 92.02\% \\ \hline
0.01 & 94.64\% & 91.32\% & 92.30\% & 91.81\% \\ \hline
0.05 & 93.86\% & 88.56\% & 92.67\% & 90.56\% \\ \hline \end{tabular}
\end{table} TABLE IV: Accuracy, precision, recall and F1 score of proposed ML-based attack on existing anonymous routing for different traffic injection rates.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Mesh Size** & **Accuracy** & **Recall** & **Precision** & **F1 Score** \\ \hline
4\(\times\)4 & 94.76\% & 91.86\% & 92.63\% & 92.24\% \\ \hline
8\(\times\)8 & 94.64\% & 91.32\% & 92.30\% & 91.81\% \\ \hline
16\(\times\)16 & 92.72\% & 80.28\% & 96.98\% & 87.84\% \\ \hline \end{tabular}
\end{table} TABLE VI: Proposed ML-based attack on existing anonymous routing for different mesh sizes.
the interval between two packets is random, and the next destination of a specific source is random. This level of randomness is not found in real traffic, making flow correlation in real traffic relatively easy. (b) In synthetic traffic, all 64 nodes talk with each other, leading to higher buffering delays and making flow correlation harder. However, buffering delays have a minor impact compared to randomness.
The second technique uses the dataset of a single \(p\) value and a single benchmark pair. Table VIII summarizes the results of the second technique when \(p=85\) across the five benchmark pairs. All benchmarks show good metric values, but we can see a slight reduction in accuracy and recall in the 3rd and 4th rows. Both benchmark pairs include the _barnes_ benchmark, which has the lowest bytes per instruction among all the benchmarks [30]. This results in a sparse inter-flit delay array, making flow correlation relatively harder.
### _Robustness of the Proposed Countermeasure_
We discuss the robustness of our proposed lightweight anonymous routing in two ways. First, we discuss the robustness of our proposed countermeasure (Section IV) against the ML-based attack (Section III) on both synthetic and real traffic. Second, we discuss the robustness of our proposed anonymous routing in terms of preserving anonymity in general.
We evaluate our countermeasure against the ML-based attack in three configurations for synthetic traffic: (i) using chaffing, (ii) using delay, and (iii) using both chaffing and delay to obfuscate traffic. For each of the three configurations, we evaluate the ML-based attack in two scenarios: (1) training with non-obfuscated traffic and testing with obfuscated traffic (Table IX), and (2) training and testing with obfuscated traffic (Table X). In all three configurations, the attack performs poorly in the first scenario (the proposed countermeasure defends very well). This is expected because the attacking DNN has not seen any obfuscated data in the training phase. Even in the second scenario (Table X), we can see a significant reduction in all the metrics. The large drops in recall when using chaffing as the obfuscation technique validate that the proposed countermeasure significantly undermines the attacker's end goals. Adding random delay reduces accuracy and recall by about 3% compared to non-obfuscated traffic in all the traffic distributions, whereas combining chaffing with delay reduces accuracy and recall by about 3% compared to chaffing alone. In other words, combining the two obfuscation techniques did not have a synergistic effect. We recommend chaffing as a good obfuscation configuration, since adding delay provides only a small advantage despite its overhead. Note that the poor performance of added random delay as a countermeasure validates that our proposed attack is robust against the inherent random network delays in the SoC.
When evaluating the performance of the countermeasures using benchmark applications, we consider only chaffing to obfuscate traffic. Furthermore, we only train and test with obfuscated traffic, which guarantees a strong evaluation of the countermeasure. As discussed in Section V-F, we evaluate the countermeasure using two techniques: (i) merged datasets across benchmarks (Table XI) and (ii) per-benchmark datasets with a fixed \(p\) value (Table XII). When we focus on Table XI, we see an overall reduction in metric values compared to the attack without the countermeasure. Even though the accuracy reduction is around 10%, the countermeasure reduces the recall value drastically. This negatively affects the attacker, who misses chances to launch attacks due to the higher \(fn\). When we compare the performance of the countermeasure on real traffic against synthetic traffic (Table X), the countermeasure on synthetic traffic performs relatively better. This is due to the same two reasons mentioned in Section V-F: briefly, the randomness of synthetic traffic and the increased buffering delay because every node communicates.
We evaluate the anonymity of the proposed lightweight anonymous routing in three attack scenarios. The first scenario is when _one of the intermediate routers in the outbound tunnel is malicious_. The malicious router only knows the identities of the preceding and succeeding routers, so the anonymity of the flits traveling through the tunnel is preserved. The second scenario is when _the tunnel endpoint is malicious_. The router then knows the actual destination of the packet but not the source; therefore, with a single packet, the malicious router cannot break anonymity. This scenario is also considered secure in the traditional onion routing threat model [13]. Complex attacks by malicious routers require a considerable number of packets/flits to be collected, which is hard for the following two reasons: (i) our proposed solution changes the outbound tunnel of a particular source frequently, and (ii) since the source and destination have two independent outbound tunnels, it is infeasible to collect and map request/reply packets. The final scenario is when _an intermediate router in a normal routing path is malicious_. This scenario arises when flits use normal routing after emerging from the outbound tunnel. Similar to the previous scenario, the router only knows the true destination, and anonymity cannot be broken using a single packet. In other words, outbound tunnels change frequently, and the source and destination have different tunnels, making it hard to launch complex attacks that break anonymity by collecting packets.
We evaluate the robustness of our approach in terms of deadlock handling. We implemented our model using Garnet 2.0, where the XY routing mechanism is used to guarantee deadlock-free communication. With our countermeasure, the outbound tunnel forces packets to disregard the XY routing protocol, so the introduction of an outbound tunnel could potentially create a deadlock. However, the first step of tunnel creation (Tunnel Initialization) uses the existing XY routing protocol to broadcast TI packets, and the path of the TI packet determines the tunnel shape. Since a TI packet
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**benchmark** & **Accuracy** & **Recall** & **Precision** & **F1 Score** \\ \hline \{fft, fmm\} & 98.29\% & 99.11\% & 94.42\% & 96.71\% \\ \hline \{fmm, lu\} & 99.32\% & 97.63\% & 99.70\% & 98.65\% \\ \hline \{lu, barnes\} & 97.84\% & 91.60\% & 99.76\% & 95.51\% \\ \hline \{barnes, radix\} & 96.14\% & 84.59\% & 99.73\% & 91.54\% \\ \hline \{radix, fft\} & 96.62\% & 97.05\% & 90.66\% & 93.75\% \\ \hline \end{tabular}
\end{table} TABLE VIII: Accuracy, precision, recall and F1 score of our ML-based attack on existing anonymous routing (ARNoC [7]) when p=85 across real benchmarks.
cannot take a Y-to-X turn, any tunnel created on XY routing inherently uses only X-to-Y turns inside the tunnel. Hence, in the data transfer phase, all communication inside and outside the outbound tunnel takes only X-to-Y turns, ensuring deadlock-free communication.
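The turn-model argument can be checked mechanically: a route is deadlock-safe under XY routing iff it never moves in the X dimension after having moved in Y. A small checker (ours, for illustration):

```
def deadlock_free_xy(path):
    """True iff a route (a list of (x, y) hops) takes no Y-to-X turn."""
    moved_y = False
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        if y1 != y0:
            moved_y = True
        elif x1 != x0 and moved_y:
            return False        # Y-to-X turn: can close a deadlock cycle
    return True

assert deadlock_free_xy([(0, 0), (1, 0), (1, 1), (1, 2)])   # X then Y: safe
assert not deadlock_free_xy([(0, 0), (0, 1), (1, 1)])       # Y then X: unsafe
```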
### _Overhead of the Proposed Countermeasure_
Figure 8(a) shows the average packet latency of our proposed lightweight countermeasure compared to ARNoC in the data transmission phase. Obfuscation with chaff flits, the recommended technique from Section V-G, introduces only a 13% performance overhead. When we consider the tunnel creation overhead in Figure 8(b), our approach performs 35.53% better than ARNoC. Most importantly, in our approach tunnel creation can happen in the background; therefore, tunnel creation does not directly affect data transfer performance. Overall, our approach is lightweight compared to ARNoC while delivering anonymity against ML-based attacks.
In addition to its low performance overhead, our lightweight anonymous routing has the inherent advantage of being able to utilize any adaptive routing mechanism supported by the NoC architecture (from the endpoint of the outbound tunnel to the destination), while ARNoC cannot accommodate adaptive routing protocols because it relies on a pre-built tunnel from the source to the destination.
Table XIII compares the area and power overhead of our lightweight countermeasure against ARNoC in an 8\(\times\)8 mesh topology. In this implementation, our approach uses only the chaffing obfuscation. There is only a 1.7% increase in area and a 0.6% increase in power. These overheads are negligible considering the performance improvement and additional security provided by our proposed anonymous routing compared to the state-of-the-art anonymous routing ARNoC [7].
## VI Conclusion
Network-on-Chip (NoC) is a widely used solution for on-chip communication between Intellectual Property (IP) cores in System-on-Chip (SoC) architectures. Anonymity is a critical requirement for designing secure and trustworthy NoCs. In this paper, we made two important contributions. We proposed a machine learning-based attack that uses traffic correlation to break the state-of-the-art anonymous routing for NoC architectures. We developed a lightweight and robust anonymous routing protocol to defend against ML-based attacks. Extensive evaluation using real as well as synthetic traffic demonstrated
\begin{table}
\begin{tabular}{|c||c|c|c|c||c|c|c|c||c|c|c|c|} \hline & \multicolumn{4}{c||}{**Chaffing**} & \multicolumn{4}{c||}{**Delay**} & \multicolumn{4}{c|}{**Chaffing + Delay**} \\ \hline \(p\) & **Acc.** & **Rec.** & **Prec.** & **F1.** & **Acc.** & **Rec.** & **Prec.** & **F1.** & **Acc.** & **Rec.** & **Prec.** & **F1.** \\ \hline
95 & 76.64\% & 33.60\% & 84.67\% & 48.11\% & 94.22\% & 87.31\% & 94.99\% & 90.99\% & 73.80\% & 25.49\% & 80.70\% & 38.75\% \\ \hline
90 & 79.45\% & 43.71\% & 87.39\% & 58.28\% & 93.58\% & 93.42\% & 87.66\% & 90.45\% & 77.95\% & 46.9\% & 78.14\% & 58.69\% \\ \hline
85 & 78.75\% & 38.93\% & 93.16\% & 54.92\% & 90.65\% & 86.83\% & 84.99\% & 85.90\% & 77.06\% & 48.85\% & 74.65\% & 59.05\% \\ \hline
80 & 79.75\% & 74.41\% & 67.58\% & 70.83\% & 87.70\% & 80.32\% & 82.08\% & 81.19\% & 77.56\% & 74.41\% & 64.13\% & 68.89\% \\ \hline \end{tabular}
\end{table} TABLE X: Accuracy, precision, recall, and F1 score of the ML-based attack on the proposed lightweight anonymous routing for different traffic distributions when trained and tested with obfuscated traffic.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**benchmark** & **Accuracy** & **Recall** & **Precision** & **F1 Score** \\ \hline \{fft, fmm\} & 87.64\% & 65.91\% & 80.24\% & 72.37\% \\ \hline \{fmm, lu\} & 90.67\% & 83.03\% & 80.20\% & 81.59\% \\ \hline \{lu, barnes\} & 84.98\% & 57.75\% & 76.19\% & 65.70\% \\ \hline \{barnes, radix\} & 84.24\% & 40.35\% & 97.59\% & 57.10\% \\ \hline \{radix, fft\} & 82.60\% & 37.67\% & 82.66\% & 51.79\% \\ \hline \end{tabular}
\end{table} TABLE XII: Accuracy, precision, recall and F1 score of ML-based attack on proposed lightweight anonymous routing for real benchmarks.
\begin{table}
\begin{tabular}{|c||c|c|c|c||c|c|c|c||c|c|c|c|} \hline & \multicolumn{4}{c||}{**Chaffing**} & \multicolumn{4}{c||}{**Delay**} & \multicolumn{4}{c|}{**Chaffing + Delay**} \\ \hline \(p\) & **Acc.** & **Rec.** & **Prec.** & **F1.** & **Acc.** & **Rec.** & **Prec.** & **F1.** & **Acc.** & **Rec.** & **Prec.** & **F1.** \\ \hline
95 & 66.55\% & 0.3\% & 33.33\% & 0.6\% & 81.32\% & 56.70\% & 81.67\% & 66.93\% & 63.36\% & 14.37\% & 37.18\% & 20.70\% \\ \hline
90 & 66.59\% & 25.68\% & 49.78\% & 33.88\% & 71.73\% & 66.15\% & 56.48\% & 60.94\% & 56.47\% & 42.27\% & 40.45\% & 41.34\% \\ \hline
85 & 61.2\% & 2.6\% & 12.4\% & 4.4\% & 72.57\% & 50.59\% & 60.61\% & 55.15\% & 66.33\% & 40.50\% & 49.39\% & 44.51\% \\ \hline
80 & 72.76\% & 26\% & 77.16\% & 38.89\% & 73.41\% & 34.97\% & 70.37\% & 46.72\% & 60.34\% & 30.72\% & 39.75\% & 34.65\% \\ \hline \end{tabular}
\end{table} TABLE IX: Accuracy, precision, recall, and F1 score of the ML-based attack on the proposed lightweight anonymous routing for different traffic distributions when trained with non-obfuscated traffic and tested with obfuscated traffic.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **ARNoC** & **Our Approach** & **Overhead** \\ \hline Area & 1710665 & 1741257 & + 1.7\% \\ \hline Power(\(mW\)) & 1607.40 & 1617.51 & + 0.6\% \\ \hline \end{tabular}
\end{table} TABLE XIII: Comparison of the area and power overhead between ARNoC and proposed countermeasure in NoC IP.
Fig. 8: Comparison of proposed countermeasure versus ARNoC: (a) average packet latency of data transfer phase and (b) average execution time for tunnel creation phase.
that our ML-based attack can break anonymity with high accuracy (up to 99%) for diverse traffic patterns. The results also reveal that our lightweight countermeasure of obfuscating traffic with chaffing is robust against ML-based attacks with minor hardware overhead. The performance overhead of our proposed countermeasure is significantly less compared to the state-of-the-art anonymous routing protocol for NoC-based SoCs.
## Acknowledgments
This work was partially supported by National Science Foundation (NSF) grant SaTC-1936040.
|
2309.10927 | Memory Systems, the Epistemic Arrow of Time, and the Second Law | The epistemic arrow of time is the fact that our knowledge of the past seems
to be both of a different kind and more detailed than our knowledge of the
future. Just like with the other arrows of time, it has often been speculated
that the epistemic arrow arises due to the second law of thermodynamics.
In this paper we investigate the epistemic arrow of time, using a fully
formal framework. We begin by defining a memory system as any physical system
whose present state can provide information about the state of the external
world at some time other than the present. We then identify two types of memory
systems in our universe, along with an important special case of the first
type, which we distinguish as a third type of memory system.
We show that two of these types of memory system are time-symmetric, able to
provide knowledge about both the past and the future. However, the third type
of memory systems exploits the second law of thermodynamics in all of its
instances we find in our universe. The result is that in our universe, this
type of memory system only ever provides information about the past. Finally,
we argue that human memory is of this third type, completing the argument. Our
analysis is indebted to prior work in Wolpert 1992, but expands and improves
upon this work in several respects. | David H. Wolpert, Jens Kipper | 2023-09-19T20:57:49Z | http://arxiv.org/abs/2309.10927v1 | # Memory Systems, the Epistemic Arrow of Time, and the Second Law
###### Abstract
The epistemic arrow of time is the fact that our knowledge of the past seems to be both of a different kind and more detailed than our knowledge of the future. Just like with the other arrows of time, it has often been speculated that the epistemic arrow arises due to the second law of thermodynamics.
In this paper we investigate the epistemic arrow of time, using a fully formal framework. We begin by defining a memory system as any physical system whose present state can provide information about the state of the external world at some time other than the present. We then identify two types of memory systems in our universe, along with an important special case of the first type, which we distinguish as a third type of memory system.
We show that two of these types of memory system are time-symmetric, able to provide knowledge about both the past and the future. However, the third type of memory systems exploits the second law of thermodynamics in all of its instances we find in our universe. The result is that in our universe, this type of memory system only ever provides information about the past. Finally, we argue that human memory is of this third type, completing the argument. Our analysis is indebted to prior work in Wolpert 1992, but expands and improves upon this work in several respects.
## 1 Introduction
It seems obvious that our knowledge of the past is of a different kind and more detailed than our knowledge of the future. It is far less obvious what explains this so-called 'epistemic arrow' of time. As with the other arrows of time,
the fact that the fundamental physical laws are time-symmetric presents a major obstacle to finding such an explanation. Many philosophers and scientists have suggested explanations that appeal to the (time-asymmetric) second law of thermodynamics, or to some more fundamental facts underlying the second law (Reichenbach, 1956; Grunbaum, 1963; Horwich, 1987; Hawking, 1993; Hartle, 2005; Schulman, 2005; Carroll, 2010; Rovelli, 2018, 2022; Stradis, 2021). David Albert (2000; 2014; 2015; 2023) and Barry Loewer (2007; 2012b; 2012a) have developed one such account that has been particularly influential in recent years.
Our own account is based on a formal distinction between three types of memory systems that occur in the physical universe, where by 'memory system', we mean any kind of physical system whose present state can provide information about the state of the external world at some time other than the present. On the basis of this formalism, we show that physical systems exemplifying either of the first two types can be sources of knowledge about both the past and the future. The epistemic arrow must therefore be grounded in the third type of memory systems. We argue that all memory systems of this type exploit a reduction of state space, which implies that the information they provide can only be of the past. Finally, we argue that human memory is of this third type. Our paper is indebted to the analysis in Wolpert (1992), but expands and improves upon it in several respects.
The paper is structured as follows. In §2, we discuss Albert and Loewer's account. As we show, their explanation of the epistemic arrow relies on the doubtful idea that typically, the systems we have knowledge about had a lower entropy in the past. We suggest that such an explanation should instead be based on the idea that the process of acquiring information involves an increase in entropy.
In §3, we distinguish the three different types of memory systems we find in the physical universe, and present high-level examples of each type. In §4 we introduce our formalism that captures the three different types of memory systems. We show that the third type is a special case of the first type of memory systems. Our investigation of how these memory systems can function reveals that one of them, namely, Type-3 memory systems, relies on the second law and is therefore time-asymmetric. In §5, we first discuss whether our account can capture the (putative) asymmetry of records. We then give reasons for thinking that human memory exemplifies Type-3 memory, which would mean that our account is suitable for explaining the epistemic arrow of time in terms of the second law of thermodynamics. Finally, in §6, we spell out some remaining issues to be addressed by future research.
## 2 Albert and Loewer on the asymmetry of
records
Albert and Loewer's account is part of a highly ambitious project that aims to explain, among other things, all arrows of time. It begins, in essence, with what
in the physics community has been called the argument for the "cosmological origin of the arrow of time" (Davies, 1977). One of its key components is what Albert and Loewer call the "Past Hypothesis", which is the assumption that the entropy of the very early universe was very low. They combine this assumption with the fact that the dynamical micro-physical laws are deterministic and time-symmetric, and with a "probability postulate". The latter corresponds to the standard microcanonical ensemble from statistical physics, which follows from the maximum entropy principle of inference (Jaynes and Bretthorst, 2003), and says that there is uniform probability distribution over the microstates compatible with the Past Hypothesis. Together, these three components determine a probability assignment to all propositions about the history of the universe. Albert (2015) calls this probability assignment the 'Mentaculus'.
Albert and Loewer claim that these three components also explain the "epistemic arrow of time", by which they mean the fact that all records are of the past.1 Intuitive examples of records are impact craters, footsteps on a beach, diary entries, and memory traces in the brain. Albert (2015, ch. 6) calls inference procedures that use dynamical laws to evolve macroscopic information about the present forward or backward 'predictions' and'retrodictions', respectively. He states that records are those inference procedures to other times that aren't predictions or retrodictions. A record is created when a recording device interacts with the external world--Albert calls this interaction a'measurement'. In typical cases, the state of the recording device will then remain stable, which allows us to draw inferences from its current state about the state of the external world at the time of the interaction. Albert and Loewer claim that this inference requires that the recording device was in a particular state--the "ready state"--before the interaction.2
Footnote 1: Many other philosophers have also appealed to an asymmetry of records—e.g., Reichenbach (1956).
Footnote 2: See Wolpert (1992) for earlier work using the same terminology of “predictions” and “retrodictions”, making the same point about the stability of the recording device, using the same examples, and also highlighting the importance of a “ready state”.
It thus appears that to get information from a record, we need to know that the ready state obtained. But this seems to require another measurement, setting up a potential infinite regress. This regress is stopped, according to Albert and Loewer, by the Past Hypothesis, which serves as the universe's "ultimate ready state". By conditioning on it, we can thus acquire knowledge of the past from records.
Of course, people had knowledge from records long before anyone had ever thought of the Past Hypothesis. Moreover, when we observe a record, our backward-chain of remembered measurements terminates much more recently than 13 billion years ago, the time of the initial state of the universe. So how could the Past Hypothesis help us infer that our recording device was in its ready state? As Albert explains (2023, pp. 355-358), the account isn't meant to assume that knowledge from records relies on explicit inferences from the Past Hypothesis. Rather, when we observe a record, the initial low-entropy state of the universe just makes it much more likely that the recording device was
in its ready state before the time of the interaction. He illustrates this with a half-melted block of ice sitting on the floor of a warm room. According to Albert, conditioned on the Past Hypothesis, it is likely that the block of ice was less melted several minutes in the past, and our inferences implicitly rely on this fact. Sean Carroll (2010, p. 40) uses the example of a half-rotten egg to give a very similar account of the role of the Past Hypothesis in inferences from records. He adds that, due to the arrow of entropy, the egg's current state gives us much less information about its future states than about its past states.3
Footnote 3: Notice that the block of ice and the rotting egg are examples of systems whose current state provides information _about its own state_ at a different time, rather than about the external world. If one considers such systems as records, then many records can give information about the future. For example, a gas cloud with sufficient density, mass, etc. can be predicted to form a planet. Further examples of this type are provided by other nonlinear dynamical systems with a point attractor and an associated basin of attraction.
Loewer (2007; 2012a) generalizes this idea. He argues that, given the initial low-entropy state of the universe and the resulting arrow of entropy, information about a system's present state constrains its past states much more than it constrains its future states. The Past Hypothesis effectively imposes a tree structure on the history of the universe, with many more branches leading to the future than to the past. This implies that, typically, observations about the present state of a system give us more information about its past than about its future.
Albert and Loewer's explanation of the epistemic arrow is not fully formal. For instance, they don't provide a formal definition of records,4 or a univocal distinction between the system that contains the record and the system whose state is being recorded. It is nevertheless suggestive and has been highly influential, even though it has also been much criticized (cf., e.g., Winsberg (2004); Frisch (2005, 2007, 2023); Huggett (2023); Earman (2006, pp. 420-422)).
Footnote 4: We will return to the question of how to define records in §5.
Here, we would like to highlight a lacuna in their account that, to our knowledge, hasn't yet been identified. This will help us formulate a general adequacy condition for an explanation of the epistemic arrow.
Albert and Loewer locate the source of the epistemic arrow in the entropy gradient of the objects of our knowledge, i.e., of the systems we have knowledge about. Their explanation thus assumes that at least typically, the entropy of physical systems we encounter is increasing. However, the epistemic asymmetry applies to many systems whose entropy isn't increasing. For instance, we can have much more knowledge about what state a weather system was in five weeks ago than about what state it will be in five weeks from now. One might try to argue that such systems are not typical. But as the following considerations show, this position is untenable.
Since the appearance of the first cognitive systems on our planet, the objects of their information have almost exclusively been physical systems on Earth. Despite our recent efforts to learn more about the Solar System and outer space, this is still very much the case. The Earth system itself has remained far from thermodynamic equilibrium for a very long time. Roughly speaking, this is possible because Earth is an open system that takes in free (relatively low entropy) energy from the sun and radiates away high entropy waste heat. The entirety of the Earth system appears to be entropy-neutral--it has even been argued that its entropy has steadily decreased over the last hundreds of millions of years (Kleidon, 2009a,b). This strongly suggests that typical systems that we have information about don't exhibit an increase in entropy--there should be at least as many such systems whose entropy remains constant or is even decreasing.
At various points, Loewer adds the qualification that the relevant systems must be at least approximately thermally isolated (e.g., Loewer 2007, 2020). It is of course likely that most thermally isolated systems that we have knowledge about evolve towards equilibrium. But it isn't apparent how this could be of help to their explanation of the epistemic arrow, since most of the systems that we have knowledge about aren't even approximately thermally isolated. As we just saw, the Earth system as a whole falls into this category.
We conclude that Albert and Loewer's explanation of the epistemic arrow is at least incomplete. As we saw, a fully adequate explanation must be compatible with the fact that the entropy of many, if not most, of the systems we have knowledge about isn't increasing. Therefore, such an explanation shouldn't appeal to the entropy gradient of the objects of our knowledge. The explanation we develop in what follows satisfies this condition. It is based on the idea that the epistemic arrow is due to the fact that certain processes that create information must involve a global increase in entropy.
Our investigation of the epistemic arrow of time, i.e., of the asymmetry in our knowledge of the past and of the future, doesn't assume that this arrow is constituted by an asymmetry of our knowledge of records. In what follows, we introduce a distinction between three types of memory systems. We then provide fully formal definitions of these three types, showing that they reflect three ways for information about the state of one system at one time to be conveyed to the state of another system at another time.
Importantly, two of them don't yield a temporal asymmetry, and thus, these memory systems do _not_ result in an epistemic arrow. In contrast, another type of memory system we analyze involves a special initialized state (i.e., "ready state"). This state that allows information to be conveyed from one moment to another is created by a process that increases global entropy. This kind of system thus relies on the second law of thermodynamics, just like those considered by Albert and Loewer. However, in this type of system, no assumption is made about the entropy gradient of the system it carries information about. Furthermore, the initialized state, too, needn't have lower entropy than the current state. Indeed, we demonstrate that in common examples of the epistemic arrow, that initialized state has _higher_ entropy than the current state.
## 3 Three types of memory systems
A "memory system", as we understand the term here, is any physical system whose state at the present time, \(t_{0}\), carries information about the state of the
world at time \(t_{1}\neq t_{0}\), where \(t_{1}\) can be either in the future or in the past. By "carry information", we mean that due to the joint probability distribution of the state of the memory at \(t_{0}\) and the state of the world at \(t_{1}\), knowing the state of the memory at \(t_{0}\) provides us with extra information concerning the state of the world at \(t_{1}\), beyond our prior information about the state of the world then.
### Intuitive examples of memory systems
To formulate this idea more carefully, let \(M\) and \(W\) be the state spaces of a memory system and of the external world, respectively. Axiomatically, our probability distributions involve the states of \(M\) and \(W\) and the two times \(t_{0}\) and \(t_{1}\). In addition, below we show that in practice, the states of \(M\) and / or \(W\) at another time \(t_{2}\) may play a role in memory, where either \(t_{0}<t_{1}<t_{2}\) or \(t_{0}>t_{1}>t_{2}\).
Associated with these two systems and three times, we have six jointly distributed random variables, \(W_{0}\), \(W_{1}\), \(W_{2}\), \(M_{0}\), \(M_{1}\), and \(M_{2}\). Our formalizations of the different types of memory system specify different properties of that distribution. In this paper, we often won't discuss how we have come to know (sic) that the joint probability \(P(w_{0},m_{0},w_{1},m_{1},w_{2},m_{2})\) over the six random variables has those properties, or where this distribution comes from, i.e., what physical process may have been involved in its creation.
In what follows, we will often be very loose with the terminology and say that we "observe" the state of a variable at a particular time, as shorthand for saying that we acquire some possibly noisy information about its state. Formally, such an observation involves yet more random variables, statistically coupled with the ones described above. We ignore such variables here.5
Footnote 5: We don’t mean to imply anything more than this by the term “observe”. In particular, we do not imply anything involving the nature of observation in quantum mechanics.
For simplicity, we will speak as though this information we acquire concerns the memory's present state _exactly_, to infinite precision. But it should be understood that in real physical systems, \(M\) and \(W\) will often be elements in coarse-grainings of states in some associated phase spaces. It is straightforward to extend our reasoning to accommodate noisy, imprecise information rather than such coarse-graining.
In some cases, the memory works by combining information about the present state of the memory with information about the present state of the external world. We thus allow for the possibility that in addition to observing the value \(m_{0}\), its user knows that \(w_{0}\) falls within some particular set. We are careful not to stipulate that the user of the memory system "observes" whether that is the case; they may simply assume it. From this information about \(m_{0}\) and possibly \(w_{0}\), we want to draw a probabilistic inference about the state of the external world at another time, \(w_{1}\).
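As a toy numerical rendering of this inference, assuming small finite state spaces and purely illustrative probabilities, the joint distribution can be stored as an array over \((w_{0},m_{0},w_{1})\) and the memory's information about \(w_{1}\) read off as a conditional distribution:

```
import numpy as np

rng = np.random.default_rng(0)
nW, nM = 4, 3
P = rng.random((nW, nM, nW))    # joint over (w0, m0, w1); toy numbers
P /= P.sum()

def posterior_w1(m0, w0_set):
    """P(w1 | M0 = m0, W0 in w0_set)."""
    joint = P[list(w0_set), m0, :].sum(axis=0)  # marginalize allowed w0
    return joint / joint.sum()

print(posterior_w1(m0=1, w0_set={0, 2}))  # the inference varies with m0
```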
Since the memory system's present state should be relevant to the inference we draw, we require that its information about \(w_{1}\) varies depending on the value of \(M_{0}\). Intuitively, when this condition is satisfied, we can infer from
the observed \(m_{0}\) (perhaps in conjunction with \(w_{0}\)) that \(M\) and \(W\) interacted sometime between \(t_{0}\) and \(t_{1}\), such that, in the course of this interaction, \(M\) acquired information about \(w_{1}\) and then stored it until \(t_{0}\).
Broadly put, our taxonomy categorizes memory systems according to the kind of information they rely on. _Type-1_ memory systems involve information based on an inference from the current state of the memory system, i.e., from the value \(m_{0}\), alone. _Type-2_ memory systems also involve information concerning \(m_{0}\), but are only guaranteed to work when some additional conditions concerning \(w_{0}\) are also met. Finally, _Type-3_ memory systems involve information based on an inference from \(m_{0}\) and \(m_{1}\). They are a special case of Type-1 memory systems. In fact, they are the only examples of Type-1 memory systems we know of that in the real world can accurately provide a lot of information about \(w_{1}\), which is why we assign them their own type. (Below we will not discuss any examples of Type-1 memory systems other than those that are actually Type-3.) These types of memory systems seem to capture many of the instances of memory considered in the literature, sometimes under the name of "records". In particular, all instances of memory we know of that involve the second law of thermodynamics are Type-3 memory systems.
These three types of memory systems are closely related to three types of memory considered in Wolpert (1992). Before we formally define them, in the next subsection we present some intuitive examples of Type-2 and Type-3 memory systems, to compare time-symmetric memory systems (like in computers) with time-asymmetric ones, which in practice rely on the second law and so only concern the past (like in footprints on a beach).
### How memory systems work
An example of a Type-2 memory system is memory in a computer. To keep our discussion independent of specific hardware implementations, we focus on abstract memory in abstract computers. Let \(M\) be the contents of a specific piece of Random Access Memory (RAM) that is used in a program of such a computer. The rest of the abstract computer--including the rest of its RAM outside of \(M\), the program it is running, etc.--is \(W\). In such a setup, _only_ observing the value \(m_{0}\) doesn't give us any information about \(w_{1}\), i.e., the state of the rest of the computer at time \(t_{1}\). The reason why a piece of RAM can nevertheless serve as a memory is that the entire system \(M\times W\) consisting of the memory and the rest of the computer evolves deterministically in time. This means that we can infer something about the value of \(w_{1}\) from an observation of \(m_{0}\), if we also assume (or know, via prior knowledge) a salient feature of \(w_{0}\). Specifically, if we know that a particular program is running on the computer at \(t_{0}\), then the current value of RAM, \(m_{0}\), can tell us the contents of some external RAM at \(t_{1}\neq t_{0}\).
It is most natural to think of such computer memory as providing information about the computer's past states. However, it is possible to evolve the system \(M\times W\) forward in time as well as backwards, which means that Type-2 memory can be of the future as well as the past. Notice as well that our observation of the current state of the memory, \(m_{0}\), can vary arbitrarily--varying that state varies what we infer concerning \(w_{1}\), and every value of \(m_{0}\) provides such an inference. On the other hand, we don't consider effects on \(w_{1}\) of varying the "salient feature" concerning the state of the world external to the memory at time \(t_{0}\), i.e., of varying the program running on the computer. Instead, our inference concerning the effects of varying \(m_{0}\) is preconditioned on \(w_{0}\) containing a particular program, i.e., on \(w_{0}\) falling within a particular subset of \(W\).6
Footnote 6: (Wolpert, 1992, 749–762) discusses this kind of memory system in much more detail.
If \(W\) is large and not fully observable, as is typical in real-life situations, then it is usually impossible to determine the precise value \(w_{1}\) by deterministic evolution of \(M\times W\). This might suggest that Type-2 memory systems are neither very common nor very useful. However, it is compatible with our understanding of Type-2 memory systems that the inference about \(w_{1}\) is stochastic and based on a partial observation of \(w_{0}\)--just like with Type-1 and Type-3 memory systems. (See our formal definition below for the details.) If one considers these kinds of cases as well, it becomes plausible that Type-2 memory systems are a very common source of knowledge of the future. For instance, predictions of the future state of the Solar System or of the climate on Earth based on current observations fall into this category.
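To make the time-symmetry of Type-2 memory systems concrete, the following minimal sketch (our own toy model, not drawn from any specific computer architecture; the state-space size and the update rule are arbitrary choices) implements a deterministic, invertible joint dynamics on a finite state space. Observing \(m_{0}\) alone says nothing about \(w_{1}\), but combined with an assumed constraint pinning down \(w_{0}\) (the "program"), the joint state can be evolved exactly toward either the past or the future.

```python
n = 97  # arbitrary toy state-space size

def step(m, w):
    """One forward tick of the deterministic joint dynamics of (M, W)."""
    return (m + w) % n, (w + 1) % n

def step_inverse(m, w):
    """Exact inverse of step(); it exists because the dynamics are invertible."""
    w_prev = (w - 1) % n
    return (m - w_prev) % n, w_prev

def infer_w(m0, w0, ticks):
    """Infer w at a time `ticks` steps away (negative = past) from (m0, w0)."""
    m, w = m0, w0
    advance = step if ticks >= 0 else step_inverse
    for _ in range(abs(ticks)):
        m, w = advance(m, w)
    return w

# The same observation (m0, plus the constraint fixing w0) supports inference
# about the future AND the past: a time-symmetric memory.
print(infer_w(12, 40, +5), infer_w(12, 40, -5))
```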
Examples of Type-3 memory are footprints on a beach, impact craters, photographic film, etc. Consider the case of photographic film. Before exposure, photographic film is in a predetermined stable state, which we call its 'initialized state'. Since this state can be distinguished from any state that the film can be in after exposure, we can infer from the latter, exposed state that the film interacted in a particular way with the external world. The exposed film can thus provide us with detailed information about a past state of \(W\). Since the film's state remains stable after the exposure, this state of \(W\) can lie quite far in the past.
Knowledge from a (non-digital) photograph thus relies on an inference from both the present exposed state of the film, \(m_{0}\), and its initialized state, \(m_{1}\). This explains why photographic films are Type-3 memory systems. Since \(m_{1}\) can't be directly observed at time \(t_{0}\), the question arises how we can come to have knowledge of it. Below, we argue that this knowledge has to be based on the occurrence of a process that takes \(M\) to a known state. Crucially, as we argue, this process of initialization must increase global entropy, which implies that \(m_{1}\) is a past state. Since our argument applies to all Type-3 memory systems, this means that systems of this type can only provide information about the past.
In what follows, we develop formal definitions of the three types of memory systems just sketched, and investigate them in more detail. Our definitions of Type-1, Type-2, and Type-3 memory systems provide formal elaborations of Wolpert's (1992) "b-type", "c-type", and "p-type" systems, respectively.
## 4 Formal definitions of memory systems
As described above, we have six jointly distributed random variables indexed by time, \(W_{0},W_{1},W_{2},M_{0},M_{1},M_{2}\), where the three associated times are index-ordered, i.e., either \(t_{0}<t_{1}<t_{2}\) or \(t_{0}>t_{1}>t_{2}\). (We won't actually make use of \(W_{2}\) in what follows, except for providing some intuition.) We are interested in forming a statistical inference about \(w_{1}\) based on the value \(m_{0}\), perhaps in combination with a constraint on the possible value of \(w_{0}\). We require that the inference we draw varies depending on that value of \(m_{0}\). Intuitively, whenever this is the case, we can conclude from the observed value of \(m_{0}\) (perhaps in conjunction with an assumed constraint on \(w_{0}\)) that \(M\) and \(W\) interacted sometime between \(t_{0}\) and \(t_{1}\), with the interaction transferring some information about the state \(w_{1}\) to the memory system \(M\), where it resides until time \(t_{0}\).
We can formalize the foregoing with what we call **memory systems**. We consider three types of memory systems, which differ from one another depending on whether the memory is based on the value \(m_{0}\), on the value \(w_{0}\), or on the value \(m_{0}\) combined with some knowledge about how the laws of physics arise in the joint dynamics of \(M\times W\).
In the rest of this paper, for simplicity we consider the case where all state spaces are countable, e.g., due to coarse-graining. The extension to uncountable spaces is straightforward. In addition, we write the indicator function as \(\delta(.)\). So for any event \(A\) in the implicit underlying probability measure space, \(\delta(A)\) equals \(1/0\) depending on whether \(A\) is true/false.
In the next subsection, Section 4.1, we begin by introducing a variant of some standard information-theoretic definitions. These will play a central role in our fully formal definitions of those three types of memory systems, which we present in Section 4.2.
### Restricted mutual information
In general, whether the state \(m_{0}\) provides a memory about the state \(w_{1}\) will depend on certain conditions concerning the joint value \((m_{0},w_{0})\) being met. Accordingly, our definitions will involve statements of the form, "If condition \(\mathcal{C}\) concerning \((m_{0},w_{0})\) is met, then the following mutual information will be high". We will not model how the user of the memory system does (or doesn't) come to know whether condition \(\mathcal{C}\) is met. Often it will be background knowledge, over and beyond the background knowledge that determines the joint distribution \(P(m_{0},w_{0},m_{1},w_{1},m_{2},w_{2})\).
To illustrate this, consider again the example of a computer memory described above. In that example, \(M\) is (the state of) part of a computer's RAM, and \(W\) is (the state of) the rest of the computer, including in particular the rest of the RAM. \(P(.)\) depends on the dynamics of the entire computer, as usual. In this example, condition \(\mathcal{C}\) is the knowledge that some specific program is currently executing in \(W\), the rest of the computer outside of the part of the RAM constituting \(M\). It is the precise form of that program which, combined with the current state of the part of the RAM constituting \(M\), provides information
concerning the state of the rest of the RAM at some other time. Note that in this example the constraint doesn't specify \(w_{0}\)_in toto_; many degrees of freedom of the computer are free to vary.
Intuitively, knowledge that \(\mathcal{C}\) holds is a second, different kind of "observation", in addition to the observation of the precise current state of the computer memory in question. The difference between the two types of observations is that we will be considering the effect on what we can infer about \(w_{1}\) by varying over the states \(m_{0}\), while we will not consider varying over whether \(\mathcal{C}\) holds. Again returning to the example of a computer, we distinguish the observation of the part of the RAM that comprises \(M\) from the "observation" of what program is running on the rest of the computer. We are interested in how varying the former leads to different conclusions concerning the state of the external RAM at some other time. In contrast, we are not concerned with the effects of varying the program.
To formalize this distinction, for any jointly distributed pair of random variables \(A,B\) taking values \(a,b\) respectively, let \(\mathcal{C}\) be some set of joint values \((a,b)\). Define \(C\) to be the indicator function specifying whether \((a,b)\in\mathcal{C}\). So \(C\) is a \(0/1\)-valued random variable, jointly distributed with our other random variables. Indicate the joint distribution as \(P(a,b,c)\), where \(c\) is the value of \(C\). Then we can define the random variable,
\[I_{c}(A;B):=-\sum_{a,b}P(a,b|c)\left[\ln P(a|c)-\ln P(a|b,c)\right] \tag{1}\]
Intuitively, \(I_{c}(A;B)\) is the value of the mutual information between \(A\) and \(B\), evaluated only over those \((a,b)\) pairs where condition \(\mathcal{C}\) does / doesn't hold, as specified by the value of \(c\). Note that \(I_{c}(A;B)\) isn't the same as the mutual information between \(A\) and \(B\) conditioned on \(c\),
\[I(A;B|C)=-\sum_{a,b,c}P(c)P(a,b|c)\left[\ln P(a|c)-\ln P(a|b,c)\right] \tag{2}\]
Indeed, \(I(A;B|C)\) is the expectation under \(P(c)\) of \(I_{c}(A;B)\).
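As a quick numerical check of these definitions, the following sketch (ours; the joint table is random and purely illustrative) computes \(I_{c}(A;B)\) of Eq. (1) directly from a joint distribution \(P(a,b,c)\), and verifies that \(I(A;B|C)\) of Eq. (2) equals its expectation under \(P(c)\).

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((3, 4, 2))           # unnormalized joint table over (a, b, c)
P /= P.sum()

def I_restricted(P, c):
    """I_c(A;B) of Eq. (1): mutual information evaluated on the slice C = c."""
    Pab = P[:, :, c] / P[:, :, c].sum()    # P(a, b | c)
    Pa = Pab.sum(axis=1, keepdims=True)    # P(a | c)
    Pb = Pab.sum(axis=0, keepdims=True)    # P(b | c)
    return float((Pab * np.log(Pab / (Pa * Pb))).sum())

Pc = P.sum(axis=(0, 1))                               # P(c)
I_cond = sum(Pc[c] * I_restricted(P, c) for c in range(2))
print([round(I_restricted(P, c), 6) for c in range(2)], round(I_cond, 6))
```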
We can illustrate this definition by returning to the example where \(M\) is a part of a computer, while the program running in the computer is stored in some other part of the RAM which is (part of) \(W\). In this example, \(c=1\) iff the joint state of a RAM in a computer and the program stored in the rest of the computer fulfills some condition.
We will refer to \(I_{c}(A;B)\) for \(c=1\) as the (\(\mathcal{C}\)-)**restricted** mutual information between \(A\) and \(B\). We will write it as \(I_{\mathcal{C}}(A;B)\), with the value \(c=1\) implicit.
Memory systems are defined in terms of _sufficient_ conditions for information concerning the external world at one time to be conveyed to the memory system at another time, and we make no claims about _necessary and sufficient_ conditions. For this reason, in this paper we are interested in restricted mutual information rather than conditional mutual information, with \(C=1\) for different choices of \(\mathcal{C}\) being the sufficient conditions.
As an aside, note that we can define variants of entropy and conditional entropy that are analogous to \(I_{c}(A;B)\):
\[H_{c}(A) :=-\sum_{a}P(a|c)\ln P(a|c) \tag{3}\] \[H_{c}(A|B) :=-\sum_{a,b}P(a,b|c)\ln P(a|b,c) \tag{4}\]
where, as before, \(c\) is the value of the 0/1-valued random variable \(C\) specifying whether condition \(\mathcal{C}\) holds. For any such random variable \(C\) and either value \(c\) of that random variable,
\[I_{c}(A;B)=H_{c}(A)-H_{c}(A|B) \tag{5}\]
Paralleling our convention for restricted mutual information, we will sometimes write the two types of restricted entropy evaluated for \(c=1\) as \(H_{\mathcal{C}}(A)\) and \(H_{\mathcal{C}}(A|B)\), respectively. So in particular,
\[I_{\mathcal{C}}(A;B)=H_{\mathcal{C}}(A)-H_{\mathcal{C}}(A|B) \tag{6}\]
in direct analogy to the relation among (non-restricted) entropy, conditional entropy, and mutual information.
As a point of notation, we will often write something like "\(a\in\mathcal{C}\) " inside a probability distribution as shorthand for the event that the value of the associated random variable \(C=1\). Similarly, we will write \(I_{a\in\mathcal{C}}(\ldots)\) as shorthand for a \(\mathcal{C}\)-restricted mutual information where the variable \(a\) lies in the set \(\mathcal{C}\). Furthermore, let \(d\in D\) be some random variable. Rather than write "for all \((a,b)\in\mathcal{C},P(d\,|\,a,b)\) obeys..." it will be convenient to write "\(P_{(a,b)\in\mathcal{C}}(d\,|\,a,b)\) obeys...".
### The three types of memory systems
**Definition 1:** A **Type-1** memory is any stochastic process over the space \(M\times W\) where there is some set \(M^{*}\) such that \(I_{m_{0}\in M^{*}}(W_{1};M_{0})\) is large.
**Definition 2:** A **Type-2** memory is any stochastic process over the space \(M\times W\) where there is some set \(W^{*}\) such that \(I_{w_{0}\in W^{*}}(W_{1};M_{0})\) is large.
**Definition 3:** A **Type-3** memory is any stochastic process over the space \(M\times W\) where:
1. There is an \(m^{\dagger}\in M\) and a set \(M^{*}\) such that \(I_{m_{1}=m^{\dagger},m_{0}\in M^{*}}(W_{1};M_{0})\) is large.
2. There is a set \(M^{\prime}\subseteq M\) such that for all \(m_{0}\in M^{*}\):
    1. \(P(m_{2}\in M^{\prime}\,|\,m_{0})\) is close to 1.
    2. For all \(m_{2}\in M^{\prime}\), \[P(m_{1}\,|\,m_{2},m_{0})\simeq\delta(m_{1},m^{\dagger}) \tag{7}\]
    3. For all \(m_{2}\in M^{\prime}\), \[P(w_{1}\,|\,m_{0},m_{1}=m^{\dagger},m_{2})=P(w_{1}\,|\,m_{0},m_{1}=m^{\dagger}) \tag{8}\]

These conditions suffice to establish the following lemma, whose four parts are used repeatedly below.

**Lemma 1:** In any Type-3 memory system, for all \(m_{0}\in M^{*}\):
1. \(P(w_{1}\,|\,m_{0})\simeq P(w_{1}\,|\,m_{0},m_{1}=m^{\dagger})\)
2. For any \(m_{1}\), \[P(m_{1}|m_{0}\in M^{*})\simeq\delta(m_{1},m^{\dagger}) \tag{9}\]
3. For any \(m_{0}\), \[P(m_{0}|m_{0}\in M^{*})\simeq P(m_{0}|m_{1}=m^{\dagger},m_{0}\in M^{*}) \tag{10}\]
4. For any \(w_{1}\), \[P(w_{1}\,|\,m_{0}\in M^{*})\simeq P(w_{1}\,|\,m_{1}=m^{\dagger},m_{0}\in M^{*}) \tag{11}\]
**Proof:** For any \(m_{0}\in M^{*}\) in a Type-3 memory, we can expand
\[P(w_{1}\,|\,m_{0}) =\sum_{m_{1},m_{2}}P(m_{2}\,|\,m_{0})P(m_{1}\,|\,m_{2},m_{0})P(w_{1}\,|\,m_{0},m_{1},m_{2}) \tag{12}\] \[\simeq\sum_{m_{1},m_{2}}\frac{P(m_{2}\,|\,m_{0})\delta(m_{2}\in M^{\prime})}{\sum_{m_{2}}P(m_{2}\,|\,m_{0})\delta(m_{2}\in M^{\prime})}P(m_{1}\,|\,m_{2},m_{0})P(w_{1}\,|\,m_{0},m_{1},m_{2})\] (13) \[=\sum_{m_{1},m_{2}}\frac{P(m_{2}\,|\,m_{0})}{\sum_{m_{2}}P(m_{2}\,|\,m_{0})\delta(m_{2}\in M^{\prime})}\delta(m_{2}\in M^{\prime})P(m_{1}\,|\,m_{2},m_{0})P(w_{1}\,|\,m_{0},m_{1},m_{2})\] (14) \[\simeq\sum_{m_{1},m_{2}}\frac{P(m_{2}\,|\,m_{0})}{\sum_{m_{2}}P(m_{2}\,|\,m_{0})\delta(m_{2}\in M^{\prime})}\delta(m_{2}\in M^{\prime})\delta(m_{1},m^{\dagger})P(w_{1}\,|\,m_{0},m_{1},m_{2})\] (15) \[\simeq\sum_{m_{2}}P(m_{2}\,|\,m_{0})P(w_{1}\,|\,m_{0},m_{1}=m^{\dagger},m_{2})\] (16) \[=\sum_{m_{2}}P(m_{2}\,|\,m_{0})P(w_{1}\,|\,m_{0},m_{1}=m^{\dagger})\] (17) \[=P(w_{1}\,|\,m_{0},m_{1}=m^{\dagger}) \tag{18}\]
where the second line uses Item 2a of the definition of Type-3 memory systems, the fourth line uses Item 2b, and the sixth line uses Item 2c. This establishes Lemma 1(1).
Next, expand
\[P(m_{1}|m_{0}\in M^{*}) =\sum_{m_{2}}P(m_{1}|m_{0}\in M^{*},m_{2})P(m_{2}|m_{0}\in M^{*}) \tag{19}\] \[\simeq\sum_{m_{2}}P(m_{1}|m_{0}\in M^{*},m_{2})P(m_{2}|m_{0}\in M^{*})\delta(m_{2}\in M^{\prime})\] (20) \[\simeq\sum_{m_{2}}\delta(m_{1},m^{\dagger})P(m_{2}|m_{0}\in M^{*})\delta(m_{2}\in M^{\prime})\] (21) \[\simeq\delta(m_{1},m^{\dagger}) \tag{22}\]
where the second line uses Item 2a of the definition of Type-3 memory systems, and the third line uses Item 2b. This establishes Lemma 1(2).
Next, use Lemma 1(2) to expand
\[P(m_{0}|m_{0}\in M^{*}) =\sum_{m_{1}}P(m_{0}|m_{0}\in M^{*},m_{1})P(m_{1}|m_{0}\in M^{*}) \tag{23}\] \[\simeq\sum_{m_{1}}P(m_{0}|m_{0}\in M^{*},m_{1})\delta(m_{1},m^{ \dagger})\] (24) \[=P(m_{0}|m_{0}\in M^{*},m_{1}=m^{\dagger}) \tag{25}\]
This establishes Lemma 1(3).
Finally, apply \(\sum_{m_{0}}P(m_{0}|m_{0}\in M^{*})\) to both sides of Lemma 1(1), and then use Eq. (10) to replace \(P(m_{0}|m_{0}\in M^{*})\) in the right-hand sum. This establishes Lemma 1(4).
**QED**
We can use Lemma 1 to derive the following result, and thereby prove that systems obeying the four properties of Type-3 memory systems are in fact a special case of Type-1 memory systems, as claimed above:
**Theorem 1:**\(I_{m_{0}\in M^{*}}(W_{1};M_{0})\) is large in any Type-3 memory system.
**Proof:** Using Lemma 1(1) twice allows us to expand
\[I_{m_{0}\in M^{*}}(W_{1};M_{0})=-\sum_{m_{0},w_{1}}P(m_{0},w_{1}|m_{0}\in M^{*})\bigg{[}\ln P(w_{1}|m_{0}\in M^{*})-\ln P(w_{1}|m_{0},m_{0}\in M^{*})\bigg{]} \tag{26}\]
\[\simeq\ -\!\!\sum_{m_{0},w_{1}}P(m_{0},w_{1}|m_{0}\in M^{*})\bigg{[} \ln P(w_{1}|m_{0}\in M^{*})-\ln P(w_{1}|m_{0},m_{1}=m^{\dagger},m_{0}\in M^{*}) \bigg{]} \tag{27}\] \[\simeq\ -\!\!\sum_{m_{0},w_{1}}P(m_{0}|m_{0}\in M^{*})P(w_{1}|m_{0},m_{1 }=m^{\dagger},m_{0}\in M^{*})\] \[\times\bigg{[}\ln P(w_{1}|m_{0}\in M^{*})-\ln P(w_{1}|m_{0},m_{1 }=m^{\dagger},m_{0}\in M^{*})\bigg{]} \tag{28}\]
Next, we can use Lemma 1(3) and then Lemma 1(4) to approximate Eq. (28) as
\[I_{m_{0}\in M^{*}}(W_{1};M_{0})\simeq-\sum_{m_{0},w_{1}}P(m_{0}|m_{1}=m^{\dagger},m_{0}\in M^{*})P(w_{1}|m_{0},m_{1}=m^{\dagger},m_{0}\in M^{*})\] \[\times\bigg{[}\ln P(w_{1}|m_{0}\in M^{*})-\ln P(w_{1}|m_{0},m_{1}=m^{\dagger},m_{0}\in M^{*})\bigg{]} \tag{29}\] \[\simeq-\sum_{m_{0},w_{1}}P(m_{0}|m_{0}\in M^{*},m_{1}=m^{\dagger})P(w_{1}|m_{0},m_{1}=m^{\dagger},m_{0}\in M^{*})\] \[\times\bigg{[}\ln P(w_{1}|m_{1}=m^{\dagger},m_{0}\in M^{*})-\ln P(w_{1}|m_{0},m_{1}=m^{\dagger},m_{0}\in M^{*})\bigg{]}\] (30) \[=-\sum_{m_{0},w_{1}}P(m_{0},w_{1}|m_{1}=m^{\dagger},m_{0}\in M^{*})\] \[\times\bigg{[}\ln P(w_{1}|m_{1}=m^{\dagger},m_{0}\in M^{*})-\ln P(w_{1}|m_{0},m_{1}=m^{\dagger},m_{0}\in M^{*})\bigg{]}\] (31) \[=I_{m_{1}=m^{\dagger},m_{0}\in M^{*}}(W_{1};M_{0}) \tag{32}\]
Finally, plugging in Item 1 of the definition of Type-3 memory systems, we conclude that \(I_{m_{0}\in M^{*}}(W_{1};M_{0})\) is large.
**QED**
Thm. 1 establishes that in a Type-3 memory system, so long as \(m_{0}\in M^{*}\), the precise state \(m_{0}\) is informative about the state \(w_{1}\). So whenever that condition is met, the current state of the memory system \(M\) is a _memory_ of \(w_{1}\), the state of the external world at \(t_{1}\), in the sense described in preceding sections.
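The following sketch (our own toy construction; every set, state, and probability below is an illustrative choice, not something fixed by the definitions) builds a small joint distribution with the Type-3 structure, including a near-certain initialization to \(m^{\dagger}\), a recording interaction, and a rare "unusual" state outside \(M^{\prime}\), and checks numerically that \(I_{m_{0}\in M^{*}}(W_{1};M_{0})\) comes out close to its maximum \(\ln|W|\), as Theorem 1 asserts.

```python
import numpy as np

W, M = 4, 6                      # toy sizes: |W| world states, memory states 0..5
m_dag, M_star, M_prime = 0, [1, 2, 3, 4], {0, 1, 2, 3, 4}

P = np.zeros((W, M, M, M))       # joint P(w1, m2, m1, m0)
for w1 in range(W):
    for m2 in range(M):
        p_m2 = 0.99 / 5 if m2 in M_prime else 0.01      # unusual m2 is rare
        for m1 in range(M):
            if m2 in M_prime:                           # initialization step
                p_m1 = 0.999 if m1 == m_dag else 0.001 / (M - 1)
            else:
                p_m1 = 1.0 / M
            for m0 in range(M):
                if m1 == m_dag:                         # recording interaction
                    p_m0 = 0.99 if m0 == w1 + 1 else 0.01 / (M - 1)
                else:
                    p_m0 = 1.0 / M
                P[w1, m2, m1, m0] = (1.0 / W) * p_m2 * p_m1 * p_m0

# Restrict to m0 in M*, marginalize out m1 and m2, and compute I(W1; M0) on
# that slice, i.e. Eq. (1) with the condition "m0 in M*".
Q = P[:, :, :, M_star].sum(axis=(1, 2))   # unnormalized P(w1, m0 | m0 in M*)
Q /= Q.sum()
Pw = Q.sum(axis=1, keepdims=True)
Pm = Q.sum(axis=0, keepdims=True)
I = float((Q * np.log(Q / (Pw * Pm))).sum())
print(f"I = {I:.3f} nats, maximum ln|W| = {np.log(W):.3f}")
```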
### Illustrations of our formal definitions
In this subsection we illustrate real-world examples of Type-2 and Type-3 memory systems, to compare the formal definitions of time-symmetric and time-asymmetric memory systems. We can illustrate the definition of Type-2 memory systems using the above example of a computer memory. Recall that in that example, \(M\) is one part of the RAM of the computer, while \(W\) is the rest of the RAM, including in particular the part of the RAM that contains the program currently running on the computer. More precisely, write the state space of the computer as \(Z=(M,Z_{2},Z_{3})\), where \(z_{2}\in Z_{2}\) specifies the particular program currently running (i.e., a particular interval of the coding segment of the computer), _except_ for one variable in that program, namely \(m\in M\). \(Z_{3}\) is then the rest of the RAM and other variables in the computer whose value isn't involved in specifying the program.
So in this computer memory example, \(W\) is \((Z_{2},Z_{3})\). However, it is convenient to parameterize elements of \(W\) by their value of \(Z_{2}\), coarse-graining over
possible values of \(Z_{3}\). In particular, \(W^{*}\) is all states of \((Z_{2},Z_{3})\) where \(z_{2}\) contains a particular program, one that allows inference from the current state of the memory, \(m_{0}\), about the past and/or future of the variable \(m\). Note that such inference relies on the fact that the typical values of \(z_{3}\) have no effect on the dynamics of \((m,z_{2})\).
More generally, in many Type-2 memory systems, \(M\times W\) is a closed system, isolated from the rest of the physical universe during the interval \([t_{0},t_{1}]\). In such a case, since the laws of physics are deterministic and invertible in any closed system, the joint dynamics of \(M\times W\) is deterministic during \([t_{0},t_{1}]\). Type-2 memory systems with this feature can result in almost perfect memory, as described in Section 3.2.
We can illustrate the different parts of the definition of Type-3 memory systems with the example of a line of footprints across a beach. In this example, \(M\) is the set of all versions of the _pattern_ on the surface of the beach--smoothed, with a single line of footprints, churned by many people walking across it, etc. \(M^{*}\) is the set of all versions of the (pattern on the surface of a) beach that are completely smooth, having been swept by ocean waves during a high tide, with the possible exception of a very clearly defined line of footprints. \(M^{\prime}\) is all versions of the beach that are not in some unusual state that would prevent the beach from being swept smooth. \(m^{\dagger}\), the "initialized state", is the beach right after it has been smoothed by ocean waves.7 In contrast, \(W\) is the set of all other systems on the surface of the Earth that could conceivably interact with the surface of the beach some time in the interval between \(t_{0}\) and \(t_{2}\).
Footnote 7: N.b., strictly speaking, \(m^{\dagger}\) isn’t a single state, but a set of very similar states. To simplify the exposition, we will often treat a set of very similar states as though they were a single state, as was also done in the example above of a computer memory.
Item 1 reflects the fact that if we know both that the beach surface was smooth at \(t_{1}\) and that it currently is smooth except for a single line of footprints, then we can conclude that a person must have walked across the beach some time between \(t_{1}\) and \(t_{0}\), with the precise pattern of those footprints providing information about that walk.
Item 2a of the definition of Type-3 memory systems then tells us that, so long as the current pattern on the beach is a single line of footprints, we have no reason to suppose that the surface of the beach was in some unusual state that could not be wiped smooth just before the most recent high tide.
Item 2b of the definition of Type-3 memory is enforced by the second law of thermodynamics. More precisely, the collapsing of the state space of \(M\) described in Item 2b involves coupling \(M\) with some third system, \(K\). The second law drives an irreversible process that increases total entropy in \(M\times K\), while at the same time collapsing \(M\) from the subset \(m_{2}\in M^{\prime}\) down to the precise value \(m_{1}=m^{\dagger}\).8
Footnote 8: This is related to what was called “external initialization” in Wolpert (1992).
Concretely, a beach has just been initialized as \(m^{\dagger}\) when it has just been smoothed by the ocean waves driven by the tide. \(K\) is those ocean waves, lapping the beach during this re-initialization of the state of the beach. Projected down to the states of the beach, that smoothing of the beach by ocean waves is a
non-invertible process, driven by the second law. This reliance on the second law of course is precisely why this example of a Type-3 memory system is time-asymmetric.9
Footnote 9: As noted above, Item 2c is assumed simply for expository convenience, and clearly holds for this example of a beach.
Note that just like with Type-2 memory systems, with Type-3 memory systems there is an implicit assumption that \(W\) is a minuscule portion of the full physical universe.10 Furthermore, it is implicitly assumed that the dynamics of those degrees of freedom of \(W\) we are concerned with are effectively isolated from that rest of the universe (aside from the possible interaction with a system \(K\)). This assumption implies, for instance, that the sand on the beach was not manipulated by powerful aliens to make it appear as though people had walked over a smooth beach.
Footnote 10: More precisely, we assume that the probability that variables in the physical universe that lie outside of \(W\) are in a state that would cause them to interfere with our inference is effectively 0.
Note also that the fact that the distribution over \(m\) at \(t_{1}\), the end of the initialization process, is (almost) a delta function about \(m^{\dagger}\) means that the distribution over \(M\) at that time, when it's in its initialized state, has _low entropy_. It is the distribution over the joint state, \(M\times K\), whose entropy increases in the initialization of \(M\).
A flash drive is another example of Type-3 memory that provides an even more graphic illustration of how the initialized, ready state of \(M\) can have low entropy. Here, \(M=(Z_{1},Z_{2})\), where \(Z_{1}\) is the contents of the flash drive's binary memory, and \(Z_{2}\) is other attributes of the physical flash drive, in particular whether it has physical damage (e.g., puncture holes in the flash drive's casing). \(M^{*}=M^{\prime}\) is all joint states in \((Z_{1},Z_{2})\) where (\(Z_{2}\) has a value indicating that) the flash drive is undamaged. \(m^{\dagger}\) is the "wiped clean", all-0's joint state of the flash drive's entire memory, i.e., of \(Z_{1}\).
The important thing to note is that this wiped-clean state where the bits are all 0's with probability 1 has _minimal_ entropy. It is produced by coupling the flash drive with an external, electronic initializing system, \(K\), in a "wiping clean" process of the contents of the flash drive. That initialization process relies on the second law of thermodynamics to increase the joint entropy of the flash drive _and the electronic initializing system_. So just like the beach was wiped smooth by the action of waves during a high tide, which increased the joint entropy of the waves and the beach while reducing the marginal entropy of just the beach, the flash drive was wiped clean by action of the electronic initializing system, which increased the joint entropy of the initializing system and the flash drive's bits while reducing the marginal entropy of just the flash drive's bits.
As an alternative, we could reformulate these examples of Type-3 memory systems not to involve an external system \(K\). We would do this by "folding \(K\) in" to the definition of \(M\). In the example of a beach surface memory system, this would mean redefining \(M\) to be the joint state of pattern on the surface of the beach _and the precise physical state of the ocean lapping that beach_.
Finally, we note that it is straightforward to formalize other examples of memory systems considered in the literature (in particular, (Wolpert, 1992)) as Type-3 memory systems. To illustrate this, consider the example of an image on a chemical photographic film in an instant camera. \(M\) is the possible patterns on the surface of the film; \(M^{*}\) is all such patterns aside from those that indicate the camera holding the film was incorrectly exposed to the outside world, e.g., resulting in a fogged image on the surface of the film. \(m^{\dagger}\) is the initialized state of the film, with no image, before exposure of any sort. It has low entropy, and is formed in an entropy-increasing chemical initialization process that involves some external set of chemicals, \(K\). \(W\) is an external photon field, which will result in an image being made some time between \(t_{1}\) and \(t_{0}\) if the camera exposes the film correctly, i.e., if \(m_{0}\in M^{*}\).
### Discussion of our formal definitions
In this subsection we briefly discuss some aspects of the formal definitions of the various types of memory system.
First, note that while there is no need to do so here, we could replace phrases like "\(I_{m_{0}\in M^{*}}(W_{1};M_{0})\) is large" with more formal expressions. For example, suppose that both \(|M^{*}|\) and \(|W|\), the number of states in \(M^{*}\) and in \(W\), respectively, are finite. Then we could replace that phrase by saying that \(I_{m_{0}\in M^{*}}(W_{1};M_{0})\) is close to \(\min(\ln|M^{*}|,\ln|W|)\), its maximum possible value.
Note also that in Type-1 and Type-3 memory systems, we allow the possibility that we can know the value \(m_{0}\) even if it is outside of \(M^{*}\). We even allow for the possibility that there would be nonzero mutual information between the value of \(m_{0}\) and that of \(w_{1}\) for \(m_{0}\not\in M^{*}\). However, our analysis concerns what happens when \(m_{0}\in M^{*}\). (_Mutatis mutandis_ for values of \(w_{0}\) being outside of \(W^{*}\) in the case of Type-2 memory systems.)
In real-world Type-3 memory systems, often \(m\) will not change in \([t_{2},t_{0}]\) except at the time of its interaction with \(W\). While we do not require this, it has the practical advantage that it simplifies the calculation by the memory's user of the relationship between the value of \(w_{1}\) and \(m_{0}\). It also means that we don't need to be precise about when the times \(t_{1}\) and \(t_{2}\) are.
It is important to realize that the system \(K\) in Type-3 memory systems, which couples with \(M\) in an entropy-increasing process to send \(M^{\prime}\) to \(m^{\dagger}\), doesn't explicitly occur in the definition of Type-3 memory systems. Rather it arises _in practice_, as part of the underlying process that enforces the requirement in Item 2b that the conditional distribution \(P(m_{1}\,|\,m_{2},m_{0})\) is peaked about \(m_{1}=m^{\dagger}\). In turn, that requirement is only relevant under the supposition that \(m_{0}\in M^{*}\) and \(m_{2}\in M^{\prime}\).
There are many important ways that the analysis in this paper extends beyond / modifies the analysis in Wolpert (1992), which was written before the revolutionary advances of the last two decades of stochastic thermodynamics. Like all considerations of the thermodynamics of computation at the time, it was based on semi-formal reasoning, grounded in equilibrium statistical physics. However, computers are actually very far from thermal equilibrium, with the
result that the understanding of the relationship between logical and thermodynamic irreversibility at the end of the twentieth century and its implications for the thermodynamics of computation was mistaken. Our paper doesn't rely on that mistaken earlier understanding, and is fully consistent with our modern understanding of statistical physics. (Cf. (Sagawa, 2014; Wolpert, 2019) and references therein for an introduction to the modern understanding of the relationship between logical and thermodynamic irreversibility.)
Another important feature of Wolpert (1992) is its repeated invocation of the Maxent principle of Jaynesian inference. In this paper we do not use Maxent. Indeed, we are careful to make no arguments about how it is that the user of a memory system may arrive at the probability distributions they are using. In particular, it is worth noting that in this paper, we make no _a priori_ assumption that \(P(m_{0},m_{1},w_{0},w_{1})\) has full support (cf. Wolpert, 1992, fn. 9).
## 5 Memory systems, records, and the epistemic arrow
Of the three types of memory system we have considered, Type-3 systems are the only ones that, at least in all of their instances we know of in our physical universe, are time-asymmetric, in that they can only provide information about the past. As we explained, Type-3 memory systems rely on the second law, in that they exploit the fact that an increase in global entropy reliably takes the (local) memory system to its initialized state, which is a known state at \(t_{1}\).
While we have not proven it, we note that in practice, the only way the need for the second law can be circumvented without major sacrifice in the accuracy of the memory is if we have detailed knowledge of those "dynamically relevant" degrees of freedom in the present state of \(W\) that (perhaps together with the precise state of \(M\)) determine the dynamics of \(M\). In practice, as in the computer example of Type-2 memory systems, we in fact have a way to (almost) deterministically calculate the joint dynamics of \(M\times W\).
Note that these requirements do not preclude the possibility that \(W\) is extraordinarily large.11 However, to run a Type-2 memory system with a large \(W\) seems to require a huge number of energy barriers keeping trajectories of \(M\times Z_{2}\) well-separated during the evolution of the joint system, with high probability, i.e., such systems use a huge amount of error correction. (This is certainly true in cloud computers.) Systems with this property seem to only arise with careful engineering by humans. In contrast, memory systems like footprints on a beach do not rely on anything close to that number of energy barriers, allowing the stochastic process governing the dynamics of microstate trajectories to spread out more readily. This may be why they can occur in systems that are not artificially constructed. (See discussion of the Past Hypothesis in Section 6.)
In what follows, we discuss whether Type-3 memory systems might correspond to records. After this, we argue that human memory is plausibly Type-3, which would mean that our analysis is suitable for explaining the epistemic arrow of time.
Common examples of records, such as impact craters, footsteps on the beach, and photographic film, are Type-3. Furthermore, Albert and Loewer claim that records require a ready state, and the initialized state formalized in our definition of Type-3 memory systems as \(m^{\dagger}\) is such a ready state. Does this mean that Type-3 memory systems can be interpreted as a formalization of records? In the absence of a precise definition of records, this question is difficult to answer. We believe that for this interpretation to work, one needs to assume that it is true by definition that records rely on an initialized state--otherwise, we don't see a clear way to distinguish records from Type-2 memory systems. If this assumption is made, then our analysis (which in turn builds on the work in Wolpert (1992), as described above) might provide a new basis for understanding Albert and Loewer's claim that the epistemic arrow is constituted by the temporal asymmetry of records which avoids the problematic aspects of their argument (cf. Section 2).
At present, the physical details of how the human brain stores information are largely unknown. This makes it difficult to determine what type of memory system the human brain represents. Nevertheless, there are reasons to think that human memory is Type-3. First, there is the simple fact that human memory only provides information about the past. Since Type-3 memory systems are the only memory systems that exhibit this kind of temporal asymmetry, this suggests that human memory is Type-3. Second, human memory in the primary sense resides in the brain--we might call this 'internal memory'. But humans also remember things indirectly by means of external devices, such as photographs, books, or digital storage media--we might call this 'external memory'. External memory, at least if it concerns information about events occurring outside of computers, is typically Type-3. (Our discussion in Section 4.3 demonstrated this for some such systems, namely photographs and flash drives.) This makes it possible for such memory to store very detailed information. Internal memory, too, often provides us with highly detailed information about specific events. An important aspect of the psychological arrow of time is that we experience the future as "open" and the past as "fixed".12 It is plausible that the fact that we have such detailed memories of the past is at least part of the cause of this apparent openness of the future and fixity of the past. The fact that internal memory can provide such detailed information supports the idea that it is Type-3. If this is the case, then our analysis is suitable for explaining how the epistemic arrow arises from the second law of thermodynamics.
Footnote 12: Cf. (Wolpert, 1992, pp. 776–778) for further discussion of the relation between this aspect of the psychological arrow and the epistemic arrow.
## 6 Future work and open issues
There are many avenues for investigation that the analysis in this paper highlights but does not address.
In this paper we consider three types of memory systems, which are the three types of memory system we can find examples of in the real, physical world. We provide no proof that no other type of memory system is possible. One obvious avenue for future work is to investigate this issue further.
We show how, due to the second law, there can be Type-3 memory systems of the past. We also argue (semi-formally) that the human brain involves such types of memory. Based on our discussion, we consider it plausible that Type-3 memories cannot be of the future. In essence, this is because we don't see a potential mechanism that could play the role the second law of thermodynamics plays in such putative Type-3 memories of the future. But we provide no formal proof that Type-3 memory systems can only be of the past. This issue will thus have to be left for future research.
Another important issue builds from the discussion at the end of Section 4.4: how exactly is it that the user of the memory comes to "know" the joint distribution in the first place? Does acquiring that knowledge itself rely on memory, of past observations of the physical world? This is an extremely subtle issue, which ultimately requires engaging with the formal impossibility of inductive inference (Adam et al., 2019; Wikipedia contributors, 2023; Wolpert, 2023). _If_ the joint probability distribution of \(M\times W\) at multiple moments in time has the structure of a Type-3 memory system formally defined in Section 4.2, then the relevant mutual information can in principle be exploited. Moreover, sidestepping the problem of inductive inference (Wolpert, 2023), speaking purely as empirical scientists, it seems likely that natural selection has guided (the genes encoding) our biological memories to assume those distributions, in order to increase our biological fitness. But in this paper, we do not grapple with these issues.
Yet another deep problem involves the asymmetry of the second law, which appears to be fundamental to (the asymmetry of Type-3 memory and therefore) the asymmetry of human memory. We are sympathetic to the idea of grounding the second law in the "Past Hypothesis". Stated carefully, this hypothesis has three parts. First, it (implicitly) models the dynamics of the entropy of the universe as a time-symmetric first order Markov process, either a Fokker-Planck or master equation process to be precise, depending on the state space under consideration (Lawler, 2018; Serfozo, 2009). (The time-symmetry is necessary to reflect the time-symmetry of the microscopic laws of physics.) Second, it stipulates that the entropy of the early universe was extraordinarily lower than it is now. Using informal reasoning, those two assumptions have been taken to jointly imply that the trend of the stochastic process of the entropy of the universe evolving into our past from our present is monotonically decreasing.
However, note that the Past Hypothesis actually involves knowing the random variable at two times, not just one (in our case, knowing the entropy at both the present and the distant past). It is well-known that the proper way
to calculate the marginal distributions of a random variable evolving under a time-symmetric Markov process given its values at two times is by using a "Brownian bridge". In general, because the underlying stochastic process is symmetric, the Brownian bridge calculation will lead to the conclusion that in the very recent past, just before the present, the entropy of the universe was _not_ likely to be lower than it is today, but is actually more likely to be slightly _higher_. Then as one looks further into the past from the present, the expected coarse-grained entropy starts decreasing, and then falls precipitously, to reach the given, extremely low value in the distant past.
In mesoscopic systems, with a relatively small number of degrees of freedom, the stochastic process has enough diffusion for this "turnover" effect to be readily observable. The result is that the second law of thermodynamics would be violated if one moves a very small amount into the past towards a point in time with a known, very low value of entropy. As a result, the phenomenon that Type-3 memory systems rely on would no longer hold.
In the macroscopic system of the cosmological universe though, one would expect the diffusion term in the Markov process to be so much smaller than the drift term, i.e., for the variance of the dynamics to be so much smaller than the underlying trend, that it would require extremely careful and precise experiments to discern the turnover effect. Accordingly, one would suppose that Type-3 memory systems are indeed justified in their reliance on the second law. However, it would be interesting to calculate the precise magnitude of the turnover effect in our physical universe, to confirm this supposition.
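The turnover effect is easy to exhibit in a toy calculation. The sketch below (ours; the Ehrenfest-style chain, the sizes, and the boundary values are illustrative stand-ins for the actual entropy dynamics of the universe) takes a reversible, detailed-balance birth-death chain for a coarse-grained entropy \(S\in\{0,\dots,N\}\), conditions it on a very low value in the distant past and a moderate value at the present, and computes the exact bridge marginals from matrix powers. The conditional mean one step before the present sits slightly above the present value, then falls away toward the low past value.

```python
import numpy as np

N, T = 50, 200                     # entropy levels 0..N; the present is t = T
P = np.zeros((N + 1, N + 1))       # lazy Ehrenfest-urn transition matrix
for s in range(N + 1):
    up = 0.5 * (N - s) / N         # drift toward the equilibrium region
    down = 0.5 * s / N
    if s < N: P[s, s + 1] = up
    if s > 0: P[s, s - 1] = down
    P[s, s] = 1.0 - up - down      # laziness avoids parity artifacts

s_past, s_now = 2, 15              # boundary data: S_0 very low, S_T moderate
Pt = [np.linalg.matrix_power(P, t) for t in range(T + 1)]

def bridge_mean(t):
    """E[S_t | S_0 = s_past, S_T = s_now] via the two-sided Markov formula."""
    w = Pt[t][s_past, :] * Pt[T - t][:, s_now]
    return float(np.dot(np.arange(N + 1), w / w.sum()))

for t in (T, T - 1, T - 5, T // 2, 10, 0):
    print(f"t = {t:3d}:  E[S_t | data] = {bridge_mean(t):6.2f}")
```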
## Acknowledgement
DHW would like to acknowledge the Santa Fe Institute for support.
|
2309.17090 | Simulations of Nanocrystalline Iron Formation under High Shear Strain | High-shear methods have long been used in experiments to refine grain
structures in metals, yet the underlying mechanisms remain elusive. We
demonstrate a refinement process using molecular dynamics simulations of iron,
wherein nanocrystalline structures are generated from initially perfect
lattices under high-shear strain. The simulation cells undergo a highly
disordered state, followed by an atomic reordering and grain coarsening,
resulting in nanograins. We explore the dependence on parameters such as
temperature, heat dissipation rate, shear strain rate, and carbon impurity
concentration. Higher temperatures lead to the formation of larger and longer
grains. The faster heat dissipation sample initially yields more small grains,
but their number subsequently reduces, and is lower than the slower heat
dissipation sample at approximately {\gamma} = 1.5. Slower strain rates do not
promote nanograin formation. The presence of carbon impurities appears to have
little effect on grain formation. This detailed analysis affords insight into
the mechanisms that control the formation of nanograins under high-shear
conditions. | Ivan Tolkachev, Pui-Wai Ma, Daniel Mason, Felix Hofmann | 2023-09-29T09:38:11Z | http://arxiv.org/abs/2309.17090v2 | # Simulations of Nanocrystalline Iron Formation under High Shear Strain
###### Abstract
High-shear methods have long been used in experiments to refine grain structures in metals, yet the underlying mechanisms remain elusive. We demonstrate a refinement process using molecular dynamics simulations, wherein nanocrystalline structures are generated from initially perfect lattices under high-shear strain. The simulation cells undergo a highly disordered state, followed by recrystallization and grain coarsening, resulting in nanograins. We explore the dependence on parameters such as temperature, heat dissipation rate, shear strain rate, and carbon impurity concentration. Higher temperatures lead to the formation of larger and longer grains. The faster heat dissipation sample initially yields more small grains, but their number subsequently reduces, and is lower than the slower heat dissipation sample at approximately 1.5 strain. Slower strain rates do not promote nanograin formation. The presence of carbon impurities appears to have little effect on grain formation. This detailed analysis affords insight into the mechanisms that control the formation of nanograins under high-shear conditions.
## I Introduction
Grain refinement can be achieved through various methods, including the use of chemical refiners or spray-forming techniques [1]. An attractive approach to accomplish grain refinement is through severe plastic deformation (SPD) techniques, such as accumulative roll bonding (ARB) and equal channel angular pressing (ECAP) [2]. High-shear methods, such as high-pressure torsion (HPT), can significantly alter the microstructure of metals. Numerous materials, including Al, Cu, and Mg alloys, have shown a reduction in grain size with increasing shear strain [3; 4; 5; 6; 7]. Yusuf _et al._[7] observed a reduction in grain size with increasing shear strain for 316L stainless steel, which also correlates with an increase in Vickers microhardness. Despite numerous experimental studies attempting to elucidate the mechanisms of nanocrystal formation with increased shear strain, the results are inconclusive [8].
The leading theory for grain refinement suggests that dislocations are generated in the material due to shear strain. These dislocations accumulate and form subgrain boundaries. As the shear strain increases, dislocations are annihilated at these boundaries, leading to a rise in the misorientation angle between grains. Some of the formed dislocations are not absorbed and persist, forming low-angle grain boundaries (LAGBs), which perpetuates the grain refinement process [4; 8; 9; 10].
Other factors can also influence grain refinement, such as the number and distribution of precipitates within the material [11; 12; 13], as well as the presence of twinning deformation [14; 15]. Isik _et al._[16] observed a grain subdivision and grain refinement through a deformation-induced martensitic phase transformation, from \(\gamma\rightarrow\epsilon\) phase in Co-28Cr-6Mo alloy.
Numerous computational methods have been employed to explore the mechanism of grain refinement through plastic shearing. Finite element analysis (FEA) has been used to understand how the HPT process can modify the samples [17; 18; 19; 20]. FEA simulations greatly depend on the constitutive model used during the simulation [21]. As such, the physics of a given FEA system are largely defined by the constitutive model, which may not allow other physical mechanisms of grain refinement to exist within the simulation, which in turn affects the observable deformation modes. Nevertheless, numerous FEA studies have been conducted that draw conclusions on possible grain refinement mechanisms under shear strain. Wei _et al._[22] investigated the mechanisms of grain refinement in single crystal Ni subjected to HPT, employing crystal plasticity finite element simulations. The results proposed the existence of two rotation modes that directly contributed to the formation of grain boundaries with high misorientations. Additionally, FEA simulations of high angle pressing (HAP) were conducted by Frydrych _et al._[23] to explore the mechanism of grain refinement in an already nanocrystalline face-centered cubic (FCC) material. This study confirmed that initial grain orientation was a major indicator of their susceptibility to further refinement. The study showed that grains which had larger Taylor factors were more prone to refinement. A larger Taylor factor corresponds to greater plastic work done, when compared over the same range of slip [24]. The work of Frydrych [23] makes no further attempt to quantify the orientations, making it difficult to obtain exact lattice orientations that are more prone to refinement.
Molecular dynamics (MD) simulations have been employed to study microstructural evolution in metals subjected to SPD [25; 26; 27; 28; 29; 30; 31]. However, most of these studies start with simulation cells that are already in a nanocrystalline state, and therefore, do not explore the mechanisms by which a single crystal can transform into a nanocrystalline state. Interestingly, the work of Nikonov [31] used MD simulations to shear a perfect, single crystal simulation cell. However, the results were only used for shear stress vs. strain comparisons with a polycrystalline box, and no conclusions were reached about the single crystal shear. The work of Guan _et al._[32] attempted to replicate the HPT process using an MD simulation of Aluminum. However, similar to the other MD investigations mentioned above, the initial box was already in a nanocrystalline state before the shearing took place, and thus, there was no investigation into the actual formation of nanocrystals.
The precise underlying processes responsible for the development of a nanocrystalline microstructure via HPT remain elusive. Despite the significant disparity between shear rates in simulations and experimental settings, we conducted molecular dynamics simulations on iron to gain insights into potential mechanisms through which shear could induce the formation of nanocrystalline structures. Additionally, we scrutinized the progression of microstructures to facilitate meaningful comparisons with experimental observations.
Iron is chosen as the material of the current study because of its diverse uses in domestic and industrial applications. Especially, in advanced fission and future fusion reactors [33; 34; 35], iron-based steels, such as ferritic/martensitic (FM) steels, are chosen as the structural material. FM steels have the same body-centred cubic (BCC) crystal structure as iron. It was shown experimentally that FM steels exhibited less neutron irradiation-induced swelling than austenitic steels with face-centred cubic (FCC) structure [36].
In the following, we first discuss our simulation methods. Then, we present our simulation results and discuss potential mechanisms underlying the observed phenomena. We also draw comparisons with experimental HPT studies.
## II Simulation setup
All MD simulations were carried out using LAMMPS [37]. Simulation cells were created using Atomsk [38]. We constructed perfect crystal simulation cells with \(80\times 80\times 80\) unit cells, where each unit cell contains 2 atoms in BCC structure with a lattice parameter of 2.8665 Å, corresponding to Fe. Each simulation cell contains 1,024,000 atoms. The starting crystal orientations are x = [100], y = [010], and z = [001]. Periodic boundary conditions are applied in all three directions.
We also examined the effect of carbon impurities. We created simulation cells by adding 102 carbon atoms into those perfect cells at random positions. This corresponds to a carbon impurity content of 100 appm.
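For concreteness, here is a minimal sketch (ours, in plain NumPy rather than the authors' Atomsk workflow; the random seed and the uniform carbon placement are illustrative assumptions) of generating the \(80\times 80\times 80\) BCC cell and inserting 102 carbon atoms.

```python
import numpy as np

a0, n = 2.8665, 80                        # lattice parameter (Angstrom), cells per side
basis = np.array([[0.0, 0.0, 0.0],        # BCC basis: cube corner and body centre
                  [0.5, 0.5, 0.5]])
cells = np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"),
                 axis=-1).reshape(-1, 3)
fe = a0 * (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3)
assert fe.shape[0] == 2 * n**3            # 1,024,000 Fe atoms

rng = np.random.default_rng(1)
carbon = rng.random((102, 3)) * n * a0    # 102 C atoms: about 100 appm
print(fe.shape, carbon.shape)
```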
We adopted the interatomic potential for Fe developed by Ackland _et al._[39], which has been widely used to investigate the microstructure evolution of iron [40; 41; 42; 43]. For Fe-C simulations, we used the Hepburn-Ackland FeC interatomic potential [44], which was developed based on the aforementioned Ackland Fe potential. It has been used in several studies concerned with the effect of carbon on the microstructural evolution of iron [45; 46; 47; 48].
The trajectory of a system of interacting atoms is governed by the Langevin equation of motion:
\[\frac{d\mathbf{r}_{i}}{dt} = \mathbf{v}_{i} \tag{1}\] \[m\frac{d\mathbf{v}_{i}}{dt} = \mathbf{F}_{i}-\gamma\mathbf{v}_{i}+\mathbf{f}_{i} \tag{2}\]
where \(\mathbf{r}_{i}\), \(\mathbf{v}_{i}\), and \(\mathbf{F}_{i}=-\partial U/\partial\mathbf{r}_{i}\) are position, velocity, and force associated with atom \(i\). \(U\) is the interatomic potential energy. \(m\) is the mass of an atom. The temperature is controlled by the Langevin thermostat, where the damping parameter \(\gamma\) is related to the delta-correlated fluctuating force \(\mathbf{f}_{i}\) according to the fluctuation-dissipation theorem [49; 50], namely,
\[\langle\mathbf{f}_{i}(t)\rangle=0, \tag{3}\]
\[\langle f_{i\alpha}(t)f_{j\beta}(t^{\prime})\rangle=\mu\delta_{ij}\delta_{ \alpha\beta}\delta(t-t^{\prime}), \tag{4}\]
where \(\alpha\) and \(\beta\) are Cartesian coordinates, and
\[\mu=2\gamma k_{B}T. \tag{5}\]
The fluctuation and dissipation of atoms can be considered as a result of electron-phonon interaction, so \(\gamma\) describes the strength of electron-phonon coupling. We used the electron-phonon coupling parameter for Fe according to Mason _et al._[51], such that \(\gamma=6.875\) eV fs Å\({}^{-2}\). We denote this as \(\gamma_{1}\). In some simulations, we used a damping parameter that is ten times larger, which is denoted as \(\gamma_{2}=10\gamma_{1}\). If we do not explicitly mention the value of \(\gamma\) below, we are using \(\gamma_{1}\).
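The discretized update behind Eqs. (1)-(5) can be sketched as follows (our own illustration, not the LAMMPS integrator itself; the time step, the mass conversion, and the simple semi-implicit Euler scheme are assumptions made for clarity). The key point is that the random force drawn each step has variance \(\mu/\Delta t=2\gamma k_{B}T/\Delta t\) per component, the discrete counterpart of the delta-correlated noise of Eq. (4).

```python
import numpy as np

kB = 8.617333e-5            # Boltzmann constant, eV/K
gamma = 6.875               # damping gamma_1 from the text, eV fs / Angstrom^2
m_fe = 55.845 * 103.6427    # Fe mass converted to eV fs^2 / Angstrom^2

def langevin_step(r, v, force, T, dt, rng):
    """One semi-implicit Euler step of Eqs. (1)-(2) for (N, 3) arrays."""
    sigma = np.sqrt(2.0 * gamma * kB * T / dt)       # per-step noise amplitude
    f_rand = sigma * rng.standard_normal(v.shape)    # satisfies Eqs. (3)-(5)
    a = (force - gamma * v + f_rand) / m_fe
    v = v + dt * a
    r = r + dt * v
    return r, v

# Usage on a tiny stand-in system with zero conservative forces:
rng = np.random.default_rng(0)
r, v = np.zeros((10, 3)), np.zeros((10, 3))
for _ in range(1000):
    r, v = langevin_step(r, v, np.zeros_like(r), T=300.0, dt=1.0, rng=rng)
print(v.std())   # thermalizes toward sqrt(kB*T/m) per component
```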
Before applying any shear, the simulation cells are thermalised to particular temperatures. The cell volumes are also relaxed isotropically, so they attain stress-free conditions. This is done under isobaric conditions with zero hydrostatic stress.
Shear was imposed by continually deforming the simulation cell. The shear strain is applied on the _xy_ plane with displacement in the x direction. This was done by imposing a cell tilt factor change, which is effectively an engineering strain. A total simulation time of 335 ps was used, with a maximum final shear strain of \(\varepsilon=10\). This means the shear rate \(d\varepsilon/dt\) is approximately \(2.985\times 10^{10}\) s\({}^{-1}\). We also performed the same simulation with a 10 times longer simulation time, corresponding to a 10 times lower strain rate.
To avoid boundary self-interactions, we remap the simulation cell when the tilting is more than 0.5 of the cell
length. The cell vector and the positions of atoms are remapped along the shearing direction, with the simulation cell going from a 0.5 to a -0.5 tilt. It is important to note that, due to the periodic boundary conditions, there is no change in the local atomic environment.
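The shear and remapping protocol can be sketched as follows (our own illustration; the time step, the loop length, and shearing atom coordinates directly rather than the cell vectors are simplifying assumptions). The tilt grows at the constant engineering-strain rate of \(1/33.5\) ps\({}^{-1}\), and whenever it exceeds half the box length it is remapped by a full box length, which leaves every local atomic environment unchanged under periodic boundaries.

```python
import numpy as np

L = 80 * 2.8665                    # box edge length (Angstrom)
rate = 1.0 / 33.5e3                # engineering shear strain per fs

def shear_step(x, y, tilt, dt):
    """Advance the xy tilt by one step and remap it into (-L/2, L/2]."""
    d = rate * dt * L              # tilt increment (a length)
    x = x + d * (y / L)            # affine shear: x displacement grows with y
    tilt += d
    if tilt > 0.5 * L:             # flip the cell from +0.5 to -0.5 tilt
        tilt -= L
        x -= y                     # shift x consistently by -L * (y / L)
    return x % L, tilt

rng = np.random.default_rng(2)
x, y = rng.random(5) * L, rng.random(5) * L
tilt = 0.0
for _ in range(40000):             # 40 ps at dt = 1 fs, about 1.2 strain
    x, tilt = shear_step(x, y, tilt, dt=1.0)
print(tilt / L)                    # current tilt factor, kept in (-0.5, 0.5]
```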
We used OVITO [52] for analysis. Grains are identified using the grain segmentation modifier. The minimum grain size was chosen as 50 atoms. We discuss the effect of using a different number of atoms as the minimum grain size in Appendix A.3. Since this algorithm requires the crystal orientation of each atom, the polyhedral template matching (PTM) modifier [53] is adopted beforehand. PTM can determine the local crystal orientation of each atom in terms of a quaternion, where each quaternion can be projected into Rodrigues space [54]. Then, the orientation of each atom in this space can be visualised by mapping orientation to an RGB colouring scheme [55]. Dislocation lines are detected using the dislocation analysis (DXA) modifier [56].
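The orientation-to-colour step can be sketched as below (ours; OVITO's actual colour mapping and the fundamental-zone bound \(\tan(\pi/8)\) for cubic symmetry are assumptions here). A PTM quaternion \((w,x,y,z)\) is projected to the Rodrigues vector \((x,y,z)/w\), whose three components are rescaled into RGB values, so that nearby orientations receive nearby colours.

```python
import numpy as np

def quat_to_rgb(q, r_max=np.tan(np.pi / 8)):
    """Map (N, 4) unit quaternions (w, x, y, z) to (N, 3) RGB values in [0, 1]."""
    w = np.where(np.abs(q[:, :1]) < 1e-12, 1e-12, q[:, :1])  # guard w ~ 0
    rod = q[:, 1:] / w                  # Rodrigues vector: axis * tan(angle / 2)
    return 0.5 + 0.5 * np.clip(rod / r_max, -1.0, 1.0)

q = np.array([[1.0, 0.0, 0.0, 0.0],                   # identity -> mid grey
              [np.cos(0.1), np.sin(0.1), 0.0, 0.0]])  # small rotation about x
print(quat_to_rgb(q))
```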
## III Results
We first present the results of our simulations exploring the influence of shear strain on initially perfect, single-crystalline iron. Next, we investigate the behaviour of samples under various conditions, including differing temperatures, damping parameters, strain rates, and the presence of carbon impurities.
### Shear-induced nanocrystalline structure
Figure 1 shows a key result of this work. Figures 1(a) to 1(g) show the simulation cell at different shear strain values. The results reveal a process in which nanograins are formed due to shear strain in a real-space representation. This simulation was run for 335 ps, up to \(\varepsilon=10\), at a constant strain rate, at a temperature of 300 K, and with a constant damping parameter \(\gamma_{1}\). We will refer to this as the _benchmark_ simulation. The colouring of each atom in Figure 1 is based on its crystal orientation. Only atoms detected as BCC structures are shown. Since a similar colour means a similar crystal orientation, one can clearly observe the formation of nanograins in real space. We are not aware of any similar simulations in the literature showing the formation of nanograins under high shear strain.
Figure 1(a) shows the initially perfect crystal simulation cell. It is subjected to a shear strain in the \(xy\) direction, which causes the crystal lattice to lose its BCC structure. At around 0.27 strain, there is a high level of atomic disorder, as shown in Figure 1(b). Most atoms deviate from the BCC structure. We can consider that atoms are now in a disordered state. Shortly after that, a recrystallization process begins, as shown in Figure 1(c), at around 0.3 strain. Many small grains are visible. Between Figure 1(c) and 1(d), grain formation clearly occurs, with the atomic crystal orientations still largely similar. By Figure 1(e), at \(\varepsilon=4\), the atomic crystal orientations are largely dissimilar, with grains exhibiting distinct orientations. It appears that the grain refinement process continues between 4 and 7 strain, see Figure 1(f). Then, the grain number and sizes attain a dynamical quasi-steady state.
In Figure 2(e), we observe an increase in dislocation density starting at a strain value of 0.27. The dislocation density is given as the total line length divided by the volume of the simulation cell. Dislocation density rises to approximately \(2.6\times 10^{-3}\)/Å\({}^{2}\) at 0.82 strain, followed by a decrease to \(1.2\times 10^{-3}\)/Å\({}^{2}\) at around 2 strain. The dislocation density experiences a slight increase to \(1.6\times 10^{-3}\)/Å\({}^{2}\) at 3 strain and subsequently remains relatively constant during further shearing.
The average atomic von Mises stress, as shown in Figure 2f, and the potential energy per atom are well correlated. There is an initial increase in von Mises stress, followed by a rapid decrease when the highly disordered state is formed at 0.27 strain. Nikonov [31] also found an increase in stress during the single crystal shearing of BCC iron. The same rapid increase, followed by a sharp decrease, was observed and attributed to lattice reorientation. It is noteworthy that the potential energy never decreases to its original value, as some excess energy, associated with dislocations and grain boundaries generated during deformation, is stored within the material [59].
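For reference, the per-atom von Mises stress discussed here can be computed from the six components of an atomic stress tensor as in the short sketch below (ours; the (xx, yy, zz, xy, xz, yz) component ordering is an assumption matching common LAMMPS-style per-atom stress dumps).

```python
import numpy as np

def von_mises(s):
    """s: (N, 6) per-atom stress (sxx, syy, szz, sxy, sxz, syz) -> (N,) values."""
    sxx, syy, szz, sxy, sxz, syz = s.T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + sxz ** 2 + syz ** 2))

print(von_mises(np.array([[1.0, -1.0, 0.0, 0.0, 0.0, 0.0]])))  # -> sqrt(3)
```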
Figure 1: Grain refinement process with atoms coloured by crystal orientations. Temperature \(T=300\) K. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) and damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). (a) 0 Strain (b) Highly Disordered - 0.27 Strain (c) Recrystallisation - 0.32 Strain (d) 1 Strain (e) 4 Strain (f) 7 Strain (g) 10 Strain.

Figure 2: Benchmark simulation data plotted as a function of strain. Temperature \(T=300\) K. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) and damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). (a) Kinetic and Potential Energy (b) Kinetic and Potential Energy up to 2 strain (c) Temperature (d) Number of Grains (e) Dislocation Density (f) von Mises Stress

As shown in Figure 1, the atoms become highly disordered, with peak disorder occurring at 0.27 strain, which corresponds to a maximum in the potential energy. This maximum is a threshold value after which the atoms within the lattice move and reorganise themselves, as made evident by the increase in kinetic energy in Figure 2(b), and the recrystallisation at 0.3 strain in Figure 1(c). The increasing shear strain causes a breaking of the symmetry in the crystal lattice that changes the potential energy landscape. Since atoms already have a high potential energy, triggered by thermal excitations, the unbalanced forces acting on atoms cause them to accelerate, triggering the onset of the disordered phase.
Subsequently, there is a surge in system temperature, as shown in Figure 2c, which is due to the increase in atomic velocities. As our system is attached to a Langevin thermostat, the system temperature reduces back to the target temperature. Alongside this cool-down, the system undergoes a recrystallization, causing grain nucleation. The grains are tiny at around 0.3 strain, see Figure 1c. The microstructure remains dominated by highly disordered regions. Then, grain coarsening starts and continues, as observed in Figure 1d. This is driven by the excess free energy of the disordered regions manifesting as grain boundaries, which are known to possess excess free energy [60]. This excess free energy provides the driving force for atomic transport and subsequent grain growth [61].
In Figure 2e, the dislocation density in the simulation cell increases and peaks at around 0.82 strain. Dislocations also possess excess free energy [60], which may contribute to grain growth. We note an important point regarding the DXA: low-angle grain boundaries (LAGBs) can be recognised as arrays of dislocations. Therefore, it is unclear whether the detected dislocations are grain boundaries or dislocations within grains. There is a new approach that may resolve this issue. Ma _et al._[62] suggested a new algorithm to calculate the shortest distance of any atom from the determined grain boundaries. By eliminating atoms close to the grain boundaries, one can estimate the dislocation line density inside grains. However, we have not adopted this method in our current work, because the definition of grains is somewhat ambiguous in such a highly disordered structure. Additional analysis is provided in Appendix A2.
Experimental studies have exhibited similar trends for nanocrystal formation. Studies on Fe-8% Al [63], TiAl [64], AZ91 Mg [65], and NiTi [66] all used some form of hot deformation to induce strain in the material, followed by quenching. These studies observed small, recrystallised grains compared to the original structure.
Figure 3 shows the atomic crystal orientation, total energy, von Mises stress, and dislocation lines at different shear strain values. The total energy is the sum of the atomic potential and kinetic energies. The figure keys for the total energy and von Mises stress are shown in Figure 3a. In the DXA analysis, green lines represent dislocations with Burgers vector \(\mathbf{b}=\frac{1}{2}\langle 111\rangle\), and pink lines represent \(\mathbf{b}=\langle 100\rangle\). Figure 3 shows that most dislocation lines have \(\mathbf{b}=\frac{1}{2}\langle 111\rangle\); only a few dislocations have \(\mathbf{b}=\langle 100\rangle\). This agrees with experimental findings in the literature on iron, iron–chromium alloys, and ferritic/martensitic steels [67, 68, 69].
Next, we consider the total energy and von Mises stress, shown in Figure 3. Initially, the atoms within the box have low energy and low stress. As the atoms become disordered, they enter a high-energy, high-stress state, as shown by the colouring in Figure 3c. As recrystallisation occurs, it is evident that some atoms return to the low-energy, low-stress state, whilst others remain in the high-energy, high-stress state (Figure 3d). As shear strain is continually applied, we can observe areas of both high and low energy and stress. The areas of low energy and low stress correspond to atoms within the grains and are BCC in nature, whilst the areas of high energy and high stress correspond to atoms which are not BCC in nature and are therefore not shown in the crystal orientation images. As such, Figure 3 shows that the atoms between the grains have high energy and high stress, which confirms the presence of grain boundaries in the simulation box [60].
### Temperature
To further probe the underlying mechanisms for nanocrystal formation from an initially perfect crystal under high-shear strain, certain simulation variables were altered one-by-one. We first changed the thermostat temperature, whilst keeping all other conditions unchanged. Three additional simulations were carried out with thermostat temperatures T = 500 K, 800 K, and 1000 K. Figure 4 shows the kinetic energy, the potential energy, the temperature, the number of grains, the dislocation density, and the von Mises stress of different simulations.
Figures 4a and 4b show the change in kinetic energy. They are well correlated with the system temperature in Figure 4e because of the linear dependence between temperature and kinetic energy. We observe a notable difference in the increase in kinetic energy associated with the formation of the disordered phase at different thermostat temperatures. For example, the kinetic energy increases from around 0.039 eV/atom to 0.072 eV/atom for the 300 K simulation, an increase of 0.033 eV/atom, whilst the 800 K simulation experiences a spike from 0.1 eV/atom to around 0.117 eV/atom, an increase of 0.017 eV/atom, much less than the increment at 300 K. We can also observe from Figure 4b that the initial increase in kinetic energy occurs earlier for the higher temperature simulations, at 0.27 strain for the 300 K, 0.25 strain for the 500 K, 0.2 strain for the 800 K, and 0.19 strain for the 1000 K simulations.
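These baseline values are consistent with the equipartition theorem; for instance, at 300 K,

\[\langle E_{k}\rangle=\tfrac{3}{2}k_{B}T=\tfrac{3}{2}\times 8.617\times 10^{-5}\ \mathrm{eV\,K^{-1}}\times 300\ \mathrm{K}\approx 0.039\ \mathrm{eV/atom},\]

matching the pre-spike kinetic energy of the 300 K simulation.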
Figure 3: Benchmark simulation visualisations. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) and damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). Colour key found in (a). Left - Atoms coloured by crystal orientations; Centre-left - Total energy; Centre-right - von Mises stress; Right - Dislocation analysis; for different levels of strain in benchmark simulation.
The increase in kinetic energy corresponds to a decrease in potential energy. Figures 4c and 4d show how the potential energy begins to decrease earlier for the higher temperature simulations. Interestingly, the thermostat temperature does not appear to significantly alter the maximum potential energy. Whilst the 300 K simulation increases to a maximum of -3.73 eV/atom, the higher temperature 500 K and 800 K simulations increase to -3.74 eV/atom, and the 1000 K simulation increases to -3.72 eV/atom. The higher temperature simulations generally experience higher potential energy after the initial spike, with the 1000 K simulation hovering at around -3.82 eV/atom after \(\varepsilon=4\), whilst the 300 K simulation plateaus at around -3.89 eV/atom.
Inspection of Figure 4f and 4g provides interesting insight into the effect of the thermostat temperature change. All simulations experience an initial spike in grain number, with the spike occurring at a lower strain for higher temperature simulations. The magnitude of the increase does not appear to correlate with thermostat temperature. For example, whilst the 300 K simulation increases to around 400 grains, the 1000 K simulation only increases to around 195 grains, which is less than the 500 K simulation, but more than the 800 K simulation.
All simulations show a rapid decrease in grain number between 0.19 and 0.25 strain, followed by an increase in grain number at \(\sim\)2 strain. The 300 K simulation notably has the most grains at the local maximum point of 2 strain. The other temperature simulations have fewer grains at this point, with the 500 K, 800 K, and 1000 K simulations having roughly the same number of grains up to \(\varepsilon=4\). The grain number in the 500 K and 800 K simulations increases and saturates after \(\varepsilon=7\). The 1000 K simulation experiences a reduction in grain number after \(\varepsilon=4\), and is as low as 11 grains at \(\varepsilon=10\).
Figure 4: Simulation data plotted as a function of strain at different thermostat temperatures. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) and damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). Red line - 300 K, Blue line - 500 K, Green line - 800 K, Black line - 1000 K. (a) Kinetic Energy (b) Kinetic Energy up to 2 strain (c) Potential Energy (d) Potential Energy up to 2 strain (e) Temperature (f) Number of Grains (g) Number of Grains up to 2 strain (h) Von Mises Stress up to 2 Strain (i) Dislocation Density.
Figure 4h shows the von Mises stress. The stress directly correlates with the potential energy. This is evident in the figure, as the decrease in stress after the initial increase occurs at exactly the same strain value as the potential energy decrease (see Figures 4c and 4d). Nevertheless, whilst the potential energy maximum is roughly the same for all simulations, the von Mises stress shows an unequal increase. As made evident by Figure 4h, the higher temperature simulations show a lower maximum von Mises stress and, as the simulation temperature decreases, the maximum von Mises stress increases. The subsequent decrease in von Mises stress is also larger in magnitude for the lower temperature simulations, and the value at which the stress plateaus with shearing is lower for the lower temperature simulations.
We observe in Figure 4i that the dislocation densities for the higher temperature simulations also follow the same pattern as the benchmark simulation. For each simulation, there is a marked increase in dislocation density between 0.19 and 0.25 strain, which corresponds to the point when the kinetic energy begins to increase and the potential energy decreases. This is followed by a maximum in dislocation density, occurring at 0.82 strain for the 300 K simulation, 0.74 strain for the 500 K simulation, 0.7 strain for the 800 K simulation, and 0.6 strain for the 1000 K simulation. We note that the maximum value of dislocation density does not correlate with temperature. Both the 500 K and 800 K simulations have larger dislocation densities, whilst the 300 K and 1000 K simulations have lower maximum densities. As with the benchmark simulation, the dislocation densities decrease up to \(\varepsilon=2\), after which there is another increase and saturation in dislocation density for all simulations.
Figure 5: Atoms coloured by orientation for different temperature simulations. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) and damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). Left - T = 300 K simulation; Centre-left - T = 500 K simulation; Centre-right - T = 800 K simulation; Right - T = 1000 K simulation.
In Figure 4d, there appears to be a nearly universal maximum potential energy, around -3.73 eV/atom, for every simulation. We hypothesise that this value corresponds to the maximum potential energy the material can sustain before atoms are permanently displaced from their lattice positions, and it is the value attained just before the initiation of the disordered atomic state. After this point, the potential energy decreases while the kinetic energy increases. The higher temperature simulations exhibit higher potential energy at 0 strain due to the principle of equipartition of energy. At higher temperatures, a system under strain should therefore reach the point of instability, in terms of potential energy, sooner. This, in turn, causes the kinetic energy to increase at a lower strain value, meaning that the higher temperature simulations experience the initiation of the disordered state at a lower strain. This observation could explain the discrepancy in the von Mises stress peak between the simulations at different temperatures. As the temperature increases, the stress required to reach a given potential energy is reduced because the potential energy is already at a higher level. Similarly, this also explains why the spike in grain number occurs earlier for the higher temperature simulations.
Interestingly, upon analyzing Figure 4f and 4i, we observe that the peak in grain number after recrystallization (at 2 strain) directly corresponds to a minimum value in dislocation density after the formation of the disordered phase. This finding aligns with previous MD simulations of nanocrystalline Cu [25]. The study found that at high dislocation density, a system is highly unstable when nanocrystalline grains are present. With applied strain, the dislocations were able to glide towards and annihilate at grain boundaries.
To further understand the effect of temperature on nanograin formation, we can consider Figure 5. This figure shows atoms coloured according to their local orientation, and only atoms in the BCC phase are visualised. The simulations, from left to right, are at 300 K, 500 K, 800 K, and 1000 K, respectively. All simulations begin with a perfect box, thermalised to the desired temperature (Figure 5a). With applied shear strain, the cells all enter a highly disordered state (Figure 5b). The level of disorder appears to be larger for the lower temperature simulations: fewer BCC atoms remain in the 300 K simulation than in, for example, the 1000 K simulation.
Each simulation experiences a subsequent recrystallization (Figure 5c), followed by a grain growth up to 1 strain (Figure 5d). At this point, through inspecting the colouring of atoms, the majority of atoms have similar crystal orientation to the initially perfect crystal lattice, meaning that the misorientation between grains is small. As the shearing continues, it is evident that grains develop different orientations, as shown in Figure 5e - 5g. It is also evident that the grains are larger and longer for the higher temperature simulations compared to the lower temperature simulations. This is observed by comparing the 300 K and 1000 K simulations in Figure 5g, where the grain number and size are very different. This is also consistent with the lower grain numbers for the higher temperature simulations compared to the 300 K benchmark, seen in Figure 4f.
As \(\alpha\)-iron is a BCC metal, it has a very high stacking fault energy [70]. It is generally agreed that Dynamic Recovery (DRV) is the sole dynamic restoration mechanism in \(\alpha\)-iron and other ferritic metals [70, 71]. However, some studies have also shown that Dynamic Recrystallisation (DRX) occurs in BCC iron [72, 31]. Typically, DRV readily occurs in ferritic steels and iron at temperatures \(>0.4T_{m}\)[73]. On the other hand, the data presented in Figure 4i suggest that it can also occur at room temperature, as this figure shows a marked annihilation of dislocations. The fundamental mechanisms of DRV are dislocation glide, climb, and cross-slip [70]. We speculate that dislocation accumulation also contributes to nanocrystal formation, as dislocations pile up and form LAGBs. In general, the dislocation density stays fairly constant after around \(\varepsilon=4\) for all simulations, whilst the grains continue to elongate, which is especially prominent for the higher temperature simulations. This behaviour is expected for a metallic material under constant strain deformation [70, 74].
### Heat dissipation
The heat dissipation from the lattice subsystem to the environment is represented by the Langevin thermostat. In metals, the dominant heat transfer mechanism is through electrons [75]. Therefore, we take the damping parameter \(\gamma\), which governs the speed of heat dissipation, according to the phonon–electron coupling. The damping parameter has, so far, been kept at \(\gamma=6.875\) eV fs Å\({}^{-2}=\gamma_{1}\). To investigate the effect of the rate of heat dissipation on nanocrystal formation, a further simulation was carried out with a damping parameter of \(\gamma=68.75\) eV fs Å\({}^{-2}=\gamma_{2}\). All other conditions were kept the same. The Langevin thermostat was set to 300 K.
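To relate these values to the more familiar damping-time parametrisation of a Langevin thermostat, assume the friction acts per atom as \(\mathbf{F}=-\gamma\mathbf{v}\) (our assumption; the exact implementation may differ). The characteristic damping time is then \(\tau=m/\gamma\); with \(m_{\mathrm{Fe}}=55.845\ \mathrm{u}\approx 5.79\times 10^{3}\) eV fs\({}^{2}\) Å\({}^{-2}\),

\[\tau_{1}=\frac{m_{\mathrm{Fe}}}{\gamma_{1}}\approx\frac{5.79\times 10^{3}}{6.875}\ \mathrm{fs}\approx 0.84\ \mathrm{ps},\qquad\tau_{2}=\frac{m_{\mathrm{Fe}}}{\gamma_{2}}\approx 0.084\ \mathrm{ps},\]

i.e., the higher damping simulation dissipates heat roughly ten times faster.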
Figure 6 shows various quantities plotted as a function of shear strain when the two different damping parameters are used. We also compare the percentage of BCC structure found in the simulation cell as a function of strain. This will become important when considering Figure 7, which compares the evolution of the atomic orientations in simulations using different damping parameters.
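The BCC percentage can be extracted with a structure-identification analysis. The snippet below is a minimal sketch using OVITO's polyhedral template matching (our choice of tool; the `pipeline` and `frame` objects are assumed to be set up as in the earlier DXA sketch):

```python
import numpy as np
from ovito.modifiers import PolyhedralTemplateMatchingModifier

pipeline.modifiers.append(PolyhedralTemplateMatchingModifier())
data = pipeline.compute(frame)

# Count atoms classified as BCC and normalise by the total atom count.
types = np.asarray(data.particles['Structure Type'])
bcc_fraction = np.count_nonzero(
    types == PolyhedralTemplateMatchingModifier.Type.BCC) / data.particles.count
print(f"BCC fraction: {100 * bcc_fraction:.1f}%")
```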
Figure 6a shows the change in kinetic energy between the lower (or benchmark) and higher damping simulations. The average kinetic energy in the lower damping simulation increases from around 0.038 eV/atom to 0.072 eV/atom, a difference of 0.034 eV/atom, before reducing to a dynamic quasi-steady state. Similarly, the higher damping simulation experiences an increase from 0.034 eV/atom to around 0.045 eV/atom, a difference of 0.011 eV/atom, before it drops. The increase in kinetic energy is much lower for the higher damping simulation because a higher damping parameter leads to a higher quenching rate. The kinetic energy is also noticeably lower for the higher damping simulation once both simulations have reached the dynamic quasi-steady state. The lower damping simulation plateaus at 0.042 eV/atom whilst the higher damping simulation sits at 0.039 eV/atom.
Figure 6b shows that both simulations experience an identical increase in potential energy with increasing shear strain, up to a value of -3.73 eV/atom at 0.27 strain, corresponding to the highly disordered state. However, whilst the benchmark simulation reduces to and subsequently stabilises at around -3.89 eV/atom, the higher damping simulation only decreases to around -3.82 eV/atom, which suggests a higher state of disorder. Furthermore, the potential energy shows a gradual decrease after around 4 strain, until it approximately reaches the same value as the benchmark simulation at \(\varepsilon=10\). As before, the average atomic von Mises stress, as shown in Figure 6d, and the potential energy per atom are well correlated for both simulations.
Figure 6: Simulation data plotted as a function of strain using different damping parameters. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\). Damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\) for red line and damping parameter \(\gamma=68.75\) eV fs Å\({}^{-2}\) for blue line. (a) Kinetic energy (b) Potential energy (c) Number of grains (d) von Mises stress (e) Dislocation density (f) Percentage of BCC structure.
Next, we consider the grain counts in the simulation cells (Figure 6c). Both the benchmark and higher damping simulations exhibit a significant spike in grain count at 0.27 strain. The benchmark cell contains about 400 grains, while the higher damping simulation shows 390. Both simulations then show a rapid decrease in grain count. The benchmark simulation drops to around 30 grains, whereas the higher damping simulation only drops to around 130 grains. The benchmark simulation steadily increases in grain count until about 2 strain, reaching \(\sim\)220 grains. In contrast, the higher damping simulation rapidly rises to 320 grains around 1.1 strain, then sharply declines. By 3 strain, the higher damping simulation maintains around 70 grains, plateauing until the end of the simulation, a lower count than the benchmark simulation's plateau at 240 grains.
By observing Figure 6e, which shows the dislocation density for the simulations, we see that the initial spike in dislocation density is much lower for the higher damping simulation. Whilst the benchmark simulation increases to around \(2.6\times 10^{-3}/\text{\AA}^{2}\) at 0.82 strain, the higher damping simulation's dislocation density only increases to \(6.4\times 10^{-4}/\text{\AA}^{2}\) at 0.51 strain. With increased shearing, the dislocation density for the benchmark simulation decreases and then saturates at around \(1.6\times 10^{-3}/\text{\AA}^{2}\). This is in contrast to the higher damping simulation where the dislocation density decreases to \(2.5\times 10^{-5}/\text{\AA}^{2}\) at around 2 strain and begins to increase at around 6.5 strain, with the dislocation density for the higher damping simulation peaking at around \(1.2\times 10^{-3}/\text{\AA}^{2}\) at \(\varepsilon=10\).
Figure 6f shows the percentage of the overall structure in the BCC phase as a function of strain. Both simulations experience a highly disordered state transition at 0.27 strain, where the percentage of BCC atoms within the cells is below 10% for both simulations. However, the benchmark simulation rapidly recovers, and around 50% of the atoms are considered to be BCC after \(\varepsilon=10\). Conversely, the higher damping simulation makes a slight recovery to 17% at 0.44 strain, before reducing to 2% BCC. After 5 strain, the higher damping cell begins to recover its BCC structure, increasing to \(\sim\)32% BCC at 10 strain.
To further explore the data presented in Figure 6, a visual representation of atomic crystal orientation is shown in Figure 7. Both simulations begin with a perfect cell (Figure 7a) before experiencing a disordered state (Figure 7b) followed by a recrystallisation (Figure 7c). The number of atoms in the disordered phase is much larger for the higher damping simulation. During the recrystallisation, the grains appear to be much larger for the benchmark simulation, with many small pockets of BCC structure being present in the higher damping simulation.
In Figure 7d, the higher damping simulation does not experience a grain growth similar to the benchmark simulation, with many fragmented grains and disordered atoms present. By Figure 7e, most of the cell is in the disordered phase. There are only a few pockets of BCC atoms still present within the box. This is consistent with Figure 6f which shows that the percentage of atoms in the BCC phase at 4 strain was 2%. Between 4 strain (7e) and 7 strain (7f), there is a noticeable grain growth, which is even more noticeable at 10 strain (7g), corresponding to a larger number of atoms transitioning into the BCC phase as also seen in Figure 6f.
Previous studies [70, 71] consider DRV to be the only dynamic restoration mechanism in ferritic steels. However, the data presented in Figures 6 and 7 appear to show that DRX mechanisms are also at play, which agrees with the observations of Tsuji _et al._[72], who confirmed the occurrence of DRX in BCC iron.
Considering Figures 7e - 7g, the few remaining grains at 4 strain grow much larger as recrystallisation occurs. This process continues, producing a small number of large grains at \(\varepsilon=10\). This coincides with an increase in the percentage of the BCC phase, as shown in Figure 6f. Interestingly, the previous analysis of Figure 6c showed that the grain number plateaued at around \(\varepsilon=3\) for the higher damping simulation. This provides further evidence for DRX, as the grain number stays constant whilst the BCC structure continues to recover.
A larger damping parameter fundamentally means that the quenching rate is faster. This means that atoms lose their kinetic energy more quickly and get trapped in a disordered state. As shown in Figures 7c and 7d, the higher damping simulation experiences recrystallisation, similar to the benchmark simulation, but not to the same extent. Increased shear strain causes the atoms to revert to a disordered state, as shown in Figure 7e. This is confirmed by Figure 6f, which shows a recovery of the BCC phase from the minimum at 0.27 strain to 17% BCC at 0.44 strain, followed by a return to the disordered phase. It is not immediately evident why the atoms return to the disordered state after recrystallization; we outline a possible explanation below.
As previously mentioned, the system is quenched at a higher rate in the higher damping simulation, which causes many atoms to remain in the disordered phase. This essentially means that, after recrystallisation at 0.32 strain, the grains are very small and the grain boundary volume is large. As such, the dislocations formed as a result of shearing are heavily constrained within small grains [76] and more readily meet and annihilate at the grain boundaries. This is evident from Figure 6e, which shows an increase in dislocation density up to \(6.0\times 10^{-4}/\text{\AA}^{2}\) post recrystallisation, followed by a quick drop in dislocation density to \(2.6\times 10^{-6}/\text{\AA}^{2}\) at \(\varepsilon=4\).
As dislocations are annihilated at grain boundaries and an increasing number of atoms enter a disordered state, the percentage of BCC structure decreases. The growth in grain size for the higher damping simulation provides the necessary grain volume for the propagation of dislocations, and an increase in dislocation density can be observed alongside the increase in BCC structure at \(\varepsilon=7\). It is also possible that, since there are many highly disordered atoms that manifest as grain boundaries, the free energy in the simulation is high, which drives the recrystallisation.
### Strain Rate
The effect of strain rate was explored next. We performed a simulation at a shear strain rate ten times slower than that of the benchmark simulation, whilst keeping all other parameters unchanged. Figure 8 shows different quantities plotted as a function of strain for the benchmark and slow rate simulations.
Figure 7: Grain refinement process for different damping parameter simulations. Atoms are coloured according to atomic crystal orientations. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\). Left - Benchmark simulation with damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\); Right - Higher damping simulation with damping parameter \(\gamma=68.75\) eV fs Å\({}^{-2}\).
Figure 8: Simulation data plotted as a function of strain for different strain rate simulations. Damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) for red line and strain rate \(d\varepsilon/dt=1/335\) ps\({}^{-1}\approx 2.985\times 10^{9}\) s\({}^{-1}\) for blue line. (a) Kinetic energy (b) Kinetic energy up to 2 strain (c) Potential energy (d) Potential energy up to 2 strain (e) Number of grains (f) Von Mises stress (g) Dislocation density (h) Dislocation density up to 2 strain (i) Percentage of BCC structure.
In Figures 8a and 8b, the initial kinetic energy spike is not as pronounced for the slow strain rate simulation. Whilst the benchmark simulation shows an increase from 0.039 eV/atom to 0.072 eV/atom, the slow rate simulation only shows an increase from 0.039 eV/atom to 0.056 eV/atom. Furthermore, the spike for the benchmark simulation begins at 0.27 strain and peaks at 0.3 strain, whilst the slow rate simulation experiences a rise starting at 0.24 strain and peaking at 0.25 strain. After the initial increase and subsequent decrease in kinetic energy, the slow rate simulation plateaus at a lower kinetic energy of around 0.039 eV/atom, compared with the benchmark simulation at 0.042 eV/atom.
The difference in potential energy is shown in Figures 8c and 8d. Both simulations have identical trajectories in the initial stage. However, the slow rate simulation reaches its peak in potential energy at an earlier stage than the benchmark. The benchmark simulation peaks at -3.73 eV/atom at 0.27 strain, while the slow rate simulation peaks at -3.78 eV/atom at a strain of 0.24. This is followed by a rapid decrease for both simulations. The slow rate simulation decreases and plateaus at a lower potential energy (-3.95 eV/atom) than the benchmark (-3.89 eV/atom). The von Mises stress again correlates with the potential energy, as shown in Figure 8f.
Though we can observe grain refinement in the benchmark simulation, this does not occur in the slow rate simulation, as shown in Figure 8e. The large spike in grain number previously observed for the benchmark simulation does not occur in the slow rate simulation. Whilst there is a smaller initial peak of 80 grains at 0.27 strain for the slower rate simulation, this does not result in the formation of a largely polycrystalline structure, and after 0.38 strain, a single grain is present. This is contrary to the data obtained for the benchmark, and the other previously discussed simulations, which experienced the generation of many nanocrystalline grains.
This difference is also evident in Figure 8i, which compares the percentage of atoms in the BCC phase as a function of strain. A highly disordered state is observed for the benchmark simulation at 0.27 strain, where only 7% of the atoms are in the BCC phase, followed by a recovery in which around 50% of the atoms remain in the BCC phase. The slow rate simulation also experiences this disordered state, but to a lesser extent. At 0.24 strain, 37% of the atoms in the slow rate simulation are in the BCC phase, which is 30% more than the lowest benchmark value. Furthermore, the vast majority of the atoms regain their BCC phase, with the percentage of BCC atoms plateauing at around 93%. This is logical, as the structure in the slow rate simulation is largely a single crystal, with no grain boundaries remaining after the highly disordered state.
Figure 9: Dislocation network comparison for different strain rate simulations. Damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). Green dislocation lines \(\mathbf{b}=\frac{1}{2}\langle 111\rangle\) and pink dislocation lines \(\mathbf{b}=\langle 100\rangle\). Left - Benchmark simulation with strain rate \(d\varepsilon/dt=1/33.5\); Right - Slower rate simulation with strain rate \(d\varepsilon/dt=1/335\).
Several differences between the simulations are observed when analysing the dislocation densities in Figures 8g and 8h. After a shear strain of 0.27, the dislocation density for the benchmark simulation gradually increases up to \(\sim 2.6\times 10^{-3}/\text{\AA}^{2}\) at 0.82 strain, after which it decreases to \(1.2\times 10^{-3}/\text{\AA}^{2}\) at around 2 strain before plateauing at \(1.6\times 10^{-3}/\text{\AA}^{2}\). However, in the slow rate simulation, the dislocation density shows a rapid increase to \(3.6\times 10^{-3}/\text{\AA}^{2}\) at 0.3 strain, followed by a decrease to around \(1.2\times 10^{-3}/\text{\AA}^{2}\) at 1.6 strain. After this point, the dislocation densities of the two simulations stay fairly similar, with the benchmark simulation having a slightly higher dislocation density with continued shearing.
A visual representation of the dislocation networks is shown in Figure 9, in pairs: the left-hand side shows the benchmark simulation, while the right-hand side shows the slow rate simulation. Dislocations are colour-coded in the same way as in Figure 3. During recrystallisation, in Figure 9a, the benchmark simulation shows few dislocation lines, and those present are sparse and disconnected. In the slow rate simulation, however, the dislocation density is much higher, with a dislocation network being formed. This agrees with Figure 8h, which depicts a large dislocation density at recrystallisation for the slow rate simulation compared with the benchmark.
At \(\varepsilon=1\) (Figure 9b), a significant increase in dislocation density is observed within the benchmark simulation. In contrast, the slow rate simulation cell displays a reduction in dislocation density, consistent with Figure 8h. From 1 strain (Figure 9b) to 10 strain (Figure 9e), the dislocation pattern in each simulation persists. The benchmark simulation shows short dislocation lines, with dislocation pile-ups attributed to grain boundary interactions. In contrast, the slow rate simulation displays long dislocation lines, that form an extensive dislocation network. This behaviour persists due to the absence of grain boundaries constraining dislocation motion within the slow rate simulation cell.
These observations can be rationalised as follows: the slow rate simulation does not show the formation of nanocrystals. During the initiation of plastic deformation, many dislocation loops are formed, which causes a spike in dislocation density, as seen in Figure 9a. These loops do not form in the benchmark simulation due to the presence of highly disordered atoms that manifest as grain boundaries. Polycrystalline grains limit the volume in which the dislocations can form, resulting in the short dislocation lines observed in the benchmark simulation. With increased strain, the dislocations in the slow rate simulation are allowed to evolve and are not annihilated, as there are few grain boundaries. Instead, as shown in Figure 9, they combine and form a large network of extended dislocations. The presence of grain boundaries does not permit this phenomenon to occur in the benchmark simulation, so short dislocations are observed.
We may speculate about the reason why the slower strain rate simulation does not experience the same nanocrystal formation as the benchmark simulation. The kinetic energy increases less in the slow rate simulation, which could indicate that fewer atoms are able to accelerate within the lattice. As such, many atoms retain their original structure, which is confirmed by Figure 8i. This increase in kinetic energy also comes at a lower strain for the slow rate simulation, which results in a smaller increase in potential energy. Ultimately, this means that fewer atoms reach the disordered state in the slow rate simulation. It appears that, with increased strain, it is more energetically favourable for the atoms to recrystallise back into a single crystal structure rather than a polycrystalline one.
The DRV mechanism appears to be present in the slow rate simulation, as the longer time scale gives dislocations more time to recover [77]. As a result, dislocations are less likely to pile up, and thus do not form grain boundaries, limiting grain refinement. It is also possible that the dislocations are simply unable to pile up because the large dislocation network, spanning different slip planes, impedes extended dislocation glide [78]. As dislocation motion becomes constrained, the formation of LAGBs, and thus further grain refinement, is inhibited.
Experimental works suggest that a lower strain rate correlates with a larger mean grain size [64, 66], and it is possible that the present simulation cell size is the limiting factor. In the future, it may be beneficial to carry out such shearing simulations with a much larger number of atoms to ascertain whether this is the case.
### Carbon Impurities
At what point may one consider the carbon impurity content to be negligible? Some sources point to a maximum carbon content of 0.006% [79], whilst others argue that a carbon content as low as 0.001% can already immensely affect the properties of iron [80]. Ferritic/martensitic steels that have been selected for fusion applications typically contain less than 0.15% carbon [81]. The atomic size of carbon is small enough that the atom can enter the iron lattice as an interstitial solute [82]. Experiments have shown that carbon can greatly influence the microstructural behaviour of iron. For example, Stein [83] showed that the dislocation velocity exponent in iron increases with carbon content when held at room temperature. This alters the yield point and the rate of crack propagation within the material. Molecular dynamics simulations have also been carried out which show the effect of carbon interstitials: carbon atoms can block dislocation motion in an iron simulation cell [84].
In the current work, we inserted 102 carbon atoms into the simulation cell, corresponding to 100 appm, or 0.01 atomic %. We attempt to understand the effect of carbon interstitial atoms on the formation of nanograins in iron under shear strain.
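For reference, these figures imply the total atom count in the cell:

\[N_{\mathrm{total}}=\frac{N_{\mathrm{C}}}{c_{\mathrm{C}}}=\frac{102}{100\times 10^{-6}}=1.02\times 10^{6}\ \text{atoms}.\]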
Figure 10: Simulation data plotted as a function of strain for benchmark and carbon-containing simulations. Strain rate \(d\varepsilon/dt=1/33.5\) ps\({}^{-1}\approx 2.985\times 10^{10}\) s\({}^{-1}\) and damping parameter \(\gamma=6.875\) eV fs Å\({}^{-2}\). The red line is for the benchmark and the blue line is for the 100 appm carbon in iron simulation. (a) Kinetic Energy (b) Potential Energy (c) Number of Grains (d) Von Mises Stress (e) Dislocation Density (f) Temperature.
Figure 10 shows various simulation properties with and without carbon impurities. Figure 10a shows the differences in kinetic energy between the simulations. The benchmark simulation experiences a rapid increase in kinetic energy, from 0.039 eV/atom at 0.27 strain to 0.072 eV/atom at 0.3 strain, before rapidly decreasing and plateauing at 0.042 eV/atom. Similarly, the kinetic energy for the carbon-containing simulation increases from 0.039 eV/atom at 0.27 strain to 0.067 eV/atom at 0.3 strain, before rapidly decreasing. Hence, the difference in maximum kinetic energy between the simulations is around 0.005 eV/atom. Furthermore, both simulations experience kinetic energy reductions to 0.043 eV/atom at 0.32 strain. After this point, both kinetic energies remain very similar, with only minor differences between them.
In Figure 10b, the behaviour of the potential energy is nearly identical for both simulations up to 1 strain. Both cells experience a rapid increase from 0 strain, followed by a rapid decrease at 0.27 strain, to a value of around -3.89 eV/atom at around 0.82 strain. However, the potential energy of the carbon-containing simulation increases again from this point, reaching -3.85 eV/atom, before decreasing slightly and once again increasing to -3.84 eV/atom. This trend is also shown in Figure 10d, which shows the von Mises stress for both cells.
Figure 10c shows the number of grains found in each simulation cell. There are some minor differences between the two simulations. The initial grain number increases to around 400 for the benchmark simulation, whilst the maximum number of grains found in the cell with carbon atoms is 440, a difference of 40 grains. Nevertheless, both simulations follow a nearly identical trajectory from 0.27 strain to 3 strain. After this point, the number of grains present in the benchmark simulation is noticeably larger than in the carbon-containing cell; for example, the benchmark simulation reaches around 260 grains, whilst the carbon-containing cell only has around 210.
A similar trend is observed when considering the dislocation densities of the simulations in Figure 10e. Again, the trajectories of the simulations are nearly identical in the early stages. With continued shearing, the dislocation density of the carbon-containing cell is noticeably larger, until the values become similar for both simulations at around 8.5 strain.
In Figure 10f, it is interesting to observe identical simulation temperatures for the benchmark and carbon-containing simulations, whilst the kinetic energies are not identical. It is not immediately apparent what causes this.
Figure 11 shows the mean kinetic energy broken down by atom type for the carbon-containing simulation. Evidently, the carbon atoms have a higher kinetic energy than the iron atoms, and their energy fluctuation is large. However, the kinetic energy of the carbon atoms stays fairly consistent on average, at a value of around 0.058 eV/atom.
Carbon atoms can move within the material by jumping between interstitial sites through diffusion [85]. Wert [86] characterised the diffusion coefficient of carbon in iron as a function of temperature, and subsequent work has pointed to a diffusion barrier of around 0.87 eV for carbon in iron at room temperature [87]. This is further reinforced by the work of Tapasa _et al._[88], which found the activation energy of C migration in Fe to be 0.82 - 0.86 eV, much larger than the kinetic energy spikes in Figure 11. Fu _et al._[89] performed density functional theory (DFT) calculations to explore the migration energy of C atoms in Fe. They found that C atoms migrate between neighbouring octahedral sites through a tetrahedral site, with an energy barrier of 0.87 eV. This value agrees with the work of Wert [86] and Tapasa [88]. Other DFT calculations have also obtained similar values [90, 91].
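A rough rate estimate makes this point quantitative. Assuming a typical attempt frequency of \(\nu_{0}\sim 10^{13}\) s\({}^{-1}\) (our assumption, not a value from the cited works), the thermally activated jump rate at 300 K is

\[\Gamma=\nu_{0}\exp\!\left(-\frac{E_{a}}{k_{B}T}\right)\approx 10^{13}\exp\!\left(-\frac{0.87}{0.0259}\right)\ \mathrm{s}^{-1}\approx 2\times 10^{-2}\ \mathrm{s}^{-1},\]

so the expected number of diffusive jumps during the 335 ps simulation is of order \(10^{-11}\), i.e., essentially zero.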
Figure 11 suggests that these carbon atoms do not overcome the energy barrier, as their average kinetic energy never rises past the diffusion energy barrier, indicating that the carbon atoms do not readily move through diffusion. Instead, the movement of carbon atoms may occur through a mobile Cottrell atmosphere. In Figure 10e, the carbon-containing cell initially shows a higher dislocation density compared to the benchmark simulation up to around 8.5 strain, after which they become similar. Carbon, behaving as a solute interstitial in the lattice, impedes dislocation motion [92]. This prevents dislocations from gliding effectively within the lattice, hampering their movement toward grain boundaries and limiting the formation of new LAGBs. This restraint likely contributes to the lower overall grain count in the carbon-containing cell, as shown in Figure 10c. Essentially, hindered dislocation glide and limited LAGB formation slow grain refinement, yielding a higher dislocation density. Moreover, dislocations carry excess free energy [60], likely contributing to the higher overall potential energy in the carbon-containing cell (Figure 10b). Interestingly, the potential energy notably decreases beyond this strain, even dropping below that of the benchmark simulation, aligning with the reduction in dislocation density in the carbon-containing box.
## IV Conclusion
Nanocrystalline formation in iron under high shear strain has been observed through molecular dynamics
simulations. The process of nucleation and growth of nanograins during shearing involves a disordered state, recrystallization, and grain coarsening. The disordered state is caused by a sudden surge in kinetic energy, released by the drop in potential energy, which had become high and unstable due to shearing. Following this, energy is dissipated into the environment, as mimicked by the thermostat. Atoms rearrange locally to achieve energetically favourable configurations, leading to recrystallization and grain coarsening.
We also examined the influence of various factors, such as thermostat temperature, heat dissipation rate, shear strain rate, and carbon content. Simulations at higher temperatures still experience nanocrystalline formation, but with larger and longer grains forming. A faster rate of heat dissipation altered the grain refinement process. This also involved a disordered state, followed by recrystallization. However, dynamic restoration mechanisms were observed to play a major role in nanocrystalline formation. Simulations with a slower strain rate did not produce nanocrystalline material, with only a single crystal structure being observed. The inclusion of carbon interstitial atoms had a minor effect on nanocrystalline formation, with the process of grain refinement being identical to that of the pristine material. Nonetheless, a smaller number of grains were generally observed for the carbon-containing simulations.
The current simulations demonstrate a possible mechanism of nanograin formation using high-shear methods, which we have not found reported in the literature.
## V Data Availability
All input scripts and simulation data presented in the current work are available at _A link will be provided after the review process and before publication_.
###### Acknowledgements.
The authors gratefully acknowledge the Department of Engineering Science at the University of Oxford for their contribution to the funding of the project. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion) and from the EPSRC [grant number EP/W006839/1]. To obtain further information on the data and models underlying this paper please contact [email protected]. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. The authors acknowledge the use of the Cambridge Service for Data Driven Discovery (CSD3) and associated support services provided by the University of Cambridge Research Computing Services (www.csd3.cam.ac.uk) in the completion of this work. This work used the ARCHER2 UK National Supercomputing Service ([https://www.archer2.ac.uk](https://www.archer2.ac.uk)).
## Appendix A Further analysis
### Slip Systems
To analyse the slip systems within the current orientation, we calculated the Schmid factor [93]. We apply a constant shear strain in the \(xy\) direction, resolving to a principal strain at 45\({}^{\circ}\). Although the stress magnitude changes as the simulation progresses, normalising the applied stress direction to \(\sigma=[1\overline{1}0]\) is feasible. With shear strain confined to the \(xy\) direction and periodic boundary conditions in place, plane stress is assumed. In BCC structures like \(\alpha\)-iron, the slip direction predominantly aligns with the \(\langle 111\rangle\) family, while the slip planes encompass the \(\{110\}\), \(\{112\}\), and \(\{123\}\) families for Fe [94, 95]. Using these stress values alongside the slip planes and directions, we calculated the Schmid factor using:
\[m=\cos(\phi)\cos(\lambda), \tag{1}\]
where \(m\) is the Schmid factor, \(\phi\) is the angle between the normal of the slip plane and the direction of applied stress, and \(\lambda\) is the angle between the direction of applied stress and the slip direction.
The two cosines can be calculated using:
\[\cos(\phi)=\frac{\vec{\sigma}\cdot\vec{n}}{|\vec{\sigma}||\vec{n}|}, \tag{2}\] \[\cos(\lambda)=\frac{\vec{\sigma}\cdot\vec{d}}{|\vec{\sigma}||\vec{d}|}, \tag{3}\]
where \(\vec{n}\) is the vector normal to the slip plane, and \(\vec{d}\) is the vector in the direction of the slip.
There are a total of 48 slip systems in BCC metals. A full list of these can be found in [96]. The Schmid factor was calculated for each system, and the results are shown in Figure 12.
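As an illustration, the short script below enumerates the 48 BCC slip systems from the three plane families and evaluates the Schmid factor of Eqs. (1)-(3) for the \(\sigma=[1\overline{1}0]\) loading direction. This is a minimal sketch we provide for reproducibility, not the authors' original analysis code:

```python
import numpy as np
from itertools import permutations, product

def family(hkl):
    """Distinct signed permutations of hkl, keeping one of each +/- pair
    (v and -v describe the same slip plane or slip direction)."""
    vecs = []
    for p in set(permutations(hkl)):
        for s in product((1, -1), repeat=3):
            v = tuple(si * pi for si, pi in zip(s, p))
            if v not in vecs and tuple(-x for x in v) not in vecs:
                vecs.append(v)
    return [np.array(v, dtype=float) for v in vecs]

# The 48 BCC slip systems: {110}, {112} and {123} planes with <111> directions.
planes = family((1, 1, 0)) + family((1, 1, 2)) + family((1, 2, 3))
directions = family((1, 1, 1))
systems = [(n, d) for n in planes for d in directions if abs(n @ d) < 1e-9]
assert len(systems) == 48

sigma = np.array([1.0, -1.0, 0.0])  # applied stress direction [1 -1 0]
unit = lambda v: v / np.linalg.norm(v)
m = np.array([abs(unit(sigma) @ unit(n)) * abs(unit(sigma) @ unit(d))
              for n, d in systems])
print(f"largest |m| = {m.max():.3f}; systems with |m| > 0.4: {int((m > 0.4).sum())}")
```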
Analyzing Figure 12, we discover a multitude of potential slip systems that can activate within the system when the crystal is oriented along \(x=[100]\). This observation may shed light on why the grain refinement process initiates with numerous atoms entering a highly disordered state. During the shearing of the cell, a diverse set of slip systems is simultaneously engaged. Upon reaching the critical stress point, this leads to a mixing of atoms, inducing high disorientation due to the involvement of multiple slip directions. Subsequent quenching then triggers recrystallization.
### Further Dislocation Analysis
We performed additional DXA calculations after eliminating grain boundaries. This is done by removing atoms that sit close to grain boundaries. We adopt the code developed by Mason [97, 62], which provides the distance \(d\) of each atom to the nearest grain boundary. Atoms with \(d<1\) Å were excluded, followed by a subsequent DXA calculation. Figure 13 displays the analysis outcomes.
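In OVITO terms, this filtering step can be sketched as below. We assume, hypothetically, that the grain-boundary-distance code has written a per-atom property named `d_gb` into the dump file; the actual property name produced by Mason's code may differ:

```python
from ovito.modifiers import (ExpressionSelectionModifier,
                             DeleteSelectedModifier,
                             DislocationAnalysisModifier)

# Remove atoms within 1 Angstrom of a grain boundary, then rerun DXA.
# Deleting atoms does not change the simulation cell, so the volume
# normalisation below still refers to the full cell.
pipeline.modifiers.append(ExpressionSelectionModifier(expression='d_gb < 1.0'))
pipeline.modifiers.append(DeleteSelectedModifier())
pipeline.modifiers.append(DislocationAnalysisModifier(
    input_crystal_structure=DislocationAnalysisModifier.Lattice.BCC))

data = pipeline.compute(frame)
rho_in_grain = data.attributes['DislocationAnalysis.total_line_length'] / data.cell.volume
```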
Figure 13 demonstrates that the standard DXA tends to overestimate the dislocation density, possibly due to grain boundaries. Dislocations can glide and accumulate at grain boundaries, with some leading to the formation of LAGBs [8, 9, 4, 10]. It is likely that DXA interprets certain LAGBs as dislocation lines, accounting for the discrepancy in dislocation count. Specifically, conspicuous spikes in dislocation density around 4.8 strain are observed, which are absent when atoms with \(d<1\) Å are excluded. Thus, we conclude that the standard DXA includes dislocation pile-ups and LAGBs in its dislocation density computation. Nevertheless, the current analysis remains a robust foundation for comparing the various simulations.
### What Constitutes a Grain?
In this work, we considered a minimum grain size of 50 atoms when determining the number of grains present within the system. This number was selected as it allowed us to observe the transition to the disordered state and the subsequent recrystallisation clearly. Nevertheless, Figure 14 shows the comparison of grain numbers for the benchmark simulation with different numbers of atoms selected to form the minimum grain size. The Ovito default of 100 atoms was selected for comparison, as was 5,558 atoms, which corresponds to a 5nm diameter spherical grain. A 5nm grain corresponds to a minimum grain size which can accurately be resolved using transmission Kikuchi diffraction (TKD, or t-EBSD) [98], allowing comparisons with future experimental HPT data.
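Given per-atom grain labels (e.g., from a grain segmentation analysis), applying such a minimum-size threshold reduces to a simple count. The helper below is a minimal, tool-agnostic sketch; the label array and the convention that unassigned atoms carry negative labels are our assumptions:

```python
import numpy as np

def count_grains(grain_ids, min_atoms=50):
    """Count grains with at least `min_atoms` member atoms.

    grain_ids : per-atom integer grain labels; negative values are
                treated as unassigned/disordered atoms.
    """
    _, counts = np.unique(grain_ids[grain_ids >= 0], return_counts=True)
    return int(np.sum(counts >= min_atoms))

# Thresholds used in the text: 50 atoms, 100 atoms (Ovito default),
# and 5,558 atoms (the atom count of a 5nm spherical grain).
```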
Figure 14 shows that the number of grains between the 50-atom and 100-atom analyses largely follows the same trajectory. At 0.27 strain, our 50-atom analysis showed 400 grains whilst the 100-atom analysis showed 215. This means that 185 grains were found to have fewer than 100 but more than 50 atoms at this point. This further suggests that the spike is caused by a highly disordered state of the atoms, whereby only small pockets of BCC-phase atoms remain, each of which is flagged as a grain. At this strain value, the 5nm grain analysis cannot pick up any grains due to the highly disordered state, and the grain number is shown as 0. After 2 strain, the 50- and 100-atom analyses follow the same trajectory; however, there is always a difference of around 40 grains between them. Therefore, we conclude that there are always around 40 grains in the simulation cell which have fewer than 100 atoms but more than 50. Interestingly, the 5nm grain analysis does not follow the same trajectory, and instead hovers around 1 grain up to around 1.37 strain. After this point, the grain number increases for the 5nm analysis, as shown in Figure 14. A local maximum in grain number is reached at around 2.5 strain, with 30 grains being present. This is followed by a minor decrease, after which the grain number sits steadily at around 25 after 4 strain. As such, we observe that the process of grain refinement still occurs if we define a grain as having a minimum of 5,558 atoms.
Figure 12: Schmid factor for all slip systems in a BCC system under shear strain.
Figure 13: Dislocation density comparison between the standard DXA and DXA carried out with \(d<1\) Å atoms removed.
Figure 14: Total number of grains based on the minimum number of atoms per grain.
This further supports our hypothesis that the transition of atoms into the disordered state, followed by recrystallisation and grain growth, is a major mechanism of grain refinement. It is clear that up to 1.37 strain, all of the grains are small, with many containing fewer than 100 atoms. Through the processes of grain growth described above, the grains expand, causing the 5nm analysis value to increase. This analysis saturates at around 25-30 grains, and any subsequent changes in the simulation cell with shear occur through small-grain processes.
By increasing the minimum number of atoms that constitute a grain, we can compare the effects of temperature and carbon on the formation of experimentally observable grains. Figure 15 shows the number of 5nm spherical grains found in different temperature simulations and carbon-containing simulations.
In Figure 15a, there is no initial spike in 5nm grains for any simulation, and the trajectories of the different simulations are largely similar until around 2.5 strain. This is contrary to the data shown in Figure 4f, which shows that the 300 K simulation has the most grains at this point, suggesting that at 300 K the grain numbers are inflated by the presence of many small grains. We also notice that the 1000 K simulation has around 2 grains larger than 5nm at 10 strain, which is visible in Figure 5g. Ultimately, the 300 K simulation still shows the largest number of grains overall. However, this is not to the same extent as in Figure 4f, again suggesting that the shearing process stimulates the production of many small grains in the 300 K simulation. Nevertheless, the 500 K and 800 K simulations also have many small grains, which is visible by comparing the data in Figure 15a to Figure 4f. For example, at 6 strain, the total number of grains present for the 500 K simulation is 120, and that number drops to 20 when considering only 5nm grains.
Figure 15b shows the comparison of 5nm grains between the benchmark and carbon-containing simulations. We show here that the deviation in grain number observed in Figure 10c does not occur when considering 5nm grains, up to 8.5 strain. This suggests that the benchmark simulation has many more small grains than the carbon-containing simulation. After 8.5 strain, we notice that the grain number in the carbon-containing simulation begins to dip whilst that of the benchmark simulation rises.
### Thermal Stability of Grains
It was important to assess the stability of the newly formed grains in order to determine their usefulness for future simulations. The benchmark simulation and the variable temperature simulations from Section 3 were chosen for assessment. These cells were thermalised for 1 ns using the NPH ensemble, with the Langevin thermostat keeping the temperature at 300 K, 500 K, 800 K, and 1000 K. Figure 16 shows the total number of grains present over the 1 ns run time for each simulation cell. For the purposes of this comparison, the minimum grain size was taken as 50 atoms.
Figure 15: Number of 5nm spherical grains found in the simulations; (a) dependence of the 5nm grain count on temperature, and (b) dependence of the 5nm grain count on carbon inclusion.
Inspection of Figure 16 shows that all cells follow similar trends. There is first a reduction in the number of grains found in the cell, after which the number of grains stabilises after around 700 ps of thermalisation. The 300 K, 500 K, and 800 K simulations stabilise at around 150, 56, and 35 grains, respectively. The 1000 K simulation stabilises much earlier, at a value of only 2 grains. As such, we infer that the nanocrystalline structure will remain stable at finite temperatures.
###### Contents
* 1 Introduction
* 2 Background
* 3 Estimation of the tECV using PACE
* 3.1 PACE-based MC estimator of the tECV
* 3.2 Estimation of errors
* 3.3 Reduced variance estimator
* 4 Numerical experiment: Linear-Gaussian setting
* 4.1 One-dimensional case
* 4.2 High-dimensional case
* 5 Stochastic optimization for the continuous design domain
* 5.1 Nonlocal approximation of the conditional expectation
* 5.2 Algorithm
* 6 Numerical experiment: Electrical impedance tomography
* 6.1 Formulation and finite element model of EIT experiment
* 6.2 Numerical results
* 6.2.1 Analysis of error in estimating the tECV
* 6.2.2 Minimization of the tECV
* 7 Conclusion
* A Computing tECV using the IS approach
* B Proofs
* B.1 Proof of Proposition 1
* B.2 Proof of Proposition 2
* B.3 Orthogonality property of the CE
* C Linear approximation of the CE
* C.1 Analytical formulation of the linear approximation
* C.2 Empirical linear approximation of the CE
* D Reduced variance estimators of MSE and tECV
* E Gradient of the weighted MSE w.r.t. design parameters
* F Derivation of the finite element formulation
## 1 Introduction
The design of experiments (DOE) aims to systematically plan experiments to collect data and draw accurate conclusions about a particular process or system. This methodology has wide-ranging applications, such as in engineering [1, 2], pharmaceutical [3, 4], and biological fields [5, 6]. Fundamentally, ill-posed problems and uncertainties naturally arise in the DOE. The Bayesian experimental design provides a general probabilistic framework for dealing with these challenges [7, 8].
There are several optimality criteria available in the Bayesian experimental design framework; for example, the A-optimality criterion minimizes the expected conditional variance (ECV), whereas the information gain criterion maximizes the expected Kullback-Leibler divergence between the prior and posterior distributions. This study focuses on the first criterion, _i.e._, the A-optimal DOE. Compared with alternative optimality criteria, the A-optimality criterion is of great practical appeal due to its straightforward interpretation. Indeed, a reduced posterior variance indicates a reduction in uncertainty.
The Bayesian experimental design generally requires estimating the expected conditional statistical quantities, such as the ECV or the expected information gain (EIG). Because this task is computationally expensive, many previous works have sought to develop efficient methods for its execution. Alexanderian et al. [9] proposed a scalable method for solving the A-optimal DOE problem that is specifically tailored to scenarios in which the observational map is linear. For a nonlinear observational map, the posterior distribution is typically intractable. Although the Markov chain Monte Carlo (MCMC) algorithm and the approximate Bayesian computation approach can be used to sample an intractable posterior distribution, these approaches are too computationally intensive for the Bayesian DOE because the problem requires sampling many different posterior distributions for each design candidate [10]. A popular approach to alleviate the computational cost of the Bayesian DOE for nonlinear observational maps is to use a Laplace approximation of the posterior distribution [11, 12, 13, 14]. In [15], Alexanderian et al. used the Laplace approximation approach in the context of the A-optimal DOE. In [16], Beck et al. used this Laplace approximation approach for estimating the EIG, which was later combined with the multilevel method in [17] and with stochastic gradient descent to find the optimal design in [18]. For a nonlinear observation map, the approaches mentioned above estimate expected conditional quantities using a three-step process: first, they sample the posterior distribution or its approximation for each observational sample; second, they estimate the posterior characteristic, _e.g._, the posterior variance or the Kullback-Leibler divergence with respect to (_w.r.t._) the prior density; and finally, they repeat the previous step for many observational samples to estimate the expected value of the posterior characteristics. The first two steps are equivalent to solving multiple inverse problems, each of which is computationally expensive, particularly for high-dimensional cases, because the posterior distribution becomes intractable.
This study presents a novel method for estimating the ECV using the projection-based approximation of the conditional expectation (PACE), designed to handle computationally expensive observational maps. The relationship between the uncertainty of the quantities of interest, represented by random variable (RV) \(Q\), and their corresponding experimental observations, denoted as RV \(Y_{d}\), is assumed to be \(Y_{d}=h(Q,d)+\Xi\), where \(h\) is the observational map, \(\Xi\) represents measurement error, and \(d\) denotes the design parameters. Using the law of total variance, the ECV of the RV \(Q\) given RV \(Y_{d}\), denoted as \(\mathrm{E}\left[\mathrm{Var}\left[Q\mid Y_{d}\right]\right]\), is obtained by approximating the conditional expectation (CE) of the RV \(Q\) given RV \(Y_{d}\), denoted as \(\mathrm{E}\left[Q\mid Y_{d}\right]\). Specifically, the ECV can be computed as the difference between the variance of \(Q\) and the variance of the CE \(\mathrm{E}\left[Q\mid Y_{d}\right]\). To approximate \(\mathrm{E}\left[Q\mid Y_{d}\right]\), we utilize the orthogonality of the CE and introduce the map \(\phi(Y_{d})\), which minimizes the mean square error \(\mathrm{E}\left[\|Q-\phi(Y_{d})\|_{2}^{2}\right]\) under the assumption of the finite variance of both \(Q\) and \(Y_{d}\). The proposed approach avoids the need to sample or approximate the posterior distribution and eliminates the evaluation of the likelihood function. Notably, we show that the relative mean absolute error (MAE) of the PACE-based estimator of the ECV is of the order \(\mathcal{O}(N^{-1/2})\), where \(N\) represents the number of evaluations of the observational map. Moreover, the computational efficiency of the proposed approach remains unaffected by the intractability of the posterior. In a related work, Hoang et al. [19] used the PACE approach to develop a machine learning-based ensemble conditional mean filter (ML-EnCMF) for nonlinear data assimilation problems and demonstrated that the ML-EnCMF outperforms linear filters in terms of accuracy.
To implement our method, we use an artificial neural network (ANN) [20, 21] to approximate the CE, owing to ANN's versatility and demonstrated effectiveness in handling high-dimensional regression problems. To deal with continuous design domains, we present a novel algorithm that effectively minimizes the ECV using the stochastic gradient descent method. The training process of the ANN is integrated into the algorithm employed to optimize the design parameters, thereby improving computational efficiency. This integration is possible because the objective functions for optimizing the ECV and training the ANN are identical. Moreover, we propose a nonlocal approximation of the CE and apply transfer learning to reduce the number of evaluations of the observation model.
The remainder of the paper is structured as follows. In Sec. 2, we summarize the background of the Bayesian experimental design. Section 3 details the PACE framework used for the A-optimal DOE and its error estimation. In Section 4, we illustrate our method and verify its error estimation using the linear-Gaussian setting of the DOE. Then, in Section 5, we examine the continuous design domain scenario and develop a stochastic optimization algorithm to solve the A-optimal DOE. In Section 6, we apply our approach for the optimal design of electrical impedance tomography experiments used to identify the properties of a composite material. Finally, in Section 7, we conclude the paper with a summary and perspectives.
## 2 Background
In Bayesian experimental design, the uncertainty associated with the quantities of interest is modeled as an RV, denoted here as \(Q\) and takes value in \(\mathbb{R}^{n}\). We consider a deterministic observational map, denoted as \(h\), that is parameterized by a vector of design parameters \(d\) in a domain \(\mathcal{D}\subset\mathbb{R}^{\delta}\). This map transforms each vector \(q\in\mathbb{R}^{n}\) into a corresponding noise-free observational vector in \(\mathbb{R}^{m}\). We consider the scenario in which map \(h\) consists of two components: a computationally expensive numerical model denoted as \(h_{m}\), which solves the partial differential equation that governs the experiments, and a measurement operator \(h_{o}\). Typically, \(h=h_{o}\circ h_{m}\). Assuming that the measurement error is additive, we model the observational RV \(Y_{d}\) for a given vector \(d\) of the experimental design parameters as
\[Y_{d}(\omega)=h(Q(\omega),d)+\Xi(\omega), \tag{1}\]
where \(\Xi\) is the observational error RV, and \(\omega\) denotes an outcome in the sample space \(\Omega\) of the underlying probability space \((\Omega,\mathfrak{A},\mathbb{P})\). Further, subscript \(d\) indicates the dependence of the observational RV \(Y_{d}\) on the design parameter vector \(d\). The inverse problem involves updating the prior distribution of the quantities of interest, given the specific measurement data, \(y\).
Using the Bayesian identification framework is a standard approach to solve the inverse problem owing to the mathematical well-posedness of the framework and its ability to handle uncertainty. Let us assume that the distributions of the RVs \(Q\), \(Y_{d}\), and \(\Xi\) have finite variances and are absolutely continuous, _i.e._, their densities exist. For a fixed vector \(d\), given the prior PDF \(\pi_{Q}\) of RV \(Q\) and the observational data \(y\), the Bayesian posterior PDF \(\pi_{Q|Y_{d}}(\cdot\mid y)\) is given as follows:
\[\pi_{Q|Y_{d}}(q\mid y)=\frac{\pi_{Q}(q)\,\pi_{\Xi}(y-h(q,d))}{\pi_{Y_{d}}(y)}, \tag{2}\]
where \(\pi_{\Xi}\) is the density function of RV \(\Xi\), and \(\pi_{Y_{d}}(y)\) is the evidence given by
\[\pi_{Y_{d}}(y)=\int_{\mathbb{R}^{n}}\pi_{Q}(q)\,\pi_{\Xi}(y-h(q,d))\mathrm{d}q. \tag{3}\]
The Bayesian experimental design searches for the parameter vector \(d\) that minimizes the posterior uncertainty. Here, we consider the A-optimal DOE, which essentially minimizes the expected posterior variance. The posterior variance can be formulated via the Bayes' rule, Eq. (2), as
\[\mathrm{Var}\left[Q\mid y;d\right]=\int_{\mathbb{R}^{n}}q^{\odot 2}\;\pi_{Q|Y_{d}}(q |y)\mathrm{d}q-\left[\int_{\mathbb{R}^{n}}q\;\pi_{Q|Y_{d}}(q|y)\mathrm{d}q \right]^{\odot 2}, \tag{4}\]
where \({}^{\odot 2}\) denotes the Hadamard (element-wise) square, e.g., \(q^{\odot 2}=[q_{1}^{2},\ldots,q_{n}^{2}]^{\top}\). Modeling the measurement data as an RV, the distribution of the posterior variance is represented by the conditional variance \(\mathrm{Var}\left[Q\mid Y_{d}\right]\), which is an \(\mathbb{R}^{n}\)-valued RV defined as
\[\mathrm{Var}\left[Q\mid Y_{d}\right](\omega)\equiv\mathrm{Var}\left[Q\mid Y_{d}(\omega);d\right]. \tag{5}\]
The A-optimal DOE approach seeks the experimental design parameters that minimize the ECV of the RV \(Q\) given RV \(Y_{d}\), denoted as \(\mathrm{E}\left[\mathrm{Var}\left[Q\mid Y_{d}\right]\right]\). In cases where \(n>1\), implying a multidimensional scenario, the A-optimal DOE approach uses the element-wise sum of the ECV as the objective function, which is referred to as the _total ECV_ (tECV) in this study. We denote \(Q=[Q_{1},...,Q_{n}]^{\top}\) where \(Q_{i}\) is the \(i\)-th component of \(Q\). The tECV \(V\) for a given vector \(d\in\mathbb{R}^{\delta}\) of the experimental design parameters is given by
\[V(d)=\sum_{i=1}^{n}\mathrm{E}\left[\mathrm{Var}\left[Q_{i}\mid Y_{d}\right] \right], \tag{6}\]
where \(\mathrm{Var}\left[Q_{i}\mid Y_{d}\right]\) is the conditional variance of the \(i\)-th component of the RV \(Q\). Alternatively, the tECV can be expressed as the trace of the expected conditional covariance, represented as \(V(d)\equiv\mathrm{tr}\big{(}\mathrm{Cov}\left[Q\mid Y_{d}\right]\big{)}\), with Cov and \(\mathrm{tr}\) denoting the covariance and trace operators, respectively.
We obtain the A-optimal DOE by minimizing the tECV, _i.e._, its parameter vector \(d_{\mathrm{A}}\) satisfies
\[d_{\mathrm{A}}=\;\arg\;\min_{d}\;V(d). \tag{7}\]
By minimizing the tECV, the A-optimal DOE approach attempts to reduce the overall variability of the posterior.
Solving the optimization problem in Eq. (7) requires an efficient and accurate estimation of the tECV. A double-loop algorithm is needed when the Monte Carlo (MC) method is used to estimate the tECV based on Eq. (6). The outer loop samples the RV \(Y_{d}\), and the inner loop estimates the posterior variance for each sample of the outer loop. Let \(\{y^{(i)}\}_{i=1}^{N_{\mathrm{o}}}\) be \(N_{\mathrm{o}}\) samples from the outer loop. Estimating the posterior variance \(\mathrm{Var}\left[Q\mid y^{(i)};\,d\right]\) for each sample \(y^{(i)}\) involves sampling the posterior distribution \(\pi_{Q|Y_{d}}(\cdot\mid y^{(i)})\). However, sampling an intractable posterior distribution poses a significant challenge: the posterior can be concentrated in regions considerably smaller than the prior, especially in high-dimensional problems where \(n,m\gg 1\). Using the importance sampling (IS) method or the MCMC method to compute the tECV becomes computationally expensive in such cases. The formulation of the IS double-loop estimator can be found in Appendix A. The efficiency of the IS estimator can be enhanced using the Laplace approximation; however, this method assumes that the underlying distribution closely resembles a Gaussian distribution and requires that the maximum a posteriori estimator be determined.
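As a concrete illustration of the double loop, the following is a minimal sketch, assuming additive Gaussian measurement error and using the prior as the IS proposal; the callables `h` (batched noise-free observational map) and `sample_prior` are problem-specific placeholders, and the paper's exact formulation in Appendix A may differ in its proposal and normalization choices.

```python
import numpy as np

def is_double_loop_tecv(h, sample_prior, sigma_xi, d, N_o, N_i, rng):
    """Double-loop IS estimate of V(d) = E[sum_i Var[Q_i | Y_d]].

    Outer loop: draw a synthetic observation y ~ Y_d. Inner loop:
    self-normalized importance sampling with the prior as proposal, so
    the unnormalized weights are the likelihood pi_Xi(y - h(q, d)).
    """
    total = 0.0
    for _ in range(N_o):
        q0 = sample_prior(1, rng)                      # one prior draw, (1, n)
        h0 = h(q0, d)                                  # noise-free obs, (1, m)
        y = h0 + sigma_xi * rng.standard_normal(h0.shape)
        q = sample_prior(N_i, rng)                     # proposal draws, (N_i, n)
        log_w = -0.5 * np.sum((y - h(q, d)) ** 2, axis=1) / sigma_xi**2
        w = np.exp(log_w - log_w.max())                # numerically stabilized
        w /= w.sum()                                   # self-normalized weights
        mean = w @ q                                   # posterior mean, (n,)
        var = w @ (q - mean) ** 2                      # posterior variance, (n,)
        total += var.sum()
    return total / N_o
```

Each outer iteration re-runs the inner loop, so the cost scales with \(N_{\mathrm{o}}\times N_{\mathrm{i}}\) evaluations of \(h\), which is the multiplicative growth discussed next.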
Notably, estimating the tECV involves applying the IS method to compute the variances of \(N_{\mathrm{o}}\) different posteriors, which results in a computational cost of the multiplicative order \(N_{\mathrm{o}}\times N_{\mathrm{i}}\), where \(N_{\mathrm{i}}\) is the number of samples in the inner loop. Similarly, other approaches that estimate the tECV by sampling the posterior, such as the MCMC method, suffer from the same multiplicative growth in computational cost. Herein, we present a novel approach that approximates the tECV via the CE, exploiting the orthogonal projection property of the CE. The PACE-based approach, as discussed in Section 3, does not require approximating or sampling the posterior distribution.
## 3 Estimation of the tECV using PACE
We demonstrate that the tECV (\(V(d)\)) can be evaluated by approximating the CE rather than sampling the conditional variance. For a given experimental parameter vector \(d\), the CE of RV \(Q\) given RV \(Y_{d}\), which is denoted as \(\operatorname{E}\left[Q\mid Y_{d}\right]\), satisfies
\[\int_{A}Q(\omega)\mathrm{d}\mathbb{P}(\omega)=\int_{A}\operatorname{E}\left[Q \mid Y_{d}\right](\omega)\mathrm{d}\mathbb{P}(\omega), \tag{8}\]
for every measurable set \(A\) in the \(\sigma\)-algebra generated by the RV \(Y_{d}\). According to the Doob-Dynkin lemma, the CE is a composition of a map \(\phi_{d}:\mathbb{R}^{m}\to\mathbb{R}^{n}\) and the RV \(Y_{d}\), which is given as
\[\operatorname{E}\left[Q\mid Y_{d}\right](\omega)=\phi_{d}(Y_{d}(\omega)). \tag{9}\]
The map \(\phi_{d}\) can be defined using the posterior density as
\[\phi_{d}(y)=\int_{\mathbb{R}^{n}}q\;\pi_{Q\mid Y_{d}}(q\mid y)\mathrm{d}q. \tag{10}\]
Alternatively, map \(\phi_{d}\) can be obtained from the orthogonal projection property. For the finite-variance RVs \(Q\) and \(Y_{d}\), the CE \(\operatorname{E}\left[Q\mid Y_{d}\right]\) is the \(L_{2}\) orthogonal projection of RV \(Q\) onto the \(\sigma\)-algebra generated by the RV \(Y_{d}\), which is formulated as
\[\operatorname{E}\left[Q\mid Y_{d}\right]=\phi_{d}(Y_{d}),\quad\text{where} \quad\phi_{d}=\arg\min_{f\in\mathcal{S}(\mathbb{R}^{m},\mathbb{R}^{n})} \operatorname{E}\left[\left\|Q-f\circ Y_{d}\right\|_{2}^{2}\right], \tag{11}\]
where \(\left\|\cdot\right\|_{2}\) is the \(L_{2}\) norm. Here, \(\mathcal{S}(\mathbb{R}^{m},\mathbb{R}^{n})\) represents the set of all functions \(f:\mathbb{R}^{m}\to\mathbb{R}^{n}\) for which the variance of the RV \(f(Y_{d})\) is finite. Appendix B.3 provides a proof of the orthogonal projection property. Theoretical properties of the CE can be found in [22] and in [23, Chapter 4]. The following proposition provides a formulation for the tECV calculated using the CE \(\operatorname{E}\left[Q\mid Y_{d}\right]\).
**Proposition 1**.: _The tECV \(V(d)\) given in Eq. (6) can be expressed using CE \(\operatorname{E}\left[Q\mid Y_{d}\right]\) as:_
\[V(d)=\operatorname{E}\left[\left\|Q-\operatorname{E}\left[Q\mid Y_{d}\right] \right\|_{2}^{2}\right]. \tag{12}\]
A proof of Proposition 1 is given in Appendix B.1. Using Proposition 1, the tECV can be estimated by approximating the CE \(\operatorname{E}\left[Q\mid Y_{d}\right]\).
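For intuition, Eq. (12) can also be obtained in one line: the conditional variance is itself a conditional second moment, \(\mathrm{Var}\left[Q_{i}\mid Y_{d}\right]=\mathrm{E}\left[(Q_{i}-\mathrm{E}\left[Q_{i}\mid Y_{d}\right])^{2}\mid Y_{d}\right]\), so applying the tower property and summing over components gives

\[V(d)=\sum_{i=1}^{n}\mathrm{E}\left[\mathrm{E}\left[(Q_{i}-\mathrm{E}\left[Q_{i}\mid Y_{d}\right])^{2}\mid Y_{d}\right]\right]=\sum_{i=1}^{n}\mathrm{E}\left[(Q_{i}-\mathrm{E}\left[Q_{i}\mid Y_{d}\right])^{2}\right]=\mathrm{E}\left[\left\|Q-\mathrm{E}\left[Q\mid Y_{d}\right]\right\|_{2}^{2}\right].\]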
### 3.1 PACE-based MC estimator of the tECV
We leverage the orthogonal projection property described in Eq. (11) to approximate the CE, eliminating the necessity to sample the posterior distributions as in Eq. (10). For a nonlinear observational map \(h\), the map \(\phi_{d}\) is typically nonlinear and does not allow a closed-form solution. In such cases, we employ the nonlinear regression method to approximate the CE, which involves three key components. First, we select a suitable parameterized functional subspace of finite dimension \(\mathcal{S}^{\prime}(\mathbb{R}^{m},\mathbb{R}^{n})\subset\mathcal{S}(\mathbb{ R}^{m},\mathbb{R}^{n})\). This subspace represents a restricted set of functions that are used to approximate the map \(\phi_{d}\). Second, we solve the projection problem described in Eq. (11) within this chosen subspace. Last, we employ an appropriate MC estimator to estimate the MSE \(\operatorname{E}\left[\left\|Q-f\circ Y_{d}\right\|_{2}^{2}\right]\) for \(f\in\mathcal{S}^{\prime}\).
By restricting \(\mathcal{S}^{\prime}\) to a linear function space, we obtain a linear approximation of the map \(\phi_{d}\), as detailed in Appendix C.1. However, this linear approximation of the CE is known to be biased and tends to overestimate the tECV. The remainder of this section is dedicated to developing a suitable method for approximating the nonlinear CE.
Our primary objective in this section is to approximate the CE given a fixed design parameter vector \(d\). We will expand our approach in Section 5 to encompass the neighborhood surrounding a given vector \(d\). Let \(\mathbf{\theta}\in\mathbb{R}^{\beta}\) denote the vector containing the hyperparameters of functions \(f\) in the selected subspace \(\mathcal{S}^{\prime}\). We use the orthogonal projection property (Eq. (11)) to approximate the map \(\phi_{d}\) using a function \(f^{*}:=f(\cdot\;;\mathbf{\theta}^{*})\) in the subspace \(\mathcal{S}^{\prime}\), such that
\[\mathbf{\theta}^{*}=\arg\;\min_{\mathbf{\theta}}\;\operatorname{E}\left[\big{\|}Q-f(Y_ {d};\mathbf{\theta})\big{\|}_{2}^{2}\right]. \tag{13}\]
To simplify the notation, we introduce the MSE function \(\mathcal{M}:\mathcal{S}\to\mathbb{R}_{+}\), which is defined as
\[\mathcal{M}(f)\equiv\operatorname{E}\left[\big{\|}Q-f(Y_{d})\big{\|}_{2}^{2} \right]. \tag{14}\]
With this notation in place, the relations between functions in \(\mathcal{S}\) can be summarized as:
\[\mathcal{M}(f)\geq\mathcal{M}(f^{*})\geq\mathcal{M}(\phi_{d})=V(d),\;\forall f \in\mathcal{S}^{\prime}.\]
A straightforward method for estimating the MSE \(\mathcal{M}(f)\) is to use the crude Monte Carlo (MC) estimator. Let \(D_{N}=\{\big{(}q^{(i)},y^{(i)}\big{)}\}_{i=1}^{N}\) be an \(N\)-sized dataset of _i.i.d._ samples of the RV pairs \((Q,Y_{d})\), where \(\{q^{(i)}\}_{i=1}^{N}\) are the _i.i.d._ samples of RV \(Q\), and \(\{y^{(i)}\}_{i=1}^{N}\) are the corresponding _i.i.d._ samples of RV \(Y_{d}\), which are obtained as
\[\{y^{(i)}\}_{i=1}^{N}=\left\{y^{(i)}\mid y^{(i)}=h(q^{(i)},d)+\xi^{(i)},\quad q ^{(i)}\in\{q^{(i)}\}_{i=1}^{N}\right\}. \tag{15}\]
Here, \(\{\xi^{(i)}\}_{i=1}^{N}\) are the _i.i.d._ samples of \(\Xi\).
Let \(D_{M}=\{\big{(}q^{(i)},y^{(i)}\big{)}\}_{i=1}^{M}\) be an \(M\)-sized dataset of _i.i.d._ samples of the RV pairs \((Q,Y_{d})\), which is statistically independent from \(D_{N}\). We denote the crude MC estimator of the MSE \(\mathcal{M}(f)\) using dataset \(D\in\{D_{N},D_{M}\}\) as:
\[\widehat{\mathcal{M}}(f\mid D)=\frac{1}{\lvert D\rvert}\sum_{(q,y)\in D} \big{\|}q-f(y)\big{\|}_{2}^{2}, \tag{16}\]
where \(\lvert D\rvert\) is the cardinality of dataset \(D\). Owing to Proposition 1, we estimate the tECV using the following _PACE-based MC estimator_\(\widehat{V}_{d}(D_{N},D_{M})\):
\[\widehat{V}_{d}(D_{N},D_{M}) :=\widehat{\mathcal{M}}(f(\cdot;\mathbf{\theta}_{D_{N}})\mid D_{M}) \tag{17a}\] \[\text{where}\quad\mathbf{\theta}_{D_{N}} =\arg\;\min_{\mathbf{\theta}}\;\widehat{\mathcal{M}}(f(\cdot;\mathbf{ \theta})\mid D_{N}), \tag{17b}\]
which requires a numerical solution of the optimization problem stated in Eq. (17b). We observe that unlike the double-loop IS estimator, which incurs a computational cost proportional to the product \((N_{\text{o}}\times N_{\text{i}})\) as explained in Section 2, the PACE-based approach exhibits a linear cost proportional to the sizes of datasets \(D_{N}\) and \(D_{M}\).
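In practice, the estimator in Eq. (17) amounts to fitting a regressor on \(D_{N}\) and reporting its empirical MSE on the independent set \(D_{M}\). A minimal sketch, assuming additive Gaussian measurement error and using a generic scikit-learn MLP in place of the paper's own ANN implementation (the callables `h` and `sample_prior` are problem-specific placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def pace_tecv(h, sample_prior, sigma_xi, d, N, M, rng):
    """PACE-based MC estimator of the tECV, Eq. (17): fit f(y) ~ E[Q | Y_d = y]
    on a training set D_N, then return the empirical MSE on a test set D_M."""
    def make_dataset(size):
        q = sample_prior(size, rng)                    # (size, n)
        h_q = h(q, d)                                  # (size, m)
        return q, h_q + sigma_xi * rng.standard_normal(h_q.shape)
    q_train, y_train = make_dataset(N)
    q_test, y_test = make_dataset(M)
    reg = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=2000)
    reg.fit(y_train, q_train)                          # solves Eq. (17b)
    resid = q_test - reg.predict(y_test).reshape(q_test.shape)
    return float(np.mean(np.sum(resid**2, axis=1)))   # Eq. (17a), cf. Eq. (16)
```

The training step touches only \(D_{N}\) and the evaluation step only \(D_{M}\), so the number of forward-model evaluations is \(N+M\) rather than \(N_{\text{o}}\times N_{\text{i}}\).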
Although the PACE-based approach requires approximating the CE, it entirely eliminates the need to sample different posterior densities. This allows us to bypass the complications associated with posterior intractability. Using the regressor, we obtain approximate values of the CE samples with minimal computing effort. In the following subsections, we analyze the statistical error of the PACE-based MC estimator of the tECV. Moreover, we propose a control-variate version of this PACE-based MC estimator.
### 3.2 Estimation of errors
In this subsection, we aim to analyze the MAE of the estimator \(\widehat{V}_{d}(D_{N},D_{M})\). Let \(\epsilon_{S^{\prime}}\), \(\epsilon_{\mathrm{opt}}\), and \(\epsilon_{\mathrm{MC}}\) denote the approximation error due to the choice of the subspace \(\mathcal{S}^{\prime}\), the optimization error for the numerical solution to the minimization problem stated in Eq. (17b) with the finite-size dataset \(D_{N}\), and the estimation error of the MC estimator due to the finite size of the dataset \(D_{M}\), respectively, _i.e._,
\[\epsilon_{S^{\prime}} :=\mathrm{E}\left[\big{|}\mathcal{M}(f^{*})-\mathcal{M}(\phi_{d}) \big{|}\right], \tag{18}\] \[\epsilon_{\mathrm{opt}} :=\mathrm{E}\left[\big{|}\mathcal{M}(f(\cdot;\boldsymbol{\theta} _{D_{N}}))-\mathcal{M}(f^{*})\big{|}\right],\] (19) \[\epsilon_{\mathrm{MC}} :=\mathrm{E}\left[\big{|}\widehat{\mathcal{M}}(f(\cdot; \boldsymbol{\theta}_{D_{N}})\mid D_{M})-\mathcal{M}(f(\cdot;\boldsymbol{ \theta}_{D_{N}}))\big{|}\right]. \tag{20}\]
The MAE between the estimator \(\widehat{V}_{d}(D_{N},D_{M})\) and \(V_{d}\) is bounded using the triangle inequality as follows:
\[\mathrm{E}\left[\big{|}\widehat{V}_{d}(D_{N},D_{M})-V_{d}\big{|}\right] =\,\mathrm{E}\left[\big{|}\widehat{\mathcal{M}}(f(\cdot;\boldsymbol{ \theta}_{D_{N}})\mid D_{M})-\mathcal{M}(\phi_{d})\big{|}\right] \tag{21}\] \[\leq\,\mathrm{E}\left[\big{|}\widehat{\mathcal{M}}(f(\cdot; \boldsymbol{\theta}_{D_{N}})\mid D_{M})-\mathcal{M}(f(\cdot;\boldsymbol{\theta }_{D_{N}}))\big{|}\right]\] \[\quad+\mathrm{E}\left[\big{|}\mathcal{M}(f(\cdot;\boldsymbol{ \theta}_{D_{N}}))-\mathcal{M}(f^{*})\big{|}\right]\] \[\quad+\mathrm{E}\left[\big{|}\mathcal{M}(f^{*})-\mathcal{M}( \phi_{d})\big{|}\right]\] \[=\epsilon_{\mathrm{MC}}+\epsilon_{\mathrm{opt}}+\epsilon_{S^{ \prime}}.\]
Aiming at deriving an asymptotic error estimator that captures the error evolution _w.r.t._ the size of the datasets \(D_{N}\) and \(D_{M}\), we make use of the following assumptions:
**Assumption 1**.: \(\mathcal{S}^{\prime}\) _is a convex set that contains constant functions._
**Assumption 2**.: \[\mathrm{Var}\left[\big{\|}Q-f^{*}(Y_{d})\big{\|}_{2}^{2}\right]=\mathcal{O} \big{(}2\,\mathrm{E}\left[\big{\|}Q-f^{*}(Y_{d})\big{\|}_{2}^{2}\right]^{2} \big{)}\ <\infty.\] (22)
**Assumption 3**.: \[\epsilon_{opt}=\mathcal{O}\left(\mathrm{E}\left[\big{|}\widehat{\mathcal{M}}( f^{*}|D_{N})-\mathcal{M}(f^{*})\big{|}\right]\right).\] (23)
The first assumption is a common constraint imposed on the choice of the subspace \(\mathcal{S}^{\prime}\). This assumption implies that \(\mathrm{E}\left[(Q-f^{*}(Y_{d}))^{\top}g(Y_{d})\right]=0\) for every \(g\in\mathcal{S}^{\prime}\). By choosing \(g\) as a constant function in this expression, we deduce that the mean of the RV \(Q-f^{*}(Y_{d})\) is zero. We justify the second assumption by noting that for every zero-mean Gaussian RV \(X\), we have \(\mathrm{Var}\left[X^{2}\right]=\mathrm{E}\left[X^{4}\right]-(\mathrm{E}\left[X^{2}\right])^{2}=2(\mathrm{E}\left[X^{2}\right])^{2}\). The third assumption asserts that the error of the optimization problem under the finite-sized dataset \(D_{N}\) is of the same order as the statistical error of the MC estimator \(\widehat{\mathcal{M}}(f^{*}|D_{N})\).
**Proposition 2** (Error estimation).: _Assuming that the RVs \(Q\) and \(Y_{d}\) have finite variance, and that the Assumptions 1, 2, and 3 hold, the MAE of the PACE-based MC estimator \(\widehat{V}_{d}(D_{N},D_{M})\) satisfies_
\[\mathrm{E}\left[\big{|}\widehat{V}_{d}(D_{N},D_{M})-V_{d}\big{|}\right]= \mathcal{O}\Bigg{(}\Big{(}\frac{2}{\sqrt{\pi N}}+\frac{2}{\sqrt{\pi M}}\Big{)} \,\mathrm{E}\left[\big{\|}Q-f^{*}(Y_{d})\big{\|}_{2}^{2}\right]\Bigg{)}+ \epsilon_{S^{\prime}}. \tag{24}\]
_as \(N,M\ \gg 1\)._
If we further assume the approximation error \(\epsilon_{S^{\prime}}=0\), then
\[\begin{split}\mathrm{E}\left[\left|\widehat{V}_{d}(D_{N},D_{M})-V_{ d}\right|\right]&=\mathcal{O}\Bigg{(}\Big{(}\frac{2}{\sqrt{\pi N}}+\frac{2}{ \sqrt{\pi M}}\Big{)}\,\mathrm{E}\left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2} \right]\Bigg{)}\\ &=\mathcal{O}\Bigg{(}\Big{(}\frac{2}{\sqrt{\pi N}}+\frac{2}{ \sqrt{\pi M}}\Big{)}\,\mathrm{E}\left[\left\|Q-\phi_{d}(Y)\right\|_{2}^{2} \right]\Bigg{)}\\ &=\mathcal{O}\left(\frac{2}{\sqrt{\pi N}}+\frac{2}{\sqrt{\pi M}} \right)V_{d}\end{split} \tag{25}\]
To evaluate the accuracy of an estimator \(\widetilde{V}_{d}\), we use the relative MAE defined as
\[\text{relMAE}\;(\widetilde{V}_{d})\equiv\frac{\mathrm{E}\left[\left|\widetilde {V}_{d}-V_{d}\right|\right]}{V_{d}}. \tag{26}\]
When \(\epsilon_{S^{\prime}}=0\), the relative MAE of the estimator \(\widehat{V}_{d}(D_{N},D_{M})\) is given by
\[\begin{split}\text{relMAE}\;(\widehat{V}_{d})& \equiv\frac{\mathrm{E}\left[\left|\widehat{V}_{d}(D_{N},D_{M})-V_{ d}\right|\right]}{V_{d}}\\ &=\mathcal{O}\Big{(}\frac{2}{\sqrt{\pi N}}+\frac{2}{\sqrt{\pi M} }\Big{)},\end{split} \tag{27}\]
for \(N,M\gg 1\). When the upper bounds of the errors \(\epsilon_{S^{\prime}}\), \(\epsilon_{\text{opt}}\), and \(\epsilon_{\text{MC}}\) are available, it is possible to derive a more precise bound for the MAE of the PACE-based MC estimator. However, those upper bounds are problem-dependent and beyond the scope of this study.
### 3.3 Reduced variance estimator
We often model the measurement error RV \(\Xi\) using simple parameterized distributions such as the Gaussian distribution, Poisson distribution, or binomial distribution. Consequently, sampling the RV \(\Xi\) is computationally inexpensive. Considering this observation, we augment datasets \(D_{N}\) and \(D_{M}\) to reduce the statistical error of the PACE-based MC estimators. The augmented datasets are obtained as follows:
\[D_{N}^{\text{rv}} =\left\{\left(q^{(i)},y^{(i,j)}\right)\mid y^{(i,j)}=h(q^{(i)},d) +\xi^{(i,j)}\right\}_{i=1,\ldots,N,\;j=1,\ldots,a}, \tag{28}\] \[D_{M}^{\text{rv}} =\left\{\left(q^{(i)},y^{(i,j)}\right)\mid y^{(i,j)}=h(q^{(i)},d) +\xi^{(i,j)}\right\}_{i=1,\ldots,M,\;j=1,\ldots,a}, \tag{29}\]
where \(q^{(i)}\) and \(\xi^{(i,j)}\) are the _i.i.d._ samples of RVs \(Q\) and \(\Xi\), respectively, and \(a\in\mathbb{N}_{+}\) represents the augmentation multiplier. By substituting the augmented datasets \(D_{N}^{\text{rv}}\) and \(D_{M}^{\text{rv}}\) for \(D_{N}\) and \(D_{M}\), respectively, in Eq. (16), we obtain the reduced variance estimators of the MSEs \(\widehat{\mathcal{M}}(f\mid D_{N})\) and \(\widehat{\mathcal{M}}(f\mid D_{M})\) as follows:
\[\begin{split}\widehat{\mathcal{M}}^{\text{rv}}(f\mid D_{N})& =\frac{1}{\left|D_{N}\right|\times a}\sum_{(q,y)\in D_{N}^{\text {rv}}}\left\|q-f(y)\right\|_{2}^{2}\\ \widehat{\mathcal{M}}^{\text{rv}}(f\mid D_{M})&= \frac{1}{\left|D_{M}\right|\times a}\sum_{(q,y)\in D_{M}^{\text{rv}}}\left\|q- f(y)\right\|_{2}^{2}.\end{split} \tag{30}\]
Appendix D explains the reduction in the statistical error obtained with the use of the estimator \(\widehat{\mathcal{M}}^{\text{rv}}\) relative to the crude one \(\widehat{\mathcal{M}}\).
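A minimal sketch of this augmentation, assuming additive Gaussian measurement error (array shapes follow Eqs. (28)-(29); `a` plays the role of the augmentation multiplier \(a\)):

```python
import numpy as np

def augmented_dataset(h, q, d, sigma_xi, a, rng):
    """Build D^rv as in Eqs. (28)-(29): each expensive forward solve h(q_i, d)
    is reused with `a` independent, cheap noise draws, so the dataset grows
    a-fold at essentially no extra model-evaluation cost."""
    h_q = h(q, d)                                      # (N, m): the costly part
    y = h_q[None] + sigma_xi * rng.standard_normal((a,) + h_q.shape)
    q_rep = np.broadcast_to(q, (a,) + q.shape)         # repeat targets a times
    return q_rep.reshape(-1, q.shape[1]), y.reshape(-1, h_q.shape[1])
```

Substituting these arrays into the sums of Eq. (30) yields \(\widehat{\mathcal{M}}^{\text{rv}}\) directly.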
## 4 Numerical experiment: Linear-Gaussian setting
In this section, we present the results of numerical experiments to demonstrate the error estimation outlined in Section 3.2. We focus on a linear-Gaussian scenario, in which both the prior and the measurement error distributions are Gaussian and the observational map is linear. This setting is advantageous for error analysis because it allows for an analytical solution to the tECV. We will expand our numerical experiments to a broader nonlinear setting in Section 6.
To assess the effectiveness of the PACE-based approach, we conduct numerical experiments in two different scenarios. In Section 4.1, we apply our method to a one-dimensional setting, _i.e._, \(q\in\mathbb{R}\), and in Section 4.2, we evaluate our approach in a high-dimensional scenario. Furthermore, we compare the performance of the PACE-based approach with that of the IS-based approach by examining the effect of two conditions: (1) increasing the dimension of the inferred parameter vector \(q\), and (2) reducing the measurement error variance. These factors significantly exacerbate the intractability of the posterior distribution.
### 4.1 One-dimensional case
In this section, we consider the following setting:
\[h(Q,d)=\frac{Q}{(d-0.5)^{2}+1},\quad d\in[0,1], \tag{31a}\] \[Q\sim\mathcal{N}(0,\sigma_{q}^{2}),\;\text{with}\quad\sigma_{q}=2, \tag{31b}\] \[\Xi\sim\mathcal{N}(0,\sigma_{\xi}^{2}), \tag{31c}\]
where \(\mathcal{N}(0,\sigma^{2})\) is a Gaussian distribution with a mean of zero and variance \(\sigma^{2}\). We analyze two cases of measurement error variance: \(\sigma_{\xi}^{2}=0.01^{2}\) and \(\sigma_{\xi}^{2}=0.001^{2}\). Given that the observational map is linear in terms of \(q\) and both the prior and measurement error distributions are Gaussian, the CE map \(\phi_{d}\) defined in Eq. (11) is linear and possesses a closed-form expression. Thus, we obtain
\[\phi_{d}(y)=\frac{a\sigma_{q}^{2}}{a^{2}\sigma_{q}^{2}+\sigma_{\xi}^{2}}\,y,\quad\text{where}\quad a=\frac{1}{(d-0.5)^{2}+1}. \tag{32}\]
Appendix C.1 provides a detailed description of this linear approximation.
To implement our method in this setting, we employ linear regression to empirically approximate the CE by solving Eq. (11) within the subspace of linear functions \(\mathcal{S}^{\prime}\), given the sample dataset \(D_{N}=\{(q^{(i)},y^{(i)})\}_{i=1}^{N}\). For detailed information, please refer to Appendix C.2. We then utilize the obtained CE approximation to estimate the tECV using Eq. (30). To assess the accuracy of our estimation, we compute the empirical relative MAE metric defined in Eq. (26). In particular, we aim to compare the relative MAE with the estimation in Eq. (27), considering the absence of bias error \(\epsilon_{S^{\prime}}\) in the linear-Gaussian setting. Additionally, we implement the IS-based approach described in Appendix A to estimate the tECV and to evaluate its relative MAE.
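A self-contained sketch of this one-dimensional experiment (the seed and sample size are illustrative), comparing the empirical PACE estimate against the closed-form posterior variance of the linear-Gaussian model:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_q, sigma_xi, d, N = 2.0, 0.01, 0.5, 5_000
a = 1.0 / ((d - 0.5) ** 2 + 1.0)                 # slope of h(., d), Eq. (31a)

def draw(size):
    q = sigma_q * rng.standard_normal(size)
    return q, a * q + sigma_xi * rng.standard_normal(size)

# Empirical linear approximation of the CE: least-squares fit of q on y
q_tr, y_tr = draw(N)
phi = np.polyfit(y_tr, q_tr, 1)                  # [slope, intercept]
q_te, y_te = draw(N)
v_hat = np.mean((q_te - np.polyval(phi, y_te)) ** 2)   # PACE estimate of V(d)

# Closed-form posterior variance of the linear-Gaussian model
v_exact = sigma_q**2 * sigma_xi**2 / (a**2 * sigma_q**2 + sigma_xi**2)
print(v_hat, v_exact)   # should agree up to O(N^{-1/2}) statistical noise
```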
Based on the result of Proposition 2, setting \(N\equiv M\) is a straightforward decision for the PACE-based approach. In the IS-based approach, the ECV is estimated using a double-loop MC algorithm, which is detailed in Appendix A. Because the posterior variance remains invariant _w.r.t._ \(y\) in the linear-Gaussian setting, we set the number of samples used by the outer loop to \(N_{\rm o}=1\). In the PACE-based approach, the total number of samples is \(N+M\), while for the IS-based approach, it is \(N_{\rm i}+1\), where \(N_{\rm i}\) represents the number of samples processed by the inner loop. Fig. 1 shows the relative MAE for \(d=0.5\) for two different measurement variances, _i.e._, \(\sigma_{\xi}^{2}=0.01^{2}\) and \(\sigma_{\xi}^{2}=0.001^{2}\). Fig. 2 shows the tECV for different values of \(d\) and for \(\sigma_{\xi}^{2}=0.01^{2}\). To estimate the relative MAE, we perform 1000 statistically independent simulations.
As can be observed in Fig. 1, the PACE-based method requires considerably fewer samples than the IS-based approach for achieving a similar relative MAE value. For example, for \(\sigma_{\xi}^{2}=0.01^{2}\) without applying data augmentation, the required number of samples is reduced approximately 30-fold, and for \(\sigma_{\xi}^{2}=0.001^{2}\), it is reduced by about 200-fold. When data augmentation is used, our approach further reduces the number of samples by nearly two additional orders of magnitude.
Overall, the evolution of the empirical relative MAE confirms the theoretical error estimation obtained in Proposition 2, _i.e._, \(\mathcal{O}\big{(}\frac{2}{\sqrt{\pi N}}+\frac{2}{\sqrt{\pi M}}\big{)}\). Particularly, we have \(\mathrm{Var}\left[\left(Q-f^{*}(Y_{d})\right)^{2}\right]=2\,\mathrm{E}\left[\left(Q-f^{*}(Y_{d})\right)^{2}\right]^{2}\), which validates Assumption 2. As the CE in this example is a linear function, \(\epsilon_{\mathrm{opt}}\) is much smaller than the estimated order \(\left(\mathrm{E}\left[\left|\widehat{\mathcal{M}}(f^{*}|D_{N})-\mathcal{M}(f^{*})\right|\right]\right)\) that is assumed in Assumption 3. Consequently, the obtained relative MAE is consistently smaller than the theoretical error estimation of \(\left(\frac{2}{\sqrt{\pi N}}+\frac{2}{\sqrt{\pi M}}\right)\).
We also observe that the relative MAE of the PACE-based approach is invariant _w.r.t._ the measurement error variance. In contrast, the IS-based method requires a considerably larger number of samples for a smaller measurement error variance. In particular, a reduction in the measurement error variance decreases the evidence term in Bayes' formula. Consequently, the statistical error inherent in the IS-based approach becomes amplified.
Fig. 2 clearly illustrates that the PACE-based approach accurately estimates the A-optimal DOE, _i.e._, \(d_{A}=0.5\), with 10,000 samples. However, the IS-based approach results in significant errors in identifying the optimal design due to its statistical errors in estimating the tECV.
### 4.2 High-dimensional case
In this section, we address the high-dimensional linear-Gaussian scenario. The setting can be described as follows:
\[h(Q,d) =\frac{1}{(d-0.5)^{2}+1}\ Q,\quad d\in[0,1], \tag{33}\] \[Q \sim\mathcal{N}(0_{n},\mathbf{I}_{n}),\] \[\Xi \sim\mathcal{N}(0_{n},0.1^{2}\mathbf{I}_{n}),\]
Figure 1: Comparison of the PACE- and IS-based approaches for estimating the tECV for \(d=0.5\): (a) \(\Xi\sim\mathcal{N}(0,0.01^{2})\) and (b) \(\Xi\sim\mathcal{N}(0,0.001^{2})\).
where \(0_{n}\in\mathbb{R}^{n}\) and \(\mathbf{I}_{n}\in\mathbb{R}^{n\times n}\) represent a zero vector and an identity matrix, respectively. We perform an analysis similar to the one presented in Section 4.1.
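Reusing the `pace_tecv` sketch from Section 3.1, this setting requires only the batched map and prior below (values are illustrative; the MLP regression error adds a small bias on top of the \(\mathcal{O}(N^{-1/2})\) MC noise):

```python
import numpy as np

n, d, sigma_xi = 20, 0.5, 0.1
h = lambda q, dd: q / ((dd - 0.5) ** 2 + 1.0)         # Eq. (33), batched
sample_prior = lambda size, rng: rng.standard_normal((size, n))

rng = np.random.default_rng(1)
v_hat = pace_tecv(h, sample_prior, sigma_xi, d, N=2_000, M=2_000, rng=rng)

# Closed form for comparison: n independent linear-Gaussian components
a = 1.0 / ((d - 0.5) ** 2 + 1.0)
v_exact = n * sigma_xi**2 / (a**2 + sigma_xi**2)      # sigma_q = 1 here
print(v_hat, v_exact)
```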
Fig. 3 illustrates the relative MAE for different values of \(n\in\{2,5,10,20\}\). The numerical results demonstrate that the IS-based approach requires an extremely large number of samples for high-dimensional cases. Specifically, for \(n\geq 5\), we do not observe the convergence of the IS-based approach. For \(n=20\), the IS-based approach frequently returns a _not-a-number_ error due to the decreasing evidence term in Bayes' theorem (see Eq. (3)). In contrast, our PACE-based approach effectively mitigates the curse of dimensionality.
## 5 Stochastic optimization for the continuous design domain
The PACE approach can be used directly to solve the A-optimal DOE problem in a discrete and finite design domain. By estimating the tECV for each experimental design candidate using PACE, we identify the optimal solution as the design with the minimum tECV. In this section, we shift our focus to the continuous design domain. We assume that the gradient of the measurement map _w.r.t._ the design parameters, \(\nabla_{d}h\), exists and can be computed numerically. To solve the A-optimal DOE problem efficiently, we apply stochastic gradient descent techniques on top of the PACE-based MC estimator of the tECV.
When stochastic gradient optimization algorithms are used to solve the A-optimal DOE problem, the computation of the gradient of the tECV _w.r.t._ the design parameters, denoted as \(\nabla_{d}V_{d}\), can pose a challenge. To address this issue, we propose a nonlocal approximation of the CE. Our approach involves approximating the CE within a neighborhood surrounding a given design parameter vector to enable the evaluation of \(\nabla_{d}V_{d}\). We utilize ANNs as the approximation tool, although other regression techniques can also be incorporated into our method.
Figure 2: Comparison of the PACE- and IS-based approaches for estimating ECV with \(\Xi\sim\mathcal{N}(0,0.01^{2})\) using 10,000 samples: (a) PACE-based approach without data augmentation and (b) IS-based approach. The error bar plot depicts the mean and standard deviation of the empirical ECV values estimated from 1000 statistically independent MC simulations.

Let \(f_{\mathrm{N}}(\cdot;\mathbf{\theta}):\mathbb{R}^{m+\delta}\rightarrow\mathbb{R}^{n}\) be an ANN map, where \(\mathbf{\theta}\) denotes the hyperparameters of the map. The input of the proposed ANN is the concatenation of an \(m\)-dimensional vector \(y\) and a \(\delta\)-dimensional vector \(d\). By combining Eqs. (7), (11), and (12), the A-optimal design parameter vector, \(d_{\rm A}\), can be obtained by solving the following optimization problem
\[d_{\rm A}=\arg_{d}\;\min_{d,\mathbf{\theta}}\;{\rm E}\left[\|Q-f_{\rm N}(Y_{d},d;\mathbf{ \theta})\|_{2}^{2}\right] \tag{34}\]
which minimizes the MSE _w.r.t._ the ANN's hyperparameters and design parameters. Notably, the optimization problem that approximates the CE and the problem that seeks the A-optimal DOE minimize the same objective function.
During the course of the iterative optimization process to solve Eq. (34), the ANN may need to be retrained when the algorithm provides a new candidate vector for \(d_{\rm A}\). Here, we employ the transfer learning technique, _i.e._, the weights trained for the previous design candidate are used as the starting point for the current training process. This simple transfer learning technique efficiently reduces the training time and the amount of data required. The remainder of this section provides a more detailed explanation of the nonlocal approximation of the CE in Section 5.1 and discusses the algorithms used to implement our method in Section 5.2.
Figure 3: Comparison of the PACE- and IS-based approaches with high-dimensional linear examples: (a) \(n=2\), (b) \(n=5\), (c) \(n=10\), and (d) \(n=20\). For \(n=20\), all 100 statistically independent MC simulations that use the IS-based approach return a _not-a-number_ error. Consequently, the numerical result of the IS-based approach for \(n=20\) is not available.
### 5.1 Nonlocal approximation of the conditional expectation
Given a design candidate \(d_{\kappa}\), we use a PDF \(w_{\kappa}\) defined over the design domain \(\mathcal{D}\) such that \(w_{\kappa}(d):=\frac{w(d,d_{\kappa})}{c}\). Here, \(w(d,d_{\kappa})\) is a weight function such as a kernel, and \(c\) is a normalization constant defined as \(c:=\int_{\mathcal{D}}w(d,d_{\kappa})\mathrm{d}d\,<\infty\). In our notation, \(\kappa\in\mathbb{N}_{+}\) denotes the iteration counter of the main procedure, which will be discussed in Section 5.2.
To approximate the CE in the neighborhood of a design \(d_{\kappa}\), we determine the hyperparameters \(\mathbf{\theta}_{\kappa}\) of the ANN \(f_{\mathrm{N}}\) by solving the following optimization problem:
\[\mathbf{\theta}_{\kappa}=\arg\quad\min_{\mathbf{\theta}}\quad L_{\kappa}(\mathbf{\theta}), \tag{35}\]
where \(L_{\kappa}\) is the weighted MSE function defined as
\[L_{\kappa}(\mathbf{\theta})=\int_{\mathcal{D}}\mathrm{E}\left[\left\|Q-f_{ \mathrm{N}}(Y_{d},d;\mathbf{\theta})\right\|_{2}^{2}\right]w_{\kappa}(d)\mathrm{d}d. \tag{36}\]
We use the variance reduction technique, similar to Eq. (30), to approximate the function \(L_{\kappa}\) as
\[\widehat{L}_{\kappa}^{\mathrm{vr}}(\mathbf{\theta})=\frac{1}{N\times a}\sum_{i=1} ^{N}\sum_{j=1}^{a}\left\|q^{(i)}-f_{\mathrm{N}}(y^{(i,j)},d^{(i)};\mathbf{\theta}) \right\|_{2}^{2}, \tag{37}\]
where \(\{q^{(i)}\}_{i=1}^{N}\) are the _i.i.d._ samples of RV \(Q\), \(\{d^{(i)}\}_{i=1}^{N}\) are the _i.i.d_ samples generated according to the PDF \(w_{\kappa}\), and \(y^{(i,j)}=h(q^{(i)},d^{(i)})+\xi^{(i,j)}\), where \(\xi^{(i,j)}\) is the _i.i.d._ sample of RV \(\Xi\).
A simple choice for the kernel function is the density function of a multivariate Gaussian distribution, where the marginal variances are used as tuning parameters. Intuitively, increasing the marginal variances of PDF \(w_{\kappa}\) reduces the error of the CE approximation over the entire design domain; however, this also requires a considerably large number of samples to accurately estimate the weighted MSE \(\widehat{L}_{\kappa}^{\mathrm{vr}}\). In our algorithm, which is discussed in more detail in Section 5.2, for the \(\kappa\)-_th_ iteration, we only need to evaluate the tECV and its derivative in the neighborhood of \(d_{\kappa}\). Therefore, the standard deviation required for the PDF \(w_{\kappa}\) is much smaller than the characteristic length of the design domain, which improves the computational efficiency by reducing the number of samples required to estimate the weighted MSE \(\widehat{L}_{\kappa}^{\mathrm{vr}}\).
### 5.2 Algorithm
In this subsection, we present a stochastic gradient descent algorithm to solve Eq. (34). The algorithm utilizes the nonlocal approximation of the CE to estimate the tECV \(V_{d}\) and its derivative \(\nabla_{d}V_{d}\). A typical \(\kappa\)-_th_ iteration of the proposed algorithm consists of two steps:
1. Given a design parameter candidate \(d_{\kappa-1}\), we apply _transfer learning_ and solve Eq. (35) to update the hyper-parameters \(\mathbf{\theta}_{\kappa}\),
2. We update the design parameter candidate using the gradient of the MSE _w.r.t_\(d\), _i.e._, \(\nabla_{d}\ \mathrm{E}\left[\|Q-f_{\mathrm{N}}(Y_{d},d;\mathbf{\theta}_{\kappa})\|_{2}^{2}\right]\). This step returns an updated candidate \(d_{\kappa}\).
Algorithm 1 summarizes the main procedure, which calls the suboptimization procedures that correspond to steps i) and ii) described in Algorithms 2 and 3, respectively. The suboptimization procedures in each step are executed using the Adam algorithm [24].
In Algorithm 2, we generate _i.i.d._ samples \(\{d^{(i)}\}_{i=1}^{N}\), \(\{q^{(i)}\}_{i=1}^{N}\), and \(\{\xi^{(i)}\}_{i=1}^{N}\) following PDFs \(w_{\kappa}\), \(\pi_{Q}\), and \(\pi_{\Xi}\), respectively. Next, we use those samples to compute the samples of the observational RV, \(y^{(i,j)}=h(q^{(i)},d^{(i)})+\xi^{(i,j)}\) for \(j=1,\ldots,a\). Finally, we employ the Adam algorithm to train the ANN. We use the dataset \(\big{\{}\big{(}q^{(i)},d^{(i)},y^{(i,j)}\big{)}\big{\}}_{i=1,\ldots,N;\ j=1, \ldots a}\) and the loss function \(\widehat{L}_{\kappa}^{\text{vr}}\) (defined in Eq. (37)) for this training.
To reduce the numbers of epochs (\(e_{1}\)) and data samples (\(N\times a\)), we employ the _transfer learning_ technique [25, 26], which reuses the hyperparameters obtained from the previous iteration, \(\mathbf{\theta}_{\kappa-1}\), as the initial values for the current iteration \(\kappa\).
```
Require: \(d_{\kappa}\), weight function \(w(\cdot;d_{\kappa})\), number of epochs \(e_{1}\), and learning rate \(\alpha_{1}\)
1: Generate \(\{d^{(i)}\}_{i=1}^{N}\sim w_{\kappa}(d)\), \(\{q^{(i)}\}_{i=1}^{N}\sim\pi_{Q}\), \(\{\xi^{(i,j)}\}_{i=1,\ldots,N;\ j=1,\ldots a}\sim\pi_{\Xi}\)
2: Evaluate \(\{h(q^{(i)},d^{(i)})\}_{i=1}^{N}\)
3: Compute samples \(\{y^{(i,j)}\}_{i=1,\ldots,N;\ j=1,\ldots a}\) where \(y^{(i,j)}=h(q^{(i)},d^{(i)})+\xi^{(i,j)}\)
4: Collect data into a dataset \(D_{N}^{\text{vr}}=\big{\{}\big{(}q^{(i)},d^{(i)},y^{(i,j)}\big{)}\big{\}}_{i=1,\ldots,N;\ j=1,\ldots a}\)
5: Train the ANN using the initial weights \(\mathbf{\theta}_{\kappa-1}\) with \(e_{1}\) epoch, Adam algorithm, learning rate \(\alpha_{1}\), loss function \(\widehat{L}_{\kappa}^{\text{vr}}\) defined in Eq. (37) and dataset \(D_{N}^{\text{vr}}\)
6: return hyperparameters \(\mathbf{\theta}_{\kappa}\) of the ANN \(f_{\text{N}}\)
```
**Algorithm 2** Nonlocal approximation of the CE operator
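A runnable counterpart to Algorithm 2 might look as follows: a sketch, assuming a torch network `f_net` whose input is the concatenation \((y,d)\); clipping of the sampled designs to \(\mathcal{D}\) and minibatching are omitted for brevity, and transfer learning is implicit in reusing the weights `f_net` arrives with.

```python
import numpy as np
import torch

def train_nonlocal_ce(f_net, h, sample_prior, sigma_xi, d_kappa, eta,
                      N, a, e1, alpha1, rng):
    """Sketch of Algorithm 2: sample designs from a Gaussian kernel around
    d_kappa, pair each with a prior draw and `a` cheap noisy observations,
    and train f_net on the weighted MSE of Eq. (37) with Adam."""
    d = d_kappa + eta * rng.standard_normal((N, d_kappa.shape[0]))
    q = sample_prior(N, rng)                           # (N, n)
    h_qd = h(q, d)                                     # (N, m): expensive step
    y = h_qd[None] + sigma_xi * rng.standard_normal((a,) + h_qd.shape)
    inp = np.concatenate([y.reshape(a * N, -1), np.tile(d, (a, 1))], axis=1)
    tgt = np.tile(q, (a, 1))
    inp_t = torch.as_tensor(inp, dtype=torch.float32)
    tgt_t = torch.as_tensor(tgt, dtype=torch.float32)
    opt = torch.optim.Adam(f_net.parameters(), lr=alpha1)
    for _ in range(e1):                                # full-batch epochs
        opt.zero_grad()
        loss = ((tgt_t - f_net(inp_t)) ** 2).sum(dim=1).mean()   # Eq. (37)
        loss.backward()
        opt.step()
    return f_net
```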
In Algorithm 3, we generate _i.i.d._ samples \(\{q^{(i)}\}_{i=1}^{M}\) following PDF \(\pi_{Q}\). The dataset \(\{q^{(i)}\}_{i=1}^{M}\) is divided into different batches of size \(M_{b}\), denoted as \(\{B_{1},\ldots,B_{\beta}\}\). For each batch \(B\) and design candidate \(d_{0}\), we compute the corresponding samples of the observational RV as \(y^{(i,j)}=h(q^{(i)},d_{0})+\xi^{(i,j)}\), where \(q^{(i)}\in B\), \(j=1,\ldots,a\), and \(\xi^{(i,j)}\) are the _i.i.d._ samples of the RV \(\Xi\). We then update the design parameter using
\[d^{\prime}=d_{0}-\frac{\alpha_{2}}{M_{b}\times a}\sum_{i=1}^{M_{b}}\sum_{j=1}^ {a}\nabla_{d}\Big{[}\|q-f_{\text{N}}(h(q,d)+\xi,d)\|_{2}^{2}\Big{]}(q^{(i)},y^ {(i,j)},d_{0}), \tag{38}\]
where \(\alpha_{2}\) is a learning rate. Here, \(\nabla_{d}\Big{[}\|q-f_{\text{N}}(h(q,d)+\xi,d)\|_{2}^{2}\Big{]}(q^{(i)},y^{(i,j)},d_{0})\) represents the gradient of \(\|q-f_{\text{N}}(h(q,d)+\xi,d)\|_{2}^{2}\)_w.r.t._\(d\) and is evaluated at \((q^{(i)},y^{(i,j)},d_{0})\). The value of \(d^{\prime}\) is then assigned to \(d_{0}\) for the next batch.
In Appendix E, we use the chain rule to derive the formulation of the gradient \(\nabla_{d}\Big{[}\|q-f_{\mathrm{N}}(h(q,d)+\xi,d)\|_{2}^{2}\Big{]}(q^{(i)},y^{(i,j)},d_{0})\) in terms of the gradients of the observational map \(h\) and the ANN \(f_{\mathrm{N}}\) _w.r.t._ \(d\). Notably, the gradients of the ANN function \(f_{\mathrm{N}}\) are obtained using backpropagation [27].
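Step ii) is where automatic differentiation pays off: if both the (surrogate) observational map and the ANN are differentiable torch functions, the gradient in Eq. (38) is obtained by backpropagating through their composition. A minimal sketch of one batch update (a plain gradient step with rate \(\alpha_{2}\), as in Eq. (38); names are illustrative):

```python
import torch

def design_step(h, f_net, q_batch, d0, sigma_xi, a, alpha2):
    """One stochastic gradient step on the design, Eq. (38): the loss is the
    augmented batch MSE of the ANN f_net, and the gradient reaches d both
    through h(q, d) and through the ANN's d-input. The ANN weights are not
    stepped here (their accumulated gradients are simply ignored)."""
    d = d0.clone().requires_grad_(True)
    m_b = q_batch.shape[0]
    loss = 0.0
    for _ in range(a):                                 # cheap noise redraws
        h_qd = h(q_batch, d)                           # differentiable in d
        y = h_qd + sigma_xi * torch.randn_like(h_qd)
        inp = torch.cat([y, d.expand(m_b, -1)], dim=1)
        loss = loss + ((q_batch - f_net(inp)) ** 2).sum(dim=1).mean()
    (loss / a).backward()
    with torch.no_grad():
        return (d - alpha2 * d.grad).detach()          # updated candidate d'
```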
## 6 Numerical experiment: Electrical impedance tomography
Electrical impedance tomography (EIT) aims to determine the conductivity of a closed body by measuring the potential of electrodes placed on its surface in response to applied electric currents. In this section, we revisit an experimental design problem previously studied in [16, 17, 18]. The objective of the experiment is to recover the fiber orientation in each orthotropic ply of a composite laminate material. This is achieved by measuring the potential when applying low-frequency electric currents through electrodes attached to the body surface. The formulation of the EIT experiment is described in Section 6.1, where we adopt the complete electrode model from [28].
Previous studies employed the maximization of EIG as the optimality criterion and estimated the EIG using the IS technique combined with the Laplace approximation of the posteriors. In Section 6.2, we analyze our PACE-based approach for estimating the tECV and compare the obtained results with those using the IS-based technique. Subsequently, we utilize the optimization algorithms discussed in Section 5 to solve the DOE problem and present the numerical results.
### 6.1 Formulation and finite element model of EIT experiment
We consider a rectangular body \(B\) with boundary \(\partial B\) that consists of two plies \(B=\bigcup_{i=1}^{2}B_{i}\). The exterior boundary surfaces are equipped with \(N_{\mathrm{el}}=10\) electrodes covering the areas \(E_{1},\ldots,E_{N_{\mathrm{el}}}\).
These electrodes are used to apply known currents, and they also allow us to measure the electric potential. The configuration used in this study is illustrated in Fig. 4.
The fiber orientation over body \(B\) is modeled as a random field \(\eta:B\times\Omega\to[-\pi,\pi]\) as follows:
\[\eta(x,\omega)=\left\{\begin{array}{rl}&\eta_{1}(\omega)\quad\text{in}\;B_{ 1},\\ &\eta_{2}(\omega)\quad\text{in}\;B_{2}.\end{array}\right. \tag{39}\]
The RV \(Q\) in this example is a random vector valued in \([-\pi,\pi]^{2}\) that represents the prior distribution of the fiber orientations of the plies, defined as
\[Q(\omega):=[\eta_{1}(\omega),\eta_{2}(\omega)]^{\top}. \tag{40}\]
We represent the quasi-static current flux and the quasi-static potential as random fields \(\zeta:B\times\Omega\to\mathbb{R}^{3}\) and \(u:B\times\Omega\to\mathbb{R}\), respectively. The electrical current field and the potential field satisfy the following PDE:
\[\begin{split}\nabla\cdot\zeta&=0,\\ \zeta&=\sigma(\eta)\ \nabla u,\end{split} \tag{41}\]
where \(\sigma\) is the conductivity that depends on the fiber orientation of the plies. The relation between the conductivity and the fiber orientations \(\eta\) is given as
\[\sigma(\eta)=\mathcal{R}(\eta)^{\top}\cdot\bar{\sigma}\cdot\mathcal{R}(\eta), \tag{42}\]
where \(\mathcal{R}(\eta)\) is the rotation matrix,
\[\mathcal{R}(\eta)=\begin{bmatrix}\cos(\eta)&0&\sin(\eta)\\ 0&1&0\\ -\sin(\eta)&0&\cos(\eta)\end{bmatrix}. \tag{43}\]
Here, \(\bar{\sigma}\) is a \(3\times 3\) constant matrix, which we set to \(\bar{\sigma}=\operatorname{diag}(10^{-2},10^{-3},10^{-3})\).
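In code, Eqs. (42)-(43) reduce to a few lines; a sketch in NumPy, with \(\bar{\sigma}\) set to the value above:

```python
import numpy as np

SIGMA_BAR = np.diag([1e-2, 1e-3, 1e-3])       # constant orthotropic conductivity

def conductivity(eta, sigma_bar=SIGMA_BAR):
    """Conductivity of a ply with fiber angle eta: R(eta)^T sigma_bar R(eta)."""
    c, s = np.cos(eta), np.sin(eta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])               # rotation matrix, Eq. (43)
    return R.T @ sigma_bar @ R
```

The boundary conditions are given by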
\[\begin{split}\zeta\cdot n=0,\quad\operatorname{on}\partial B\setminus(\cup E_{l}),\\ \int_{E_{l}}\zeta\cdot n\mathrm{d}x=I_{l},\quad l=1,\dots,N_{\mathrm{el}},\\ \frac{1}{|E_{l}|}\int_{E_{l}}u\mathrm{d}x+z_{l}\int_{E_{l}}\zeta\cdot n\mathrm{d}x=U_{l},\quad l=1,\dots,N_{\mathrm{el}},\end{split} \tag{44}\]
where \(n\) is the unit outward normal, and \(I_{l}\) and \(U_{l}\) are the applied current and observable electric potential at electrode \(E_{l}\), respectively. The first boundary condition states the no-flux condition, and the second one implies that the total injected current through each electrode is known. The third boundary condition captures the surface impedance effect, meaning that the shared interface between the electrode and the material has an infinitesimally thin layer with a surface impedance of \(z_{l}\). In this study, we set \(z_{l}=0.1\) for the surface impedance. We use the following two constraints (Kirchhoff law of charge conservation and ground potential condition) to guarantee well-posedness:
\[\sum_{l=1}^{N_{\mathrm{el}}}I_{l}=0\quad\text{and}\quad\sum_{l=1}^{N_{\mathrm{ el}}}U_{l}=0. \tag{45}\]
To solve the PDE stated in Eq. (41), we develop a 2D FE model using a quadratic mesh of size \(50\times 6\) elements. Details on the weak formulation of the FE model are described in Appendix F.
Figure 4: Illustration of the EIT experiment for a two-ply composite sample. The black rectangles indicate the electrode positions, and \(I_{1},\dots,I_{10}\) are their applied currents.
### 6.2 Numerical results
**Setting.** Ten electrodes are placed on the surface of the rectangular domain \(B=[0,20]\times[0,2]\), five on the top and five on the bottom. The fiber orientations \(\eta_{1}\) in ply \(B_{1}\) and \(\eta_{2}\) in ply \(B_{2}\) are the quantities of interest, with the following assumed prior distributions:
\[\eta_{1}\sim\mathcal{U}\left(\frac{\pi}{4.5},\frac{\pi}{3.5}\right),\quad\eta_ {2}\sim\mathcal{U}\left(-\frac{\pi}{3.5},-\frac{\pi}{4.5}\right). \tag{46}\]
The observational model is given as
\[Y_{d} =h(Q,d)+\Xi, \tag{47}\] \[\text{where}\quad h(Q,d) :=[U_{1}(Q,d),\ldots,U_{10}(Q,d)]^{\top},\]
and \(Q\) is defined in Eq. (40). We consider two cases of the observational error distribution, which are \(\Xi\sim\mathcal{N}(0,10^{2}\mathbf{I}_{10})\) and \(\Xi\sim\mathcal{N}(0,3^{2}\mathbf{I}_{10})\), _i.e._, the error standard deviations are about \(5\%\) and \(1.5\%\) of the ground-truth value, respectively. We treat the applied electrode currents as the design parameters denoted as \(d:=[I_{1},\ldots,I_{9}]^{\top}\), where we apply the condition \(I_{10}=-\sum_{l=1}^{9}I_{l}\) in accordance with the Kirchhoff law of charge conservation. We seek the A-optimal DOE parameterized by vector \(d\in[-1,1]^{9}\).
To accelerate our analysis, we construct a surrogate model using an ANN to approximate the observational model \(h\). The input of this surrogate model consists of the fiber orientations \([\eta_{1},\eta_{2}]^{\top}\) and the applied electrode currents \([I_{1},\ldots,I_{9}]^{\top}\), and its output is the vector of the potential at ten electrodes \([U_{1},\ldots,U_{10}]^{\top}\). The surrogate model is trained on a dataset obtained by running the FE model for a large number of input samples such that \([\eta_{1},\eta_{2}]^{\top}\sim\mathcal{U}\left([\frac{\pi}{4.5},\frac{\pi}{3. 5}]\times[-\frac{\pi}{3.5},-\frac{\pi}{4.5}]\right)\) and \([I_{1},\ldots,I_{9}]^{\top}\sim\mathcal{U}([-1,1]^{9})\) conditioned by \(|\sum_{l=1}^{9}I_{l}|\leq 1\).
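A small sketch of this input sampling, with the current constraint enforced by rejection (function and variable names are illustrative; presumably the constraint ensures that \(I_{10}=-\sum_{l=1}^{9}I_{l}\) also lies in \([-1,1]\)):

```python
import numpy as np

def sample_surrogate_inputs(size, rng):
    """Inputs (eta_1, eta_2, I_1..I_9) for surrogate training: angles uniform
    on the prior box, currents uniform on [-1, 1]^9 subject to
    |sum_l I_l| <= 1."""
    eta1 = rng.uniform(np.pi / 4.5, np.pi / 3.5, size)
    eta2 = rng.uniform(-np.pi / 3.5, -np.pi / 4.5, size)
    currents = np.empty((size, 9))
    filled = 0
    while filled < size:                        # simple rejection sampling
        cand = rng.uniform(-1.0, 1.0, (size, 9))
        cand = cand[np.abs(cand.sum(axis=1)) <= 1.0]
        take = min(size - filled, len(cand))
        currents[filled:filled + take] = cand[:take]
        filled += take
    return np.column_stack([eta1, eta2, currents])
```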
#### 6.2.1 Analysis of error in estimating the tECV
Here we study the errors in estimating the tECV for a specified design parameter vector. For the PACE-based approach, we implement three models: i) using the linear approximation for the CE (see Appendix C.1), ii) using the ANN for approximating the CE without data augmentation, and iii) using the ANN for approximating the CE and applying the data augmentation method described in Section 3.3. The ANNs in cases ii and iii have two hidden layers of \(100\) neurons each. The input and output layers have \(10\) and \(2\) neurons, respectively, which correspond to the dimensions of vectors \(y\) and \(q\), respectively. Dataset \(D_{N}\) is split into two datasets for training and testing with a size ratio of \(1\):\(1\). We use the Adam algorithm [24] with a learning rate of \(0.0005\) and a batch size of \(100\). The algorithm is set to run for a maximum of \(10,000\) epochs and features an early-stop mechanism that is activated when the loss function shows no further improvement. To compute the tECV, we fix \(M=N\) based on Proposition 2.
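The training loop for model iii can be sketched as follows, assuming PyTorch; the layer sizes, optimizer, learning rate, batch size, and 1:1 data split follow the text, while the tanh activation and the early-stopping patience are my assumptions, and the data augmentation of Section 3.3 is omitted for brevity.

```python
import torch
import torch.nn as nn

def fit_ce(q_train, y_train, q_test, y_test, max_epochs=10_000, patience=200):
    """Fit f(y; theta) ~ E[Q | Y_d = y] by minimizing the empirical MSE."""
    net = nn.Sequential(nn.Linear(10, 100), nn.Tanh(),   # activation assumed
                        nn.Linear(100, 100), nn.Tanh(),
                        nn.Linear(100, 2))
    opt = torch.optim.Adam(net.parameters(), lr=5e-4)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(y_train, q_train),
        batch_size=100, shuffle=True)

    def tecv(yv, qv):                       # MC estimate of E||Q - f(Y_d)||_2^2
        with torch.no_grad():
            return torch.sum((qv - net(yv)) ** 2, dim=1).mean().item()

    best, stall = float("inf"), 0
    for _ in range(max_epochs):
        for yb, qb in loader:
            opt.zero_grad()
            torch.sum((qb - net(yb)) ** 2, dim=1).mean().backward()
            opt.step()
        v = tecv(y_test, q_test)
        if v < best - 1e-9:
            best, stall = v, 0
        else:
            stall += 1
            if stall >= patience:           # early stop when the loss stops improving
                break
    return net, best                        # `best` is the held-out tECV estimate (M = N)
```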
As a closed form of the tECV is not available in this example, we employ the IS approach with \(200,000\) _i.i.d._ samples, and we use the obtained result as the reference, _i.e._, \(V\) in Eq. (26). Fig. 5 depicts the relative MAEs in estimating the tECV for the optimal design found in [17]. To obtain these estimated relative MAEs, we perform \(50\) statistically independent simulations to empirically compute the expectation operator in Eq. (26).
Because the measurement map is nonlinear, the CE linear approximation contains bias error, which results in a relative MAE of \(20\%\). In contrast, we observe negligible bias error when we use the ANN for approximating the CE. When comparing models ii and iii, we observe that the data augmentation method described in Section 3.3 plays a crucial role in reducing the estimation error, particularly for a small number of samples, as shown in Fig. 5. For the case \(N+M=200\), applying the data augmentation reduces the relative MAE by more than three-fold for \(\Xi\sim\mathcal{N}(0,10^{2}\mathbf{I}_{10})\).
With the IS-based approach, estimating the tECV requires a double-loop MC simulation, where the numbers of samples for the outer and inner loops are \(N_{\rm o}\) and \(N_{\rm i}\), respectively (see Appendix A). Here, we set \(N_{\rm o}=N_{\rm i}\) for simplicity. Optimizing the ratio \(N_{\rm o}:N_{\rm i}\) is problem-dependent and beyond the scope of this study. Compared with the IS-based approach, the PACE-based approach combined with data augmentation significantly decreases the number of required samples. For example, in Fig. 5(a), to reach a relative MAE of \(0.9\%\), the IS approach requires more than \(4000\) samples, while the PACE-based approach using data augmentation needs only \(1000\) samples. Notably, for the case \(\Xi\sim\mathcal{N}(0,3^{2}\mathbf{I}_{10})\) depicted in Fig. 5(b), although the computational efficiency of the PACE approach does not exhibit a significant change compared with the case in Fig. 5(a), the performance of the IS approach deteriorates substantially. In this case, the statistical error of the IS approach is even greater than that of the PACE approach using the linear approximation. Moreover, the relative MAE of the PACE approach verifies its theoretical estimation, \(\mathcal{O}\big{(}\sqrt{\frac{2}{\pi N}}+\sqrt{\frac{2}{\pi M}}\big{)}\), stated in Proposition 2.
#### 6.2.2 Minimization of the tECV
In this section we apply the algorithms developed in Section 5 to determine the A-optimal DOE. We approximate the CE nonlocally using an ANN having two hidden layers. The input layer has \(19\) neurons, corresponding to the sum of the dimensions of vectors \(d\in[-1,1]^{9}\) and \(y\in\mathbb{R}^{10}\). The number of neurons in the output layer and each hidden layer are \(2\) and \(100\), respectively. We choose the density of the normal distribution \(\mathcal{N}(0_{9},0.2\times\mathbf{I}_{9})\) as the kernel function used to evaluate the weighted loss function \(L_{\kappa}\) in Eq. (36). From this point onward, we keep \(\Xi\sim\mathcal{N}(0,10^{2}\mathbf{I}_{10})\), as was done in [18].
The main procedure described in Algorithm 1 is implemented with \(20\) iterations. In each iteration, we first use Algorithm 2 to approximate the CE over \(1000\) epochs. We use \(N=500\) samples to estimate the weighted MSE \(L_{\kappa}\) (see Eq. (37)), and we augment the dataset by \(30\) times using the data augmentation method described in Section 3.3. Other settings for training the ANN are kept identical to those in Section 6.2.1. Given \(\mathbf{\theta}\), we execute Algorithm 3 over \(20\) epochs using \(N=25\) and a single batch. The dataset is also augmented by \(30\)-fold. In Algorithm 3, the learning
Figure 5: Relative MAE for estimating the tECV for \(d=d_{\rm optimal}\) (a) \(\Xi\sim\mathcal{N}(0,10^{2}\mathbf{I}_{10})\), and (b) \(\Xi\sim\mathcal{N}(0,3^{2}\mathbf{I}_{10})\). For the PACE-based approach, the number of samples is equal to \(N+M\). For the IS-based approach, the number of samples is equal to \((N_{\rm i}+1)\times N_{\rm o}\). For the sake of simplicity, we fix \(N=M\) and \(N_{\rm i}=N_{\rm o}\).
rate is gradually reduced from \(0.1\) to \(0.02\) to reduce the stochastic effect in the final epochs and ensure the convergence of the algorithm. Algorithm 3 returns \(d_{\kappa+1}\), which is then utilized in the subsequent iteration.
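Schematically, the main procedure under these settings reduces to the loop below; `fit_ce_weighted` (Algorithm 2) and `design_sgd` (Algorithm 3) are hypothetical stand-ins for the paper's algorithms, which are not reproduced here, and the initial design is my choice.

```python
import numpy as np

d = np.zeros(9)                            # initial design in [-1, 1]^9 (assumed)
theta = None                               # ANN weights, warm-started across iterations
for k in range(20):                        # 20 iterations of Algorithm 1
    # Algorithm 2: refit the nonlocal CE with 500 samples, augmented 30-fold.
    theta = fit_ce_weighted(theta, d, n_samples=500, n_aug=30, epochs=1000)
    # Algorithm 3: 20 SGD epochs on the design, learning rate decaying 0.1 -> 0.02.
    lrs = np.linspace(0.1, 0.02, 20)
    d = design_sgd(theta, d, n_samples=25, n_aug=30, epochs=20, lrs=lrs)
    d = np.clip(d, -1.0, 1.0)              # keep the design admissible
```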
For each iteration of the main procedure, Algorithms 2 and 3 must each evaluate the observational map \(500\) times. Additionally, the latter evaluates the gradient \(\nabla_{d}h\) another \(500\) times. In total, Algorithm 1 executes \(20,000\) evaluations of the observational map and \(10,000\) evaluations of the gradient \(\nabla_{d}h\). The total number of evaluations of the observational map and its derivative is of the same order of magnitude as that required to estimate the tECV for a single design (see Fig. 5). Our method gains significant computational efficiency owing to the nonlocal approximation of the CE and the applied transfer learning technique. The performance of the algorithm in terms of the design vector and the tECV is depicted in Fig. 6. Convergence can be clearly observed after \(15\) iterations.
**A-optimal DOE.** The A-optimal design found by the algorithm closely resembles the solution \(d_{\text{A}}=[1,\ 1,\ 1,\ -1,-1,\ 1,\ 1,-1,-1]^{\top}\). The potential fields solved using the FE model are illustrated in Fig. 7 for both the initial DOE and the A-optimal DOE. The magnitude of the potential field under the A-optimal DOE, particularly at the electrodes, is considerably larger than that of the initial DOE. Consequently, the effect of the measurement error on the posterior variance is minimized. In this example, minimizing the tECV leads to the DOE that is identical to the DOE maximizing the EIG, reported in [18]. This observation suggests that the conditional variance and the information gain are strongly correlated.
The typical posterior densities obtained from the initial DOE and the A-optimal DOE are depicted in Fig. 8(a) and (b), respectively. The posterior mean that is inferred using the A-optimal DOE predicts the ground-truth value with a negligible error, whereas the posterior variance is significantly reduced compared with that of the initial DOE.
## 7 Conclusion
This study presents an efficient computational method for estimating the tECV within the A-optimal DOE framework using the PACE approach. The intractability of the posterior distribution does not affect the computational efficiency of our approach, as both approximating and sampling the posterior distribution are avoided. Moreover, we derive an asymptotic error estimate for our method and verify it through numerical experiments.
To address continuous design domains, we combine the PACE framework with stochastic optimization algorithms to seek the A-optimal DOE. We demonstrate our method by using the ANN to approximate the CE. The stochastic optimization algorithms used for optimizing the ANN's weights and for finding the optimal design are integrated, as their loss functions are identical. We further propose a nonlocal approximation of the CE to reduce the number of evaluations of the observation map, which can be computationally demanding in practice. Numerical experiments show that our approach requires significantly fewer evaluations of the observational map compared to the crude IS method.
Figure 8: Contour plots of the posterior densities and their mean values for (a) initial DOE and (b) A-optimal DOE. The ground truth value is \(\eta_{1}=0.748\), \(\eta_{2}=-0.848\).
## Appendix A Computing tECV using the IS approach
This appendix describes the computation of the tECV using the IS estimator of the CE. This approach requires a double-loop MC simulation. The inner loop estimates the posterior variances using Eq. (4), while the outer loop estimates the expected posterior variance using Eq. (6).
The outer loop for a given design setting \(d\) is formulated as
\[\widehat{V}^{\text{IS}}=\frac{1}{N_{\text{o}}}\sum_{i=1}^{N_{\text{o}}}\sum_{k=1}^{n}\left[\widehat{\text{Var}}(Q\mid y^{(i)})\right]_{k},\quad\text{with}\quad y^{(i)}=h(q^{(i)},d)+\xi^{(i)},\;i=1,\ldots,N_{\text{o}}, \tag{48}\]
where \(\{q^{(i)}\}_{1}^{N_{\text{o}}}\) and \(\{\xi^{(i)}\}_{1}^{N_{\text{o}}}\) are the \(N_{\text{o}}\)_i.i.d._ samples of the RVs \(Q\) and \(\Xi\), respectively. For each sample pair \((q^{(i)},y^{(i)})\), an inner loop is performed to estimate the posterior variance \(\widehat{\text{Var}}(Q\mid y^{(i)})\):
\[\widehat{\text{Var}}\left[Q\mid y^{(i)}\right]=\frac{\sum_{j=1}^{N_{\text{i}}}[q^{(i,j)}]^{\odot 2}\pi_{\Xi}\left(h(q^{(i,j)},d)-y^{(i)}\right)}{\sum_{j=1}^{N_{\text{i}}}\pi_{\Xi}\left(h(q^{(i,j)},d)-y^{(i)}\right)}-\left[\frac{\sum_{j=1}^{N_{\text{i}}}q^{(i,j)}\pi_{\Xi}\left(h(q^{(i,j)},d)-y^{(i)}\right)}{\sum_{j=1}^{N_{\text{i}}}\pi_{\Xi}\left(h(q^{(i,j)},d)-y^{(i)}\right)}\right]^{\odot 2}, \tag{49}\]
where \(\{q^{(i,j)}\}_{i=1,j=1}^{N_{\text{o}},N_{\text{i}}}\) are \(N_{\text{i}}\times N_{\text{o}}\)_i.i.d._ samples of the RV \(Q\). This method requires \((N_{\text{i}}+1)\times N_{\text{o}}\) evaluations of the observational map \(h\).
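A direct NumPy implementation of Eqs. (48)-(49) could read as follows; the calling conventions (`h`, `sample_q`, `pdf_xi`, `sample_xi`) are illustrative assumptions.

```python
import numpy as np

def tecv_is(h, sample_q, pdf_xi, sample_xi, d, N_o, N_i, rng):
    """Double-loop IS estimator of the tECV. `h(q, d)` is the observational map,
    `sample_q`/`sample_xi` draw one i.i.d. sample of Q and Xi, and `pdf_xi`
    evaluates the error density pi_Xi."""
    total = 0.0
    for _ in range(N_o):                               # outer loop, Eq. (48)
        q = sample_q(rng)
        y = h(q, d) + sample_xi(rng)                   # synthetic observation
        qs = np.stack([sample_q(rng) for _ in range(N_i)])
        w = np.array([pdf_xi(h(qj, d) - y) for qj in qs])
        w /= w.sum()                                   # self-normalized weights
        mean = (w[:, None] * qs).sum(axis=0)           # posterior-mean estimate
        second = (w[:, None] * qs**2).sum(axis=0)      # posterior second moment
        total += np.sum(second - mean**2)              # Eq. (49), summed over components
    return total / N_o                                 # costs (N_i + 1) * N_o calls to h
```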
## Appendix B Proofs
### Proof of Proposition 1
Proof.: By combining the laws of total mean and total variance of the conditional expectation, _i.e._
\[\begin{split}\operatorname{E}\left[\operatorname{E}\left[Q_{i} \mid Y_{d}\right]\right]&=\operatorname{E}\left[Q_{i}\right],\\ \operatorname{Var}\left[Q_{i}\right]&=\operatorname{E }\left[\operatorname{Var}\left[Q_{i}\mid Y_{d}\right]\right]+\operatorname{ Var}\left[\operatorname{E}\left[Q_{i}\mid Y_{d}\right]\right]\end{split} \tag{50}\]
for \(i=1,\ldots,n\), we attain
\[\operatorname{E}\left[\operatorname{Var}\left[Q_{i}|Y_{d}\right]\right] =\operatorname{Var}\left[Q_{i}\right]-\operatorname{Var}\left[ \operatorname{E}\left[Q_{i}\mid Y_{d}\right]\right] \tag{51}\] \[=\operatorname{E}\left[Q_{i}^{2}-\operatorname{E}\left[Q_{i}\right] ^{2}\right]-\operatorname{E}\left[\operatorname{E}\left[Q_{i}\mid Y_{d} \right]^{2}-\left(\operatorname{E}\left[\operatorname{E}\left[Q_{i}\mid Y_{d} \right]\right]\right)^{2}\right]\] (52) \[=\operatorname{E}\left[Q_{i}^{2}-\operatorname{E}\left[Q_{i}\mid Y_ {d}\right]^{2}\right]. \tag{53}\]
Hence, the tECV can be formulated as
\[V(d) \equiv\sum_{i=1}^{n}\operatorname{E}\left[\operatorname{Var}\left[Q _{i}|Y_{d}\right]\right] \tag{54}\] \[=\operatorname{E}\left[\sum_{i=1}^{n}Q_{i}^{2}-\operatorname{E} \left[Q_{i}\mid Y_{d}\right]^{2}\right]\] (55) \[=\operatorname{E}\left[\left\|Q-\operatorname{E}\left[Q\mid Y_{d} \right]\right\|_{2}^{2}\right]. \tag{56}\]
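As a numerical sanity check of this identity, consider a linear Gaussian model in which both sides are computable in closed form; the test case below is my own construction, not one taken from the paper.

```python
import numpy as np

# Q ~ N(0, I_n), Y_d = G Q + Xi with Xi ~ N(0, s^2 I_m). Then
# E[Q | Y_d] = K Y_d with K = G^T (G G^T + s^2 I)^{-1}, and the tECV equals
# tr(Cov[Q | Y_d]) = tr(I_n - K G).
rng = np.random.default_rng(1)
n, m, s = 2, 5, 0.5
G = rng.standard_normal((m, n))
K = G.T @ np.linalg.inv(G @ G.T + s**2 * np.eye(m))
tecv_exact = np.trace(np.eye(n) - K @ G)

N = 200_000
Q = rng.standard_normal((N, n))
Y = Q @ G.T + s * rng.standard_normal((N, m))
tecv_mc = np.mean(np.sum((Q - Y @ K.T) ** 2, axis=1))   # E||Q - E[Q | Y_d]||_2^2
print(tecv_exact, tecv_mc)                              # agree up to MC error
```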
### Proof of Proposition 2
Proof.: We structure our proof of Proposition 2 in three steps.
_Step 1: Estimation of the optimization error \(\epsilon_{opt}\)._ Following the central limit theorem,
\(\sqrt{N}\big{(}\widehat{\mathcal{M}}(f^{*}\mid D_{N})-\mathcal{M}(f^{*})\big{)}\rightarrow\mathcal{N}\big{(}0,\mathrm{Var}\left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\big{)}\) as \(N\to\infty\). For a RV \(X\sim\mathcal{N}(0,\sigma^{2})\), we have \(\mathrm{E}\left[|X|\right]=\sqrt{2/\pi}\ \sigma\). Consequently, the statistical error of the MC estimator \(\widehat{\mathcal{M}}(f^{*}\mid D_{N})\) can be asymptotically estimated as
\[\begin{split}\mathrm{E}\left[\left|\widehat{\mathcal{M}}(f^{*} \mid D_{N})-\mathcal{M}(f^{*})\right|\right]&\approx\sqrt{ \frac{2}{\pi N}}\,\mathrm{Var}\left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2} \right]^{1/2}\\ &=\mathcal{O}\Big{(}\frac{2}{\sqrt{\pi N}}\,\mathrm{E}\left[ \left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\Big{)},\end{split} \tag{57}\]
where we use Assumption 2 to obtain the last step. Combining Assumption 3 and the result stated in Eq. (57) yields the following estimation of the optimization error \(\epsilon_{\mathrm{opt}}\)
\[\epsilon_{\mathrm{opt}}=\mathcal{O}\Big{(}\frac{2}{\sqrt{\pi N}}\,\mathrm{E} \left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\Big{)}. \tag{58}\]
Moreover, using the orthogonality property \(\mathrm{E}\left[(Q-f^{*}(Y_{d}))^{\top}f(Y_{d})\right]=0\) for every \(f\in\mathcal{S}^{\prime}\), we obtain
\[\begin{split}\epsilon_{\mathrm{opt}}&=\mathrm{E}\left[\left|\mathcal{M}(f(\cdot;\boldsymbol{\theta}_{D_{N}}))-\mathcal{M}(f^{*})\right|\right]\\ &=\mathrm{E}\left[\left|\mathrm{E}\left[\left\|Q-f(Y_{d};\boldsymbol{\theta}_{D_{N}})\right\|_{2}^{2}\ \Big{|}\ \boldsymbol{\theta}_{D_{N}}\right]-\mathrm{E}\left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\right|\right]\\ &=\mathrm{E}\left[\mathrm{E}\left[\left\|f(Y_{d};\boldsymbol{\theta}_{D_{N}})-f^{*}(Y_{d})\right\|_{2}^{2}\ \Big{|}\ \boldsymbol{\theta}_{D_{N}}\right]\right]\\ &=\mathrm{E}\left[\left\|f(Y_{d};\boldsymbol{\theta}_{D_{N}})-f^{*}(Y_{d})\right\|_{2}^{2}\right].\end{split} \tag{59}\]
By combining Eqs. (58) and (59), we obtain
\[\mathrm{E}\left[\left\|f(Y_{d};\boldsymbol{\theta}_{D_{N}})-f^{*}(Y_{d}) \right\|_{2}^{2}\right]=\mathcal{O}\Big{(}\frac{2}{\sqrt{\pi N}}\,\mathrm{E} \left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\Big{)}. \tag{60}\]
_Step 2: Estimation of the MC estimator error \(\epsilon_{MC}\)._ Following the central limit theorem, for fixed hyperparameters \(\boldsymbol{\theta}_{D_{N}}\) and \(M\to\infty\), we obtain
\[\sqrt{M}\,\Big{(}\widehat{\mathcal{M}}(f(\cdot;\boldsymbol{\theta}_{D_{N}})\ \big{|}\ D_{M})-\mathcal{M}\,(f(Y_{d};\boldsymbol{\theta}_{D_{N}}))\Big{)}\rightarrow\mathcal{N}\Big{(}0,\mathrm{Var}\left[\left\|Q-f(Y_{d};\boldsymbol{\theta}_{D_{N}})\right\|_{2}^{2}\ \Big{|}\ \boldsymbol{\theta}_{D_{N}}\right]\Big{)}. \tag{61}\]
We estimate the error \(\epsilon_{\mathrm{MC}}\) as \(M\rightarrow\infty\) using the central limit theorem as
\[\begin{split}\epsilon_{\mathrm{MC}}&\equiv\mathrm{E}\left[\mathrm{E}\left[\left|\widehat{\mathcal{M}}(f(\cdot;\boldsymbol{\theta}_{D_{N}})|D_{M})-\mathcal{M}(f(Y_{d};\boldsymbol{\theta}_{D_{N}}))\right|\ \Big{|}\ \boldsymbol{\theta}_{D_{N}}\right]\right]\\ &\approx\mathrm{E}\left[\sqrt{\frac{2}{\pi M}}\,\mathrm{Var}\left[\left\|Q-f(Y_{d};\boldsymbol{\theta}_{D_{N}})\right\|_{2}^{2}\ \Big{|}\ \boldsymbol{\theta}_{D_{N}}\right]^{1/2}\right]\\ &=\mathcal{O}\Big{(}\sqrt{\frac{2}{\pi M}}\,\mathrm{Var}\left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]^{1/2}\Big{)}\\ &=\mathcal{O}\left(\frac{2}{\sqrt{\pi M}}\,\mathrm{E}\left[\left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\right),\end{split} \tag{62}\]
where the third step is obtained owing to \(\mathrm{E}\left[\left\|f(Y_{d};\boldsymbol{\theta}_{D_{N}})-f^{*}(Y_{d})\right\|_{2 }^{2}\right]=\mathcal{O}(2/\sqrt{\pi N})\) as stated in Eq. (60).
_Step 3._ Finally, by combining Eqs. (21), (58), and (62), we can estimate the error of the estimator \(\widehat{V}_{d}(D_{N},D_{M})\) as
\[\mathrm{E}\left[\left|\widehat{\mathcal{M}}(f(\cdot;\boldsymbol{\theta}_{D_{N }})\;\big{|}\;D_{M})-\mathcal{M}(\phi_{d})\right|\right]=\mathcal{O}\!\left( \left(\frac{2}{\sqrt{\pi N}}+\frac{2}{\sqrt{\pi M}}\right)\mathrm{E}\left[ \left\|Q-f^{*}(Y_{d})\right\|_{2}^{2}\right]\right)+\epsilon_{S^{\prime}}. \tag{63}\]
### Orthogonality property of the CE
**Theorem 1** (Orthogonality property).: _Let \(Q\) and \(Y\) be the finite-variance RVs valued in \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\), respectively. Let \(L_{2}(\sigma_{Y})\) be the collection of all random variables of type \(g(Y)\), where \(g:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is an arbitrary function satisfying \(\mathrm{E}\left[\|g(Y)\|_{2}^{2}\right]<\infty\). Then,_
* _for every_ \(g(Y)\in L_{2}(\sigma_{Y})\)_,_ \[\mathrm{E}\left[\;g(Y)^{\top}\;\left(Q-\mathrm{E}\left[Q\;\middle|\;Y\right] \right)\;\right]=0,\] (64)
* _the CE_ \(\mathrm{E}\left[Q\;\middle|\;Y\right]\) _is the RV in_ \(L_{2}(\sigma_{Y})\) _that minimizes the MSE, i.e.,_ \[\begin{split}\mathrm{E}\left[Q\;\middle|\;Y\right]& =\phi(Y),\\ \text{where}&\quad\phi&=\arg\min_{g(Y)\in L_{2}( \sigma_{Y})}\mathrm{E}\left[\left\|Q-g(Y)\right\|_{2}^{2}\right].\end{split}\] (65)
Proof.: For any measurable set \(B\subset\mathbb{R}^{m}\), let \(A=Y^{-1}(B)\equiv\{\omega\;:Y(\omega)\in B\}\). As \(A\in\sigma_{Y}\), we obtain
\[\begin{split}\mathrm{E}\left[\mathbf{1}_{B}(Y)\;\left(Q-\mathrm{E }\left[Q\;\middle|\;Y\right]\right)\right]&=\mathrm{E}\left[ \mathbf{1}_{B}(Y)\;Q\right]-\mathrm{E}\left[\mathbf{1}_{B}(Y)\,\mathrm{E}\left[ Q\;\middle|\;Y\right]\right]\\ &=\int_{A}Q(\omega)\mathrm{d}\mathbb{P}(\omega)-\int_{A}\mathrm{E }\left[Q\middle|Y\right](\omega)\mathrm{d}\mathbb{P}(\omega)\\ &=0,\end{split} \tag{66}\]
where \(\mathbf{1}_{B}(Y(\omega))=1\) if \(Y(\omega)\in B\) and \(0\) otherwise.
Let \(X^{n}\) be a sequence of \(\sigma_{Y}\)-measurable simple RVs that converges in \(L_{2}\) to \(g\circ Y\). Applying Eq. (66) to each \(X^{n}\) by linearity and passing to the limit, we prove part A) of Theorem 1.
Letting \(Z=\mathrm{E}\left[Q\;\middle|\;Y\right]-\phi(Y)\), we obtain
\[\begin{split}\mathrm{E}\left[\left\|Q-\phi(Y)\right\|_{2}^{2} \right]&=\mathrm{E}\left[\left\|Q-\mathrm{E}\left[Q\;\middle|\;Y \right]+Z\right\|_{2}^{2}\right]\\ &=\mathrm{E}\left[\left\|Q-\mathrm{E}\left[Q\;\middle|\;Y\right] \right\|_{2}^{2}\right]+\mathrm{E}\left[\left\|Z\right\|_{2}^{2}\right],\end{split} \tag{67}\]
as the cross-product term vanishes. Eq. (67) directly shows that
\(\mathrm{E}\left[\left\|Q-\phi(Y)\right\|_{2}^{2}\right]\) is minimized when \(Z\) is the \(n\)-dimensional vector of zeros, or \(\mathrm{E}\left[Q\;\middle|\;Y\right]=\phi(Y)\) (part B).
## Appendix C Linear approximation of the CE
### Analytical formulation of the linear approximation
The linear approximation of the CE \(f_{\text{l}}(Y_{d})=\mathbf{A}Y_{d}+b\) is the solution to the least mean-squares problem
\[\mathbf{A},b=\quad\arg\min_{\mathbf{A}^{\prime}\in\mathbb{R}^{n\times m},b^{\prime}\in \mathbb{R}^{n}}\operatorname{E}\left[\left\|Q-\mathbf{A}^{\prime}Y_{d}-b^{\prime} \right\|_{2}^{2}\right]. \tag{68}\]
Using the first order necessary conditions
\[\operatorname{E}\left[Q-\mathbf{A}Y_{d}-b\right] =0_{n}, \tag{69}\] \[\operatorname{E}\left[(Q-\mathbf{A}Y_{d}-b)Y_{d}^{\top}\right] =\mathbf{0}_{n\times m},\]
where \(\mathbf{0}_{n\times m}\) denotes the \(n\times m\) matrix of zeros, we obtain
\[\begin{split}\mathbf{A}&=\operatorname{E}\left[\left(Q-\operatorname{E}\left[Q\right]\right)Y_{d}^{\top}\right]\operatorname{E}\left[\left(Y_{d}-\operatorname{E}\left[Y_{d}\right]\right)Y_{d}^{\top}\right]^{-1}\\ &=\operatorname{E}\left[\left(Q-\operatorname{E}\left[Q\right]\right)\left(Y_{d}-\operatorname{E}\left[Y_{d}\right]\right)^{\top}\right]\operatorname{E}\left[\left(Y_{d}-\operatorname{E}\left[Y_{d}\right]\right)\left(Y_{d}-\operatorname{E}\left[Y_{d}\right]\right)^{\top}\right]^{-1}\\ &=\operatorname{Cov}\left[Q,Y_{d}\right]\operatorname{Cov}\left[Y_{d}\right]^{-1},\\ b&=\operatorname{E}\left[Q-\mathbf{A}Y_{d}\right].\end{split} \tag{70}\]
### Empirical linear approximation of the CE
Given the dataset \(D_{N}=\left(q^{(i)},y^{(i)}\right)_{i=1}^{N}\) of \(N\)_i.i.d._ samples of the RV pair \((Q,Y_{d})\), the empirical linear approximation of the CE is obtained as
\[\widehat{\phi}(y)=\widehat{\operatorname{Cov}}[Q,Y_{d}]\,\left[\widehat{ \operatorname{Cov}}[Y_{d}]\right]^{-1}y+\widehat{b} \tag{71}\]
where
\[\widehat{\operatorname{Cov}}[Q,Y_{d}] =\frac{1}{N}\sum_{i=1}^{N}[q^{(i)}-\overline{q}]\,[y^{(i)}- \overline{y}]^{\top}, \tag{72}\] \[\widehat{\operatorname{Cov}}[Y_{d}] =\frac{1}{N}\sum_{i=1}^{N}[y^{(i)}-\overline{y}]\,[y^{(i)}- \overline{y}]^{\top},\] \[\widehat{b} =\overline{q}-\frac{1}{N}\sum_{i=1}^{N}\widehat{\operatorname{Cov }}[Q,Y_{d}]\,\left[\widehat{\operatorname{Cov}}[Y_{d}]\right]^{-1}\!y^{(i)}.\]
Here, \(\overline{q}\) and \(\overline{y}\) are empirical means of the RVs \(Q\) and \(Y_{d}\), respectively, computed as
\[\overline{q}=\frac{1}{N}\sum_{i=1}^{N}q^{(i)},\quad\overline{y}=\frac{1}{N} \sum_{i=1}^{N}y^{(i)}. \tag{73}\]
The reduced variance estimators corresponding to those in Eq. (72) are straightforwardly obtained by replacing the dataset \(D_{N}\) with its augmented version \(D_{N}^{\pi}\) (see Eq. (28)).
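A compact NumPy implementation of Eqs. (71)-(73) is sketched below; the closed form \(\widehat{b}=\overline{q}-\mathbf{A}\overline{y}\) used in the code is algebraically equivalent to the expression for \(\widehat{b}\) in Eq. (72).

```python
import numpy as np

def linear_ce(q, y):
    """Empirical linear approximation of the CE from samples q (N, n) and y (N, m):
    returns (A, b) such that E[Q | Y_d = y] ~ A y + b."""
    qbar, ybar = q.mean(axis=0), y.mean(axis=0)        # empirical means, Eq. (73)
    dq, dy = q - qbar, y - ybar
    cov_qy = dq.T @ dy / len(q)                        # Cov[Q, Y_d]
    cov_yy = dy.T @ dy / len(q)                        # Cov[Y_d]
    A = cov_qy @ np.linalg.inv(cov_yy)
    b = qbar - A @ ybar                                # equivalent to b_hat in Eq. (72)
    return A, b
```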
## Appendix D Reduced variance estimators of MSE and tECV
We show that the estimator \(\widehat{\mathcal{M}}^{\mathrm{vr}}\) (see Eq. (30)) provides an estimation with reduced variance compared with the crude MC estimator \(\widehat{\mathcal{M}}\) (see Eq. (16)). For a given vector \(d\), we have
\[\begin{split}\lim_{a\to\infty}\widehat{\mathcal{M}}^{\mathrm{vr}}( f\mid D_{N})&=_{\mathrm{a.s.}}\ \frac{1}{N}\sum_{i=1,\ldots,N}\mathrm{E}\Big{[}\big{\|}q^{(i)}-f\big{(}h(q^{(i) },d)+\Xi\big{)}\big{\|}_{2}^{2}\Big{]}\\ &=\ \frac{1}{N}\sum_{i=1,\ldots,N}\mathrm{E}\left[A(Q,\Xi)|Q=q^{(i)} \right],\end{split} \tag{74}\]
\(\mathbb{P}\)-almost surely, where
\[A(q,\xi)\equiv\big{\|}q-f\big{(}h(q,d)+\xi\big{)}\big{\|}_{2}^{2}. \tag{75}\]
Let \(\widehat{\mathcal{M}}^{\mathrm{vr}*}(f)\) denote the right-hand-side term in Eq. (74). Because \(\{q^{(i)}\}_{i=1}^{N}\) are _i.i.d._ samples, we approximately quantify the statistical errors of the estimators \(\widehat{\mathcal{M}}\) and \(\widehat{\mathcal{M}}^{\mathrm{vr}*}(f)\), respectively, as
\[\mathrm{Var}\left[\widehat{\mathcal{M}}(f|D_{N})-\mathrm{E} \left[A(Q,\Xi)\right]\right] \approx\frac{\mathrm{Var}\left[A(Q,\Xi)\right]}{N}, \tag{76a}\] \[\mathrm{Var}\left[\widehat{\mathcal{M}}^{\mathrm{vr}*}(f)- \mathrm{E}\left[A(Q,\Xi)\right]\right] \approx\frac{\mathrm{Var}\left[\mathrm{E}\left[A(Q,\Xi)\mid Q \right]\right]}{N}. \tag{76b}\]
Using the law of total variance,
\[\mathrm{Var}\left[A(Q,\Xi)\right]=\mathrm{Var}\left[\mathrm{E}\left[A(Q,\Xi) \mid Q\right]\right]+\mathrm{E}\left[\mathrm{Var}\left[A(Q,\Xi)\mid Q\right] \right], \tag{77}\]
we obtain
\[\mathrm{Var}\left[\mathrm{E}\left[A(Q,\Xi)\mid Q\right]\right]\leq\mathrm{ Var}\left[A(Q,\Xi)\right]. \tag{78}\]
Therefore, for \(a\gg 1\), \(\mathrm{Var}\left[\widehat{\mathcal{M}}^{\mathrm{vr}}(f\mid D_{N})-\mathrm{E} \left[A(Q,\Xi)\right]\right]\leq\mathrm{Var}\left[\widehat{\mathcal{M}}(f \mid D_{N})-\mathrm{E}\left[A(Q,\Xi)\right]\right]\).
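The two estimators can be contrasted in code as follows; the vectorization conventions (parameter samples of shape \((N,n)\), a noise pool of shape \((a,m)\), and vectorized `f` and `h`) are my assumptions.

```python
import numpy as np

def mse_crude(f, h, q, xi, d):
    """Crude MC estimator of the MSE (Eq. (16)): one noise draw xi_i per sample q_i."""
    y = h(q, d) + xi
    return float(np.mean(np.sum((q - f(y)) ** 2, axis=1)))

def mse_vr(f, h, q, xi_pool, d):
    """Reduced-variance estimator (Eq. (30)): each q_i is paired with all `a` noise
    draws in `xi_pool`, so A(q_i, Xi) is averaged over Xi before averaging over q;
    this conditioning is what drives the inequality in Eq. (78)."""
    vals = [np.mean(np.sum((qi - f(h(qi[None, :], d) + xi_pool)) ** 2, axis=1))
            for qi in q]
    return float(np.mean(vals))
```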
## Appendix E Gradient of the weighted MSE w.r.t. design parameters
The gradient \(\nabla_{d}\Big{[}\big{\|}q-f_{\mathrm{N}}(h(q,d)+\xi,d)\big{\|}_{2}^{2}\Big{]}\) is expanded using the chain rule as
\[\begin{split}\nabla_{d}\Big{[}\big{\|}q-f_{\mathrm{N}}(h(q,d)+ \xi,d)\big{\|}_{2}^{2}\Big{]}=-2\Big{[}&\nabla_{d}\,f_{\mathrm{N }}(y,d)+\nabla_{y}\,f_{\mathrm{N}}(y,d)\,\nabla_{d}\,h(q,d)\Big{]}^{\top}\\ &\qquad\Big{[}q-f_{\mathrm{N}}(h(q,d)+\xi,d)\Big{]},\end{split} \tag{79}\]
where \(y=h(q,d)+\xi\). Notably, for the ANN \(f_{\mathrm{N}}\), the Jacobian matrices \(\nabla_{d}f_{\mathrm{N}}\) and \(\nabla_{y}f_{\mathrm{N}}\) can be obtained numerically using backpropagation [27]. We finally obtain the value of \(\nabla_{d}\Big{[}\big{\|}q-f_{\mathrm{N}}(h(q,d)+\xi,d)\big{\|}_{2}^{2}\Big{]}(q^{(i)},y^{(i,j)},d_{0})\) as
\[\begin{split}\nabla_{d}\Big{[}\big{\|}q-f_{\mathrm{N}}(h(q,d)+ \xi,d)\big{\|}_{2}^{2}\Big{]}&(q^{(i)},y^{(i,j)},d_{0})=\\ -2&\Big{[}\nabla_{d}\,f_{\mathrm{N}}(y^{(i,j)},d_{0}) +\ \ \nabla_{y}\,f_{\mathrm{N}}(y^{(i,j)},d_{0})\,\nabla_{d}\,h(q^{(i)},d_{0}) \Big{]}^{\top}\\ &\qquad\Big{[}q^{(i)}-f_{\mathrm{N}}(y^{(i,j)},d_{0})\Big{]}. \end{split} \tag{80}\]
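In practice, Eq. (80) need not be assembled term by term: if the observational map is implemented in a differentiable framework, automatic differentiation accumulates both Jacobian contributions of Eq. (79) in one backward pass. A PyTorch sketch using the concatenated input convention of Section 6.2.2 is given below; `h` is assumed to be a torch-differentiable implementation of the observational map.

```python
import torch

def grad_wrt_design(f_net, h, q, xi, d0):
    """Gradient of ||q - f_N(h(q, d) + xi, d)||_2^2 with respect to d (Eq. (79)).
    `f_net` takes the concatenated input [y, d] (dimension 10 + 9 = 19);
    backpropagation through both arguments of f_N reproduces both terms of Eq. (79)."""
    d = d0.clone().detach().requires_grad_(True)
    y = h(q, d) + xi                          # forward pass through the observational map
    residual = q - f_net(torch.cat([y, d]))
    torch.sum(residual ** 2).backward()       # accumulate d-gradients of both paths
    return d.grad
```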
## Appendix F Derivation of the finite element formulation
We define \(\mathcal{H}:=H^{1}(B)\times\mathbb{R}^{N_{\text{el}}}\) as the space of the solution for the potential field \((u(\omega),U(\omega))\) for \(\omega\in\Omega\) and the Bochner space \(L^{2}_{\mathbb{P}}(\Omega;\mathcal{H})\) as
\[L^{2}_{\mathbb{P}}(\Omega;\mathcal{H}):=\left\{(u,U):\Omega\to \mathcal{H}\quad\text{s.t.}\,\int_{\Omega}\lVert(u(\omega),U(\omega))\rVert^{2 }_{\mathcal{H}}\mathrm{d}\mathbb{P}(\omega)<\infty\right\}, \tag{81}\]
where \(U(\omega)=[U_{1}(\omega),\ldots,U_{N_{\text{el}}}(\omega)]^{\top}\). For the bilinear form \(\mathcal{B}:\mathcal{H}\times\mathcal{H}\to\mathbb{R}\), which is given as
\[\mathcal{B}((u,U),(v,V)):=\int_{B}\sigma\,\nabla u\cdot \nabla v\,\mathrm{d}B+\sum_{l=1}^{N_{\text{el}}}\frac{1}{z_{l}}\int_{E_{l}}(U_{l }-u)(V_{l}-v)\,\mathrm{d}E_{l}, \tag{82}\]
we aim to determine \((u,U)\in L^{2}_{\mathbb{P}}(\Omega;\mathcal{H})\) such that the weak formulation
\[\mathcal{B}\left((u(\omega),U(\omega)),(v,V)\right)=\sum_{l=1}^{N_{\text{el}}} I_{l}\,V_{l} \tag{83}\]
is fulfilled for all \((v,V)\in\mathcal{H}\), \(\mathbb{P}\)-almost surely.
|
2307.16636 | Optimisation and artifacts of photothermal excitation of microresonators | The excitation of microresonators using focused intensity modulated light,
known as photothermal excitation, is gaining significant attention due to its
capacity to accurately excite microresonators without distortions, even in
liquid environments, which is driving key advancements in atomic force
microscopy and related technologies. Despite progress in the development of
coatings, the conversion of light into mechanical movement remains largely
inefficient, limiting resonator movements to tens of nanometres even when
milliwatts of optical power are used. Moreover, how photothermal efficiency
depends on the relative position of a microresonator along the propagation axis
of the photothermal beam remains poorly studied, hampering our understanding of
the conversion of light into mechanical motion. Here, we perform photothermal
measurements in air and water using cantilever microresonators and a
custom-built picobalance, and determine how photothermal efficiency changes
along the propagation beam axis. We identify that far out-of-band laser
emission can lead to visual misidentification of the beam waist, resulting in a
drop of photothermal efficiency of up to one order of magnitude. Our
measurements also unveil that the beam waist is not always the position of
highest photothermal efficiency, and can reduce the efficiency up to 20% for
silicon cantilevers with trapezoidal cross section. | Liping Kevin Ge, Alessandro Tuniz, Martijn de Sterke, James M. Zavislan, Thomas G. Brown, Sascha Martin, David Martinez-Martin | 2023-07-31T13:16:56Z | http://arxiv.org/abs/2307.16636v1 | Optimisation and artifacts of photothermal excitation of microresonators
## Abstract
The excitation of microresonators using focused intensity modulated light, known as photothermal excitation, is gaining significant attention due to its capacity to accurately excite microresonators without distortions, even in liquid environments, which is driving key advancements in atomic force microscopy and related technologies. Despite progress in the development of coatings, the conversion of light into mechanical movement remains largely inefficient, limiting resonator movements to tens of nanometres even when milliwatts of optical power are used. Moreover, how photothermal efficiency depends on the relative position of a microresonator along the propagation axis of the photothermal beam remains poorly studied, hampering our understanding of the conversion of light into mechanical motion. Here, we perform
photothermal measurements in air and water using cantilever microresonators and a custom-built picobalance, and determine how photothermal efficiency changes along the propagation beam axis. We identify that far out-of-band laser emission can lead to visual misidentification of the beam waist, resulting in a drop of photothermal efficiency of up to one order of magnitude. Our measurements also unveil that the beam waist is not always the position of highest photothermal efficiency, and can reduce the efficiency up to 20% for silicon cantilevers with trapezoidal cross section.
## 1 Introduction
Recent years have been marked by a renewed interest in microelectromechanical systems (MEMS), atomic force microscopy (AFM), and picobalance devices, which enable nanosensing applications across biomedicine, nanotechnology and materials science [1-4]. Such applications typically require the accurate excitation of oscillations in microcantilevers and microresonators in vacuum, air, or liquids. Amongst them, liquids present the most challenging environment, because their added mass and viscosity reduce the quality factor of microcantilevers below 10, thus diminishing the overall measurement accuracy.
Many approaches can be used to excite cantilever oscillations: these include _acoustic, magnetic, electrostatic, and Lorentz-force-induced_ excitations. Of these, _acoustic_ approaches are the most common, wherein oscillations are driven by a vibrating dither piezoelectric element within the cantilever holder [5, 6]. However, these methods often produce spurious resonance peaks associated with the excitation of other parts of the larger instrument, interfering with the cantilever movement. This is particularly detrimental in liquids, where the cantilever quality factor is low (\(<\)10) [5], thereby demanding high driving powers. _Magnetic_ excitation instead requires a magnetic bead attached to the cantilever or a magnetic functionalisation of the cantilever itself, which can degrade over time and can be toxic to biological samples [7, 8]. In addition, the magnetic coil near the cantilever precludes the simultaneous use of a transmission optical microscope, limiting practicality. _Electrostatic_ excitation requires the application of an alternating bias voltage between the cantilever and an electrode, which induces detrimental surface charge diffusion when working in liquids [9]. Finally, _Lorentz-force-induced_ excitation requires an alternating current through a cantilever under a static magnetic field,
but is limited to very specific cantilever geometries that can transport electric currents, significantly increasing the cantilever's temperature, thus hindering biological applications [10].
All the above limitations can be overcome by exciting the cantilever with an intensity-modulated light beam, via the _photothermal excitation method_ [11]. Although this method was developed before the invention of AFMs, and was used to excite and detect resonator oscillations via a beam deflection scheme [11], it has recently become available in commercial AFMs. With this scheme, an intensity-modulated light beam is positioned near the fixed end of the cantilever, and induces a localised temperature gradient which dilates a portion of the cantilever at the modulation frequency, producing a well-defined cantilever movement [12, 13]. While the first designs used metal-coated cantilevers [14], uncoated cantilevers have recently been developed [4]. Most importantly, regardless of whether the cantilever is in vacuum, air or liquid, this excitation method is accurately described by a simple damped harmonic oscillator response [15], and works over a wide frequency bandwidth of more than 10 MHz [16]. However, achieving large cantilever oscillation amplitudes (\(>\) 10 nm) with a low average optical power (\(<\) 100 \(\mu\)W) remains challenging, particularly when the cantilevers are in liquid [16-19].
Delicate samples such as live mammalian cells or yeast [4, 20] benefit from using low optical powers to prevent damage, which leads to small cantilever amplitudes. However, large cantilever amplitudes (\(>\)10 nm) are known to increase the quality of AFM measurements when quantifying mass [4, 20], rheology [21], and force-distance AFM based measurements [22]. Larger cantilever amplitudes operating at lower powers can be achieved by increasing the photothermal efficiency, which has recently become an increasingly active field of research: approaches typically involve either physical or chemical modifications of cantilevers (e.g., by partially coating them with light-absorbing photo-acoustic materials such as gold, carbon or nanoparticle coating layers), or the use of different cantilever geometries. For example, coatings can enhance optical energy conversion into mechanical motion, increasing a cantilever's oscillation amplitude by up to 6 times for a given power [23-25], whereas trapezoidal cantilever cross-sections have been reported to perform better than their rectangular counterparts [26], noting that the location of the exciting laser spot on the cantilever also plays an important role [15]. However, the dependence of the photothermal excitation efficiency
relative to the location of the cantilever along the propagation axis of the beam is yet to be characterised in detail (Fig. 1a).
Here we present a detailed characterisation of the changes in cantilever excitation efficiency along the propagation axis (here: z-axis) of a photothermal excitation beam using a picobalance [4, 20], which is mounted on an inverted optical microscope in similar fashion to commercially available bio-AFMs (illustrated in the Fig. 1a schematic). Our measurements reveal that, due to a long-wavelength artifact resulting from far out-of-band laser emission, the longitudinal position that leads to the highest photothermal excitation efficiency can easily be misjudged when using an inverted optical microscope, which may reduce the photothermal efficiency by up to approximately one order of magnitude, even though the error introduced in the laser working distance is approximately 5%. We unambiguously identify spurious laser light as the origin of this artifact, and we provide convenient and practical measures to address it. Our work also unveils that the longitudinal position that provides the highest photothermal excitation efficiency is not always at the beam waist but depends on the cantilever properties and can reduce the photothermal efficiency by up to 20%. Considering that photothermal excitation is becoming essential in many AFMs [22] and the emerging picobalance technology [4], often used in conjunction with an inverted optical microscope, we expect this work to inform optimal design procedures and operation guidelines for related technologies.
## 2 RESULTS and DISCUSSIONS
### Photothermal excitation efficiency along the beam propagation axis
AFMs typically include a laser (wavelength 852 nm in our setup) to determine the cantilever movement. It comes with two lateral degrees of freedom (x and y) to adjust the position of the laser on the cantilever, whereas its longitudinal (z) position is generally fixed. Photothermal excitation relies on the introduction of a second, intensity-modulated laser (wavelength: 405 nm in our setup) [27] whose position is adjusted in a similar fashion. Figure 1b shows a zoomed-in schematic of the experimental setup used in this work: it consists of a custom-built inertial picobalance, a technology whose principal inventor is one of us (DMM) and which is used to characterise the mass of living mammalian and yeast cells, as well as mechanical and rheological cell properties
[4, 20, 21, 28-30]. The picobalance is mounted on an inverted microscope, which can simultaneously provide sample information from both transmitted differential interference contrast microscopy and fluorescence microscopy (Fig. 1a-b). Note that this setup can also operate as an AFM by adding a piezo scanner [31]. However, unlike conventional AFMs, the position of each laser in the picobalance can be adjusted both laterally and longitudinally via piezo-motor positioners, enabling us to characterise the photothermal efficiency along the propagation axis of the photothermal beam (Fig. 1c).
As a first experiment, we recorded cantilever resonance curves (amplitude and phase versus the excitation frequency) driven with photothermal excitation for different relative positions of the cantilever along the longitudinal axis of the photothermal excitation beam. Figure 2a shows the amplitude and phase vs frequency of a cantilever in water at two different longitudinal positions. The red curve in this figure corresponds to the longitudinal position that provides the highest photothermal efficiency (optimal position), while the blue curve corresponds to the longitudinal position at which the photothermal laser beam waist is located on the cantilever according to the optical microscope. Figure 2b shows measurements of the cantilever amplitude at resonance (highest amplitude in a resonance curve) for different relative longitudinal positions of the cantilever. In all these measurements the excitation beam strikes the cantilever near its base at the optimal lateral location [26, 32]. Once this location was found, it was kept constant during the experiments, and only the longitudinal position was changed. The origin of the z axis was chosen at the optimal position. The longitudinal position labelled "apparent beam waist" corresponds to the position at which the beam waist of the photothermal beam coincides with the cantilever, according to optical images. The measurements were performed with average optical powers of the photothermal beam varying from \(25.9\ \upmu\)W to \(46.3\ \upmu\)W (Fig. 2b). Optical images of the photothermal beam at the cantilever plane were recorded for different longitudinal positions. Fig. 2c depicts an optical image (top image) of the beam at the cantilever plane for the longitudinal position that provided the highest cantilever amplitude; the lower image shows the optical image of the beam for the longitudinal position at which the beam waist appears at the cantilever plane.
These measurements were performed in both air and liquid environments and for cantilevers with standard geometries (rectangular shape with rectangular cross section, triangular shape with rectangular cross section, and rectangular shape with trapezoidal
cross section cantilevers) and made of common materials, including gold-coated silicon nitride and silicon cantilevers, and bare silicon cantilevers without coating (Fig. S1). A summary of the results is shown in Fig. 2d, where the vertical axis represents the enhancement in efficiency, which is the ratio between the resonance amplitude at the optimal longitudinal position for a given cantilever type and that at the apparent beam waist position. One would expect the highest photothermal efficiency to occur near the beam waist of the photothermal beam; however, our experiments showed that the cantilever oscillation amplitude, and therefore the photothermal efficiency, increased by 2-8 times at a longitudinal position that was far away from the beam waist as identified by using the optical microscope (Fig. 2b, d and Fig. S1). Surprisingly, as depicted in Figures 2b and S1, the z position for which the photothermal efficiency was the highest depends on the cantilever type and is at least \(2450\ \upmu\)m below the discerned beam waist (here termed "apparent beam waist"), therefore well outside the \(206\ \upmu\)m confocal range of our laser (Fig. 1c). Yet, it is important to note that by design, the working distance of our laser is 46.2 mm; hence, a change in the working distance of \(2450\ \upmu\)m represents only a 5% change. We now investigate and discuss the origin of this discrepancy in detail.
### Identifying the true beam waist position
The photothermal laser beam in our picobalance is a Gaussian beam with a waist (i.e., minimum 1/e intensity radius) of 3.65 \(\upmu\)m (Fig. 1c). Therefore, if we aim the photothermal beam on a cantilever and we move the cantilever in the longitudinal direction (we do this by approaching or withdrawing the laser source to or from the cantilever), the optical power measured beneath the cantilever should be at a minimum when the cantilever is at the beam waist, as at that point the cantilever would screen most of the laser light (Fig. 3a-c). Gold-coated cantilevers are very opaque to the wavelength of our photothermal laser (405 nm), and in silicon cantilevers the penetration depth of this wavelength is only \(\sim\)100 nm [33]. Thus, the cantilevers are generally very opaque at the excitation wavelength.
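The quoted beam parameters can be checked with the standard Gaussian-beam relations, treating the \(3.65\ \upmu\)m waist as the Gaussian radius \(w_{0}\); the short script below reproduces the \(206\ \upmu\)m confocal parameter used in the text.

```python
import numpy as np

lam = 405e-9                       # photothermal wavelength (m)
w0 = 3.65e-6                       # beam waist (m), value quoted above
z_R = np.pi * w0**2 / lam          # Rayleigh range, ~103 um
print(2 * z_R)                     # confocal parameter, ~206 um as quoted in the text

def w(z):
    """Beam radius at a distance z from the waist."""
    return w0 * np.sqrt(1.0 + (z / z_R) ** 2)

print(w(2450e-6) / w0)             # ~24x wider at the 2450 um offset discussed in the text
```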
Figure 3a shows a schematic of the experimental set-up to perform such optical power measurements beneath the cantilever. The photothermal beam is aligned in the lateral directions and can move longitudinally. Below the cantilever an optical power sensor (Thorlabs S120VC) measures the residual optical power of the photothermal beam that is not screened by the cantilever (Fig. 3b). Results are shown in Fig. 3d, where the blue
curve corresponds to the cantilever amplitude at resonance for different z positions of the beam relative to the cantilever, and the red curve shows simultaneous measurements of the residual (unscreened) optical power below the cantilever. As before, the origin of the z axis is chosen to be the longitudinal position that provides the highest resonant cantilever amplitude for a given photothermal beam power. These data demonstrate that the beam waist position identified with the optical microscope (apparent beam waist in Fig. 3c) differs from the real position of the beam waist (beam waist in Fig. 3c) by approximately 2450 \(\upmu\)m.
We note that Gaussian beams exhibit a _focal shift_, which causes the position of the beam waist to differ from that of the nominal focal point of a lens (Fig. 3d) [34]. For our Gaussian beam, this shift is calculated to be 32 \(\upmu\)m, which is consistent with our observations. Moreover, the wavelength of 405 nm of our photothermal beam is at the edge of the visible part of the spectrum, whereas the objective (CFI Plan Fluor 10X from Nikon) in our setup is aberration corrected at 550 nm. Our measurements (Fig. S2) indicate a position shift of only 50 \(\upmu\)m at the 405 nm wavelength. Therefore, neither the focal shift nor the chromatic aberration can explain the observed 2450 \(\upmu\)m discrepancy between the real and the apparent beam waists.
### Origin of the Apparent Beam Waist
With the focal shift and chromatic aberration ruled out, we considered whether the apparent beam waist could be linked to the wavelength distribution of the laser source. Although more than 99% of the laser light produced by a laser diode is usually centred around the nominal wavelength, there is still some light produced outside that range. In order to be able to acquire fluorescent and/or differential interference contrast images, optical filters are placed in front of the camera (Fig 1a): a longpass filter (FEL0450, Thorlabs, US) with a cut-off wavelength of 450 nm to prevent the photothermal beam from saturating the images and a short-pass filter (FES0750, Thorlabs, US) with a cut-off wavelength of 750 nm to prevent the read-out laser from saturating the images. Therefore, the apparent beam waist detected could be the beam waist of unfiltered radiation at a wavelength above 450 nm rather than the actual beam waist of the attenuated 405 nm radiation. To investigate this, we mounted an additional and identical longpass filter to see whether the observed radiation would attenuate further. However, we did not observe further attenuation nor further displacement of the apparent beam
waist, implying that the observed beam waist with the microscope was that of wavelengths higher than 450 nm. To confirm this, we returned to the initial configuration of filters (Fig. 1a) and mounted a bandpass filter (15117 Edmund Optics, US) at the output of the 405 nm laser and next to the existing neutral density filter. The bandpass filter was centred at wavelength of 405 nm and had a bandwidth of 10 nm. With this new filter in place, the apparent beam waist was not detectable by the microscope in the presence of the longpass filter, confirming that it corresponded to a wavelength above 450 nm.
We then removed the longpass filter whilst keeping the 405 nm bandpass filter and repeated the photothermal efficiency measurements simultaneously with optical power measurements beneath the cantilever (similar to Fig. 3c). We found that under these conditions the apparent beam waist and the real beam waist essentially coincide with an error of approximately 300 \(\upmu\)m (Fig. S3). This error compares to the 206 \(\upmu\)m confocal parameter of the laser; the 50 \(\upmu\)m chromatic aberration of the objective at 405 nm; and the 32 \(\upmu\)m Gaussian focal shift. Since without the longpass filter the laser light prevents the acquisition of optical information from the sample, it should only be removed for the purpose of laser alignment. These experimental results demonstrate that the laser diode generates radiation well outside the nominal range. Thus, although generally more than 99% of optical power is within a few nanometres of the nominal wavelength, there is still a portion of the optical power at wavelengths that are well away from this wavelength, which may interfere with measurements of the position of the beam waist (Fig. 4a) or impact the sample.
We then repeated the measurements of Fig. 2 but with the bandpass filter in place (Figure S4). The results demonstrate that for silicon nitride cantilevers with rectangular cross section, the highest photothermal efficiency takes place near the beam waist and within the confocal range of the laser. However, for silicon cantilevers with trapezoidal cross section, we identified that the highest photothermal efficiency takes place at a longitudinal position outside the confocal range of the laser, increasing the photothermal efficiency by up to 20% compared to that at the beam waist (Figure S4). Thus, more accurate models are required to better understand photothermal excitation under different conditions.
Finally, we conducted a spectral analysis of our laser source to characterise its power distribution within the wavelength range of 350-850 nm. Figure 4b shows the results of these measurements with the long-pass filter (red curve) and without it (blue curve). The blue curve confirms that approximately 99.9% of the laser emission is within a window of 10 nm centred at 405 nm. However, the red curve shows that the laser still emits approximately 0.1% of the power at wavelengths hundreds of nanometres away from the intended 405 nm. Furthermore, the red spectrum also demonstrates that when the 450 nm long-pass filter is used, the 405 nm light is essentially fully attenuated and non-intended radiation with wavelengths of 540 nm and 610 nm becomes dominant, giving rise to the artifact of the apparent beam waist found with the optical microscope.
## Conclusion
In conclusion, we have characterised the changes in photothermal excitation efficiency along the propagation axis of a photothermal beam. We have found that the beam waist location can be erroneously identified from visual inspection alone, which reduces the photothermal efficiency by nearly an order of magnitude. This surprising result is caused by spurious low-power long-wavelength laser radiation that is well above the specified (narrow) range. This radiation dominates in the presence of filters commonly used in correlative atomic force and optical microscopies to prevent camera saturation and which enable the acquisition of fluorescence and DIC from the sample. We have provided an effective means to manage this phenomenon in order to optimise the photothermal excitation of cantilevers and resonators. Moreover, our results demonstrate that the highest photothermal efficiency occurs at different longitudinal positions for different cantilever types. In particular we identified that silicon nitride cantilevers with rectangular cross section are most efficiently excited near the laser beam waist within the confocal range of the laser, whereas silicon cantilevers with trapezoidal cross section are best excited outside this confocal range. These results point to the need to adjust the relative longitudinal position of a microcantilever or resonator to optimise the photothermal efficiency. This is particularly important when very low laser powers (\(\ll\)1 mW) are required, as for instance when working with live cells [4, 20], but also to optimise photothermal excitation when using different types of cantilevers, and cantilever chip thicknesses, as these can vary by hundreds of
micrometres, affecting the longitudinal position of the cantilever relative to the photothermal beam. Given the increasing use and applications of photothermal excitation in multiple systems such as AFMs and picobalances, we expect this work to significantly improve the performance of present and future instruments and to inform the design of new devices.
## Methods
### Experimental setup
A custom-built picobalance device is shown in Fig. 1a-b. It includes two lasers, a blue-violet photothermal excitation laser with wavelength 405 nm (Thorlabs, US), and an infrared detection laser with wavelength 852 nm (Schäfter + Kirchhoff GmbH, Germany). The 405 nm laser is driven in Current Control Mode by a laser diode controller (LDC500, Stanford Research Systems, US). To improve the stability of this laser, its temperature is kept constant with a Peltier element controlled by a thermoelectric controller (LDC500). The 852 nm laser, combined with a four-quadrant Si PIN photodiode (S5980, Hamamatsu, Japan), was used to measure the deflection of the cantilever. A hard-coated bandpass filter (Edmund Optics, US) with a central wavelength of 850 nm, a 25 nm bandwidth and an optical density of 4 is located in front of the photodiode. A Ti2-E inverted optical microscope (Nikon, Japan) fitted with a CFI Plan Fluor 10X (Nikon, Japan) objective can capture real-time differential interference contrast (DIC) images during system operation. ECS nano-positioners (Attocube, Germany) controlled by AMC100 controllers (Attocube, Germany) can accurately adjust each laser's lateral position relative to the cantilever as well as its longitudinal position.
### Beam waist location and Photothermal efficiency
To confirm whether the excitation laser's beam waist observed from the optical microscope coincided with the actual position, we laterally adjusted the excitation laser spot on the cantilever in air for each measurement until a maximum amplitude was obtained. After that, the blue laser was adjusted longitudinally by an ECS nano-positioner with pre-defined increments of 300 \(\upmu\)m. The cantilever's oscillation amplitude was registered using the detection laser followed by a photodiode detector and a lock-in amplifier. Two different setups of lock-in amplifiers were used, a CX
device from Nanosurf AG (Switzerland) and a BP4.5 control system with an OC5 oscillation controller from Nanonis (Germany). A laser power meter PM100D (Thorlabs, US) was placed underneath the cantilever to measure the laser power around the cantilever. The infrared detection laser is switched off to avoid interference with the excitation laser during the measurement of transmitted power **(Fig. 1c)**. The origin of the excitation laser's longitudinal position was set at the position with the highest amplitude. The cantilever oscillation amplitude was estimated from thermal noise measurements using Sader's method to calibrate the cantilever's spring constant [35-37].
### Determination of oscillation amplitude along laser propagation axis
The peak oscillation amplitude was measured at each position along the excitation laser propagation axis for the different types of cantilevers in both air and water. For measurements in water, the cantilever was immersed in a 35 mm Petri dish (Ibidi, Germany). Two silicon nitride cantilevers were used: Nunano QUEST R 500 TL (rectangular shape with rectangular cross section) and Bruker NP-010D (triangular shape with rectangular cross section). Both cantilevers were gold-coated on the backside. The silicon cantilevers MikroMasch NSC35/CR-AU-C with gold coating (rectangular shape with trapezoidal cross section) and MikroMasch NSC35/NO AL-A without gold coating (rectangular shape with trapezoidal cross section) were also tested for photothermal efficiency enhancement. The detailed parameters and specifications of the cantilevers used in the experiments are shown in **Table 1**. The input power of the excitation laser ranged from 25.9 \(\mu\)W to 46.3 \(\mu\)W without the BP filter and from 18.5 \(\mu\)W to 33.1 \(\mu\)W with the BP filter.
### Determination of chromatic aberration of objective
To investigate whether the difference between the apparent focus and the beam waist was caused by chromatic aberration of the microscope objective, we placed a pinhole with a 2 \(\mu m\) diameter (Edmund Optics) in the centre of the 405 nm laser beam and of the 550 nm light emitted by the microscope. Before placing the pinhole, we first aligned the laser at the middle of the field of view shown by the microscope, so that the laser and the pinhole would lie on the same axis; this avoided the difficulty of aligning them without viewing their positions. The pinhole was placed on a 35 mm Petri dish (Ibidi), and its position was fine-tuned by controlling the microscope stage (Nanosurf) to ensure that the laser and the pinhole coincided. The objective height was adjusted to find the minimum spot for both the 405 nm laser and the 550 nm light as the focal position, and the chromatic aberration of the objective is given by the difference in the heights of the focal positions.
### Laser power spectrum acquisition
A reflective collimator directs the source beam to an inverted microscope (Nikon Instruments Eclipse). The light is focused onto a mirror using a microscope objective (Olympus 50X), and its reflection is directed to a calibrated imaging spectrometer (IsoPlane SCT 320, Princeton Instruments) with a visible camera (PIXIS). Wavelength-dependent intensity spectra between 350-850 nm are obtained by binning the counts at each wavelength. The results are shown in Fig. 4(b). Each spectrum, with and without the longpass filter, is obtained after maximizing the power density in the central region of the camera following a small focal adjustment, as per the photothermal excitation measurements shown in the schematic of Fig. 3(d).
### Author contributions
DMM scoped the initial project which was discussed with LKG, MdS and AT who provided insightful suggestions. DMM and LKG mounted the picobalance instruments. DMM, LKG and SM implemented required modifications in the picobalances. LKG performed the photothermal efficiency measurements with different cantilevers in air and water. JMZ and TGB provided insightful discussions and suggested the pinhole experiments to measure chromatic aberration, which were performed by LKG and DMM. AT performed the laser power spectrum experiments with support by LKG and DMM. All the authors reviewed and discussed the data. All authors contributed to write the manuscript.
## Acknowledgement
This work has benefited from the SOAR award and Engineering Research Scholarship (ERS) provided by The University of Sydney. We acknowledge D. Stenger, P. McCarthy and the mechanical workshop from The Faculty of Engineering (The University of Sydney) for their help in manufacturing the enclosures that host the picobalances; A. Tonin and the electronic workshop of the physics department (University of Basel) for helping with the beam deflection electronics; the mechanical
workshop of the physics department (University of Basel) for their help with the development and manufacturing of the devices; P. Buchmann and P. Argast from the electronic and mechanical workshop of the department of biosystems science and engineering (ETH Zurich) for helping with the temperature controlled system of the enclosures; Nanosurf AG for their technical support customising software, mechanical and electronic components.
### Competing interests
DMM is the principal inventor (including granted and/or pending applications) of two patent families related to an inertial picobalance to measure the mass and mechanical properties of cells and an atomic force microscope, both of them compatible with optical microscopies (US20170052211A1 and WO/2015/120,991). DMM is the principal inventor (including granted and/or pending applications) of a patent family for a controlled environmental system that provides cell culture conditions and it is compatible with probe-based instruments and optical microscopies (US10545169B2). DMM is the principal inventor (including granted and/or pending applications) of a patent family for microcantilever-based mass sensors, which enable measuring a cell's mass without regard to changes of its position (US20190025257A1). DMM is co-principal inventor (including granted and/or pending applications) of a patent family related to a device to measure rheological properties (LU102351). The remaining authors declare no competing interests.
|
2309.10933 | A learning-based multiscale model for reactive flow in porous media | We study solute-laden flow through permeable geological formations with a
focus on advection-dominated transport and volume reactions. As the fluid flows
through the permeable medium, it reacts with the medium, thereby changing the
morphology and properties of the medium; this in turn, affects the flow
conditions and chemistry. These phenomena occur at various length and time
scales, and make the problem extremely complex. Multiscale modeling addresses
this complexity by dividing the problem into those at individual scales, and
systematically passing information from one scale to another. However, accurate
implementations of these multiscale methods are still prohibitively expensive.
We present a methodology to overcome this challenge that is computationally
efficient and quantitatively accurate. We introduce a surrogate for the
solution operator of the lower scale problem in the form of a recurrent neural
operator, train it using one-time off-line data generated by repeated solutions
of the lower scale problem, and then use this surrogate in application-scale
calculations. The result is the accuracy of concurrent multiscale methods, at a
cost comparable to that of classical models. We study various examples, and
show the efficacy of this method in understanding the evolution of the
morphology, properties and flow conditions over time in geological formations. | Mina Karimi, Kaushik Bhattacharya | 2023-09-19T21:09:20Z | http://arxiv.org/abs/2309.10933v1 | # A learning-based multiscale model for reactive flow in porous media
###### Abstract
We study solute-laden flow through permeable geological formations with a focus on advection-dominated transport and volume reactions. As the fluid flows through the permeable medium, it reacts with the medium, thereby changing the morphology and properties of the medium; this, in turn, affects the flow conditions and chemistry. These phenomena occur at various length and time scales and make the problem extremely complex. Multiscale modeling addresses this complexity by dividing the problem into those at individual scales, and systematically passing information from one scale to another. However, accurate implementations of these multiscale methods are still prohibitively expensive. We present a methodology to overcome this challenge that is computationally efficient and quantitatively accurate. We introduce a surrogate for the solution operator of the lower scale problem in the form of a recurrent neural operator, train it using one-time off-line data generated by repeated solutions of the lower scale problem, and then use this surrogate in application-scale calculations. The result is the accuracy of concurrent multiscale methods at a cost comparable to that of classical models. We study various examples, and show the efficacy of this method in understanding the evolution of the morphology, properties and flow conditions over time in geological formations.
## 1 Introduction
The transport of water and solutes through permeable geological formations couples various phenomena [19, 5]. As it flows through the permeable medium, solute-laden water reacts with the medium, changing the morphology and properties of the medium; this in turn affects the flow and chemistry. The permeable medium supports a menagerie of microbial life, which is affected by, and in turn affects, the flow and the chemistry. Chemical interactions, biological activity and flow may cause mechanical failure, which in turn changes the morphology, thereby affecting the flow. Further, these varied phenomena occur, and manifest themselves, at various length and time scales (see for example [26] and the citations there): at the _molecular_ scale where chemistry happens, at the _pore_ scale where morphology, biology, mechanical properties and flow interact, the _core_ scale where averaged features emerge, and finally at the _geological_ scale relevant to aquifer and reservoir hydrology. Furthermore, these geological formations are heterogeneous at various scales, ranging from individual grains, to pores, to formations, fissures, caverns and cracks, to geological strata and faults. Finally, they are highly anisotropic at various scales, ranging from clay particles to faults to geological formations.
This enormous complexity makes modeling extremely difficult. There are well-developed models for individual processes at particular scales, and in some cases conceptual frameworks to link various scales. Still, _there is a critical gap_ in accurate methods of passing information from one
scale to another, especially when it concerns multiple phenomena and history (or time)-dependent phenomena. In this paper, we address this coupling between the pore, core and geological formation scales with a focus on advection-dominated transport and volume reactions.
We begin with a brief review of the existing approaches, and then describe our contributions.
_Geological scale models._ Continuum models for reactive flow at the geological scale focus on predicting the overall flow and transport. These models usually employ empirical relations to relate the evolving transport parameters to changes in porosity, reactivity, and specific area [5]. To do so, many studies estimate the change in porosity [3], diffusivity [33], permeability [3, 43, 29], reaction rate, and specific area [35] due to chemical reactions. However, these approaches face some limitations. These models are derived assuming simple geometries (e.g., spherical grains) and therefore cannot provide detailed insight into the pore structure and specific area, and their evolution. Further, they have a limited range of accuracy in the presence of precipitation and dissolution. In particular, they often fail when porosity approaches critical values or percolation.
_Core scale models._ Core-scale simulations focus on predicting the evolution of the pore structure and average fluid transport properties. Using conventional finite element methods for these simulations is computationally expensive due to the need to update the geometry [5]. Several alternative numerical strategies have been developed to overcome this limitation, such as pixel-based approaches [12, 13], level set methods [18, 39], smoothed particle hydrodynamics [37], and adaptive discontinuous Galerkin finite element methods [36].
_Multiscale models._ These models employ upscaling approaches to bridge the gap between pore and geological scales. Volume averaging methods [6] estimate the effective transport properties and derive macroscopic equations. Initially explored for porous media dispersion by [8, 31], these methods were later extended to spatially periodic porous media for reactive fluid transport by Paine [30]. However, such an approach necessitates incorporating ad hoc assumptions for closure relations.
Early studies by Auriault and Adler [4] and Rubinstein and Mauri [32] investigate multi-scale expansions to upscale Taylor dispersion [38] in periodic porous media, primarily focusing on diffusion-dominated, non-reactive transport. Later, this methodology was extended to reactive flow with surface reactions by Mauri [25]. However, this approach failed to derive effective properties for advection-dominated flows. Other researchers [24, 2, 10] introduced a two-scale expansion approach with drift to address advection-dominated flows. They employed a moving coordinate system with an effective drift velocity to investigate advection's impact on effective equations. This method was further applied by Allaire et al., [1] to study reactive transport with surface reactions while considering a linear adsorption/desorption rate at the solid interface.
_Machine learning._ In recent years, machine learning (ML) techniques have been applied to expedite reactive transport simulations and estimate effective porous medium properties. Liu et al. [22] have used an ML approach to predict the effective reaction rate using features of pore structures. Artificial neural networks have been used to accelerate geological scale reactive flow simulations [9, 34], and deep learning methods have been used to estimate the permeability considering non-reactive transport in the porous medium [42, 40]. These consider a snapshot in time and do not address evolution over long periods. Wang and Battiato [41] have developed a deep-learning multiscale model to predict the clogging caused by solute precipitation in a microcrack network. Lu et al. [23] have used a neural network model to predict the evolution of the uranium distribution coefficient in the subsurface due to thermal and chemical processes.
In the context of mechanical properties, recurrent neural networks (RNNs) address history-dependent behaviors [27]; Long Short-Term Memory (LSTM) networks effectively remember information over long time intervals [11]; and the gated recurrent unit (GRU), a simplified variant of the LSTM, captures similar temporal relationships in data [28, 44]. However, LSTM-based approaches require millions of
variables to be trained from data. Recently, a recurrent neural operator (RNO) has been introduced, inspired by internal variable theory, offering an efficient approach for multiscale modeling [21, 20].
Our Contributions. In this work, we develop a methodology for investigating flow through underground geological formations over long periods of time. We specifically focus on advection-dominated transport in a porous medium and reactions at the fluid-solid interface characterized by nonlinear reaction kinetics. We first use a two-scale expansion method with an effective drift velocity [1] to obtain the governing equations at the core and geological scales. This is described in Section 2.
We then introduce a recurrent neural operator (RNO), and use it to learn the solution operator of the core scale problem. To elaborate, we use data generated by repeated solutions of the core scale problem to train the RNO to learn the map between geological scale variables over time (e.g., velocity and solute concentration histories) and the effective transport properties (e.g., permeability, diffusivity, advection velocity, porosity, and specific area). This is described in Section 3.
Finally, we use the trained RNO as a surrogate for the core scale problem in geological scale computations to investigate the long time evolution of these formations in the presence of volume reactions induced by flow and transport. We demonstrate the accuracy of the approach, as well as its ability to reveal non-trivial interactions between the core and geological scales, in Section 4.
We conclude in Section 5 with a discussion of implications, promises and open issues.
## 2 Two scale model
### Governing equation and non-dimensionalization
We consider a porous geological formation \(\tilde{\Omega}\) composed of a porous region \(\tilde{\Omega}_{p}\) and a solid region \(\tilde{\Omega}_{s}\) with \(\tilde{\Omega}_{p}\cap\tilde{\Omega}_{s}=\emptyset,\ \tilde{\Omega}^{c}=\tilde{\Omega}_{p}^{c}\cup\tilde{\Omega}_{s}^{c}\), where the superscript 'c' denotes the closure. We have an incompressible fluid flow in the pores governed by the steady Stokes equation
\[\begin{cases}-\tilde{\nabla}\tilde{p}+\tilde{\nu}\Delta\tilde{\mathbf{v}}+ \tilde{\mathbf{f}}=\mathbf{0}&\text{ in }\tilde{\Omega}_{p},\\ \tilde{\nabla}\cdot\tilde{\mathbf{v}}=0&\text{ in }\tilde{\Omega}_{p},\\ \tilde{\mathbf{v}}=\mathbf{0}&\text{ in }\tilde{\Omega}_{s}\end{cases} \tag{1}\]
where \(\tilde{\mathbf{v}}\) is the particle velocity, \(\tilde{p}\) is pressure, \(\tilde{\nu}\) is the viscosity and \(\tilde{\mathbf{f}}\) is the body force. We assume that the fluid carries a solute that is transported in the fluid through a combination of diffusion and advection and reacts with the surface of the solids, resulting either in deposition or dissolution
\[\begin{cases}\tilde{c}_{\tilde{t}}+\tilde{\mathbf{v}}\cdot\tilde{\nabla} \tilde{c}-D\tilde{\Delta}\tilde{c}=0&\text{ in }\tilde{\Omega}_{p},\\ -D\tilde{\nabla}\tilde{c}\cdot\mathbf{n}=-(\tilde{c}-\tilde{m})\tilde{v}_{n}= \tilde{q}_{c}(\tilde{c})&\text{ on }\partial\tilde{\Omega}_{p}\end{cases} \tag{2}\]
where \(\tilde{c}\) is the concentration of the solute, \(D\) is the diffusion constant, \(\mathbf{n}\) the unit normal vector to the surface \(\partial\tilde{\Omega}_{p}\) oriented outward with respect to \(\tilde{\Omega}_{p}\), \(\tilde{m}\) is the concentration of the solute in the solids, \(\tilde{v}_{n}\) is the normal speed of the solid/fluid interface due to dissolution or deposition and \(\tilde{q}_{c}(c)\) is the reaction rate that depends on the solute concentration. Above, the second term of the bulk equation describes advection, while the third describes diffusion. On the interface, the first equation describes the mass balance between the flux of the solute from the fluid to the interface and the growth of the interface, while the second relates the growth of the interface to the interfacial reaction rate. These equations have to be supplemented with appropriate boundary conditions.
Note that we have assumed that the fluid flow is steady while the solute transport is time-dependent. We assume that the reaction rate at the interface and, consequently, the rate of reconstruction of the porous medium is slow compared to the time-scale of the fluid flow. So, the fluid flow reaches a steady state at each time as the medium reconstructs.
Now, the characteristic length of the geological formation \(L\) is very large compared to the characteristic length of the core or representative volume \(\ell\), \(L>>\ell\). This makes the system (1, 2) difficult to solve: we have to resolve the flow and transport with a resolution small compared to \(\ell\), but on a domain of size \(L\). Therefore, we resort to a two-scale asymptotic expansion under the assumption that the ratio of length-scales \(\epsilon=\ell/L\) is small, \(\epsilon<<1\).
In order to do so, we change to non-dimensional units by scaling length with the characteristic length \(L\) of the geological formation, the velocity with the characteristic velocity \(V\) and the pressure with characteristic pressure \(\Pi\). It follows that time is rescaled by the characteristic time \(T=L/V\). We expect slow flows with small \(V\) over long distances \(L\), which means that the characteristic time \(T\) is large and consistent with steady state. Now, recall that the characteristic length of the pores is small, and therefore, in order to have non-trivial flow, we need the viscosity to be extremely small. Therefore, we assume that the characteristic viscosity is \(\Pi T/\epsilon^{2}\). The non-dimensional flow equations are given by
\[\begin{cases}-\nabla p^{\epsilon}+\epsilon^{2}\nu\Delta\mathbf{v}^{\epsilon}+ \mathbf{f}=\mathbf{0}&\text{ in }\Omega_{p}^{\epsilon},\\ \nabla\cdot\mathbf{v}^{\epsilon}=0&\text{ in }\Omega_{p}^{\epsilon},\\ \mathbf{v}^{\epsilon}=\mathbf{0}&\text{ in }\Omega_{s}^{\epsilon}.\end{cases} \tag{3}\]
Above, we have introduced \(\epsilon\) as a superscript in the non-dimensional variables to signify that the porosity and therefore variations in these quantities are at a scale \(\epsilon\).
We now turn to solute transport. We non-dimensionalize the concentration with a characteristic concentration \(C\) and introduce two non-dimensional numbers, the Peclet number \(\mathrm{Pe}\) and the Damkohler number \(\mathrm{Da}\):
\[\mathrm{Pe}^{\epsilon}=\frac{LV}{D},\quad\mathrm{Da}^{\epsilon}=\frac{LK}{D} \tag{4}\]
where \(K\) is a characteristic reaction rate. The non-dimensional equations of solute transport are
\[\begin{cases}c_{t}^{\epsilon}+\mathrm{Pe}^{\epsilon}\mathbf{v}\cdot\nabla c^ {\epsilon}-\Delta c^{\epsilon}=0&\text{ in }\Omega_{p}^{\epsilon}\\ -\nabla c^{\epsilon}\cdot\mathbf{n}=-v_{n}^{\epsilon}(c^{\epsilon}-m)= \mathrm{Da}^{\epsilon}q_{c}^{\epsilon}&\text{ on }\partial\Omega_{p}^{\epsilon}\end{cases} \tag{5}\]
Now, we are interested in situations where we have significant advection at the pore scale, and very slow reactions at the interface. So, we assume that
\[\mathrm{Pe}^{\epsilon}=\frac{\widehat{\mathrm{Pe}}}{\epsilon},\quad\mathrm{Da}^{\epsilon}=\epsilon\widehat{\mathrm{Da}},\quad v_{n}^{\epsilon}=\epsilon\hat{v}_{n} \tag{6}\]
where \(\widehat{\mathrm{Pe}},\widehat{\mathrm{Da}},\hat{v}_{n}\) are all \(O(1)\).
### Two-scale model
We assume that the porous medium is almost periodic, i.e., it is periodic on the scale \(\epsilon\) but can change over long distances compared to \(\epsilon\). To be precise, we assume \(\Omega_{p}^{\epsilon}(x)=\Omega_{p}(x,x/\epsilon)\) where \(\Omega_{p}\) is periodic with period 1 in the second variable; so the porous medium is periodic on \(\epsilon Y\), where \(Y\) is the unit cube or unit cell in the vicinity of the point \(\mathbf{x}\). Further, \(Y=Y_{p}(\mathbf{x})\cup Y_{s}(\mathbf{x})\) where \(Y_{p}(\mathbf{x})\) is the pore in the unit cell in the vicinity of the point \(\mathbf{x}\) in the geological formation. We show that under
this assumption, we can approximate the solution of the system (3,5) by solving a geological scale problem where the constitutive behavior is determined by solving a core scale problem. We first describe the two problems, and then justify this derivation in the following subsections.
#### 2.2.1 Geological scale model
We can find the overall flow and solute transport at the geological scale by solving the following system on \(\Omega\):
\[\begin{cases}\nabla\cdot\mathbf{v}_{0}=0,\quad\mathbf{v}_{0}=\frac{1}{\nu}\mathbf{K}^{*}(\mathbf{f}-\nabla p_{0}),\\ \lambda c_{0t}+\lambda\mathrm{Pe}\,\overline{\mathbf{v}}\cdot\nabla c_{0}-\nabla\cdot(\mathbf{D}^{*}\nabla c_{0})=-\gamma\widehat{\mathrm{Da}}\,q_{c}\end{cases} \tag{7}\]
for the overall velocity \(\mathbf{v}_{0}\), pressure \(p_{0}\), and concentration \(c_{0}\) at the geological scale, subject to boundary conditions. Above, \(\mathbf{K}^{*}=\mathbf{K}^{*}(t,\mathbf{x})\) is the permeability tensor, \(\mathbf{D}^{*}=\mathbf{D}^{*}(t,\mathbf{x})\) is the effective diffusivity tensor, \(\lambda=\lambda(t,\mathbf{x})\) is the pore volume fraction, \(\overline{\mathbf{v}}=\overline{\mathbf{v}}(t,\mathbf{x})\) is an effective advection velocity, and \(\gamma=\gamma(t,\mathbf{x})\) is the local surface area per unit volume. Note that these parameters are all functions of time, and they are specified through the core scale problem below.
Note that this geological scale problem is solved over the entire geological domain \(\Omega\), and the coefficients vary only on the scale of the geological formation. All the information about the pores has been subsumed into parameters that only vary at the geological scale.
#### 2.2.2 Core scale or unit cell model
Given a porous unit cell at the macroscopic point \(\mathbf{x}\) at time \(t\), i.e., given \(Y_{p}\), the unit cell problem is to solve
\[\begin{cases}-\nabla_{y}q^{i}+\nu\Delta_{y}\mathbf{u}^{i}+\mathbf{e}^{i}= \mathbf{0},\ \ \nabla_{y}\cdot\mathbf{u}^{i}=0&\text{ in }Y_{p}\\ \mathbf{u}^{i}=\mathbf{0}&\text{ in }Y_{s}\\ \widehat{\mathrm{Pe}}\ \mathbf{v}\cdot\left(\nabla_{y}\chi^{j}\right)-\Delta_{y} \chi^{j}=\ (\mathbf{v}^{*}-\widehat{\mathrm{Pe}}\ \mathbf{v})\cdot\mathbf{e}^{j}&\text{ in }Y_{p}\\ -\left(\nabla_{y}\chi^{j}\right)\cdot\mathbf{n}=\mathbf{e}^{j}\cdot\mathbf{n}& \text{ on }\partial Y_{p}\end{cases} \tag{8}\]
for the periodic velocity fluctuation \(\mathbf{u}^{i}\), pressure fluctuation \(q^{i}\), and chemical fluctuation \(\chi^{j}\) when the overall flow is in the direction \(\mathbf{e}^{i}\) and the overall solute transport is in the direction \(\mathbf{e}^{j}\) for \(i,j=1,\dots,d\) (dimension \(d\)). Above, we use \(\nabla_{y},\Delta_{y}\) to signify that these are derivatives with respect to the spatial variable in the unit cell.
We can then use it to find the parameters
\[\mathbf{K}^{*}_{ij} =\int_{Y_{p}}\nabla_{y}\mathbf{u}^{i}\cdot\nabla_{y}\mathbf{u}^{ j}\ dy, \tag{9}\] \[\mathbf{D}^{*}_{ij} =\int_{Y_{p}}\mathbf{e}^{i}\cdot\mathbf{e}^{j}\ dy+\widehat{ \mathrm{Pe}}\int_{Y_{p}}(\overline{\mathbf{v}}_{i}-\mathbf{v}_{i})\chi^{j}\ dy+\int_{Y_{p}}\nabla_{y}\chi^{j} \cdot\mathbf{e}^{i}\ dy,\] (10) \[\overline{\mathbf{v}} =\frac{1}{|Y_{p}|}\int_{Y_{p}}\mathbf{v}\ dy,\quad\lambda=|Y_{p }|,\quad\gamma=|\partial Y_{p}|. \tag{11}\]
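To illustrate how these cell averages are evaluated in practice, a small quadrature sketch on a uniform unit-cell grid follows. The array layout and function name are our own assumptions for exposition; the cell fields are assumed to be supplied by a separate Stokes/corrector solver, and the surface measure \(\gamma\) is omitted since it requires an explicit interface representation.

```python
import numpy as np

def effective_parameters(grad_u, v, pore_mask, cell_area):
    """Quadrature sketch of (9) and (11) on a uniform unit-cell grid.

    grad_u:    (d, d, d, ny, nx); grad_u[i, a, b] = d(u^i_a)/dy_b
    v:         (d, ny, nx) Stokes velocity in the cell
    pore_mask: (ny, nx) boolean, True in the pore Y_p
    cell_area: area of one grid cell (scalar quadrature weight)
    """
    w = pore_mask * cell_area                                  # weights on Y_p
    K = np.einsum('iabyx,jabyx,yx->ij', grad_u, grad_u, w)     # eq. (9)
    lam = w.sum()                                              # |Y_p|, eq. (11)
    v_bar = np.einsum('ayx,yx->a', v, w) / lam                 # mean velocity
    return K, v_bar, lam
```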
Finally, the microstructure evolves according to the equation
\[\hat{v}_{n}=\frac{\widehat{\mathrm{Da}}}{m-\tilde{c}_{0}}q_{c}(\tilde{c}_{0}) \approx\frac{\widehat{\mathrm{Da}}}{m}q_{c}(\tilde{c}_{0}). \tag{12}\]
The last approximation uses the fact that \(\tilde{c}_{0}<<m\). This evolution happens on the geological time scale, and therefore the parameters above change on the geological time scale.
#### 2.2.3 Summary
We summarize the resulting multiscale formulation. We solve (7) on the geological scale. This requires us to obtain the parameters \(\mathbf{K}^{*}(t,\mathbf{x}),\mathbf{D}^{*}(t,\mathbf{x}),\bar{\mathbf{v}}(t, \mathbf{x}),\gamma(t,\mathbf{x}),\lambda(t,\mathbf{x})\). To do so, at each point \(\mathbf{x}\) at the geological scale, we provide a history of the flow \(\mathbf{v}_{0}\) and solute concentration \(c_{0}\) to a unit cell, solve (8) and obtain the parameters from (9), (10) and (11) as a function of time while evolving the microstructure according to (12). This is illustrated in Figure 1.
While this formulation separates the original problem into two separate problems, it is still computationally demanding: we have to solve a core scale problem at every quadrature point of the geological formation and at every instant of time. The direct implementation of this framework is often referred to as the concurrent multiscale approach. We propose an alternate approach in Sections 3 and 4 below.
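For orientation, the following Python sketch lays out one time step of the concurrent loop just described; the four callables are placeholders standing in for solvers of (8), (9)-(11), (12) and (7), not an implementation provided by this work.

```python
def concurrent_step(histories, microstructures, solve_cell,
                    effective_props, evolve_interface, solve_macro):
    """One time step of the concurrent two-scale scheme: a unit-cell
    solve at every geological-scale quadrature point, extraction of
    the effective parameters, evolution of the pore morphology, and
    finally the geological-scale solve with the updated parameters."""
    params = {}
    for x, micro in microstructures.items():
        cell = solve_cell(micro, histories[x])        # unit cell problem, eq. (8)
        params[x] = effective_props(cell)             # eqs. (9)-(11)
        microstructures[x] = evolve_interface(cell)   # interface motion, eq. (12)
    return solve_macro(params)                        # geological scale, eq. (7)
```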
### Details of the asymptotic expansion
This sub-section outlines the derivation of the two-scale formulation above. It closely follows Allaire et al. [1].
#### 2.3.1 Porous media flow
We look for a solution to the system (3) with the ansatz
\[\mathbf{v}^{\epsilon}(\mathbf{x})=\sum_{i=0}^{+\infty}\epsilon^{i}\mathbf{v}_ {i}\left(\mathbf{x},\frac{\mathbf{x}}{\epsilon}\right)\quad\ p^{\epsilon}( \mathbf{x})=\sum_{i=0}^{+\infty}\epsilon^{i}p_{i}\left(\mathbf{x},\frac{ \mathbf{x}}{\epsilon}\right). \tag{13}\]
By collecting terms and then using the Fredholm alternative, Levy [17] showed that \(\mathbf{v}_{0}=\mathbf{v}_{0}(\mathbf{x}),\ p_{0}=p_{0}(\mathbf{x})\) (i.e., they are independent of the fast variable), and that these satisfy (7)\({}_{1}\) with \(\mathbf{K}^{*}\) given by (9) where \(\mathbf{u}^{i}\) satisfies (8)\({}_{1,2}\). Further, up to leading order the velocity and the pressure are
\[\mathbf{v}^{\epsilon}(\mathbf{x})=\mathbf{v}_{0}(\mathbf{x})+\epsilon\sum_{i =1}^{d}(\mathbf{v}_{0})_{i}(\mathbf{x})\mathbf{u}^{i}\left(\frac{\mathbf{x}} {\epsilon}\right),\quad p^{\epsilon}(\mathbf{x})=p_{0}(\mathbf{x})+\epsilon \sum_{i=1}^{d}q^{i}\left(\frac{\mathbf{x}}{\epsilon}\right)\left(\mathbf{f}_{ i}(\mathbf{x})-\frac{\partial p_{0}}{\partial x_{i}}(\mathbf{x})\right). \tag{14}\]
Figure 1: Schematic figure of the two-scale framework.
#### 2.3.2 Solute transport
We look for a solution to the system (5) with the ansatz
\[c^{\epsilon}(t,\mathbf{x})=\sum_{i=0}^{+\infty}\epsilon^{i}\tilde{c}_{i}\left(t, \mathbf{x}-\frac{\mathbf{v}^{*}}{\epsilon}t,\frac{\mathbf{x}}{\epsilon}\right) \tag{15}\]
where \(\mathbf{v}^{*}\) is a _drift velocity_ to be determined. We have chosen a macroscopic coordinate \(\mathbf{x}^{\prime}=\mathbf{x}-\frac{\mathbf{v}^{*}}{\epsilon}t\) that is not stationary in the geological formation in order to account for the advection. We substitute the ansatz into (5), apply the chain rule,
\[c_{t}=\frac{\partial\tilde{c}}{\partial t}-\frac{\mathbf{v}^{*}}{\epsilon} \cdot\nabla^{\prime}\tilde{c},\quad\nabla c=\nabla^{\prime}\tilde{c}+\frac{1} {\epsilon}\nabla_{y}\tilde{c} \tag{16}\]
and collect terms with different powers of \(\epsilon\).
The leading order (smallest power) is \(\epsilon^{-2}\), and we have the following equations at that order:
\[\begin{cases}\mathrm{\hat{Pe}}\mathbf{v}\cdot\nabla_{y}\tilde{c}_{0}-\Delta_ {y}\tilde{c}_{0}=0&\quad\text{in }Y_{p},\\ -\nabla_{y}\tilde{c}_{0}\cdot\mathbf{n}=0&\quad\text{on }\partial Y_{p}\end{cases} \tag{17}\]
which yields \(\tilde{c}_{0}=\tilde{c}_{0}(t,\mathbf{x}^{\prime})\), i.e., the leading-order concentration is independent of the fast variable.
At the next order, \(\epsilon^{-1}\), we have
\[\begin{cases}\mathrm{\widehat{Pe}}\ \mathbf{v}\cdot\nabla_{y}\tilde{c}_{1}- \Delta_{y}\tilde{c}_{1}=(\mathbf{v}^{*}-\mathrm{\widehat{Pe}}\ \mathbf{v})\cdot\nabla^{\prime}\tilde{c}_{0}&\quad\text{in }Y_{p},\\ -\left(\nabla_{y}\tilde{c}_{1}\right)\cdot\mathbf{n}=\nabla^{\prime}\tilde{c}_ {0}\cdot\mathbf{n}&\quad\text{on }\partial Y_{p}.\end{cases} \tag{18}\]
We seek to solve this equation for \(\tilde{c}_{1}\). This is possible if and only if the following compatibility condition (Fredholm alternative) is satisfied.
\[\int_{Y_{p}}(\mathbf{v}^{*}-\widehat{\mathrm{Pe}}\ \mathbf{v})\cdot\nabla^{\prime}\tilde{c}_{0}\ dy-\int_{\partial Y_{p}}\left(\nabla^{\prime}\tilde{c}_{0}\cdot\mathbf{n}\right)dS=0\quad\Rightarrow\quad\mathbf{v}^{*}=\frac{\widehat{\mathrm{Pe}}}{|Y_{p}|}\int_{Y_{p}}\mathbf{v}\ dy. \tag{19}\]
Recalling the expansion (14)\({}_{1}\) for \(\mathbf{v}\), we can re-write the drift velocity as follow
\[\mathbf{v}^{*}=\frac{\widehat{\mathrm{Pe}}}{|Y_{p}|}\sum_{i=1}^{d}(\mathbf{v}_{0})_{i}\int_{Y_{p}}\mathbf{u}^{i}\ dy. \tag{20}\]
Returning to (18), we notice that the solution \(\tilde{c}_{1}\) will depend linearly on \(\nabla^{\prime}\tilde{c}_{0}\). So, set
\[\tilde{c}_{1}(t,\mathbf{x}^{\prime},\mathbf{y})=\sum_{i=1}^{d}\frac{\partial \tilde{c}_{0}}{\partial x_{i}^{\prime}}(t,\mathbf{x}^{\prime})\chi^{i}( \mathbf{y}). \tag{21}\]
It follows that \(\chi^{i}\) satisfy (8)\({}_{3,4}\).
Now, turning to order \(\epsilon\), we have
\[\begin{cases}\widehat{\mathrm{Pe}}\ \mathbf{v}\cdot\nabla_{y}\tilde{c}_{2}-\Delta_{y}\tilde{c}_{2}\\ \qquad=-\frac{\partial\tilde{c}_{0}}{\partial t}+(\mathbf{v}^{*}-\widehat{\mathrm{Pe}}\ \mathbf{v})\cdot\nabla^{\prime}\tilde{c}_{1}+\nabla^{\prime}\cdot(\nabla^{\prime}\tilde{c}_{0}+\nabla_{y}\tilde{c}_{1})+\nabla_{y}\cdot(\nabla^{\prime}\tilde{c}_{1})&\quad\text{in }Y_{p},\\ -\nabla_{y}\tilde{c}_{2}\cdot\mathbf{n}-\nabla^{\prime}\tilde{c}_{1}\cdot\mathbf{n}=\hat{v}_{n}(\tilde{c}_{0}-m)=\widehat{\mathrm{Da}}\ q_{c}(\tilde{c}_{0})&\quad\text{on }\partial Y_{p}.\end{cases} \tag{22}\]
This equation has a solution for \(\tilde{c}_{2}\) if and only if the following compatibility condition (Fredholm alternative) is satisfied
\[\int_{Y_{p}}\left(-\frac{\partial\tilde{c}_{0}}{\partial t}+(\mathbf{v}^{*}- \widehat{\mathrm{Pe}}\ \mathbf{v})\cdot\nabla^{\prime}\tilde{c}_{1}+\nabla^{\prime}\cdot\left( \nabla^{\prime}\tilde{c}_{0}+\nabla_{y}\tilde{c}_{1}\right)\right)\ dy=\int_{ \partial Y_{p}}\widehat{\mathrm{Da}}\ q_{c}(\tilde{c}_{0})\ dS. \tag{23}\]
Substituting for \(\tilde{c}_{1}\), we obtain the homogenized equation
\[-\lambda\frac{\partial\tilde{c}_{0}}{\partial t}=-\nabla^{\prime}\cdot( \mathbf{D}^{*}\nabla^{\prime}\tilde{c}_{0})+\gamma\widehat{\mathrm{Da}}\ q_{c}( \tilde{c}_{0}) \tag{24}\]
where \(\lambda=|Y_{p}|\) and \(\gamma=|\partial Y_{p}|\), and the effective diffusion tensor \(\mathbf{D}^{*}\) is given by (10). We then transform back to stationary coordinates by setting
\[c_{0}(t,\mathbf{x})=\tilde{c}_{0}\left(t,\mathbf{x}-\frac{\mathbf{v}^{*}}{ \epsilon}t\right) \tag{25}\]
to obtain (7)\({}_{2}\).
Finally, the boundary condition at \(O(\varepsilon)\) gives (12).
## 3 Learning the core scale behavior
The two-scale formulation above requires us to solve the core scale problem at each time step and each quadrature point in the geological formation. This is prohibitively expensive. So, we seek to "learn" the solution operator of the core scale problem. Specifically, for a given initial microstructure \(T^{0}\), we view the core scale problem as a map from the velocity and concentration history to the current permeability, diffusivity, drift velocity, specific area, and pore volume fraction.
\[\Phi:\mathcal{I}[0,t]\rightarrow\mathcal{O}(t),\quad\mathcal{I}(\tau)=\{ \mathbf{v}_{0}(\tau),c_{0}(\tau)\},\ \mathcal{O}(\tau)=\{\mathbf{K}^{*}(\tau),\mathbf{D}^{*}(\tau),\bar{\mathbf{v} }(\tau),\gamma(\tau),\lambda(\tau)\}. \tag{26}\]
where the input \(\mathcal{I}[0,t]=\{\mathbf{v}_{0}(\tau),c_{0}(\tau):\tau\in[0,t]\}\) is specified over the time interval \([0,t]\) and the output \(\mathcal{O}(t)=\{\mathbf{K}^{*}(t),\mathbf{D}^{*}(t),\bar{\mathbf{v}}(t), \gamma(t),\lambda(t)\}\) at time \(t\). We seek an approximation in the form of a parametrized map
\[\Psi:\mathcal{I}[0,t]\times\mathbb{R}^{p}\rightarrow\mathcal{O} \tag{27}\]
and train it using data \(\{\mathcal{I}^{n},\mathcal{O}^{n}\}_{n=1}^{N}\) that is generated using numerical simulation of \(\Phi\). In other words, we shall postulate a form for \(\Psi\) and find the parameters \(\Theta^{*}\) that minimize a loss function for data generated by repeated solutions of the core-scale problem.
### Recurrent neural operator
There are two issues we have to address in postulating an approximation \(\Psi\). First, the map \(\Phi\) (and hence \(\Psi\)) has as its input a function defined on an interval of time. Thus our approximation has to be an operator. One idea is to discretize the functions in time and then find a neural network approximation with the discretized function as input. Unfortunately, this approximation will depend on the discretization (time step), and hence can only be used at that discretization. However, it is natural in a multi-scale setting to use different discretizations for the core scale problem (generating data) and the geological scale problem. Further, one may use an adaptive discretization in the geological scale problem. For these and other reasons, we want the approximation to be independent of the discretization. Second, the output at time \(t\) depends on the history of the input. So, we want our map to be history-dependent.
Following experience and practice in continuum physics, we postulate that the history can be encapsulated in \(k\) state or internal variables \(\{\xi_{\alpha}\}_{\alpha=1}^{k}\) that evolve in time. Then, following recent work in the multi-scale modeling of mechanical properties of materials [7, 21], we postulate \(\Psi\) to be a recurrent neural operator:
\[\Psi:\begin{cases}\mathcal{O}(t)=f\left(\mathcal{I}(t),\{\xi_{\alpha}(t)\}_{ \alpha=1}^{k};\Theta\right),\\ \dot{\xi}_{i}(t)=g_{i}\left(\mathcal{I}(t),\{\xi_{\alpha}(t)\}_{\alpha=1}^{k} ;\Theta\right),\quad i=1,\ldots,k\end{cases} \tag{28}\]
where \(f,g_{i}\) are feed-forward deep neural networks parametrized by \(\Theta\) (weights and biases). The architecture (28) is formulated to be in continuous time. To implement it with time discretization, we use a backward Euler discretization:
\[\begin{cases}\mathcal{O}^{n}=f\left(\mathcal{I}^{n},\{\xi_{\alpha}^{n}\}_{ \alpha=1}^{k}\right)\\ \xi_{i}^{n}=\xi_{i}^{n-1}+(\Delta t_{n})g_{i}\left(\mathcal{I}^{n},\{\xi_{ \alpha}^{n-1}\}_{\alpha=1}^{k}\right),\quad i=1,\ldots,k.\end{cases} \tag{29}\]
Note that \(f,g_{i}\) and the internal variables, and therefore the approximation, are independent of the discretization.
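As an illustration, a minimal PyTorch sketch of the discretized architecture (29) is given below. The class name, layer sizes and tensor layout are our own assumptions for exposition, not the authors' code; the two feed-forward networks play the roles of \(f\) and \(g_{i}\), and the loop carries the internal variables \(\xi\) forward in time.

```python
import torch
import torch.nn as nn

class RNO(nn.Module):
    """Minimal sketch of the recurrent neural operator (29).

    `f_net` maps (current input, internal variables) to the output and
    `g_net` gives the rate of the internal variables; both are plain
    feed-forward networks as in the text. All sizes are illustrative."""

    def __init__(self, dim_in, dim_out, k=1, width=200, depth=4):
        super().__init__()

        def mlp(d_in, d_out):
            layers, d = [], d_in
            for _ in range(depth - 1):
                layers += [nn.Linear(d, width), nn.SELU()]
                d = width
            return nn.Sequential(*layers, nn.Linear(d, d_out))

        self.k = k
        self.f_net = mlp(dim_in + k, dim_out)  # f in (29)
        self.g_net = mlp(dim_in + k, k)        # g_i in (29)

    def forward(self, inputs, dt):
        # inputs: (batch, steps, dim_in); dt: (steps,) time increments
        xi = inputs.new_zeros(inputs.shape[0], self.k)
        outputs = []
        for n in range(inputs.shape[1]):
            z = torch.cat([inputs[:, n], xi], dim=-1)
            xi = xi + dt[n] * self.g_net(z)    # update internal variables
            outputs.append(self.f_net(torch.cat([inputs[:, n], xi], dim=-1)))
        return torch.stack(outputs, dim=1)     # (batch, steps, dim_out)
```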
The number \(k\) of internal variables has to be chosen _a priori_, but the actual internal variables are identified from the data as a part of the learning. As noted, they encapsulate the history dependence. They do not necessarily have any intrinsic physical meaning. Indeed, note that the form of the architecture (28) is invariant under the reparametrization \(\xi^{\prime}=\Xi(\xi)\) for any diffeomorphism \(\Xi\). In some special examples, it is possible to choose a parametrization so that the internal variables are interpretable [21]; however, this is not always the case. We refer the reader to [21] for a discussion of these and other aspects of this architecture.
### Data and training
We generate the data by solving the core scale problem (8) over some interval \([0,T]\) to yield our data in the form \(\{\mathcal{I}^{n}[0,T],\mathcal{O}^{n}[0,T]\}_{n=1}^{N}\). To do so, we have to sample the inputs \(\{\mathcal{I}^{n}[0,T]\}_{n=1}^{N}\) in a manner rich enough to represent the actual trajectories encountered in the geological scale model. Broadly, we anticipate trajectories of velocity and concentration that can vary over time, and also change slope as some region gets clogged or fully dissolved. So we use the following strategy. We take our interval \([0,T]\) and divide it into \(M\) sub-intervals \(\{[t^{m-1},t^{m}]\}_{m=1}^{M}\) with \(t^{m}\leq t^{m+1},\ t^{0}=0,\ t^{M}=T\), where \(\{t^{m}\}_{m=1}^{M-1}\) are chosen from a uniform distribution (and relabelled to be increasing). We then set each component \(\mathcal{I}_{i}\) of the input at times \(\{t^{m}\}_{m=1}^{M-1}\)
\[(\mathcal{I}_{i})(t^{m})=(\mathcal{I}_{i})^{m-1}+\nu^{m}\mathcal{I}_{i}^{\max }\sqrt{t},\ \ \text{with}\ \ \ i=1\ \ \text{for}\ \ c_{0},\ \ i=1,\cdots,d\ \ \text{for}\ \ \mathbf{v}_{0} \tag{30}\]
where \(\nu^{m}\in\{-1,1\}\). We then obtain \(\mathcal{I}_{i}[0,T]\) via a cubic spline interpolation. We refer the reader to [21, 20] for a discussion. We clarify that \(\{t^{m}\}\) is distinct from the time steps used for generating the data. We consider \(T=1\), \(M=5\), and use 200 time-steps to calculate the solution.
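A possible realization of this sampling strategy is sketched below in Python/SciPy. The knot-value update is our reading of (30), and the scalings, function name and defaults are illustrative assumptions rather than the authors' script.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sample_trajectory(T=1.0, M=5, I_max=1.0, n_steps=200, rng=None):
    """Sample one input component: random knot times on [0, T],
    sign-randomized sqrt-in-time increments at the knots as in (30),
    then a cubic spline through the knots."""
    rng = rng or np.random.default_rng()
    t_knots = np.concatenate(([0.0], np.sort(rng.uniform(0.0, T, M - 1)), [T]))
    vals = np.zeros(M + 1)
    for m in range(1, M + 1):
        nu = rng.choice([-1.0, 1.0])
        vals[m] = vals[m - 1] + nu * I_max * np.sqrt(t_knots[m])
    t = np.linspace(0.0, T, n_steps)
    return t, CubicSpline(t_knots, vals)(t)
```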
We emphasize that the data provided to the RNO is at the geological scale: inputs (velocity and concentration history at a point), and output (permeability, diffusivity, advection velocity, specific area, and pore volume fraction). There is no information about the pore scale. The internal variables are inferred from this data as a part of the training process.
After generating the data, we proceed to train the RNO. The training process involves sequentially feeding the input data into the network, computing outputs at each time step, and comparing them to the target values during the forward propagation. We then use the backpropagation through time to calculate gradients over the entire sequence, and optimize the parameters.
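Schematically, the training step could look as follows; this is an assumed minimal full-batch setup, not the authors' code. The loss is the relative misfit (31) with the time integral replaced by a sum over steps, and autograd unrolls the recurrence, which amounts to backpropagation through time.

```python
import torch

def train(rno, inputs, targets, dt, epochs=1000, lr=1e-3):
    """Train an RNO (as sketched above) on normalized trajectories.
    inputs: (batch, steps, dim_in), targets: (batch, steps, dim_out)."""
    opt = torch.optim.Adam(rno.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        pred = rno(inputs, dt)
        num = ((pred - targets) ** 2).sum(dim=(1, 2))   # per-trajectory misfit
        den = (targets ** 2).sum(dim=(1, 2))
        loss = (num / den).mean()                       # relative loss, cf. (31)
        loss.backward()                                 # backprop through time
        opt.step()
    return rno
```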
### Results
We now demonstrate the ability of the RNO to approximate the core-scale problem. We consider an initial microstructure shown in Figure 2. We generate our data by solving the core-scale problem described in Section 2.2.2 using a numerical algorithm described elsewhere [14].
We consider a fully connected 4-layer neural network, with each layer consisting of 200 nodes. We use the nonlinear activation function scaled exponential linear unit (SELU) [16], and optimize the parameters using the ADAM optimization algorithm [15]. We consider the following loss function
\[\mathcal{L}=\frac{1}{D_{\text{train}}}\sum_{d=1}^{D_{\text{train}}}\frac{\int_{0}^{T}\left|\overline{\mathcal{O}}_{d}^{\text{truth}}-\overline{\mathcal{O}}_{d}^{\text{approx}}\right|^{2}dt}{\int_{0}^{T}\left|\overline{\mathcal{O}}_{d}^{\text{truth}}\right|^{2}dt}, \tag{31}\]
where \(d\) indexes the trajectory in the training dataset and \(\overline{\mathcal{O}}\) is the normalized output. To compute this, we use min-max normalization on each physical component of the output,
\[(\overline{\mathbf{D}}_{ij}^{*})_{d}=\frac{(\mathbf{D}_{ij}^{*})_{d}-\min_{p,q,r}(\mathbf{D}_{pq}^{*})_{r}}{\max_{p,q,r}(\mathbf{D}_{pq}^{*})_{r}-\min_{p,q,r}(\mathbf{D}_{pq}^{*})_{r}} \tag{32}\]
and so forth. We define and train the RNO with \(\log\mathbf{K}^{*}\) instead of \(\mathbf{K}^{*}\) to properly emphasize the almost clogged regime.
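For concreteness, this normalization amounts to the following small routine, shown here for a single physical quantity; per the remark above, the permeability would be passed through a logarithm first. The function name and array layout are illustrative.

```python
import numpy as np

def minmax_normalize(x):
    """Min-max normalization (32): rescale one physical output, e.g.
    an array of D* components of shape (samples, steps, d, d), by its
    global min and max over all samples, steps and components."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)   # keep (lo, hi) to undo later
```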
Recall that we have to fix the number of internal variables \(k\) before training. We do so by repeating the training for \(k=0,1,\ldots,4\). Thus we have five fully trained RNOs with differing numbers of internal variables.
The results are shown in Figure 3 in terms of the loss (31). Figure 3(a) shows how the training loss changes with the number of epochs for varying numbers of internal variables. We see that an RNO with no internal variables (i.e., no history dependence) is unable to reduce the training error beyond a certain point. However, RNOs with one or more internal variables can be trained to a high degree of accuracy. Figure 3(b) shows the test loss (the same loss as (31), but computed for the test data set) as a function of the number of internal variables for the trained RNO. We see that the trained RNO with no internal variables provides a very poor approximation, but RNOs with one or more internal variables provide excellent approximations. We repeat the training for various sizes of training data, and the average test loss is shown in Figure 3(c) for the case of a single internal variable. We see that the size (800) of our training data set is adequate, and the average test error is small in each component.
Figure 3(d) shows the average normalized test error of the various physical outputs. For each
Figure 2: Initial core-scale microstructure
physical output, we define the normalized test error as
\[\text{average normalized test error in }\mathbf{D}^{*}=\left(\frac{1}{D_{\text{test}}}\sum_{d=1}^{D_{ \text{test}}}\frac{\int_{0}^{T}\left|(\mathbf{D}^{*})_{d}^{\text{truth}}-( \mathbf{D}^{*})_{d}^{\text{approx}}\right|^{2}dt}{\int_{0}^{T}\left|(\mathbf{D }^{*})_{d}^{\text{truth}}\right|^{2}dt}\right)^{1/2} \tag{33}\]
and so forth. We observe that the average normalized test error is about 1% for each of the physical quantities, except for the effective advection velocity \(\bar{\mathbf{v}}\), where the error is about 6% for a trained RNO with one or more internal variables. The test includes cases where the effective advection velocity is zero up to machine precision, and these lead to large apparent relative errors.
Figure 4 elaborates on the results by focusing on a typical trajectory chosen arbitrarily from the test data set. Figures 4(a,b) show the input, while Figures 4(c-g) compare the ground truth and RNO predictions for the outputs: permeability (\(\mathbf{K}^{*}\)), diffusivity (\(\mathbf{D}^{*}\)), advection velocity (\(\bar{\mathbf{v}}\)), specific area (\(\gamma\)), and pore volume fraction (\(\lambda\)). We see excellent agreement.
Finally, we examine the time discretization independence of the trained RNO. The RNO is initially trained with time step \(\Delta t\). Figure 5 shows the predictions of the RNO for permeability and diffusivity components evaluated with different time steps - \(0.25\Delta t\), \(0.5\Delta t\), \(\Delta t\), and \(2\Delta t\) - all for the same input trajectory. We see that the results are independent of the time step.
In summary, we conclude that an RNO with one internal variable is able to provide an excellent approximation to the solution operator of the core scale model.
Figure 3: Training and testing the RNO. (a) Training error vs. training epochs, (b) Average test loss of the trained RNOs vs. the number of hidden variables, (c) Average test loss of an RNO with one internal variable vs. training set size. (d) Average normalized test error vs. the number of internal variables.
## 4 Multiscale simulation
We now consider a geological scale simulation, but one that constantly updates the properties from core-scale calculations according to the framework described in Section 2. However, instead of solving the core scale problem at each point at each instant, we use the trained neural approximation described in Section 3 as a surrogate for the core scale problem at each point at each instant. Of particular interest is to understand how the formation and its properties, as well as the flow, change over long periods of time, and how such changes are magnified by heterogeneities in the formation. We implement the geological scale problem with the Python finite element library FEniCS, using an unstructured mesh with triangular elements.
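To fix ideas, the legacy-FEniCS sketch below assembles a single pressure solve of the form obtained from (7)\({}_{1}\) with a scalar stand-in for \(\mathbf{K}^{*}\). The mesh, boundary markers and the constant permeability are placeholders; in the actual scheme \(\mathbf{K}^{*}\) is a tensor field refreshed from the RNO surrogate at every time step.

```python
from fenics import (RectangleMesh, Point, FunctionSpace, TrialFunction,
                    TestFunction, Function, DirichletBC, Constant,
                    dot, grad, dx, solve)

mesh = RectangleMesh(Point(0, 0), Point(8, 4), 128, 64)
V = FunctionSpace(mesh, "P", 1)
p, q = TrialFunction(V), TestFunction(V)

K = Function(V)                       # scalar stand-in for K*(t, x)
K.vector()[:] = 1.0                   # would be filled from the RNO

a = K * dot(grad(p), grad(q)) * dx    # weak form of -div(K grad p) = 0
L = Constant(0.0) * q * dx
bcs = [DirichletBC(V, Constant(1e5), "near(x[0], 0.0)"),   # inlet pressure
       DirichletBC(V, Constant(1e4), "near(x[0], 8.0)")]   # outlet pressure

p_sol = Function(V)
solve(a == L, p_sol, bcs)             # Darcy velocity: v0 = -(K/nu) grad(p_sol)
```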
### Reactive flow with uniform initial properties
We first consider an example with uniform initial material properties. The geometry of the geological formation and the microstructure are shown in Figure 6: a solution with a high concentration of solute is injected at high pressure through the well on the left and removed through the well on the right.
Figure 4: Input test trajectories, (a) concentration, (b) velocity trajectories, comparison of estimated values of the RNO with one hidden variable with ground truth, (c) permeability, (d) diffusivity, (e) advection velocity, (f) specific area and (g) pore volume fraction.
Figure 5: Comparison of estimated values of the RNO, permeability components (a)-(c), and diffusivity components (d)-(f) considering various time steps.
Figure 6: Geometry of geological domain.
The properties and boundary conditions are as follows.
\[\text{Pe}=1000,\ \ \text{Da}=0.001,\ \ c^{*}=0.5,\]
left well: \(p_{0}=10^{5},\ \ c_{0}=0.6,\)
right well: \(p_{0}=10^{4},\ \ c_{0}=0.4,\)
rest of the boundary: zero flux.
Figure 7 shows three snapshots of the pressure and concentration profile at the geological scale, at \(t_{1}\), \(t_{2}\), and \(t_{3}\), associated with time steps 1, 110, and 220, respectively. The pressure and concentration change gradually away from the wells at early times. However, at later times, the pressure and concentration change rapidly in the vicinity of the wells. Recall that the equations (7) at the geological scale are steady state equations, i.e., they do not involve time explicitly. Therefore, the evolution with time is related to the change in properties related to the reconstruction of the porous medium at the core scale.
The concentration prescribed at the inlet exceeds the equilibrium concentration, resulting in the gradual precipitation of solutes on the solid/fluid interface in this region. Conversely, the concentration prescribed at the outlet lies below the equilibrium concentration, leading to a dissolution of the solid structure close to the outlet. As illustrated in Figure 7, precipitation near the inlet decreases the pore volume fraction, which decreases the permeability and diffusivity values while increasing the pressure gradient over time. This reduction in permeability and diffusivity reduces flow and chemical transport in the medium, eventually leading to clogging.
We explore this further at the three points marked in Figure 6: Points 1 and 3 are in the vicinity of the left well (inlet) and the right well (outlet), respectively; Point 2 is located between the inlet and outlet. The changes in properties and pore structure with time are shown in Figure 8 as the solid curves. The left column of the figure shows the changes at Point 1, close to the inlet. Deposition on the surface of the solid leads to a decrease in permeability, diffusivity, advection velocity and pore volume fraction. The decrease is steady initially, but at some time the
Figure 7: Variation of concentration (a)-(c), and pressure (d)-(f), and pore volume fraction (g)-(i) profiles in the geological formation due to chemical reactions.
Figure 8: Comparison of the (a) permeability, (b) diffusivity, (c) advection velocity, (d) pore volume fraction, and (e) specific area components obtained from the RNO with ground truth, and (f) microstructure geometry at 230th time step, at three random points.
permeability, diffusivity and advection velocity effectively go to zero; this time coincides with the moment the pores are blocked by deposition. The porosity does not go to zero, but there is no change in its value beyond this time. The specific surface area initially increases, but saturates before full blocking. The deposition leads to an increase of surface area, but this is eventually balanced by a decrease as the pores are blocked and opposite sides of a pore begin to touch.
We see the opposite trends in the right column, corresponding to Point 3, which is close to the outlet. We see that the dissolution leads to an increase of permeability, diffusivity, advection velocity and pore volume fraction, and a decrease of specific surface area. The middle column shows the results for Point 2, located between the inlet and outlet. This point experiences only slight changes in microstructure and effective properties, including permeability, diffusivity, pore volume fraction, and specific area. The reduction in the advection velocity at Point 2 primarily results from the decrease in the flow reaching this point due to the clogging of the inlet.
Figure 8 provides an _a posteriori_ evaluation of the error in using the RNO surrogate. We take the velocity and concentration experienced by the 3 points during the macroscale calculations. We compare the output of our RNO surrogate (solid lines) and the results of core scale calculations (dashed lines) for these histories. We observe excellent overall agreement. This tells us that our RNO surrogate performs well not only for the histories in the distribution used to train the model, but also real histories experienced in actual calculations.
We now turn to the overall flow through the geological formation and the amount of material deposited in the formation as a function of time. This is shown in the center of Figure 9 for the parameters chosen above. We see that the flow decreases and the amount of material deposited increases steadily until they saturate. This is consistent with the observations above.
Figure 9 also shows a parameter study for various Peclet and Damkohler numbers that reveals an interesting interplay between the pore and geological scale phenomena. Recall that the core scale problem does not depend on the exact values of these non-dimensional quantities (as long as they satisfy the scaling). It follows that the RNO is independent of them, and we do not have to retrain it for each pair of numbers. Instead, these non-dimensional quantities only appear in the effective geological scale equations. So, we repeat the geological scale calculation as described above for three distinct values (differing by a factor of 2) of both these non-dimensional quantities. We see that the flow decreases and the amount of material deposited increases steadily until they saturate in each case. The change in flow and amount of material deposited increases significantly with increasing Peclet number, but only slightly with increasing Damkohler number.
Recall that we have a very high Peclet number and a small Damkohler number. This means that we are in the reaction-controlled regime. Therefore, one could expect that the overall deposition rate would be more sensitive to the Damkohler number, and less sensitive to the Peclet number. However, we see the opposite trend in Figure 9. This is because the morphological changes in the formation lead to clogging, and therefore the flow is transport-limited at the macroscopic scale. The inset in each graph in Figure 9 shows a snapshot of the concentration at a fixed time: we see that the Peclet number induces a greater effect than the Damkohler number. In other words, even when the core scale is reaction-limited, the formation can become transport-limited.
Having established the accuracy of the approach and its efficacy for parameter studies, we study the computational cost of the proposed approach and compare it with both the cost of the classical empirical constitutive model and the cost of concurrent multiscale models. The results are shown in Table 1 for the base simulation above. These calculations were conducted on a single core of an Intel Skylake CPU with a clock speed of 2.1 GHz and an NVIDIA P100 GPU with 3584 cores and a 1.3 GHz clock speed. The classical empirical model is described in [5]. We find that the computational cost of solving the macroscopic problem using the trained RNO is comparable to the cost of classical constitutive relations. These are significantly (by a factor of \(10^{5}\)) smaller than the estimated cost
of the concurrent multiscale approach (we estimate this by using the cost of the unit cell problem and multiplying it by the product of the number of time steps and the number of spatial grid points). The proposed approach has a one-time off-line cost of generating the data and fitting the RNO. This is also smaller (by a factor of about \(10^{2}\)) than solving a single concurrent multiscale calculation. This off-line cost can further be reduced by parallelization.
In summary, we conclude that our approach is able to provide the accuracy of a concurrent multiscale model at a computational cost comparable to that of a classical constitutive relation.
### Reactive flow with non-uniform initial properties
We now turn to examples with a geological formation characterized by initially non-uniform properties, as shown in Figure 10. This heterogeneity is defined by the initial value of the internal variable in the geological scale problem.
#### 4.2.1 High initial permeability and diffusivity inclusions
We consider a \(4\times 8\) m domain with embedded blocks of higher initial permeability and diffusivity in the geometry illustrated in Figure 10. A horizontal flow is introduced from the left boundary and
Figure 9: Variation of flow flux (\(q\), solid line) in inlet and outlet, and chemical flux difference (\(\Delta J\), dashed line) between inlet and outlet, considering various values of Peclet and Damkohler numbers.
withdrawn from the right. The parameters are the same as in Section 4.1, and boundary conditions are
\[\begin{array}{ll}\mbox{left boundary: }p_{0}=10^{5},&c_{0}=0.4,\\ \mbox{right boundary: }p_{0}=10^{4},&c_{0}=0.5,\\ \mbox{rest of the boundary: zero flux}.\end{array}\]
Figures 11(a) and (b) show the temporal evolution of the concentration and pressure fields over time due to changes in the morphology. The left (inlet) concentration is lower than the equilibrium concentration, and this leads to dissolution at that end. The right (outlet) concentration is at the equilibrium concentration, and so we do not see significant morphological changes there. Note that the flow and concentration are not uniform across the (vertical) cross-section even though the boundary conditions are. This is a result of the inclusions. The flow preferentially enters the regions with high permeability and diffusivity, and seeks to connect these regions together. The greater flow leads to greater chemical reaction and further increases permeability; this is clear from the evolution of the pore volume fraction shown in Figure 11(c). This further aids the channeling from one inclusion to another.
The differential change in permeability, diffusivity, and advection velocity is emphasized in Figure 12, which shows the evolution of these quantities at two points (marked 1 and 2 in Figure 10). The solid lines correspond to Point 1, located within the high permeability block, while the dashed lines represent the change of effective properties at Point 2, located outside the high permeability block. As expected, the rate of change in effective properties at Point 1 is significantly greater than at Point 2.
\begin{table}
\begin{tabular}{l l l} \hline Method & Calculation & Cost \\ \hline Classical constitutive relations & Geological scale calculation & 800 (CPU) \\ \hline Proposed method & Geological scale calculation & 900 (CPU) \\ & Off-line data (sequential) & \(3.6\times 10^{6}\) (CPU) \\ & Off-line data (parallel) & \(4\times 10^{4}\) (GPU) \\ & Off-line training & \(5\times 10^{3}\) (GPU) \\ \hline Concurrent multiscale (estimated) & Geological scale calculation & \(1.64\times 10^{7}\) (CPU) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of computational cost (wall-clock time in seconds)
Figure 10: Domain geometry and location of blocks.
Figure 11: Variation of concentration (a), pressure (b), and pore volume fraction (c) fields, considering non-uniform initial material properties.
Figure 12: Change in effective properties, (a) permeability, (b) diffusivity, and (c) advection velocity, at points 1 (solid lines), and point 2 (dashed lines).
#### 4.2.2 Low initial permeability and diffusivity inclusions
We now consider the complementary situation with blocks of lower initial permeability and diffusivity. The parameters are the same as in Section 4.1, and the boundary conditions are the same as in Section 4.2.1. Figures 13(a)-(b) show that the flow moves around the blocks, following regions with higher initial permeability and diffusivity. Consequently, there is more reaction and dissolution in the matrix compared to the blocks (Figure 13(c)). This further channels the flow between the blocks. Figure 14 shows the change in material properties at Points 3 and 4 (marked in Figure 10). We observe a substantial change in effective properties at Point 4, outside the blocks, and relatively little change at Point 3, inside.
Figure 14: Change in effective properties, (a) permeability, (b) diffusivity, and (c) advection velocity, at points 3 (dashed lines), and point 4 (solid lines).
Figure 13: Variation of concentration (a), pressure (b), and pore volume fraction (c) fields, considering non-uniform initial material properties.
## 5 Conclusion
The transport of water and solutes through permeable geological formations is a complex multiscale phenomenon. In this work, we have proposed a framework that harnesses the power of continuum modeling and machine learning to address this complexity with accuracy at reasonable computational cost. We have demonstrated the framework on flow through permeable geological formations with advection-dominated transport and volume reactions.
We begin with an asymptotic analysis of the governing equations that enables us to split the problem into two interconnected problems: one at the scale of the geological formation and another at the scale of a core. A key idea in this analysis is the invocation of a drifting coordinate system to capture the advection at the core scale. We then introduce a recurrent neural operator (RNO) to approximate the solution operator of the core-scale problem. This consists of two feed-forward neural networks and internal variables. The neural networks are trained and the internal variables are identified from data that is generated by repeatedly solving the core-scale problem. The key features of this neural architecture are that it is consistent with common formulations of continuum physics, that it is relatively simple, and that it is independent of the time-discretization. We demonstrate that it is able to accurately capture the behavior of the core scale over long periods of time, including the morphological changes in the pores and the resulting change in effective permeability, diffusivity, advective velocity, pore volume fraction and specific area. Finally, we solve the problem of transport and morphological changes at the scale of the geological formation by using the trained RNO as a surrogate for the small scale problem. We thus obtain the accuracy of concurrent multiscale simulations at a cost that is comparable to classical constitutive relations. We demonstrate the ability of this approach to learn subtle features of the interaction between the scales, including the change from the reaction-limited to the transport-limited regime due to clogging, and the positive feedback of channeling in heterogeneous situations.
We now emphasize a few notable aspects of the proposed approach. First, our RNO neural approximation is able to capture morphological and property changes over long periods of time. It is formulated in a time-continuous manner and discretized as necessary for training and use. It follows that the approximation is independent of the discretization. This is important because it is common to use different time discretizations for the core scale problem used to generate data and the application at the geological scale. It also enables the use of data generated at different discretizations. Second, the two scale formulation makes the core scale problem independent of the physical characteristics of the flow and reaction rate, specifically the Peclet and the Damkohler numbers (as long as we are in the advection dominated regime with slow reactions). This means that we need to generate data and train the RNO only once, and can use the method for different situations as these quantities change. Third, we not only demonstrate accuracy over the distribution used to train the RNO, but also _a posteriori_ over the actual histories encountered in the geological scale calculations. Finally, the approach is highly transferable. In this work, we used examples in two dimensions to demonstrate the framework with modest computational cost. However, the approach holds in three dimensions. Similarly, one can incorporate other phenomena including, for example, multi-phase flows, more complex chemistry, poroelasticity and phase change (melting), as long as we can use scale separation. Importantly, one can extend this to more than two scales as long as they interact pairwise.
We close with a discussion of a few limitations and avenues for future work. First, the approach requires us to train the RNO with a starting pore morphology. This is not an issue as long as the core scale is chosen to be large enough to be statistically representative of the underlying medium. However, this adds to the computational cost. So, one may consider training the RNO on an ensemble of cores. Second, while we demonstrate _a posteriori_ accuracy, it would be useful to have
a systematic approach to quantifying the overall uncertainties. Such a quantification may also enable the use of active learning, where we initially train the RNO over synthetic samples as we do in this work, but then progressively add more data based on examples we encounter. Third, in this work, we only use geological-scale information obtained by averaging the results of core-scale simulations to train the RNO. However, we have access to core-scale information. It is possible that this data has insights that may lead to a more robust and efficient training procedure. Fourth, we have exclusively used data generated numerically to train the RNO. It would be interesting to use a combination of experimental and computational data. Finally, it remains to apply the framework established in this work to actual geological problems. All of this remains a topic of current and future research.
## Data availability
The data is available at CaltechData through [https://doi.org/10.22002/yd0c5-q5s36](https://doi.org/10.22002/yd0c5-q5s36)
## Acknowledgments
We are delighted to acknowledge numerous helpful discussions with Professor Xiaojing (Ruby) Fu. We gratefully acknowledge the financial support of the Resnick Sustainability Institute at the California Institute of Technology. KB also acknowledges the support of the Army Research Office through grant number W911NF-22-1-0269. The simulations reported here were conducted on the Resnick High Performance Computing Cluster at the California Institute of Technology.
|
2306.17601 | Next-to-leading power corrections to the event shape variables | We investigate the origin of next-to-leading power corrections to the event
shapes thrust and $c$-parameter, at next-to-leading order. For both event
shapes we trace the origin of such terms in the exact calculation, and compare
with a recent approach involving the eikonal approximation and momentum shifts
that follow from the Low-Burnett-Kroll-Del Duca theorem. We assess the
differences both analytically and numerically. For the $c$-parameter both exact
and approximate results are expressed in terms of elliptic integrals, but near
the elastic limit they exhibit patterns similar to the thrust results. | Neelima Agarwal, Melissa van Beekveld, Eric Laenen, Shubham Mishra, Ayan Mukhopadhyay, Anurag Tripathi | 2023-06-30T12:21:48Z | [http://arxiv.org/abs/2306.17601v1](http://arxiv.org/abs/2306.17601v1) | # Next-to-leading power corrections to the event shape variables
###### Abstract
We investigate the origin of next-to-leading power corrections to the event shapes thrust and \(c\)-parameter, at next-to-leading order. For both event shapes we trace the origin of such terms in the exact calculation, and compare with a recent approach involving the eikonal approximation and momentum shifts that follow from the Low-Burnett-Kroll-Del Duca theorem. We assess the differences both analytically and numerically. For the \(c\)-parameter both exact and approximate results are expressed in terms of elliptic integrals, but near the elastic limit they exhibit patterns similar to the thrust results.
###### Contents
* I Introduction
* II Next-to-leading power terms and kinematical shifts
* III Thrust
* III.1 Thrust distribution at NLO
* III.2 Next-to-leading power corrections for thrust from kinematical shifts
* III.3 Numerical assessment of the shifted kinematics approximations for thrust
* IV \(C\)-parameter
* IV.1 The \(c\)-parameter distribution at NLO
* IV.2 Next-to-leading power corrections to \(c\)-parameter from shifted kinematics
* IV.3 Numerical assessment of the shifted kinematics approximations for \(c\)-parameter
* V Conclusions
## I Introduction
Providing precise estimates for cross-sections in perturbative QCD is needed to match the ever increasing precision of collider physics measurements. Two complementary directions are pursued in this endeavour. In one, exact higher order calculations in the coupling \(\alpha_{s}\) are performed, and the corresponding methods developed. In the other, all-order results are derived in certain kinematic limits, where associated classes of logarithmic terms are enhanced; the latter can often be resummed to all orders, using a varied and ever expanding set of methods. A highly relevant region in this regard is the near-elastic region (which we loosely refer to here also as the threshold region), where the phase space for emitted particles is limited. In such a situation, the cancellation of infrared singularities, guaranteed by the KLN theorem [1; 2], is incomplete in the sense that it leaves large logarithmic remainders at any order in perturbation theory. Our analysis in this paper is relevant for the second direction.
To be more specific, if \(\xi\) is a dimensionless kinematic variable, such that \(\xi\to 0\) towards the elastic region, the corresponding differential cross-section has the generic form
\[\frac{d\sigma}{d\xi}\,=\,\sum_{n=0}^{\infty}\left(\frac{\alpha_{s}}{\pi}\right)^{ n}\,\left[\sum_{m=0}^{2n-1}c_{nm}^{\rm LP}\left(\frac{\log^{m}\xi}{\xi}\right)_{+} \,+\,c_{n}^{(\delta)}\delta(\xi)+\,\sum_{m=0}^{2n-1}c_{nm}^{\rm NLP}\,\log^{m} \xi\,+\,\dots\right]\,. \tag{1}\]
The first term on the right in the above equation is well-known to originate from soft and/or collinear radiation and, together with the second term, makes up the _leading power_ (LP) terms. Much is known about LP terms to arbitrary order, and there have been numerous approaches towards their resummation [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] and power correction studies [19; 20]. Reviews on some of these different approaches can be found in [21; 22; 23; 24; 25].
The last term on the right represents _next-to-leading power_ (NLP) terms. These originate from both soft gluon and soft quark emissions. Although suppressed by a power of \(\xi\), they can be relevant, since they grow logarithmically towards threshold. In contrast to the LP terms, the precise organization of these NLP terms to all orders and arbitrary logarithmic accuracy is not yet clear. In this paper we focus on such NLP terms for two event shapes in \(e^{+}e^{-}\) collisions: thrust and the \(C\)-parameter. These observables are interesting in this regard because, in contrast to most earlier studies, all QCD effects reside in the final state, and because their definition involves special phase space constraints that have not been considered so far [26; 27; 28; 29]. For these observables our aim is to trace the origin of the NLP terms near the elastic limit, to examine to what extent there is a common pattern of NLP terms, and to assess their size.
Patterns among NLP terms have been studied for various processes [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. For a number of observables their numerical contribution can be significant [43; 44; 45; 46; 47; 48]. The all-order resummation of NLP terms has been pursued through different approaches, such as the diagrammatic and path-integral approaches given in [49; 50], while a physical-kernel approach is pursued in [51; 52; 53]. Within the direct QCD formalism, the development of factorization theorems for NLP terms, extending those at LP, is studied in [30; 31; 54; 55; 56; 57; 58] (for NLP factorization see [59]). The study of NLP effects in the framework of SCET has many aspects, with active work on the operators contributing at NLP level [60; 61; 62; 63; 64; 65; 66; 67], the development of factorization [42; 33; 68; 43], and explicit studies of physical observables [69; 70; 71; 72; 73; 36; 74; 75; 76; 77].
Event shapes, which probe final state dynamics through geometrical constraints, have long been used to develop QCD ideas [78; 79; 80; 81], e.g. the development of resummation and fixed-order computations [82; 83; 84; 85; 86; 87; 88; 82; 83; 87]. They have also played a vital role in extracting the strong coupling constant [88; 89; 90; 91]. Here we examine to what extent NLP terms for two event shapes can be predicted using the kinematical shift method [46; 92], as well as the soft quark emission approximation.
Our paper is organized as follows. In section II we recall how NLP terms at NLO follow from the eikonal approximation combined with kinematical shifts. In section III we compute the thrust distribution in the shifted approximation to assess the relevance of NLP terms. We perform a similar, though considerably more involved, assessment for the \(C\)-parameter in section IV. We conclude in section V, while the appendices contain certain technical aspects of elliptic functions and a summary table of our results.
## II Next-to-leading power terms and kinematical shifts
It was recently shown [92] that NLP terms, at next-to-leading order (NLO) and to the extent that they are due to soft gluon radiation, may be derived efficiently using a combination of the eikonal approximation and kinematical shifts, in first instance for colour singlet final states. The method holds for matrix elements and thus for differential distributions, and was subsequently extended to final state radiation in [46]. It rests in essence on the Low-Burnett-Kroll-Del Duca (LBKD) theorem [93; 94; 95], which enables the expression of the one-gluon emission amplitude in terms of the elastic amplitude (even if the latter contains loops, such as in the case of Higgs, or multi-Higgs production). The NLO matrix element up to NLP accuracy can be written as a combination of scalar, spin, and orbital terms:
\[\mathcal{M}_{\rm NLP}\,=\,\mathcal{M}_{\rm scal}+\mathcal{M}_{\rm spin}+ \mathcal{M}_{\rm orb}\,. \tag{2}\]
The scalar term is essentially a multiplicative term containing the LP eikonal approximation. The spin-dependent term is of NLP accuracy, as the eikonal approximation cannot resolve emitter spin. The orbital term, also of NLP accuracy, involves derivative operators. These can be represented as a first-order Taylor expansion of kinematically shifted momenta. Combining these terms for the production of colour singlet particles yields the expression
\[\overline{\sum}|\mathcal{M}_{\rm shift}|^{2}\,= g_{s}^{2}N_{c}(N_{c}^{2}-1)\frac{2p_{1}\cdot p_{2}}{(p_{1}\cdot p_{3})(p_{2} \cdot p_{3})}\] \[\times|\mathcal{M}_{0}(p_{1}-\delta p_{1},p_{2}-\delta p_{2})|^{ 2}, \tag{3}\]
where \(|\mathcal{M}_{0}(p_{1},p_{2})|^{2}\) is the matrix element squared at the leading order (LO), and \(\overline{\sum}\) denotes the sum (average) over the final (initial) state spins and colours, \(p_{3}\) is the momentum of the emitted radiation, and \(p_{1}\), \(p_{2}\) are the
momenta of the particles already present at the Born level. The shifts in the momenta are given by
\[\delta p_{1}^{\mu} = -\frac{1}{2}\left(\frac{p_{2}\cdot p_{3}}{p_{1}\cdot p_{2}}p_{1}^{ \mu}-\frac{p_{1}\cdot p_{3}}{p_{1}\cdot p_{2}}p_{2}^{\mu}+p_{3}^{\mu}\right),\] \[\delta p_{2}^{\mu} = -\frac{1}{2}\left(\frac{p_{1}\cdot p_{3}}{p_{1}\cdot p_{2}}p_{2}^{ \mu}-\frac{p_{2}\cdot p_{3}}{p_{1}\cdot p_{2}}p_{1}^{\mu}+p_{3}^{\mu}\right). \tag{4}\]
Expressions (3) and (4) yield the dominant NLP contributions to the NLO matrix element. We now turn to examine the implications of this for the two event shapes.
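Before doing so, it is instructive to verify a simple property of eq. (4) numerically (a minimal numpy sketch, not part of the original derivation): the two shifts sum to \(-p_{3}\), so the shifted momenta \(p_{1}-\delta p_{1}\) and \(p_{2}-\delta p_{2}\) absorb the emitted momentum and overall momentum conservation is preserved exactly.

```python
import numpy as np

def mdot(a, b):
    # Minkowski product with signature (+,-,-,-)
    return a[0]*b[0] - np.dot(a[1:], b[1:])

def massless(E, theta, phi):
    # light-like four-momentum E*(1, n_hat)
    return E*np.array([1.0,
                       np.sin(theta)*np.cos(phi),
                       np.sin(theta)*np.sin(phi),
                       np.cos(theta)])

# arbitrary massless momenta; the identity checked below is purely algebraic
p1 = massless(1.0, 0.3, 0.1)
p2 = massless(0.8, 2.5, 1.7)
p3 = massless(0.2, 1.1, 4.0)

s12, s13, s23 = mdot(p1, p2), mdot(p1, p3), mdot(p2, p3)
dp1 = -0.5*((s23/s12)*p1 - (s13/s12)*p2 + p3)   # eq. (4)
dp2 = -0.5*((s13/s12)*p2 - (s23/s12)*p1 + p3)

print(dp1 + dp2 + p3)   # ~ [0 0 0 0]: delta p1 + delta p2 = -p3
```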
## III Thrust
Thrust [96] in \(e^{+}e^{-}\) collisions is defined as
\[T\,=\,\max_{\hat{\mathbf{n}}}\,\frac{\sum_{i}|\mathbf{p_{i}}\cdot\hat{\mathbf{ n}}|}{\sum_{i}E_{i}}\,, \tag{5}\]
where \(\mathbf{p_{i}}\) and \(E_{i}\) are the three-momentum and energy of the \(i^{\text{th}}\) particle present in the final state. The unit vector \(\hat{\mathbf{n}}\) that maximizes the sum in the numerator is the _thrust axis_. The range of thrust is \([1/2,1]\) where \(T=1/2\) for a spherically symmetric event and \(T=1\) for a pencil-like (dijet) event. It is an infrared-safe event shape variable, i.e., it is insensitive to soft emissions and collinear splittings. The LO reaction at the parton level is
\[e^{+}(p_{b})+e^{-}(p_{a})\to\gamma^{*}(q)\to q(p_{1})+\overline{q}(p_{2})\,, \tag{6}\]
where we assume all particles to be massless. For this case, \(T=1\). At NLO the real emission process is
\[e^{+}(p_{b})+e^{-}(p_{a})\to\gamma^{*}(q)\to q(p_{1})+\overline{q}(p_{2})+g(p _{3})\,. \tag{7}\]
The diagrams for this process are shown in fig. (1). For a three-body final state \(T\) takes values in the range \([2/3,1]\). The limit \(T=1\) is approached when either \((i)\) the emitted gluon is soft (\(p_{3}\to 0\)), \((ii)\) the quark or the anti-quark is soft (\(p_{1}\to 0\) or \(p_{2}\to 0\)), or \((iii)\) any two final state partons are collinear. It is standard practice to define the dimensionless energy fractions for the final state particles,
\[x_{i}\,=\,\frac{2E_{i}}{Q}\qquad(i=1,2,3)\,, \tag{8}\]
where \(Q\) is the total center of mass energy, with \(x_{1}+x_{2}+x_{3}\,=\,2\). One can readily derive the relations
\[(p_{2}+p_{3})^{2}\,= \,2p_{2}\cdot p_{3}\,=\,Q^{2}(1-x_{1}),\] \[(p_{1}+p_{3})^{2}\,= \,2p_{1}\cdot p_{3}\,=\,Q^{2}(1-x_{2}), \tag{9}\] \[(p_{1}+p_{2})^{2}\,= \,2p_{1}\cdot p_{2}\,=\,Q^{2}(1-x_{3}),\]
using momentum conservation. For a final state with three massless particles, eq. (5) takes the simple form
\[T\,=\,\max(x_{1},x_{2},x_{3})\,. \tag{10}\]
### Thrust distribution at NLO
The thrust distribution at NLO is given by
\[\frac{d\sigma}{dT}\,=\,\frac{1}{2s}\int d\Phi_{3}\overline{\sum}|\mathcal{M}( x_{1},x_{2})|^{2}\delta(T-\max(x_{1},x_{2},x_{3})), \tag{11}\]
where \(s\) is the center of mass energy squared. The matrix element squared for the process in eq. (7) is
\[\overline{\sum}|\mathcal{M}(x_{1},x_{2})|^{2}\] \[= \,8(e^{2}e_{q})^{2}g_{s}^{2}C_{F}N_{c}\frac{1}{3Q^{2}}\,\frac{x_{ 1}^{2}+x_{2}^{2}}{(1-x_{1})(1-x_{2})}\,, \tag{12}\]
where \(\alpha=\left(e^{2}/4\pi\right)\), \(e_{q}\) is the quark charge in units of the fundamental electric charge \(e\), \(\alpha_{s}=g_{s}^{2}/4\pi\), \(C_{F}=(N_{c}^{2}-1)/2N_{c}\), and \(N_{c}\) is the number of quark colours. The three-particle phase space measure, expressed in terms of the \(x_{i}\), reads
\[d\Phi_{3}\,= \,\frac{Q^{2}}{16(2\pi)^{3}}\,dx_{1}\,dx_{2}\,dx_{3}\,\delta(x_{1}+x _{2}+x_{3}-2). \tag{13}\]
The phase space in eq. (13) is depicted in fig. (2), with every point in the plane fulfilling the constraint \(x_{1}+x_{2}+x_{3}=2\). The region surrounding point \(A\) (\(C\)) corresponds to the soft anti-quark (quark) region, and the region around \(B\) to the soft gluon region. On line \(BA\), where \(x_{1}=1\), the anti-quark and gluon are collinear, recoiling against the quark, while on line \(BC\), where \(x_{2}=1\), the quark and the gluon are collinear. On line \(AC\), where \(x_{3}=1\), the quark and the anti-quark are collinear, with the gluon moving in the opposite direction. Along these three lines, \(T=1\).
To obtain the thrust distribution one integrates over the \(x_{1},x_{2}\) variables in eq. (11) with the appropriate limits:
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\,=\,\frac{2\alpha_{s}}{3 \pi}\int_{0}^{1}dx_{1}\int_{0}^{1-x_{1}}\!\!\!dx_{2}\,\,\frac{x_{1}^{2}+x_{2}^{2 }}{(1-x_{1})(1-x_{2})}\] \[\times\,\delta\left(T-\max(x_{1},x_{2},x_{3})\right)\,, \tag{14}\]
where \(x_{3}=2-x_{1}-x_{2}\), and \(\sigma_{0}(s)\) is the LO cross-section, given by
\[\sigma_{0}(s)\,=\,\frac{4\pi\alpha^{2}e_{q}^{2}\,N_{c}}{3s}\,, \tag{15}\]
For our purposes, we label the three distinct regions in the phase space integration in eq. (14), defined by \(x_{1}\), \(x_{2}\), or \(x_{3}\) being the largest, as follows:
* Region I: the quark has the largest energy (\(x_{1}>x_{2},x_{3}\)),
* Region II: the anti-quark has the largest energy (\(x_{2}>x_{1},x_{3}\)),
* Region III: the gluon has the largest energy (\(x_{3}>x_{1},x_{2}\)).
The contribution from region I is then
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\rm I} = \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{2}\int_{0}^{1-x_{2}}dx_{1} \,\frac{x_{1}^{2}+x_{2}^{2}}{(1-x_{1})(1-x_{2})} \tag{16}\] \[\times\delta(T-x_{1})\,.\]
With the appropriate limits of integration we have
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\rm I} = \frac{2\alpha_{s}}{3\pi}\int_{2(1-T)}^{T}dx_{2}\,\frac{T^{2}+x_{2}^{2 }}{(1-T)(1-x_{2})}. \tag{17}\]
Instead of \(T\), we shall mostly use the variable \(\tau\,=\,1-T\), which vanishes in the zero-radius dijet limit. The integration in eq. (17) leads to
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\rm I}\] \[= \frac{2\alpha_{s}}{3\pi}\bigg{[}\frac{3\tau^{2}+8\tau-3}{2\tau}+ \bigg{(}\frac{\tau^{2}-2\tau+2}{\tau}\bigg{)}\log\bigg{(}\frac{1-2\tau}{\tau} \bigg{)}\bigg{]}.\]
Expanding the above expression around \(\tau=0\) gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\rm I}\] \[= \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-3-4\log\tau}{2\tau}+2\log \tau+\frac{3\tau}{2}-\tau\log\tau+{\cal O}(\tau^{2})\bigg{)}.\]
The first term in the brackets forms the LP term (in \(\tau\)), with both leading-logarithmic (LL) and next-to-leading logarithmic (NLL) terms. The former derives from soft and collinear gluon emission, the latter from hard collinear gluon emission. The next term constitutes the NLP term, which is our main focus here. Subsequent terms are of NNLP accuracy and beyond. The result for the thrust distribution from region I is provided in eq. (19); however, this expression does not show the correspondence between the various kinematical configurations and the contributions they produce at LP and NLP. In order to extract such information, we shall split the
Figure 1: Feynman diagrams for the real emission of a gluon from the final state quark or anti-quark.
Figure 2: The Dalitz plot for the production of three massless particles. The phase space can be divided into three different regions depending upon which energy fraction is the largest. In region ABG, \(x_{1}\) is the largest of the three \(x_{i}\), while in region BGC (AGC), \(x_{2}\) (\(x_{3}\)) is the largest. In this figure, we set \(BA=BC=1\). Along the lines AB, BC, and AC one has \(T=1\).
integration result of eq. (17) into its upper and lower limit contributions separately, as these limits reflect the different regions of the phase space. The contributions from the upper (I,u) and lower (I,l) limits of \(x_{2}\) from region I are, respectively, after expansion around \(\tau\,=\,0\)
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I,u}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-2\log\tau}{\tau}+2+2\log\tau-\frac{\tau}{2}-\tau\log\tau+\mathcal{O}\left(\tau^{2}\right)\bigg{)}, \tag{20}\]
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I,l}} =\,\frac{2\alpha_{s}}{3\pi}\left(\frac{3}{2\tau}+2-2\tau+\mathcal{O}(\tau^{2}) \right). \tag{21}\]
Note that here and henceforth the lower limit term is to be _subtracted_ from the upper limit result. In region I, the upper limit of \(x_{2}\) for small \(\tau\) corresponds to a back-to-back quark and anti-quark configuration allowing the gluon to go soft while the lower limit corresponds to the quark and gluon being back-to-back, allowing the gluon to be hard. The LL terms at LP and NLP follow only from the upper limit (\(x_{2}\,=\,1-\tau\)), whereas the lower limit (\(x_{2}\,=\,2\tau\)) yields NLL terms both at LP and NLP. In fact, identical NLL contributions at NLP follow from both limits; they therefore cancel in the final result for region I.
Thus the upper and lower limit contributions unveil the relation between the particular kinematical configurations and their respective contributions at LP and NLP. To better comprehend the origin of LP and NLP terms, we will from here on first provide our full results and then their breakdown into upper and lower limit contributions.
In region II, due to the symmetry under the interchange of \(x_{1}\leftrightarrow x_{2}\), the contribution from this region is identical to eq. (19). Thus
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{II}} =\,\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I}}\,. \tag{22}\]
In region III, the thrust axis is aligned with the momentum of the gluon. The contribution is
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\text{III}} =\,\frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{2}\int_{0}^{1-x_{2}}dx _{1}\,\frac{x_{1}^{2}+x_{2}^{2}}{(1-x_{1})(1-x_{2})}\] \[\qquad\qquad\qquad\times\delta(T+x_{1}+x_{2}-2)\,, \tag{23}\]
which gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\text{III}} =\,\frac{2\alpha_{s}}{3\pi}\int_{2(1-T)}^{T}dx_{2}\,\frac{(2-x_{2}-T)^{2}+x_ {2}^{2}}{(x_{2}+T-1)(1-x_{2})}\,. \tag{24}\]
Expanding in \(\tau\) gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III}} =\,\,\frac{2\alpha_{s}}{3\pi}\bigg{(}-2-2\log\tau+(2-2\log\tau)\tau+\mathcal{O}(\tau^{2})\bigg{)}. \tag{25}\]
Thus no LP contributions arise from region III, as expected. The \(\tau=0\) configurations in this region correspond to either the quark or anti-quark being soft, or the quark and anti-quark pair moving collinearly opposite to the gluon. The latter configuration does not lead to a large contribution because the propagator is then far off the mass-shell. The contribution from upper and lower limits from this region respectively are, for small \(\tau\),
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III,u}}\] \[=\,\,\frac{2\alpha_{s}}{3\pi}\bigg{(}-2-\log\tau+(1-\log\tau) \tau+\mathcal{O}\left(\tau^{2}\right)\bigg{)}, \tag{26}\]
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III,l}} =\,\,\frac{2\alpha_{s}}{3\pi}\bigg{(}\log\tau+(-1+\log\tau)\tau+ \mathcal{O}(\tau^{2})\bigg{)}. \tag{27}\]
The upper limit of \(x_{2}\) in region III corresponds to the gluon and anti-quark being back-to-back, and the lower limit has the gluon and quark back-to-back. LL terms at NLP arise from both limits.
Combining contributions from all three regions, the thrust distribution at NLO yields the well-known result
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{NLO}} =\,\frac{2\alpha_{s}}{3\pi}\bigg{[}\frac{2(3\tau^{2}-3\tau+2)}{ \tau(1-\tau)}\,\log\bigg{(}\frac{1-2\tau}{\tau}\bigg{)}\] \[\qquad\qquad\qquad\qquad-\frac{3(1-3\tau)(1+\tau)}{\tau}\bigg{]}\,, \tag{28}\]
with expansion
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{NLO}} =\,\,\frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-3-4\log\tau}{\tau}-2+2\log\tau+(5-4\log\tau)\,\tau+\mathcal{O}(\tau^{2})\bigg{)}. \tag{29}\]
Note that in eq. (29) at NLP all three regions produce LL terms, but a partial cancellation takes place when combined.
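As a quick numerical cross-check (a minimal numpy sketch, not part of the original text; the overall factor \(2\alpha_{s}/3\pi\) is stripped), the truncated expansion of eq. (29) indeed converges to the closed form of eq. (28) as \(\tau\to 0\):

```python
import numpy as np

def nlo_exact(tau):     # eq. (28), in units of 2*alpha_s/(3*pi)
    return (2*(3*tau**2 - 3*tau + 2)/(tau*(1 - tau))*np.log((1 - 2*tau)/tau)
            - 3*(1 - 3*tau)*(1 + tau)/tau)

def nlo_expanded(tau):  # eq. (29), truncated after the O(tau) terms
    return ((-3 - 4*np.log(tau))/tau - 2 + 2*np.log(tau)
            + (5 - 4*np.log(tau))*tau)

for tau in [0.1, 0.01, 0.001]:
    e, a = nlo_exact(tau), nlo_expanded(tau)
    print(f"tau={tau:6.3f}  exact={e:11.3f}  expanded={a:11.3f}  "
          f"rel. diff={abs(e - a)/e:.1e}")
```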
### Next-to-leading power corrections for thrust from kinematical shifts
Next we use the method of kinematical shifts [92; 46] to compute dominant NLP terms for the thrust distribution
due to gluon radiation. The expression for the shifted matrix element is given in eq. (3), with the momentum shifts given by eq. (4). Note that because the gluon is emitted from final-state particles, the momentum shifts in eq. (3) are subtracted [46] from the radiating particle momenta, rather than added as in the initial-state case. The squared, spin-summed, and averaged LO matrix element \(|\mathcal{M}_{0}|^{2}\) for the process in eq. (6) is
\[\overline{\sum}|\mathcal{M}_{0}(p_{1},p_{2})|^{2} = \frac{4(e^{2}e_{q})^{2}N_{c}}{3s}(2p_{1}\cdot p_{2})\,. \tag{30}\]
Inserting shifted momenta according to eq. (4) leads to
\[\overline{\sum}|\mathcal{M}_{0}(p_{1}-\delta p_{1},p_{2}-\delta p _{2})|^{2}\] \[= \frac{8(e^{2}e_{q})^{2}N_{c}}{3Q^{2}}\bigg{[}(p_{1}\cdot p_{2})+p _{3}\cdot(p_{1}+p_{2})+\mathcal{O}(p_{3}^{2})\bigg{]}\,. \tag{31}\]
The last term contains two powers of the momentum \(p_{3}\), and can thus be omitted. Combining eqs. (3) and (31) gives the expression for shifted matrix element squared as
\[\overline{\sum}|\mathcal{M}_{\text{shift}}|^{2}\] \[= 8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\,\frac{1}{3Q^{2}}\] \[\times\bigg{(}\frac{2(p_{1}\cdot p_{2})^{2}}{(p_{1}\cdot p_{3})( p_{2}\cdot p_{3})}+\frac{2p_{1}\cdot p_{2}}{p_{2}\cdot p_{3}}+\frac{2p_{1} \cdot p_{2}}{p_{1}\cdot p_{3}}\bigg{)}\, \tag{32}\]
which, expressed in terms of \(x_{i}\)'s, reads
\[\overline{\sum}|\mathcal{M}_{\text{shift}}(p_{1},p_{2})|^{2}= 8(e^{2}e_{q})^{2}N_{c}g_{s}^{2}C_{F}\,\frac{1}{3Q^{2}}\] \[\times\bigg{(}\frac{2x_{1}+2x_{2}-2}{(1-x_{1})(1-x_{2})}\bigg{)}. \tag{33}\]
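The reduction from eq. (32) to eq. (33) follows from the invariants in eq. (9); a minimal numerical check (a sketch, not part of the original text, with the common prefactor stripped and all dot products in units of \(Q^{2}\)):

```python
x1, x2 = 0.9, 0.75
x3 = 2 - x1 - x2
s12, s13, s23 = (1 - x3)/2, (1 - x2)/2, (1 - x1)/2  # p_i.p_j / Q^2, from eq. (9)
lhs = 2*s12**2/(s13*s23) + 2*s12/s23 + 2*s12/s13    # bracket of eq. (32)
rhs = (2*x1 + 2*x2 - 2)/((1 - x1)*(1 - x2))         # bracket of eq. (33)
print(lhs, rhs)   # identical: 52.0 52.0
```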
Note that when the emitted gluon becomes soft (i.e. \(x_{1},x_{2}\to 1\)), the above matrix element diverges, whereas when the gluon is hard (\(x_{3}\to 1\)), the numerator (\(2x_{1}+2x_{2}-2\)) vanishes. The thrust distribution under this approximation is given by
\[\frac{d\sigma}{dT}\bigg{|}_{\text{shift}} = \frac{1}{2s}\int d\Phi_{3}\ \overline{\sum}\ |\mathcal{M}_{\text{shift}}(x_{1},x_{2})|^{2} \tag{34}\] \[\times\delta(T-\max(x_{1},x_{2},x_{3})).\]
We repeat the steps of the exact computation of the thrust distribution for each of the three regions. Region I gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\bigg{|}_{\text{I}} = \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{2}\int_{0}^{1-x_{2}}dx_{1}\ \frac{2x_{1}+2x_{2}-2}{(1-x_{1})(1-x_{2})}\] \[\times\delta(T-x_{1}),\]
which leads to the expansion about \(\tau\,=\,0\)
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I}} \tag{36}\] \[= \frac{2\alpha_{s}}{3\pi}\left(\frac{-2-2\log\tau}{\tau}+2+2\log \tau+\mathcal{O}(\tau^{2})\right).\]
When we compare the above expression to eq. (19), we see that the LLs at both the LP and NLP are correctly captured, while the NLL terms at LP are only partially reproduced. The contributions from upper and lower limits are
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I}, \text{u}}= \frac{2\alpha_{s}}{3\pi}\left[-\frac{2(1-\tau)(1+\log\tau)}{\tau} \right], \tag{37}\] \[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I}, \text{l}}= \frac{2\alpha_{s}}{3\pi}\left[-4-\frac{2(1-\tau)\log(1-2\tau)}{ \tau}\right], \tag{38}\]
which, expanded around \(\tau\,=\,0\), gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I}, \text{u}}= \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-2-2\log\tau}{\tau}+2+2\log\tau\] \[+\mathcal{O}(\tau^{2})\bigg{)},\]
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I}, \text{l}}= \frac{2\alpha_{s}}{3\pi}\bigg{(}\mathcal{O}(\tau^{2})\bigg{)}. \tag{39}\]
The upper limit contribution in region I contains the LL and NLL terms at LP and NLP, while the lower limit only contributes beyond NLP. The contribution from region II is identical to that of region I. The contribution from region III is
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\text{III}} = \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{2}\int_{0}^{1-x_{2}}dx_{1} \tag{41}\] \[\times\delta(T+x_{1}+x_{2}-2)\] \[\times\frac{2x_{1}+2x_{2}-2}{(1-x_{1})(1-x_{2})}\,,\]
which gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III}} = \frac{2\alpha_{s}}{3\pi}\left[\frac{4\tau}{1-\tau}\log\left( \frac{1-2\tau}{\tau}\right)\right]\,. \tag{42}\]
Upon expansion around \(\tau\,=\,0\) this yields only terms beyond NLP accuracy
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}-4\tau\log\tau+\mathcal{O}(\tau^{2 })\bigg{)}\,, \tag{43}\]
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III,u}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}-2\tau\log\tau+\mathcal{O}(\tau^{2} )\bigg{)}, \tag{44}\] \[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III,l}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}2\tau\log\tau+\mathcal{O}(\tau^{2} )\bigg{)}\,. \tag{45}\]
Indeed the hard gluon/soft quark region is not part of the shifted kinematics method.
Combining the contributions from all three regions, the thrust distribution in the shifted kinematics formalism at NLO reads
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{shift}} = \frac{2\alpha_{s}}{3\pi}\bigg{[}\frac{8\tau^{2}-8\tau+4}{\tau(1-\tau)}\log\bigg{(}\frac{1-2\tau}{\tau}\bigg{)}-\frac{4(3\tau^{2}-4\tau+1)}{\tau(1-\tau)}\bigg{]}\,, \tag{46}\]
or
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{shift }} = \frac{2\alpha_{s}}{3\pi}\bigg{[}\frac{-4-4\log\tau}{\tau}+4+4\log\tau\] \[-(4\log\tau)\tau+\mathcal{O}(\tau^{2})\bigg{]}.\]
When compared to the exact computation in eq. (29), we see that the LL terms at LP have been reproduced, in addition to some NLL terms. At the NLP level, we have captured some, but not all, of the LL terms, in addition to some NLL contributions. The missing LL terms at NLP should arise from soft quark (anti-quark) contributions.
The exact squared matrix element in eq. (12) has the following \(x_{i}\) dependence
\[\overline{\sum}|\mathcal{M}(x_{1},x_{2})|^{2} = 8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\frac{1}{3Q^{2}}\left(\frac{ 2}{(1-x_{1})(1-x_{2})}-\frac{2}{1-x_{1}}-\frac{2}{1-x_{2}}+\frac{1-x_{2}}{1- x_{1}}+\frac{1-x_{1}}{1-x_{2}}\right). \tag{48}\]
The first three terms constitute the shifted matrix element squared, as shown in eq. (33). The eikonal approximation in fact consists of only the first term of eq. (48). We refer to the last two terms in eq. (48) as the remainder matrix element squared, which has the following form
\[\overline{\sum}|\mathcal{M}_{\text{rem}}(x_{1},x_{2})|^{2}\] \[= 8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\frac{1}{3Q^{2}}\bigg{(} \frac{1-x_{2}}{1-x_{1}}+\frac{1-x_{1}}{1-x_{2}}\bigg{)}\,. \tag{49}\]
Let us take the first term on the right, which diverges for \(x_{1}\to 1\). It is most relevant when \(x_{2}\to 0\) and \(x_{1}\to 1\), i.e. \(x_{3}\to 1\), which corresponds to the emission of a soft anti-quark. Similarly, the second term is dominant when \(x_{1}\to 0\) and \(x_{2}\to 1\), i.e. the emission of a soft quark.
The contributions from soft quark and anti-quark emissions can alternatively be computed using the quark emission operator defined in [46]. The matrix element for the emission of a soft quark (\(p_{1}\to 0\)) in fig. (1) is
\[i\mathcal{M}_{1,q} = \frac{ig_{s}t_{a}}{2p_{1}\cdot p_{3}}\left[\mathcal{M}_{H_{1}}(p_ {1},p_{2},p_{3})\not{p}_{3}\gamma^{\mu}\bar{\epsilon}_{\mu}\bar{u}(p_{1}) \right]\,, \tag{50}\]
and similarly for the emission of soft anti-quark (\(p_{2}\to 0\))
\[i\mathcal{M}_{2,q} = \frac{ig_{s}t_{a}}{2p_{2}\cdot p_{3}}\left[\mathcal{M}_{H_{2}}(p_ {1},p_{2},p_{3})\not{p}_{3}\gamma^{\mu}\bar{\epsilon}_{\mu}v(p_{2})\right]\,. \tag{51}\]
Here the hard scattering matrix elements \(\mathcal{M}_{H_{1}}\) and \(\mathcal{M}_{H_{2}}\) are defined to contain all the external states except for the polarization vector \(\epsilon_{\mu}\) and the spinors of the soft fermion emission. The matrix element squared, after the sum (average) over the final (initial) state spins and colours, reads
\[\overline{\sum}|\mathcal{M}_{q}|^{2} = \overline{\sum}|\mathcal{M}_{\text{rem}}(x_{1},x_{2})|^{2} \tag{52}\] \[= 8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\frac{1}{3Q^{2}}\left(\frac{ 1-x_{2}}{1-x_{1}}+\frac{1-x_{1}}{1-x_{2}}\right)\,,\]
which indeed reproduces eq. (49). Thus eq. (49) features a clean separation of the soft quark and anti-quark contributions from the next-to-soft gluon contributions. Soft quark emissions contribute to the LL terms at NLP [46]. Their contribution to the thrust distribution from region I is
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\text{I}} = \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{2}\int_{0}^{1-x_{2}}\!dx_{ 1}\ \delta(T-x_{1})\] \[\times\left(\frac{1-x_{1}}{1-x_{2}}+\frac{1-x_{2}}{1-x_{1}} \right)\,\]
which gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dT}\Bigg{|}_{\text{I}} = \frac{2\alpha_{s}}{3\pi}\left(\frac{1}{2\tau}-2+\frac{3\tau}{2}- \tau\log\tau+\mathcal{O}(\tau^{2})\right).\]
The upper and lower limit components are
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I,u}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{1}{2\tau}-\frac{\tau}{2}-\tau\log\tau+\mathcal{O}(\tau^{2})\bigg{)}, \tag{55}\] \[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{I,l}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}2-2\tau+\mathcal{O}(\tau^{2})\bigg{)}\,. \tag{56}\]
In this region, the hard gluon can become collinear to the anti-quark, and the contribution from this kinematical configuration is captured by the upper limit of \(x_{2}\), which yields the NLL at LP. At the same time, the lower limit captures the NLL at NLP. Region II again gives a contribution identical to eq. (54). Region III gives
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III}} = \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{2}\int_{0}^{1-x_{2}}dx_{1} \tag{57}\] \[\times\bigg{(}\frac{1-x_{1}}{1-x_{2}}+\frac{1-x_{2}}{1-x_{1}} \bigg{)}\] \[\times\delta(T+x_{1}+x_{2}-2),\]
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}-2-2\log\tau+2(1+\log\tau)\tau \tag{58}\] \[+\mathcal{O}(\tau^{2})\bigg{)}.\]
The contributions from upper and lower limits are
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III,u}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}-2-\log\tau+\tau\log\tau+\mathcal{O}(\tau^{2})\bigg{)}, \tag{59}\]
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{III, l}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}\log\tau+(-2-\log\tau)\tau+\mathcal{ O}(\tau^{2})\bigg{)}.\]
As the upper limit corresponds to soft quark emission and the lower limit to soft anti-quark contributions, one finds LL contributions at NLP from both limits. The remainder matrix element squared in region III reproduces the missing LL contributions at NLP in eq. (47), which the shifted kinematics method does not capture.
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Bigg{|}_{\text{rem}} = \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{1}{\tau}-6-2\log\tau+5\tau \bigg{)}. \tag{61}\]
These are, as expected, the terms that were missing from eq. (47). It is now interesting to assess the quality of the shifted kinematics approximation.
### Numerical assessment of the shifted kinematics approximations for thrust
The shifted kinematics approximation at the cross-section level changes the Born cross-section \(\sigma_{0}(s)\) in eq. (15) by replacing \(s\to s(1-\tau)^{-1}\)
\[\sigma_{0}\left(\frac{s}{1-\tau}\right) = \sigma_{0}(s)(1-\tau). \tag{62}\]
The LP term of thrust distribution computed under the formalism of shifted kinematics in eq. (47) is
\[\frac{d\sigma}{d\tau}\Bigg{|}_{\text{LP}} = \frac{2\alpha_{s}}{3\pi}\left(\frac{-4-4\log\tau}{\tau}\right) \sigma_{0}(s), \tag{63}\]
which, upon replacing the overall factor \(\sigma_{0}(s)\) by the _shifted_ Born cross-section of eq. (62), yields
\[\sigma_{0}\left(\frac{s}{1-\tau}\right)\,\times\frac{d\sigma}{d \tau}\Bigg{|}_{\text{LP}} \tag{64}\] \[= \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-4-4\log\tau}{\tau}+4+4 \log\tau\bigg{)}\sigma_{0}(s)\,.\]
which reproduces eq. (47) up to NLP. In fig. (3a) we compare the thrust distribution computed from the shifted kinematics approximation of eq. (47), truncated at NLP, with the exact computation in eq. (28).1
Footnote 1: We have taken \(\alpha_{s}=0.1193\) at 172 GeV from LEP2, using the two-loop result in [97]; the numerical results are also given in [98].
In the small-\(\tau\) region, the LP terms alone provide an excellent approximation to the exact result. However, as \(\tau\) increases, we notice that the shifted kinematics curve approximates the exact result better. The reason the LP terms are such an excellent approximation for thrust is the small coefficient of the NLP LL term; consequently, the NLL term at LP dominates the LL term at NLP for small values of \(\tau\). This is, as we shall see, in contrast to the \(C\)-parameter, for which the LL term at NLP dominates the NLL term at LP [99; 100] in the dijet limit.
In fig. (3b) we provide two ratio plots involving the thrust distribution computed in both approaches, viz. the exact result (28) and the shifted approximation (47). We define \(X(\tau)\) and \(Y(\tau)\) as
\[X(\tau)\,=\frac{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Big{|}_{\text{ NLO}}-\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Big{|}_{\text{shift-NLP}}}{ \frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Big{|}_{\text{NLO}}}, \tag{65}\]
\[Y(\tau)\,=\frac{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Big{|}_{\text{ NLO-LP}}}{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\Big{|}_{\text{ NLO}}}. \tag{66}\]
As is evident from their expressions, \(X(\tau)\) measures the accuracy of the shifted approximation, whereas \(Y(\tau)\) measures the importance of NLP terms (and beyond) compared to the LP terms. The plot of \(X(\tau)\) shows that the shifted kinematics method approximates the exact result well near the dijet limit. The plot of \(Y(\tau)\) shows that the LP contribution is dominant near \(\tau\to 0\). As \(\tau\) increases, the denominator falls faster than the numerator due to the presence of the NLP terms in addition to the LP contribution, and thus the ratio rises above one.
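These ratios are easily tabulated from eqs. (28), (29) and (47) (a minimal numpy sketch of eqs. (65) and (66), not part of the original text; the common factor \(2\alpha_{s}/3\pi\) cancels):

```python
import numpy as np

def nlo(tau):        # exact NLO result, eq. (28)
    return (2*(3*tau**2 - 3*tau + 2)/(tau*(1 - tau))*np.log((1 - 2*tau)/tau)
            - 3*(1 - 3*tau)*(1 + tau)/tau)

def shift_nlp(tau):  # shifted approximation, eq. (47), truncated after NLP
    return (-4 - 4*np.log(tau))/tau + 4 + 4*np.log(tau)

def lp(tau):         # LP terms of eq. (29)
    return (-3 - 4*np.log(tau))/tau

for tau in [0.3, 0.1, 0.01, 0.001]:
    print(f"tau={tau:6.3f}  X={1 - shift_nlp(tau)/nlo(tau):+.4f}  "
          f"Y={lp(tau)/nlo(tau):.4f}")
```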
## IV \(C\)-parameter
In the previous section, we compared the shifted kinematic approximation of the thrust distribution with the exact NLO result, to understand the origin of NLP corrections. Here we perform the same comparison for the more complicated \(C\)-parameter distribution. The \(C\)-parameter for massless particles [101; 102; 103; 104] is defined as
\[C\,=\,3-\frac{3}{2}\sum_{i,\ j}\frac{\left(p_{i}\cdot p_{j}\right)^{2}}{\left( p_{i}\cdot q\right)\left(p_{j}\cdot q\right)}\,, \tag{67}\]
where \(p_{i}\) is the four-momentum of \(i\)-th particle, the total four-momentum is given by \(q=\sum_{i}p_{i}\), and the sums over \(i\) and \(j\) run over all the particles present in the final state. The minimum value taken by \(C\) equals \(0\) (for a zero-radius dijet event), and the maximum value is \(1\) (for an isotropic event). Here we consider a three-body final state, for which the maximal value of \(C\) is \(3/4\); the value \(C=1\) can only be obtained if more than three particles are present in the final state. For the three-body final state in fig. (1), it takes the form
\[C\,=\,\frac{6\left(1-x_{1}\right)\left(1-x_{2}\right)\left(1-x_{3}\right)}{x_ {1}\ x_{2}\ x_{3}}\,, \tag{68}\]
where \(x_{i}\) are the energy fractions given in eq. (8). The \(C\)-parameter has a critical point at \(C=3/4\), which gives rise to a Sudakov shoulder [105]. However, the shoulder first appears at second order in \(\alpha_{s}\), where four-particle final states are possible; hence our study does not encounter it. Henceforth we work with a rescaled definition of the \(C\)-parameter
\[c\,=\,\frac{C}{6}\,=\,\frac{\left(1-x_{1}\right)\left(1-x_{2}\right)\left(1-x _{3}\right)}{x_{1}\ x_{2}\ x_{3}}\,, \tag{69}\]
for which the range is \(0<c<1/8\). Note that the definition of the \(c\)-parameter does not involve the selection of a special axis, which distinguishes it from a group of other event shape variables such as thrust, jet broadening, jet mass, and angularities.
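The equivalence of eq. (67) and eq. (68) for three-parton kinematics can be verified numerically (a minimal numpy sketch; the planar construction of the momenta from the energy fractions is ours and is not part of the original text):

```python
import numpy as np

def mdot(a, b):
    # Minkowski product with signature (+,-,-,-)
    return a[0]*b[0] - np.dot(a[1:], b[1:])

def momenta(x1, x2, Q=1.0):
    # planar three-parton configuration reproducing 2 p_i.p_j = Q^2 (1 - x_k)
    x3 = 2.0 - x1 - x2
    c12 = 1.0 - 2.0*(1.0 - x3)/(x1*x2)
    s12 = np.sqrt(1.0 - c12**2)
    p1 = 0.5*Q*x1*np.array([1.0, 0.0, 0.0, 1.0])
    p2 = 0.5*Q*x2*np.array([1.0, 0.0, s12, c12])
    q  = np.array([Q, 0.0, 0.0, 0.0])
    return (p1, p2, q - p1 - p2), q

x1, x2 = 0.85, 0.70
ps, q = momenta(x1, x2)
C = 3.0 - 1.5*sum(mdot(pi, pj)**2/(mdot(pi, q)*mdot(pj, q))
                  for pi in ps for pj in ps)          # eq. (67)
x3 = 2 - x1 - x2
c_closed = (1 - x1)*(1 - x2)*(1 - x3)/(x1*x2*x3)      # eq. (69)
print(C/6.0, c_closed)   # both ~ 0.0924
```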
### The \(c\)-parameter distribution at NLO
The \(c\)-parameter distribution is defined as
\[\frac{d\sigma}{dc}\,=\,\frac{1}{2s}\int d\Phi_{3}\,\overline{\sum}\ |\mathcal{M}(x_{1},x_{2})|^{2}\ \delta\left(c(x_{1},x_{2})-c\right), \tag{70}\]
where \(\overline{\sum}\ |\mathcal{M}(x_{1},x_{2})|^{2}\) and \(d\Phi_{3}\) are given in eq. (12) and eq. (13) respectively. The normalized expression at NLO takes the form
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\,=\,\ \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dx_{1}\int_{0}^{1-x_{1}} dx_{2}\ \frac{x_{1}^{2}+x_{2}^{2}}{(1-x_{1})(1-x_{2})}\] \[\times\delta\left(c(x_{1},x_{2})-c\right). \tag{71}\]
Figure 3: In (a), we show a comparison of the thrust distribution: the exact result (solid red curve) of eq. (28), and the shifted approximation up to NLP terms (dashed blue curve), given in eq. (47). We also plot the exact LP term (dotted purple curve). In (b), we show \(X(\tau)\) of eq. (65) (solid blue curve) and \(Y(\tau)\) of eq. (66) (purple dashed curve) vs. \(\tau\).
The integrations over the energy fractions are considerably more involved than for thrust, due to the rational polynomial form of eq. (69). It is advantageous to convert our kinematic variables to \((y,z)\) [100], where
\[y = 2-x_{1}-x_{2},\] \[z = \frac{1-x_{2}}{y}\,, \tag{72}\]
with Jacobian \(J(z,y)=y\). Note that \(y\) is in fact the gluon energy fraction. The expressions for the matrix element and \(c\)-parameter in terms of the new variables read
\[\overline{\sum}\,\left|\mathcal{M}(y,z)\right|^{2}\] \[= \,8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\frac{1}{3Q^{2}}\left(\frac {2+y(y-2yz(1-z)-2)}{y^{2}z(1-z)}\right), \tag{73}\]
and
\[c(y,z) = \,\frac{(1-y)(1-z)yz}{(1-y(1-z))(1-yz)}\,. \tag{74}\]
The \(x_{1}\leftrightarrow x_{2}\) symmetry in the matrix element squared in eq. (12) now appears as \(z\leftrightarrow(1-z)\) symmetry in eq. (73). Substitution into eq. (70) yields
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\text{NLO}}\] \[= \,\frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dy\;\int_{0}^{1}dz\;\frac{ 2+y(y-2yz(1-z)-2)}{yz(1-z)}\] \[\times\delta\left(c(y,z)-c\right). \tag{75}\]
To determine the limits of the \(z\)-integration, we use the phase space in fig. (2) as follows. From eq. (72), it is evident that \(z\) attains its lowest value when \(x_{2}=1\), yielding \(z=0\) as the lower limit. Similarly, \(z\) attains its largest value when \(x_{2}\) and \(y\) are both smallest; however, \(x_{2}\) and \(y\) cannot vanish at the same time. Thus with \(x_{2}\to 0\) (and thus \(y\to 1\)), one finds that \(z=1\) is the upper limit. The limits of the \(y\)-integration can be found from the \(\delta\)-function constraint and the limits of the \(z\)-integration. We first rewrite eq. (75) in the form
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\text{NLO}} = \,\frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dy\int_{0}^{1}dz\frac{2(y(z-1 )+1)^{2}(yz-1)^{2}(y(2y(z-1)z+y-2)+2)}{(y-1)^{2}y^{2}(z-1)z(2z-1)}\bigg{(} \delta(z-z_{1})+\delta(z-z_{2})\bigg{)}\,, \tag{76}\]
where we used the argument of the \(\delta\)-function in eq. (75), which has the following two roots
\[z_{1,2} = \,\frac{1}{2}\left(1\pm\frac{\sqrt{y(y(1+c)-1)(c(y-2)^{2}+y(y-1)) }}{y(y(1+c)-1)}\right)\,. \tag{77}\]
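As a quick check of eq. (77) (a minimal numpy sketch, not part of the original text), substituting either root back into eq. (74) must return the chosen value of \(c\):

```python
import numpy as np

def c_of(y, z):                   # eq. (74)
    return (1 - y)*(1 - z)*y*z/((1 - y*(1 - z))*(1 - y*z))

def z_roots(y, c):                # eq. (77)
    r = (np.sqrt(y*(y*(1 + c) - 1)*(c*(y - 2)**2 + y*(y - 1)))
         /(y*(y*(1 + c) - 1)))
    return 0.5*(1 + r), 0.5*(1 - r)

c, y = 0.02, 0.5                  # y must lie between y1 and y2 of eq. (81)
z1, z2 = z_roots(y, c)
print(c_of(y, z1), c_of(y, z2))   # both return 0.02
```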
Here \(z_{1}\) (\(z_{2}\)) corresponds to the \(+\) \((-)\) solution. Note that as \(c\to 0\), we have \(z_{1}\to 1\) and \(z_{2}\to 0\), as expected. The argument of the \(\delta\)-function in eq. (75) is a complicated function of \(y\) and \(c\). Since the integrand has the symmetry \(z\leftrightarrow(1-z)\), the integral over \(z\) in eq. (76) equals twice the integral between \(z=1/2\) and the upper limit, where only \(\delta(z-z_{1})\) contributes. After the \(z\) integration the limits of \(y\) change to \((y_{1},y_{2})\), given below in eq. (81); the resulting expressions are smooth functions of \(y\), as can be seen from fig. (4). We now have
\[\frac{1}{\sigma_{0}(s)}\frac{\text{d}\sigma}{\text{dc}}\Bigg{|}_{\text{NLO}} = \,\frac{2\alpha_{s}}{3\pi}\int_{y_{1}}^{y_{2}}dy\frac{2(1-y)\left(y \left(c(y-2)^{2}+(y-3)y+4\right)-2\right)}{c(cy+y-1)\sqrt{y(cy+y-1)\left(c(y- 2)^{2}+(y-1)y\right)}}\,. \tag{78}\]
To find the integration limits \(y_{1}\) and \(y_{2}\) we proceed as follows. First, going back to eq. (75), the \(\delta\)-function can also be used to solve for \(y\), for which there are two solutions,
\[y_{1}(z) = \,\frac{2c}{\left[c^{2}(1-2z)^{2}+2c(z-1)z+(z-1)^{2}z^{2}\right]^ {1/2}+c-z^{2}+z}\,, \tag{79}\]
\[y_{2}(z)\,=\,-\frac{\left[c^{2}(1-2z)^{2}+2c(z-1)z+(z-1)^{2}z^{2}\right]^{1/2}+c- z^{2}+z}{2(c+1)(z-1)z}\,. \tag{80}\]
Clearly, there are extrema for both expressions at \(z=1/2\). The second derivative of \(y_{1}(z)\) (\(y_{2}(z)\)) at \(z=1/2\) is positive (negative). Thus the expression of \(y_{1}(z)\) in eq. (79) is the lower limit of \(y\) and \(y_{2}(z)\) in eq. (80) is the upper limit of \(y\). Substituting \(z=1/2\) in eqs. (79) and (80) we find
\[y_{1}\,=\,\frac{1+4c-\sqrt{1-8c}}{2(1+c)}\,,\qquad y_{2}\,=\, \frac{1+4c+\sqrt{1-8c}}{2(1+c)}\,. \tag{81}\]
The behavior of the lower and upper limits is shown in figs. (5a) and (5b). For thrust, the integration limits of the final integration in eq. (17) had simple expressions (\(2\tau\) and \(1-\tau\)), and their relation to the respective kinematic configurations was readily visible in fig. (2). To establish the relation between kinematic configurations and the integration limits \(y_{1}\) and \(y_{2}\) for the case of the \(c\)-parameter, we expand their expressions around \(c\,=\,0\) (as \(c\to 0\) is the dijet limit),2
Footnote 2: Note that while performing the final integration in eq. (78) we use the exact expressions of limits as given in eq. (81).
\[y_{1}\,=\,4c+\mathcal{O}(c^{3}),\] \[y_{2}\,=\,1-c-3c^{2}+\mathcal{O}(c^{3}). \tag{82}\]
In the dijet limit, the upper limit of \(y\) corresponds to either \((i)\) a hard gluon back-to-back with a collinear quark-anti-quark pair, \((ii)\) soft quark (or soft anti-quark) emission, or \((iii)\) the hard gluon collinear to the quark or the anti-quark. The lower limit in the dijet limit corresponds to a soft gluon emission with a back-to-back quark and anti-quark. Thus the upper and lower limits correspond to different points in the phase space, as expected. The integration over \(y\) in eq. (78) produces incomplete elliptic integrals of three types, each with somewhat involved arguments and coefficients. After conversion to their so-called complete counterparts3, and carefully collecting their coefficients, the final expression can be organized in a compact form [100] as follows
Footnote 3: More information about these transformations of elliptic integrals and their analytical properties can be found in [106].
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\rm NLO}\,= \,\frac{2\alpha_{s}}{3\pi}\bigg{(}e(c)\,\,{\rm E}[m_{1}(c)]+p(c)\,\,\Pi[n_{1} (c),m_{1}(c)]+k(c)\,\,{\rm K}[m_{1}(c)]\bigg{)}\,, \tag{83}\]
where \({\rm E}\), \(\Pi\), and \({\rm K}\) are the complete elliptic integrals of the second, third, and first kind, respectively. The integral representations of these elliptic integrals are as follows
\[{\rm E}[m]\,=\,\int_{0}^{1}dt\,\,\frac{\sqrt{1-m^{2}t^{2}}}{\sqrt{1-t^{2}}}\,,\]
\[\Pi[n,m] = \int_{0}^{1}\frac{dt}{(1-nt^{2})\sqrt{(1-t^{2})(1-m^{2}t^{2})}}\,,\] \[\mathrm{K}[m] = \int_{0}^{1}\frac{dt}{\sqrt{(1-t^{2})(1-m^{2}t^{2})}}\,. \tag{84}\]
Here \(m\) and \(n\) are the _parameter_ and _characteristic_ of the elliptic integrals, respectively. The arguments of these elliptic integrals in eq. (83) have the following form
\[n_{1}(c) = \frac{2\sqrt{1-8c}}{1+\sqrt{1-8c}-4c}\,,\] \[m_{1}(c) = \frac{2\sqrt{1-8c}}{1+\sqrt{1-8c}-4c-8c^{2}}\,. \tag{85}\]
The behavior of these arguments is shown in fig. (6). We computed eq. (83) throughout with a massless on-shell gluon and it agrees with the characteristic function derived in [100] for finite gluon virtuality \(\xi\), in the limit \(\xi\to 0\). Notice that the arguments have monotonic behavior. The elliptic integrals in eq. (83) have the following asymptotic behavior as \(c\to 0\)
\[\mathrm{E}[m_{1}(c)] = 1+\mathcal{O}(c^{3}\log c)\,,\] \[\Pi[n_{1}(c),m_{1}(c)] = -\frac{\log c}{8c^{2}}-\frac{1+\log c}{4c}+\mathcal{O}(c^{0}\log c )\,, \tag{86}\] \[\mathrm{K}[m_{1}(c)] = -\frac{3\log c}{2}+\mathcal{O}(c^{3}\log c)\,.\]
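These asymptotics can be verified directly with mpmath (a sketch, not part of the original text; we assume the Mathematica-style parameter convention for the second argument of `ellipk`, `ellipe` and `ellippi`, with \(n_{1}\) and \(m_{1}\) inserted as given in eq. (85) — this is the convention that reproduces eq. (86)):

```python
from mpmath import mp, ellipk, ellipe, ellippi, log, sqrt

mp.dps = 30
c  = mp.mpf('1e-3')
r  = sqrt(1 - 8*c)
n1 = 2*r/(1 + r - 4*c)             # eq. (85)
m1 = 2*r/(1 + r - 4*c - 8*c**2)

print(ellipe(m1),       1)                                    # eq. (86)
print(ellippi(n1, m1), -log(c)/(8*c**2) - (1 + log(c))/(4*c))
print(ellipk(m1),      -3*log(c)/2)
```

As \(c\to 0\) both arguments approach unity, so high working precision (here 30 digits) is advisable when evaluating these functions near the dijet limit.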
The expressions for the coefficients appearing in eq. (83) are
\[e(c) = \frac{-3(1+2c)\sqrt{1-4c(1+2c)+\sqrt{1-8c}}}{\sqrt{2}c(1+c)^{3}}\,,\] \[p(c) = \frac{\sqrt{2}(2+c+2c^{2})(1-\sqrt{1-8c})^{2}}{c(1+c)^{3}\sqrt{1 -4c(1+2c)+\sqrt{1-8c}}}\,, \tag{87}\] \[k(c) = \frac{4\sqrt{2}(1-2c(2+c))}{(1+c)^{3}\sqrt{1-4c(1+2c)+\sqrt{1-8c }}}\,.\]
Their expansions around \(c=0\) are
\[e(c) = -\frac{3}{c}+9+12c+36c^{2}+\mathcal{O}(c^{3})\,,\] \[p(c) = 32c+112c^{2}+\mathcal{O}(c^{3})\,, \tag{88}\] \[k(c) = 4-20c+48c^{2}+\mathcal{O}(c^{3})\,.\]
The coefficient \(e(c)\), when multiplied by the complete elliptic integral of the second kind \(\mathrm{E}[m_{1}(c)]\), yields NLL terms at LP; the coefficient \(p(c)\), multiplied by the complete elliptic integral of the third kind \(\Pi[n_{1}(c),m_{1}(c)]\), produces LL and NLL contributions at both LP and NLP; and the coefficient \(k(c)\), together with the complete elliptic integral of the first kind \(\mathrm{K}[m_{1}(c)]\), also produces LL terms at NLP. From eqs. (85), (86) and (88) the \(c\)-parameter distribution at NLO for small \(c\) reads,
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\mathrm{ NLO}}\] \[= \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-3-4\log c}{c}+1-28\log c+ \mathcal{O}(c)\bigg{)}. \tag{89}\]
We have computed the \(c\)-parameter distribution here without approximating the event shape variable or the matrix element squared, similar to what we did for thrust in eq. (29). The above result, which agrees with [99; 100], contains the contributions from all regions of the phase space. By comparing eq. (89) and eq. (29) we observe that the LP terms for both event shape variables have an identical structure in the dijet limit. Further similarities between these event-shape variables are discussed in [107; 108; 109; 110; 111]. The expression eq. (89) does not fully expose the relation to different kinematical configurations. To do so, we list the upper and lower limit contributions of the \(y\) integral separately. When computing the contributions from the upper and lower limits separately we have observed that it is only the coefficients \(e(c),p(c)\) and \(k(c)\) of the elliptic integrals that differ, while the general form of the elliptic integrals remains the same as in eq. (83).
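Combining eqs. (83), (85) and (87), the closed form can be evaluated numerically and compared against the expansion of eq. (89) (a sketch under the same parameter-convention assumption as above, not part of the original text, in units of \(2\alpha_{s}/3\pi\)):

```python
from mpmath import mp, ellipk, ellipe, ellippi, log, sqrt

mp.dps = 30

def nlo_c(c):                    # eq. (83) with eqs. (85) and (87)
    r  = sqrt(1 - 8*c)
    n1 = 2*r/(1 + r - 4*c)
    m1 = 2*r/(1 + r - 4*c - 8*c**2)
    w  = sqrt(1 - 4*c*(1 + 2*c) + r)
    e  = -3*(1 + 2*c)*w/(sqrt(2)*c*(1 + c)**3)
    p  = sqrt(2)*(2 + c + 2*c**2)*(1 - r)**2/(c*(1 + c)**3*w)
    k  = 4*sqrt(2)*(1 - 2*c*(2 + c))/((1 + c)**3*w)
    return e*ellipe(m1) + p*ellippi(n1, m1) + k*ellipk(m1)

def nlo_c_exp(c):                # small-c expansion, eq. (89)
    return (-3 - 4*log(c))/c + 1 - 28*log(c)

for cc in ['0.01', '0.001']:
    c = mp.mpf(cc)
    print(c, nlo_c(c), nlo_c_exp(c))
```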
Figure 5: Lower and upper limits \(y_{1}(c)\) and \(y_{2}(c)\) respectively as functions of \(c\).
From the upper limit of eq. (78) we have, for small-\(c\),
\[e_{\rm u}(c) = -\frac{3}{c}+9+12c+36c^{2}+\mathcal{O}(c^{3})\,,\] \[p_{u}(c) = 0\,, \tag{90}\] \[k_{\rm u}(c) = -12-24c-108c^{2}+\mathcal{O}(c^{3})\,,\]
For the lower limit, the behavior reads
\[e_{\rm l}(c) = 0\,,\] \[p_{\rm l}(c) = -32c-112c^{2}-912c^{3}+\mathcal{O}(c^{7/2})\,, \tag{91}\] \[k_{\rm l}(c) = -16-4c-156c^{2}+\mathcal{O}(c^{3})\,.\]
Since \(p_{u}(c)=e_{l}(c)=0\), no LL terms at LP derive from the upper limit, nor do we find NLL terms at LP from the lower limit. Collecting the upper and lower limit contributions for the \(c\)-parameter distribution in the dijet limit leads to
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\rm u} = \frac{2\alpha_{s}}{3\pi}\left(-\frac{3}{c}+9(1+2\log c)+\mathcal{ O}(c)\right), \tag{92}\] \[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\rm l} = \frac{2\alpha_{s}}{3\pi}\left(\frac{4\log c}{c}+2(4+23\log c)+ \mathcal{O}(c)\right).\]
Thus the upper limit of \(y\) in the dijet limit corresponds to various kinematic configurations involving a hard gluon, such as the hard gluon being back-to-back with the quark (anti-quark) and collinear to the anti-quark (quark), which produce NLL terms at LP. As discussed, the upper limit also corresponds to soft quark/anti-quark emissions, so LL and NLL terms at NLP arise from this limit as well. The lower limit of \(y\) corresponds to kinematic configurations involving a soft gluon and thus yields LL terms at both LP and NLP. No NLL terms at LP are generated here, although there is an NLL contribution at NLP.
### Next-to-leading power corrections to \(c\)-parameter from shifted kinematics
We next compute the \(c\)-parameter distribution using the shifted kinematics method, and again assess to what extent LP and NLP terms in the exact NLO calculation are reproduced. The approximation is
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\rm shift} = \frac{1}{2s}\int d\Phi_{3}\ \overline{\sum}|\mathcal{M}_{\rm shift}(x_{1},x_{2})|^{2} \tag{94}\] \[\times\ \delta\left(c(y,z)-c\right)\,.\]
The shifted matrix element in eq. (33), when written in terms of the transformed variables \((y,z)\), takes the form
\[\overline{\sum}|\mathcal{M}_{\rm shift}(y,z)|^{2} \tag{95}\] \[= 8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\frac{1}{3Q^{2}}\left(\frac {2(y-1)}{y^{2}(z-1)z}\right).\]
Again, we make no approximation to the event shape definition itself. We then proceed in the same manner as with the exact matrix element. We have
\[\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Bigg{|}_{\rm shift} = \frac{2\alpha_{s}}{3\pi}\int_{0}^{1}dy\int_{0}^{1}dz\frac{2(1-y(1 -z))^{2}(1-yz)^{2}}{(1-y)^{2}y^{2}z(z-1)(2z-1)}\bigg{(}\delta(z-z_{1})+\delta( z-z_{2})\bigg{)}\,, \tag{96}\]
Figure 6: Behavior of arguments \(n_{1}(c)\) and \(m_{1}(c)\).
where \(z_{1}\) and \(z_{2}\) are provided in eq. (77). After the \(z\) integration we have
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm shift}= \frac{2\alpha_{s}}{3\pi}\int_{y_{1}}^{y_{2}}dy\frac{4(y-1)^{2}}{c\sqrt{y\,(y+ cy-1)\,(c(y-2)^{2}+(y-1)y)}}\,. \tag{97}\]
The result of \(y\) integration can again be organized in a manner similar to eq. (83) as
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm shift}= \frac{2\alpha_{s}}{3\pi}\bigg{(}e_{\rm s}(c)E[m_{1}(c)]+p_{\rm s}(c)\Pi[n_{1} (c),m_{1}(c)]+k_{\rm s}(c)K[m_{1}(c)]\bigg{)}\,, \tag{98}\]
where the subscript \({\rm s}\) on the coefficients on the right indicates the shifted kinematics method; the elliptic integrals and their arguments are those defined in eqs. (84) and (85), respectively. A comparison of eq. (97) with eq. (78) shows a significant simplification of the integrand.
The coefficients in eq. (98) do change from the exact result, and we find
\[k_{\rm s}(c) = -\,\frac{\sqrt{2-2\sqrt{1-8c}-8c-16c^{2}}}{c^{3/2}(1+c)^{5/2}}\,,\] \[e_{\rm s}(c) = -\,\frac{(1+\sqrt{1-8c}-4c)\ \sqrt{1-\sqrt{1-8c}-4c-8c^{2}}}{2 \sqrt{2}c^{5/2}(1+c)^{5/2}}\,, \tag{99}\] \[p_{\rm s}(c) = \frac{(1-\sqrt{1-8c}-4c)\ \sqrt{1-\sqrt{1-8c}-4c-8c^{2}}}{\sqrt{2}c^{5/2}(1+c) ^{5/2}}\,.\]
Their small-\(c\) behavior is
\[e_{\rm s}(c) = -\,\frac{4}{c}+16-4c+72c^{2}+\mathcal{O}(c^{3})\,,\] \[p_{\rm s}(c) = 32c+128c^{2}+\mathcal{O}(c^{3})\,, \tag{100}\] \[k_{\rm s}(c) = -\,8c+\mathcal{O}(c^{3})\,.\]
Expanding around \(c=0\), the \(c\)-parameter distribution from shifted kinematics approximation reads
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm shift} \tag{101}\] \[= \frac{2\alpha_{s}}{3\pi}\bigg{(}\frac{-4-4\log c}{c}+8-24\log c+ \mathcal{O}(c)\bigg{)}.\]
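The intermediate form of eq. (97) allows an independent numerical check (a scipy sketch, not part of the original text, in units of \(2\alpha_{s}/3\pi\)): direct quadrature between the limits of eq. (81) should reproduce the expansion of eq. (101) for small \(c\). The inverse square-root endpoint singularities are integrable and should be handled adequately by the adaptive routine for this purpose.

```python
import numpy as np
from scipy.integrate import quad

def integrand(y, c):   # integrand of eq. (97)
    return 4*(y - 1)**2/(c*np.sqrt(y*(y + c*y - 1)
                                   *(c*(y - 2)**2 + (y - 1)*y)))

def limits(c):         # eq. (81)
    r = np.sqrt(1 - 8*c)
    return (1 + 4*c - r)/(2*(1 + c)), (1 + 4*c + r)/(2*(1 + c))

c = 1e-3
y1, y2 = limits(c)
val, err = quad(integrand, y1, y2, args=(c,), limit=400)
print(val, (-4 - 4*np.log(c))/c + 8 - 24*np.log(c))   # vs. eq. (101) up to NLP
```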
For better insight, it is again useful to examine separately the contributions of the upper and lower limits of the \(y\) integral. The small-\(c\) behavior of the coefficients of the elliptic integrals arising from the upper limit of eq. (97) is
\[e_{\rm su}(c) = -\frac{4}{c}+16-4c+72c^{2}+252c^{3}+\mathcal{O}(c^{7/2}),\] \[p_{\rm su}(c) = 0, \tag{102}\] \[k_{\rm su}(c) = -16-16c-144c^{2}-688c^{3}+\mathcal{O}(c^{7/2})\,.\]
For the lower limit we have
\[e_{\rm sl}(c) = 0,\] \[p_{\rm sl}(c) = -32c-128c^{2}-928c^{3}+\mathcal{O}(c^{7/2}), \tag{103}\] \[k_{\rm sl}(c) = -16-8c-144c^{2}-616c^{3}+\mathcal{O}(c^{7/2}).\]
Note that the \(\mathcal{O}(c^{3})\) terms contribute at NNLP accuracy. We now have
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm su} = \frac{2\alpha_{s}}{3\pi}\left(-\frac{4}{c}+8(2+3\log c)+ \mathcal{O}(c)\right), \tag{104}\] \[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm sl} = \frac{2\alpha_{s}}{3\pi}\left(\frac{4\log c}{c}+8(1+6\log c)+ \mathcal{O}(c)\right).\]
From the above two expressions, we observe that the lower (upper) limit fully captures the LL (NLL) at LP, while the LL and NLL at NLP receive contributions from both limits. The results for the \(c\)-parameter distribution at NLO up to NLP, obtained from the exact and shifted kinematics computations, are given in eqs. (89) and (101), respectively. The integrand in the shifted kinematics method was considerably simpler than for the exact computation. At LP, the method reproduced the LL term. The NLL term at LP was not fully reproduced because the contribution from the hard-collinear gluon is absent. Similarly, not all of the LL at NLP is captured, due to the absence of the soft quark and soft anti-quark contributions. Recall that the shifted kinematics method does not account for soft quark contributions, which we computed separately using soft quark emission vertices in section III.2.
We again examine the remaining contributions to the \(c\)-parameter distribution, such as those from the soft quark, soft anti-quark, and hard-collinear gluon configurations. This allows the mapping of all of the contributions to the \(c\)-parameter from various regions of the phase space. When expressed in terms of the transformed \((y,z)\) variables the remainder matrix element as described in eq. (49) reads
\[\overline{\sum}|\mathcal{M}_{\rm rem}(y,z)|^{2} = 8(e^{2}e_{q})^{2}N_{c}C_{F}g_{s}^{2}\frac{1}{3Q^{2}} \tag{106}\] \[\times\,\bigg{(}\frac{1}{z}+\frac{1}{1-z}-2\bigg{)}.\]
The expression for \(c\)-parameter distribution using the above matrix element squared is
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm rem}=\frac{2\alpha_{s }}{3\pi}\int_{0}^{1}dy\int_{0}^{1}dz\frac{1}{z(1-z)}\Big{(}\delta(z-z_{1})+ \delta(z-z_{2})\Big{)}\frac{-y(y-1)^{2}\left(1-2z(1-z)\right)}{(cy+y-1)\sqrt{y( cy+y-1)\left(c(y-2)^{2}+(y-1)y\right)}}, \tag{107}\]
with \(z_{1}\) and \(z_{2}\) given in eq. (77). After performing the \(z\) integration we are left with
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm rem}=\,\frac{2\alpha_{s}}{3\pi}\int_{y_{1}}^{y_{2}}dy\ \frac{-2y(y-1)(c((y-2)y+2)+(y-1)y)}{c(cy+y-1)\sqrt{y(cy+y-1)\left(c(y-2)^{2}+(y-1)y\right)}}\,. \tag{108}\]
with \(y_{1}\) and \(y_{2}\) given in eq. (81). The outcome again takes the form of eq. (83), with coefficients somewhat more involved than in either of the two previous computations. The result can be written as
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm rem}=\,\frac{2 \alpha_{s}}{3\pi}\bigg{(}e_{\rm r}(c)\ E[m_{1}(c)]+p_{\rm r}(c)\ \Pi[n_{1}(c),m_{1}(c)]+k_{\rm r}(c)\ K[m_{1}(c)]\bigg{)}, \tag{109}\]
The small-\(c\) behavior of the coefficients reads
\[e_{\rm r}(c) = \frac{1}{c}-7+16c-36c^{2}+9c^{3}+\mathcal{O}(c^{7/2})\,,\] \[p_{\rm r}(c) = -16c-16c^{3}+\mathcal{O}(c^{7/2})\,, \tag{110}\] \[k_{\rm r}(c) = 4-12c+48c^{2}+16c^{3}+\mathcal{O}(c^{7/2})\,,\]
Expanding the elliptic integrals for small \(c\), this yields

\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm rem}=\,\frac{2\alpha_{s}}{3\pi}\left(\frac{1}{c}-7-4\log c+\mathcal{O}(c)\right)\,. \tag{111}\]
Here we indeed see contributions from the hard-collinear gluon, soft quark, and soft anti-quark configurations: the former yields the NLL term at LP, while the latter two yield LL terms at NLP. Combining these contributions with the outcome of the shifted kinematics approximation in eq. (101) allows us to fully map the \(c\)-parameter distribution onto the different kinematical configurations.
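The small-\(c\) form in eq. (111) can also be checked by direct numerical integration of eq. (108). The sketch below uses mpmath; as an assumption, we reconstruct the endpoints \(y_{1,2}\) of eq. (81) as the roots of the factor \(c(y-2)^{2}+(y-1)y\) under the square root, where the integrand's (integrable) singularities sit:

```python
from mpmath import mp, mpf, sqrt, log, quad

mp.dps = 30
c = mpf("1e-3")

# Assumed endpoints (eq. (81)): roots of c*(y-2)**2 + (y-1)*y = 0,
# i.e. the points where the square root in eq. (108) vanishes.
disc = sqrt(1 - 8*c)
y1 = (1 + 4*c - disc) / (2*(1 + c))   # lower limit, ~ 4c for small c
y2 = (1 + 4*c + disc) / (2*(1 + c))   # upper limit, ~ 1  for small c

def integrand(y):
    a = c*y + y - 1                    # negative on (y1, y2)
    b = c*(y - 2)**2 + (y - 1)*y       # negative on (y1, y2)
    num = -2*y*(y - 1)*(c*((y - 2)*y + 2) + (y - 1)*y)
    return num / (c*a*sqrt(y*a*b))     # positive overall

# tanh-sinh quadrature handles the 1/sqrt endpoint singularities.
full = quad(integrand, [y1, y2])
nlp = 1/c - 7 - 4*log(c)               # small-c form, eq. (111)
print(full, nlp, (full - nlp)/full)    # relative difference is O(c)
```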
Let us finally also break the remainder contribution down into its upper- and lower-limit components. The coefficients from the upper limit of eq. (108) are
\[e_{\rm ru}(c) = \frac{1}{c}-7+16c-36c^{2}+9c^{3}+\mathcal{O}(c^{7/2})\,,\] \[p_{\rm ru}(c) = 0\,, \tag{112}\] \[k_{\rm ru}(c) = 4-8c+36c^{2}+64c^{3}+\mathcal{O}(c^{7/2})\,.\]
For the lower limit we have
\[e_{\rm rl}(c) = 0,\] \[p_{\rm rl}(c) = 16c+16c^{3}+\mathcal{O}(c^{7/2}), \tag{113}\] \[k_{\rm rl}(c) = 4c-12c^{2}+48c^{3}+\mathcal{O}(c^{7/2}).\]
Using eqs. (85) and (86) together with the above two expressions, the contributions to the \(c\)-parameter distribution from the upper and lower limits are, respectively,
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm ru} = \frac{2\alpha_{s}}{3\pi}\left(\frac{1}{c}-7-6\log c+\mathcal{O}( c)\right), \tag{114}\] \[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm rl} = \frac{2\alpha_{s}}{3\pi}\bigg{(}-2\log c+\mathcal{O}(c)\bigg{)}. \tag{115}\]
The missing LL terms at NLP are generated here. The upper limit supplies the missing NLL terms at LP (related to hard-collinear gluon emission) and at NLP, while both limits contain LL terms at NLP (soft (anti-)quark emission).
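The same bookkeeping can be verified for the remainder: subtracting the lower-limit contribution of eq. (115) from the upper-limit contribution of eq. (114) recovers eq. (111). A one-line sympy check:

```python
import sympy as sp

c = sp.symbols('c', positive=True)
ru = 1/c - 7 - 6*sp.log(c)    # upper limit, eq. (114)
rl = -2*sp.log(c)             # lower limit, eq. (115)
assert sp.simplify(ru - rl - (1/c - 7 - 4*sp.log(c))) == 0  # eq. (111)
```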
### Numerical assessment of the shifted kinematics approximation for the \(c\)-parameter
We now analyze our \(c\)-parameter results numerically. In the small-\(c\) limit, the effectiveness of the shifted kinematics method is evident from fig. (7a): the shifted kinematics result provides a better approximation to the exact result than the exact result truncated to its LP terms, even though this approximation does not generate all the dominant terms at LP and NLP. The departure of the LP terms from the exact result is more pronounced for the \(c\)-parameter distribution in fig. (7a) than for the thrust distribution in fig. (3a), because the \(c\)-parameter distribution has a larger LL coefficient at NLP than thrust. We define \(X(c)\) and \(Y(c)\) in similar fashion to \(X(\tau)\) and \(Y(\tau)\) in eqs. (65) and (66) as
\[X(c) = \frac{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Big{|}_{\text{NLO}}-\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Big{|}_{\text{shift-NLP}}}{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Big{|}_{\text{NLO}}}\,, \tag{116}\]
\[Y(c) = \frac{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Big{|}_{\text{NLO-LP}}}{\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\Big{|}_{\text{NLO}}}\,, \tag{117}\]
and exhibit them in fig. (7b). \(X(c)\) shows similar behavior to the analogous quantity for thrust in fig. (3b), but its deviation from zero grows faster with increasing \(c\). \(Y(c)\) decreases sharply from unity as \(c\) increases, opposite to the behavior for thrust in fig. (3b). Indeed, due to the large negative LL coefficient at NLP for the \(c\)-parameter, the denominator here increases sharply. More precisely, for the \(c\)-parameter the LL coefficient at NLP is seven times that at LP, with the same sign, while for thrust the LL coefficient at NLP is half that at LP, with the opposite sign.
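Although reproducing fig. (7b) requires the full elliptic-integral expressions, the qualitative behavior of \(Y(c)\) already follows from the NLP truncations in Table 1. The following sketch, in which we approximate the full NLO denominator by its NLP truncation (an assumption valid only at small \(c\)), shows \(Y(c)\) dropping below unity as \(c\) grows:

```python
import numpy as np

def lp(c):       # exact LP terms of the c-parameter distribution
    return (-3 - 4*np.log(c))/c

def nlo_nlp(c):  # exact result truncated at NLP (Table 1, "Exact" row)
    return (-3 - 4*np.log(c))/c + 1 - 28*np.log(c)

for c in [1e-4, 1e-3, 1e-2]:
    print(c, lp(c)/nlo_nlp(c))  # proxy for Y(c): ~0.999, 0.992, 0.92
```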
## V Conclusions
The fixed-order results in perturbation theory for massless fields contain logarithms of the (small) ratio of energy scales, arising from soft or collinear radiation. They occur at both leading and next-to-leading power in this ratio. This work focuses on NLP terms appearing in the thrust and \(c\)-parameter event shape distributions. We identify in the exact NLO results the origin of the large logarithmic terms at LP and NLP. Subsequently, we provide an approximation of these results using the recent shifted kinematics method, designed to capture large logarithms at NLP accuracy due to soft gluon emission. Moreover, we compute soft quark emission contributions using the corresponding effective Feynman rules.
This formalism indeed reproduces the dominant contributions at LP and NLP from soft gluon radiation correctly for both thrust and the \(c\)-parameter. The remaining LL terms at NLP are reproduced by soft (anti-)quark emission, as anticipated in [112, 46]. We were able to map the various sources of contributions at LP and NLP using an integral form of the distributions and its integration limits.
Our detailed diagnosis of NLP terms in event shape variables, and our demonstration that they can be fully reproduced using a shifted kinematics approach together with soft fermion emission vertices, should be a useful resource for further understanding NLP terms, in particular those arising from final-state emissions.
###### Acknowledgements.
EL and AT would like to thank the MHRD, Government of India, for the SPARC grant SPARC/2018-2019/P578/SL, _Perturbative QCD for Precision Physics at the LHC_. SM would like to thank CSIR, Govt. of India, for the SRF fellowship (09/1001(0052)/2019-EMR-I).
## Appendix A Transformation of incomplete elliptic integrals
Figure 7: In (a), we compare the \(c\)-parameter distributions: the exact result (solid red curve) of eq. (83), the shifted approximation up to NLP terms (dashed blue curve) of eq. (101), and the exact LP term of eq. (89) (dotted purple curve). In (b), we show the behavior of \(X(c)\) (solid blue curve) and \(Y(c)\) (purple dashed curve) from eqs. (116) and (117), respectively, as functions of \(c\).

In this appendix, we demonstrate how we handle the incomplete elliptic integrals that appear in the expression for the \(c\)-parameter distribution. The final expression for the \(c\)-parameter distribution is written compactly in eq. (83), where \(K\), \(E\) and \(\Pi\) are the complete elliptic integrals of the first, second and third kind, respectively. However, when we perform the integration over the final variable \(y\) in eqs. (78), (97) and (108), this integration produces incomplete elliptic integrals \(F\), \(E\), and \(\Pi\). These incomplete elliptic integrals are later converted into complete integrals to arrive at eq. (83), as first written in [100]. The three kinds of incomplete elliptic integrals appearing in the \(c\)-parameter distribution are
\[F[\phi,m] \equiv \int_{0}^{\phi}d\theta\frac{1}{\sqrt{1-m\sin^{2}\theta}} \tag{104}\] \[= \int_{0}^{\sin\phi}\frac{dt}{\sqrt{(1-t^{2})(1-mt^{2})}}\,,\]
\[E[\phi,m] \equiv \int_{0}^{\phi}d\theta\sqrt{1-m\sin^{2}\theta} \tag{105}\] \[= \int_{0}^{\sin\phi}dt\sqrt{\frac{1-mt^{2}}{1-t^{2}}}\,,\]
\[\Pi[n,\phi,m] \equiv \int_{0}^{\phi}d\theta\frac{1}{(1-n\sin^{2}\theta)\sqrt{1-m\sin^{2}\theta}} \tag{106}\] \[= \int_{0}^{\sin\phi}\frac{dt}{(1-nt^{2})\sqrt{(1-t^{2})(1-mt^{2})}}\,.\]
Here \(\phi\), \(m\), and \(n\) are called the _amplitude_, _parameter_, and _characteristic_ of the elliptic integrals, respectively. Eq. (84) gives their respective complete forms. The corresponding transformation into complete elliptic integrals can be performed using the rule
\[F[\phi,m] = K[m],\] \[E[\phi,m] = E[m], \tag{107}\] \[\Pi[n,\phi,m] = \Pi[n,m].\]
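This reduction, valid at amplitude \(\pi/2\), is straightforward to illustrate numerically. A minimal mpmath sketch (test values of \(m\) and \(n\) chosen arbitrarily; mpmath's parameter convention matches the \(m\) used in eqs. (104)–(106) above):

```python
from mpmath import mp, pi, ellipf, ellipe, ellipk, ellippi

mp.dps = 25
m, n = mp.mpf("0.3"), mp.mpf("0.2")   # arbitrary test values

# At amplitude pi/2 the incomplete integrals equal the complete ones.
assert abs(ellipf(pi/2, m) - ellipk(m)) < mp.mpf("1e-20")
assert abs(ellipe(pi/2, m) - ellipe(m)) < mp.mpf("1e-20")   # two-arg form is incomplete
assert abs(ellippi(n, pi/2, m) - ellippi(n, m)) < mp.mpf("1e-20")
```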
This transformation is only possible when the amplitude \(\phi=\pi/2\). The indefinite integration of eqs. (78), (97) and (108) results in multiple incomplete elliptic integrals with only two distinct amplitudes in their arguments, namely \(\phi_{1}(c,y)\) and \(\phi_{2}(c,y)\), given by
\[\phi_{1}(c,y) = \left(\frac{-1+\sqrt{1-8c}-4c+8c/y}{2\sqrt{1-8c}}\right)^{1/2}, \tag{108}\] \[\phi_{2}(c,y) = \left(\frac{1+\sqrt{1-8c}+4c-8c/y}{2\sqrt{1-8c}}\right)^{1/2}. \tag{109}\]
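Interpreting the bracketed expressions as \(\sin^{2}\) of the respective amplitudes (consistent with their later appearance inside \(\sin^{-1}\)), one finds that they are exactly complementary, \(\sin^{2}\phi_{1}+\sin^{2}\phi_{2}=1\). This is why precisely one of the two amplitudes reaches \(\pi/2\) at each integration endpoint while the other vanishes, as seen in eqs. (110)–(113) below. A short sympy verification:

```python
import sympy as sp

c, y = sp.symbols('c y', positive=True)
s = sp.sqrt(1 - 8*c)

sin2_phi1 = (-1 + s - 4*c + 8*c/y) / (2*s)   # squared amplitude phi1
sin2_phi2 = ( 1 + s + 4*c - 8*c/y) / (2*s)   # squared amplitude phi2

# The two squared amplitudes sum to one for all y and c:
assert sp.simplify(sin2_phi1 + sin2_phi2 - 1) == 0
```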
The amplitudes \(\phi_{1}(c,y)\) and \(\phi_{2}(c,y)\) are present in the arguments of all three kinds of incomplete elliptic integrals. If we directly substitute the upper limit (\(y_{2}\)) after integration, then none of the incomplete elliptic integrals with the amplitude \(\phi_{1}(c,y)\) can be reduced to a complete elliptic integral, since
\[\phi_{1}(c,y)\Big{|}_{y\,=\,y_{2}} = 0. \tag{110}\]
The transformation into complete elliptic integrals holds only when \(\phi=\pi/2\), as given in eq. (107). However, the incomplete elliptic integrals with the amplitude \(\phi_{2}(c,y)\) can, upon substitution of the upper limit, be directly reduced to complete elliptic integrals, since
\[\phi_{2}(c,y)\Big{|}_{y\,=\,y_{2}} = \frac{\pi}{2}. \tag{111}\]
We observe the opposite behavior when we substitute the lower limit (\(y_{1}\)) into the integration result: this time, the incomplete elliptic integrals with the amplitude \(\phi_{1}(c,y)\) can be reduced to complete elliptic integrals, since
\[\phi_{1}(c,y)\Big{|}_{y\,=\,y_{1}} = \frac{\pi}{2}, \tag{112}\]
while the elliptic integrals with amplitude \(\phi_{2}(c,y)\) cannot be reduced to complete elliptic integrals, since
\[\phi_{2}(c,y)\Big{|}_{y\,=\,y_{1}} = 0. \tag{113}\]
To resolve this issue of transforming the incomplete elliptic integrals into complete ones, we modify the upper and lower limits of the \(y\) integration as
\[(y_{1},y_{2})\rightarrow(y_{1}+e,y_{2}+e)\,, \tag{114}\]
where \(e\) is an infinitesimal real offset parameter that will be taken to zero at the end. The sign of \(e\) does not matter, as the final expression for the \(c\)-parameter distribution is independent of \(e\). When we substitute the upper limit of integration \(y_{2}\), shifted by the offset parameter as defined above, the amplitudes \(\phi_{1}(c,y)\) and \(\phi_{2}(c,y)\) take the following form
\[\phi_{1}(c,y,e)\Big{|}_{y\,=\,y_{2}+e} = \left(\frac{\left(-4c+\sqrt{1-8c}-1\right)(c+1)e}{\sqrt{1-8c} \left(2c(e+2)+\sqrt{1-8c}+2e+1\right)}\right)^{1/2}\,, \tag{115}\] \[\phi_{2}(c,y,e)\Big{|}_{y\,=\,y_{2}+e} = \left(\frac{-\frac{16(c+1)c}{2c(e+2)+\sqrt{1-8c}+2e+1}+4c+\sqrt{ 1-8c}+1}{2\sqrt{1-8c}}\right)^{1/2}, \tag{116}\]
Similarly, from the lower limit, the amplitudes are
\[\phi_{1}(c,y,e)\Big{|}_{y\,=\,y_{1}+e} = \left(\frac{\frac{16(c+1)c}{2c(e+2)-\sqrt{1-8c}+2e+1}-4c+\sqrt{1-8c} -1}{2\sqrt{1-8c}}\right)^{1/2}\,, \tag{114}\] \[\phi_{2}(c,y,e)\Big{|}_{y\,=\,y_{1}+e} = \left(-\frac{\left(c+1\right)\left(4c+\sqrt{1-8c}+1\right)e}{ \sqrt{1-8c}\left(-2c(e+2)+\sqrt{1-8c}-2e-1\right)}\right)^{1/2}\,. \tag{115}\]
Note that it is necessary to introduce this parameter because the straightforward substitution of the limits does not allow the transformation of every incomplete elliptic integral. Let us consider a few examples to demonstrate how this offset parameter solves the problem of incomplete elliptic integrals that cannot be reduced to complete elliptic integrals because their amplitude \(\phi\neq\pi/2\). We can categorize all the elliptic integrals appearing after the \(y\) integration into two classes according to their amplitudes \(\phi\): \((i)\) non-reducible incomplete elliptic integrals (\(\phi\neq\pi/2\)), and \((ii)\) reducible incomplete elliptic integrals (\(\phi=\pi/2\)).
### Non-reducible incomplete elliptic integrals
Here we consider an incomplete elliptic integral that appears from the upper-limit contribution; its expression is
\[E[\phi_{1}(c,e),m_{1}(c)]\,=\,E\left[\sin^{-1}\left(\frac{\left(-4c+\sqrt{1-8c }-1\right)(c+1)e}{\sqrt{1-8c}\left(2c(e+2)+\sqrt{1-8c}+2e+1\right)}\right)^{ 1/2},\ \frac{2\sqrt{1-8c}}{1+\sqrt{1-8c}-4c-8c^{2}}\right]\,, \tag{116}\]
where \(m_{1}\) is given in eq. (85). The amplitude of this elliptic integral is
\[\phi_{1}(c,e)\,=\,\sin^{-1}\left(\frac{\left(-4c+\sqrt{1-8c}-1\right)(c+1)e}{ \sqrt{1-8c}\left(2c(e+2)+\sqrt{1-8c}+2e+1\right)}\right)^{1/2}. \tag{117}\]
The elliptic integral in eq. (116) cannot be reduced to a complete elliptic integral, since in the limit \(e\to 0\) the amplitude \(\phi_{1}(c,e)\to 0\). However, we can expand this elliptic integral in powers of \(e\) around \(e=0\); upon expanding we get
\[E[\phi_{1}(c,e),m_{1}(c)]\,=\,\sqrt{\frac{\left(-4c+\sqrt{1-8c}-1\right)(c+1)}{\sqrt{1-8c}\left(4c+\sqrt{1-8c}+1\right)}}\sqrt{e}+\mathcal{O}(e^{3/2})\,. \tag{118}\]
The first term of the expansion, proportional to \(e^{1/2}\), is the only significant term in the limit \(e\to 0\); the next term, proportional to \(e^{3/2}\), can be dropped. The small-\(e\) expression for such non-reducible incomplete elliptic integrals is then stored in the form
\[E[\phi_{1}(c,e),m_{1}(c)]\,=\,\sqrt{\frac{\left(-4c+\sqrt{1-8c}-1\right)(c+1) }{\sqrt{1-8c}\left(4c+\sqrt{1-8c}+1\right)}}\sqrt{e}. \tag{119}\]
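The \(\sqrt{e}\) scaling of eq. (119) is easy to confirm numerically. In the sketch below we take \(e<0\) so that the amplitude stays real (the sign of \(e\) being immaterial for the final result, as noted above), and compare magnitudes:

```python
from mpmath import mp, mpf, sqrt, asin, ellipe

mp.dps = 30
c = mpf("0.01")
s = sqrt(1 - 8*c)
m1 = 2*s / (1 + s - 4*c - 8*c**2)       # m1(c) from eq. (85)

def amp1(e):
    # sin^2 of phi1 at the shifted upper limit; positive for e < 0.
    s2 = ((-4*c + s - 1)*(c + 1)*e) / (s*(2*c*(e + 2) + s + 2*e + 1))
    return asin(sqrt(s2))

# |prefactor| of sqrt(e) in eq. (119):
coef = sqrt(abs((-4*c + s - 1)*(c + 1) / (s*(4*c + s + 1))))

for e in [mpf("-1e-6"), mpf("-1e-8"), mpf("-1e-10")]:
    print(e, ellipe(amp1(e), m1) / (coef*sqrt(abs(e))))  # ratio -> 1
```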
We now shift our attention to the elliptic integrals that appear from the lower-limit contribution. One such incomplete elliptic integral is
\[F[\phi_{2}(c,e),m_{1}(c)]\,=\,F\left[\sin^{-1}\left(\frac{\left(c+1\right) \left(4c+\sqrt{1-8c}+1\right)e}{\sqrt{1-8c}\left(2c(e-2)+\sqrt{1-8c}+2e-1 \right)}\right)^{1/2},\ \frac{2\sqrt{1-8c}}{1+\sqrt{1-8c}-4c-8c^{2}}\right]\,. \tag{120}\]
The above elliptic integral cannot be reduced into a complete elliptic integral as the amplitude \(\phi_{2}(c,e)\) is
\[\phi_{2}(c,e)\,=\,\sin^{-1}\left(\frac{(c+1)\left(4c+\sqrt{1-8c}+1\right)e}{ \sqrt{1-8c}\left(2c(e-2)+\sqrt{1-8c}+2e-1\right)}\right)^{1/2} \tag{104}\]
and in the limit \(e\to 0\) the amplitude \(\phi_{2}(c,e)\to 0\). Following a procedure similar to that of eq. (118), the elliptic integral in eq. (120) is expanded around \(e=0\), giving
\[F[\phi_{2}(c,e),m_{1}(c)]=\sqrt{\frac{(c+1)\left(4c+\sqrt{1-8c}+1 \right)}{\sqrt{1-8c}\left(-4c+\sqrt{1-8c}-1\right)}}\sqrt{e}\,. \tag{105}\]
When the integration limits are modified in accordance with eq. (100), the coefficients of these elliptic functions depend on \(e\). When we expand our final result in powers of \(e\), the negative powers of \(\sqrt{e}\) from the coefficients combine with the positive powers of \(\sqrt{e}\) from the stored expressions of the non-reducible incomplete elliptic integrals to yield terms independent of \(e\). Subsequently \(e\) can be set to zero.
### Reducible incomplete elliptic integrals
The category of incomplete elliptic integrals that can be reduced to complete elliptic integrals is easier to handle than the non-reducible ones. An elliptic integral appearing from the upper-limit contribution is
\[E[\phi_{2}(c,e),m_{1}(c)]\,=\,E\left[\sin^{-1}\left(\frac{-\frac{16(c+1)c}{2c (e+2)+\sqrt{1-8c}+2e+1}+4c+\sqrt{1-8c}+1}{2\sqrt{1-8c}}\right)^{1/2},\;\frac{ 2\sqrt{1-8c}}{1+\sqrt{1-8c}-4c-8c^{2}}\right]\,, \tag{106}\]
where the amplitude \(\phi_{2}(c,e)\) reads
\[\phi_{2}(c,e)\,=\,\sin^{-1}\left(\frac{-\frac{16(c+1)c}{2c(e+2)+ \sqrt{1-8c}+2e+1}+4c+\sqrt{1-8c}+1}{2\sqrt{1-8c}}\right)^{1/2}\,. \tag{107}\]
In the limit \(e\to 0\) the amplitude \(\phi_{2}(c,e)\to\pi/2\), and this incomplete elliptic integral can be reduced directly to a complete elliptic integral using the reduction formula in eq. (107):
\[E[\phi_{2}(c,e),m_{1}(c)]\,=\,E[m_{1}(c)]\,. \tag{108}\]
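One can check symbolically that the amplitude is exactly \(\pi/2\) at \(e=0\): the argument of \(\sin^{-1}\) in the expression above collapses to unity. A short sympy verification:

```python
import sympy as sp

c, e = sp.symbols('c e', positive=True)
s = sp.sqrt(1 - 8*c)

# sin^2 of phi2 at the shifted upper limit:
sin2_phi2 = (-16*(c + 1)*c/(2*c*(e + 2) + s + 2*e + 1) + 4*c + s + 1) / (2*s)

assert sp.simplify(sin2_phi2.subs(e, 0) - 1) == 0   # phi2 -> pi/2 as e -> 0
```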
In this way, all reducible incomplete elliptic integrals can be replaced by their corresponding complete elliptic integrals. Upon substituting all the complete elliptic integrals and the stored expressions for the non-reducible incomplete elliptic integrals, our final result is expanded in powers of \(e\). No negative powers occur, and \(e\) can be taken to zero, yielding eq. (83).
## Appendix B Eikonal approximation and result summary
### Eikonal approximation to the thrust and \(c\)-parameter distributions
In sections III.3 and IV.3, we compared the shifted approximation with the LP expression of the exact distribution in figs. (3a) and (7a), for the thrust and \(c\)-parameter distributions, respectively. Here we add the results of the simpler eikonal approximation to the comparison. The eikonal case follows from the approximated matrix element squared
\[\overline{\sum}|\mathcal{M}_{\rm eik}(x_{1},x_{2})|^{2}\,= 8(e^{2}e_{q})^{2}g_{s}^{2}C_{F}N_{c}\frac{1}{3Q^{2}}\] \[\times\left(\frac{2}{(1-x_{1})(1-x_{2})}\right)\,. \tag{109}\]
Using the above expression and the exact definitions of thrust and \(c\)-parameter in eqs. (10) and (69), one finds
\begin{table}
\begin{tabular}{|c|c|c|} \hline Observable & Matrix element & Distribution up to NLP \\ \hline Thrust & Exact & \(\dfrac{-3-4\log\tau}{\tau}-2+2\log\tau\) \\ \hline Thrust & Shift & \(\dfrac{-4-4\log\tau}{\tau}+4+4\log\tau\) \\ \hline Thrust & Remainder & \(\dfrac{1}{\tau}-6-2\log\tau\) \\ \hline Thrust & Eikonal & \(\dfrac{-4\log\tau}{\tau}-8-4\log\tau\) \\ \hline \(c\)-parameter & Exact & \(\dfrac{-3-4\log c}{c}+1-28\log c\) \\ \hline \(c\)-parameter & Shift & \(\dfrac{-4-4\log c}{c}+8-24\log c\) \\ \hline \(c\)-parameter & Remainder & \(\dfrac{1}{c}-7-4\log c\) \\ \hline \(c\)-parameter & Eikonal & \(\dfrac{-4\log c}{c}-8-40\log c\) \\ \hline \end{tabular}
\end{table}
Table 1: Results for the thrust and \(c\)-parameter distributions up to NLP, computed with the exact definitions of the event shape variables and four different approximations of the matrix element squared (the overall factor \(2\alpha_{s}/3\pi\) is omitted).
Figure 8: In (a) and (b), we plot the thrust and \(c\)-parameter distributions, respectively. We show the exact results (solid red curves) given in eqs. (28) and (83), and the shifted approximation results up to NLP terms (dashed blue curves) given in eqs. (47) and (101) for \(\tau\) and \(c\), respectively. We also plot the LP terms (dotted purple curves) given in eqs. (32) and (33) for their respective distributions.
\[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{d\tau}\right|_{\rm eik} = \frac{2\alpha_{s}}{3\pi}\left(\frac{-4\log\tau}{\tau}-8-4\log\tau+\mathcal{O}(\tau)\right), \tag{34}\] \[\left.\frac{1}{\sigma_{0}(s)}\frac{d\sigma}{dc}\right|_{\rm eik} = \frac{2\alpha_{s}}{3\pi}\left(\frac{-4\log c}{c}-8-40\log c+\mathcal{O}(c)\right). \tag{35}\]
The eikonal matrix element squared generates the LL terms at LP correctly, together with some LL and NLL terms at NLP. It does not capture any NLL term at LP, since it lacks contributions from hard-collinear gluon emission. In figs. (8a) and (8b), we plot these eikonal results together with the thrust and \(c\)-parameter distributions computed from the exact approach, given in eqs. (28) and (83), along with the shifted approximation results up to NLP from eqs. (47) and (101). Clearly, for both event shapes the shifted kinematics method provides a significantly better approximation than the eikonal approximation.
### Table of results
In table 1, we summarize our results for thrust and the \(c\)-parameter using the different approximations of the matrix element squared: exact, shifted kinematics, the remainder/soft quark part, and eikonal.
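As a final numeric illustration of Table 1 (a sketch comparing the NLP-truncated rows only, not the full distributions), evaluating the thrust rows at a small value of \(\tau\) shows the shifted approximation lying much closer to the exact truncation than the eikonal one, in line with fig. (8a):

```python
import numpy as np

tau = 1e-3
L = np.log(tau)

exact = (-3 - 4*L)/tau - 2 + 2*L   # Table 1, thrust, "Exact"
shift = (-4 - 4*L)/tau + 4 + 4*L   # Table 1, thrust, "Shift"
eik   = -4*L/tau - 8 - 4*L         # Table 1, thrust, "Eikonal"

print((shift - exact)/exact)  # ~ -4%
print((eik - exact)/exact)    # ~ +12%
```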
|